YANG – A Data Modeling Language for the Network Configuration Protocol (NETCONF)

In the spirit of validating everything against a schema (validating JSON, validating CSV), here is another option – YANG:

YANG is a data modeling language for the definition of data sent over the NETCONF network configuration protocol. The name is an acronym for “Yet Another Next Generation”. The YANG data modeling language was developed by the NETMOD working group in the Internet Engineering Task Force (IETF) and was published as RFC 6020 in October 2010. The data modeling language can be used to model both configuration data as well as state data of network elements. Furthermore, YANG can be used to define the format of event notifications emitted by network elements and it allows data modelers to define the signature of remote procedure calls that can be invoked on network elements via the NETCONF protocol. The language, being protocol independent, can then be converted into any encoding format, e.g. XML or JSON, that the network configuration protocol supports.

YANG is a modular language representing data structures in an XML tree format. The data modeling language comes with a number of built-in data types. Additional application-specific data types can be derived from the built-in data types. More complex reusable data structures can be represented as groupings. YANG data models can use XPath expressions to define constraints on the elements of a YANG data model.
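To give a flavor of the language, here is a minimal sketch of a YANG module (the module name, namespace, and leaves are hypothetical, made up for illustration):

```yang
module example-system {
  namespace "urn:example:system";
  prefix "sys";

  container system {
    leaf hostname {
      type string;
      description "The hostname of the network element.";
    }
    leaf enabled {
      type boolean;
      default true;
      description "Whether the system is administratively enabled.";
    }
  }
}
```

The container/leaf structure maps naturally onto an XML (or JSON) tree, and the built-in types (string, boolean) and defaults are checked by any YANG-aware tooling.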

Like many other standards, formats, and tools developed by very smart people, YANG can be used for much more than just network configuration.  If your data and state fit into its model, give it a try.

Here are a few resources that you might find useful in the process:

Validating JSON against schema in PHP

GitHub was rather slow yesterday, which affected the speed of installing composer dependencies (since most of them are hosted on GitHub anyway).  Staring at a slowly scrolling list of installed dependencies, I noticed something interesting.

  - Installing seld/jsonlint (1.6.0)
  - Installing justinrainbow/json-schema (5.1.0)

Of course, I’d heard of seld/jsonlint before.  It’s a PHP port of the zaach/jsonlint JavaScript tool, written by Jordi Boggiano, aka Seldaek, the genius who brought us the composer dependency manager and the packagist.org repository.

But JSON schema? What’s that?

The last time I heard the word “schema” in a non-database context, it was in the XML domain.  And I hate XML with a passion.  It’s ugly and horrible and should die a quick death.  The sooner, the better.

But with all its ugliness, XML does get one thing right – it allows a schema definition, against which an XML file can be validated later.

Can I have the same with JSON?  Well, apparently, yes!

The justinrainbow/json-schema package allows one to define a schema for what’s allowed in a JSON file, and then validate against it.  And even more than that – it supports both required values and default values too.
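For a quick taste, here is a hypothetical schema fragment (not taken from any real project) showing both features: “name” is required, while “version” falls back to a default when omitted:

```json
{
	"type": "object",
	"required": ["name"],
	"properties": {
		"name": { "type": "string" },
		"version": { "type": "string", "default": "v1.0.0" }
	}
}
```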

Seeing the package being installed right next to something by Seldaek, I figured, composer might be using it already.  A quick look in the repository confirmed my suspicion.  Composer documentation provides more information, and links to an even more helpful JSON-Schema.org.

Mind.  Officially.  Blown.

At work, we use a whole lot of configuration files for many of our projects.  Those files which are intended for tech-savvy users are usually in JSON or PHP format, without much validation attached to them.  Those which are for non-technical users usually rely on even simpler formats like INI and CSV.  I see all of this changing and improving soon.

But before any of that happens, I need to play around with these amazing tools.  Here’s a quick first look that I did:

  1. Install the JSON validator: composer require justinrainbow/json-schema
  2. Create an example config.json file that I will be validating.
  3. Create a simple schema.json file that defines what is valid.
  4. Create a simple index.php file to tie it all together, mostly just copying code from the documentation.

My config.json file looks like this:

	{
		"blah": "foobar",
		"foo": "bar"
	}

My schema.json file looks like this:

	{
		"type": "object",
		"properties": {
			"blah": {
				"type": "string"
			},
			"version": {
				"type": "string",
				"default": "v1.0.0"
			}
		}
	}

And, finally, my index.php file looks like this:

require_once 'vendor/autoload.php';

use JsonSchema\Validator;
use JsonSchema\Constraints\Constraint;

$config = json_decode(file_get_contents('config.json'));

$validator = new Validator;
$validator->validate(
	$config,
	(object)['$ref' => 'file://' . realpath('schema.json')],
	Constraint::CHECK_MODE_APPLY_DEFAULTS
);

if ($validator->isValid()) {
	echo "JSON validates OK\n";
} else {
	echo "JSON validation errors:\n";
	foreach ($validator->getErrors() as $error) {
		print_r($error);
	}
}

print "\nResulting config:\n";
print_r($config);

When I run it, I get the following output:

$ php index.php 
JSON validates OK

Resulting config:
stdClass Object
    [blah] => foobar
    [foo] => bar
    [version] => v1.0.0

What if I change my config.json to have something invalid, like an integer instead of a string?

	{
		"blah": 1,
		"foo": "bar"
	}

The validation fails with helpful information:

$ php index.php 
JSON validation errors:
    [property] => blah
    [pointer] => /blah
    [message] => Integer value found, but a string is required
    [constraint] => type

Resulting config:
stdClass Object
    [blah] => 1
    [foo] => bar
    [version] => v1.0.0

This is great. Maybe even beyond great!

The possibilities here are endless.  First of all, we can obviously validate the configuration files.  Secondly, we can automatically generate the documentation for the supported configuration options and values.  It’s probably not going to be super fantastic at first, but it will cover ALL supported cases and will always be up-to-date.  Thirdly, this whole thing can be taken to the next level very easily, since the schema files are JSON themselves, which means schemas can be generated on the fly.

For example, in our projects, we allow the admin/developer to specify which database field of a table is used as the display field (in links and such).  Only existing database fields should be allowed.  So, we can generate the schema with the available fields on project deployment, and then validate the user configuration against their particular database setup.
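A minimal sketch of that idea, assuming the field list has already been pulled from the database (the field names here are made up), would be to build the schema as a plain PHP array with an “enum” constraint and serialize it to JSON:

```php
<?php
// Sketch: generate a JSON schema on the fly so that only
// existing database fields are accepted as the display field.
// In practice $fields would come from the table's column list.
$fields = ['id', 'name', 'email'];

$schema = [
	'type' => 'object',
	'properties' => [
		'display_field' => [
			'type' => 'string',
			'enum' => $fields,  // only these values will validate
		],
	],
];

// Write the generated schema out; it can then be fed to the
// same Validator as in the example above.
echo json_encode($schema, JSON_PRETTY_PRINT), "\n";
```

This would run once at deployment time, so the schema always matches that installation’s actual database setup.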

There are probably even better ways to utilize all this, but I’ll have to think about it, which is not easy with the mind blown…

Update (March 16, 2017): also have a look at some alternative JSON Schema validators.  JSON Guard might be a slightly better option.

Theme fixes, improvements, and polish

If you have a lot of attention for detail, you probably noticed a few things moving around and changing on this blog in the last few days. You weren’t dreaming – I indeed moved and changed a few things. Here is a round-up for those of you who enjoy this sort of thing:

  • List of categories moved up. Since I am interested in and blog about many different things, I don’t blame you if you would like to skip some of them and read only things that you care about. I moved the list of categories higher up in the sidebar, so that you could jump directly to the topic of your choice.
  • Full posts in categories, tags, and archives. This should also make reading posts about specific things easier. You won’t need to jump to the full post page all that often now. Less clicks and all.
  • Category header images. Some categories (see Photography, Movies, and Technology for examples) will greet you with different header images (once again, thanks to Igor Gorbulinsky for his talent and time). This feature should help you out a bit while navigating the site – instant indicator of where you are.
  • Highlight of category name, tag, and search query. When you navigate to posts of a specific tag or category, you should see the term at the top of the page. Sometimes the term is highlighted, like, for example, in case of search query. Also, sometimes, you have a link to RSS feed which provides easier access to similar posts.
  • Improved RSS feed auto-discovery. Depending on where you are on the site, your browser will suggest a different set of RSS feeds to subscribe to. I’m trying to make these things as intuitive as possible.
  • Improved browser compatibility and standards compliance. A few small glitches here and there were fixed. All RSS feeds are valid now, except for those rare cases when the content of specific posts causes problems. CSS is now valid and many warnings are fixed. HTML is now almost valid. There are a few issues which are caused by WordPress bugs, but fixes for these seem to be available in the upcoming version of WordPress. In any case, it seems all theme- and plugin-specific issues were fixed.
  • Upgraded WordPress to version 2.3.3.  This is the latest version, with all the security fixes and such.

As you can see from the list above, all of these changes are rather cosmetic and can be classified as web site polish. None of them should cause any issues to you or your browser, and much of the misbehaving functionality should be fixed now.

If you have any ideas or suggestions for further improvements, or if you notice any misbehavior at all, please let me know.