If you missed the opportunity to attend the recent AnsibleFest Austin 2018 event, here are a couple of interesting links for you, via Jeff Geerling’s blog (aka geerlingguy):
There’s plenty of stuff to play with over the next weekend or two.
In the spirit of validating everything against a schema (validating JSON, validating CSV), here is another option – YANG:
YANG is a data modeling language for the definition of data sent over the NETCONF network configuration protocol. The name is an acronym for “Yet Another Next Generation”. The YANG data modeling language was developed by the NETMOD working group in the Internet Engineering Task Force (IETF) and was published as RFC 6020 in October 2010. The data modeling language can be used to model both configuration data and state data of network elements. Furthermore, YANG can be used to define the format of event notifications emitted by network elements, and it allows data modelers to define the signature of remote procedure calls that can be invoked on network elements via the NETCONF protocol. The language, being protocol independent, can then be converted into any encoding format, e.g. XML or JSON, that the network configuration protocol supports.
YANG is a modular language representing data structures in an XML tree format. The data modeling language comes with a number of built-in data types. Additional application-specific data types can be derived from the built-in data types. More complex reusable data structures can be represented as groupings. YANG data models can use XPath expressions to define constraints on the elements of a YANG data model.
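To make the above a bit more concrete, here is a minimal, hypothetical YANG module (the module name, namespace, and leafs are all made up for illustration) showing a container, leafs, a derived type, and a default value:

```yang
// example-system is a hypothetical module, not part of any standard.
module example-system {
  namespace "urn:example:system";
  prefix sys;

  // A derived type restricting the built-in uint16.
  typedef port-number {
    type uint16 {
      range "1..65535";
    }
  }

  container system {
    leaf hostname {
      type string;
    }
    leaf ssh-port {
      type port-number;
      default 22;
    }
    leaf-list dns-server {
      type string;
    }
  }
}
```

A NETCONF server advertising this model would then accept XML or JSON configuration documents whose structure and values conform to it.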
Like many other standards, formats, and tools developed by very smart people, YANG can be used for much more than just network configuration. If your data and state fit its model, give it a try.
Here are a few resources that you might find useful in the process:
Listing, Iterating, and Loading JSON in Ansible Playbooks – for those days when you need to offload part of your configuration onto external JSON files, but don’t have a spare day to try, fail and repeat.
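As a quick taste of the idea in that link, here is a minimal playbook sketch (the file name `settings.json` and the `users` key are assumptions for illustration, not from the linked article) that loads an external JSON file and iterates over a list inside it:

```yaml
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Load settings from an external JSON file
      set_fact:
        settings: "{{ lookup('file', 'settings.json') | from_json }}"

    - name: Iterate over a list inside the loaded JSON document
      debug:
        msg: "Creating user {{ item.name }}"
      loop: "{{ settings.users }}"
```

The `file` lookup reads the file as text and the `from_json` filter turns it into a structure you can loop over like any other Ansible variable.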
CSV, or comma-separated values, is a very common format for managing all kinds of configurations, as well as for data manipulation. As the linked Wikipedia page mentions, there are a few RFCs that try to standardize the format. However, there is still a lack of a schema-type standard that would allow one to define a format for a particular file.
Today I came across an effort that attempts to do just that – CSV Schema Language v1.1 – an unofficial draft of a language for defining and validating CSV data. This is work in progress by the Digital Preservation team at The National Archives.
Apart from the unofficial draft of the language, there is also an Open Source CSV Validator v1.1 application, written in Scala.
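If Scala is not your thing, the core idea is easy to sketch in a few lines of Python. This is not the CSV Schema Language itself – just a toy illustration of what "a schema for a CSV file" means, with a made-up schema of per-column predicates:

```python
import csv
import io
import re

# Hypothetical schema: column name -> predicate over the cell value.
# The real CSV Schema Language v1.1 is far more expressive than this.
SCHEMA = {
    "name": lambda v: len(v) > 0,
    "age": lambda v: v.isdigit() and 0 <= int(v) < 150,
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+", v) is not None,
}

def validate_csv(text, schema):
    """Return a list of (row_number, column, value) schema violations."""
    errors = []
    reader = csv.DictReader(io.StringIO(text))
    missing = set(schema) - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    for row_num, row in enumerate(reader, start=2):  # header is line 1
        for column, check in schema.items():
            if not check(row[column]):
                errors.append((row_num, column, row[column]))
    return errors

data = """name,age,email
Alice,34,alice@example.com
Bob,not-a-number,bob@example.com
"""

print(validate_csv(data, SCHEMA))
# → [(3, 'age', 'not-a-number')]
```

A real schema language adds what this sketch lacks: a portable, declarative notation for the rules, so the schema itself can be published and reused across tools.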
In “Why Configuration Management and Provisioning are Different” Carlos Nuñez advocates for the use of specialized infrastructure provisioning tools, like Terraform, Heat, and CloudFormation, instead of relying on the configuration management tools, like Ansible or Puppet.
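To show the flavor of the provisioning side, here is a minimal, hypothetical Terraform sketch (the AMI ID is a placeholder) that declares infrastructure rather than configuring software on it:

```hcl
# Provisioning declares *what infrastructure should exist*;
# configuration management would then act on the machines it creates.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Terraform tracks this declared state and can plan, apply, or destroy the whole set of resources – the kind of lifecycle operation that configuration management tools are not really built for.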
I agree with his argument about rollbacks, but less so about maintaining state and complexity. However, I’m not yet comfortable enough to put my disagreement into words – my head is all over the place with clouds, and I’m still shaky on the terminology.
The article is a nice read regardless, and it made me look at the provisioning tools once again.