Here are a couple of bits that I liked in the “Why programmers are not paid in proportion to their productivity” blog post:
How can someone be 10x more productive than his peers without being noticed? In some professions such a difference would be obvious. A salesman who sells 10x as much as his peers will be noticed, and compensated accordingly. Sales are easy to measure, and some salesmen make orders of magnitude more money than others. If a bricklayer were 10x more productive than his peers this would be obvious too, but it doesn’t happen: the best bricklayers cannot lay 10x as much brick as average bricklayers. Software output cannot be measured as easily as dollars or bricks. The best programmers do not necessarily write 10x as many lines of code and they certainly do not work 10x longer hours.
Programmers are most effective when they avoid writing code.
The romantic image of an über-programmer is someone who fires up Emacs, types like a machine gun, and delivers a flawless final product from scratch. A more accurate image would be someone who stares quietly into space for a few minutes and then says “Hmm. I think I’ve seen something like this before.”
Harvard Business Review runs this article: “Drunk People Are Better at Creative Problem Solving”. Here are a few quotes to get you started:
Tipsy subjects solved 13% to 20% more problems than sober subjects did.
Intoxicated subjects had more “Aha!” moments than their sober counterparts.
People under the influence submitted answers more quickly than people in the control group.
I rest my case, ladies and gentlemen.
Chris Cornutt wrote the “PREPARING FOR PENTESTING (@ LONGHORN PHP 2018)” blog post for his upcoming talk at the conference. I’d gladly attend the talk, but the time and place didn’t work out for me this time. Here are a few useful links from his blog post that might come in handy for anyone evaluating the security of their PHP application and preparing for penetration testing:
- OWASP Top 10 2017 – the ten most critical web application security risks
- PortSwigger Burp Suite (community edition)
- PHP Security Cheat Sheet
- Top 7 PHP Security Blunders
- The 2018 Guide to Building Secure PHP Software
The above are not a replacement for the talk, but if you are like me and can’t attend, these should at least get you started in the right direction.
Quality Assurance is an important part of software development. There are many tools available that help with a variety of problems in this domain. At work, we have already been using quite a few of them – mostly those that deal with automated testing – PHPUnit, PHP CodeSniffer, Nightwatch.js, TravisCI, BitBucket Pipelines, and more.
But the above tools are mostly for software developers. With the expansion of our quality assurance efforts, I am looking at some more tools, and this time around, those aimed more towards QA engineers and testers. One particular area that I am currently very interested in is tooling for test (and requirements) management.
My experience in this area is very limited. I just know that such tools do exist. Most of them are proprietary and expensive, and are used by large organizations. We are not a large company. Our needs are simpler. And our budget for this is not great yet.
So, here is what I’m looking for:
- A web-based tool to manage test cases, test plans, test runs, and test results.
- This tool should support git version control.
- This tool should integrate well with GitHub and BitBucket.
- This tool should integrate well with TravisCI and BitBucket Pipelines.
- This tool should integrate well with Redmine.
- This tool should integrate well with HipChat.
- This tool must support multiple projects.
- This tool must support both manual and automated tests.
- Preferably, the tool should be Open Source software.
- Preferably, the tool should be free (as in money).
- Preferably, the tool should be written in PHP, as that’s where we have a lot of in-house expertise.
If you know of a tool that matches all or most of the above, please let me know.
MySQL 8.0 has been released, and it brings the following new features and enhancements:
- SQL: window functions, Common Table Expressions, NOWAIT and SKIP LOCKED, descending indexes, grouping, regular expressions, character sets, cost model, and histograms.
- JSON: extended syntax, new functions, improved sorting, and partial updates. With JSON table functions you can use the SQL machinery for JSON data.
- GIS: geography support. Spatial Reference Systems (SRS), as well as SRS-aware spatial datatypes, spatial indexes, and spatial functions.
- Reliability: DDL statements have become atomic and crash safe, and metadata is stored in a single, transactional data dictionary. Powered by InnoDB!
- Observability: significant enhancements to Performance Schema, Information Schema, configuration variables, and error logging.
- Manageability: remote management, undo tablespace management, and new instant DDL.
- Security: OpenSSL improvements, new default authentication, SQL roles, breaking up the SUPER privilege, password strength, and more.
- Performance: InnoDB is significantly better at read/write workloads, IO-bound workloads, and high-contention “hot spot” workloads. The new Resource Group feature gives users an option to optimize for specific workloads on specific hardware by mapping user threads to CPUs.
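To get a feel for the new SQL features, here is a minimal sketch of window functions and Common Table Expressions together. The table and data are made up for illustration, and the sketch runs against SQLite (whose recent versions support the same standard syntax) so it works without a MySQL server; the queries themselves are plain standard SQL that MySQL 8.0 now accepts.

```python
import sqlite3

# A made-up sales table, just to have something to query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('alice', 'east', 300), ('bob', 'east', 500),
        ('carol', 'west', 400), ('dave', 'west', 200);
""")

# A CTE computes per-region totals; RANK() OVER (...) then orders reps
# within each region by amount -- no self-joins or temp tables needed.
query = """
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region
    )
    SELECT s.rep, s.region, t.total,
           RANK() OVER (PARTITION BY s.region ORDER BY s.amount DESC) AS rnk
    FROM sales s JOIN region_totals t ON s.region = t.region
    ORDER BY s.region, rnk;
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
```

Before 8.0, the same per-group ranking in MySQL required awkward user-variable tricks or correlated subqueries, which is why these two features are such a big deal.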
Gergely Orosz, an engineer who worked at Uber on the company’s large-scale payments system, shares some of the distributed architecture concepts he had to learn in a blog post titled “Distributed architecture concepts I learned while building a large payments system”.
The article is very well written and easy to follow. But it’s also a goldmine of links to other resources on the subject. Here’s a list of links and concepts for quick research and/or click-through later:
- Service Level Agreements (SLAs).
- Availability / service uptime (in percentage of time a year)
- Accuracy (in percentage)
- Capacity (in requests per second)
- Latency (95% and 99%)
- Horizontal vs. vertical scaling
- Horizontal scaling is adding more machines; it is much preferred for distributed systems.
- Vertical scaling is upgrading to more powerful machines.
- Data Durability (here’s some more on the subject)
- Message Persistence and Durability
- Idempotency (here’s some more on the different strategies)
- Sharding and Quorum
- The Actor Model
- Reactive Architecture
Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!
Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.
Currently, there are 53 datasets in the registry. Each provides a tonne of data. Subjects range from satellite imagery and weather monitoring to political and financial information.
Hopefully, this will grow and expand with time.
One of the greatest things about the Amazon AWS services is that they save a tonne of time on reinventing the wheel. There are numerous technologies out there, and nobody has the time to dive deep, learn, and try all of them. Amazon AWS often provides ready-made templates and configurations for people who just want to try a technology or a tool, without investing too much time (and money) into figuring out all the options and tweaks.
“Get Started with Blockchain Using the new AWS Blockchain Templates” is one example of such a predefined and pre-configured setup, for those who want to play around with blockchain. Just think of how much time it would take somebody who wants to spin up their own Ethereum network with some basic tools and services, just to check the technology out. With the predefined templates you can be up and running in minutes, and, once you are comfortable, you can spend more time rebuilding the whole thing, configuring and tweaking everything.