Eight Rules for Effective Software Production

Timofey Nevolin wrote an excellent article “Eight Rules for Effective Software Production” over at Toptal.com.  The whole thing is well worth a read, but here are the 8 rules to get you started:

  1. Understand the IT Mentality
  2. Do Not Mix Software Production and Development Methodologies
  3. Use Persistent Storage as an Extension to Human Memory
  4. Stop Wasting Time on Formal Time Estimation
  5. Understand the Cost of Switching Tasks and Juggling Priorities
  6. Use Architecture Reviews as a Way to Improve System Design
  7. Value Team Players
  8. Focus on Teamwork Organization

All of these are good and true, but if I had to pick one, I’d say that rule 4 is my personal favorite.  Sometimes I feel like I’ve spent a good quarter of my life in meetings, discussions, and other estimation sessions.  And looking back at all of them, I have to honestly say that all of them were a waste of time.  The only useful part of an estimation session is a high-level project plan, but that can be achieved with a project planning session – a much narrower goal, much more measurable, and much easier to achieve.

Programmer Interrupted

Slashdot runs a thread on “Are Remote Software Teams More Productive?”.  The original post links to a few research references that, unsurprisingly, show how expensive interruptions are to programmers, and how unprepared we are, as an industry, to deal with this problem.  I particularly liked a rather in-depth look at the issue in the “Programmer Interrupted” article.

Like you, I am a programmer, interrupted. Unfortunately, our understanding of interruptions and remedies for them are not too far from homeopathic cures and bloodletting leeches.

Here are a few points, if the article is too long for you to handle:

Based on an analysis of 10,000 programming sessions recorded from 86 programmers using Eclipse and Visual Studio and a survey of 414 programmers (Parnin:10), we found:

  • A programmer takes between 10-15 minutes to start editing code after resuming work from an interruption.
  • When interrupted during an edit of a method, only 10% of times did a programmer resume work in less than a minute.
  • A programmer is likely to get just one uninterrupted 2-hour session in a day.

And also this bit on the worst time to interrupt a programmer:

If an interrupted person is allowed to suspend their working state or reach a “good breakpoint”, then the impact of the interruption can be reduced (Trafton:03). However, programmers often need at least 7 minutes before they transition from a high memory state to a low memory state (Iqbal:07). An experiment evaluating in which states a programmer least desired an interruption found these states to be especially problematic (Fogarty:05):

  • During an edit, especially with concurrent edits in multiple locations.
  • Navigation and search activities.
  • Comprehending data flow and control flow in code.
  • IDE window is out of focus.

Overall, not surprising at all, but it’s nice to have some numbers and research papers to point to…

Incident pit

I came across the Wikipedia page for the incident pit, a concept derived from analyzing multiple incident reports in diving:

The diagram shown is something that has evolved from studying many incident reports. It is important to realize that the shape of the “Pit” is in no way connected with the depth of water and that all stages can occur in very shallow water or even on the surface.

The basic concept is that as an incident develops it becomes progressively harder to extract yourself or your companion from a worsening situation. In other words the farther you become “dragged” into the pit the steeper the sides become and a return to the “normal” situation is correspondingly more difficult.

Underwater swimming may be considered to be an activity where, due to the environment and equipment plus human nature, there is a continuing process of minor incidents – illustrated by the top area of the pit.

When one of these minor incidents becomes difficult to cope with, or is further complicated by other problems usually arriving all at the same time, the situation tends to become an emergency and the first feelings of fear begin to appear – illustrated by the next layer of the pit. If the emergency is not controlled at this early stage then panic, the diver’s worst enemy, leads to almost total lack of control and the emergency becomes a serious problem – illustrated by the third layer of the pit. Progression through to the final stage of the pit from the panic situation is usually very rapid and extremely difficult to reverse and a fatality may be inevitable – illustrated by the final black stage of the pit.

The time for an incident to evolve in this way can be as short as 30 seconds or less, illustrated by the straight line passing directly through all the stages in the centre of the pit, or it may be a slower process building up over a period of one minute or more [maybe a week!] – illustrated by the curving lines running from the [top] extremities of the pit. In this latter case it represents the slowly evolving incident when the diver or group may not be aware that a serious situation is in fact developing. Between 30 seconds and about 1 minute is representative of the time required to take the necessary decisions and actions when it becomes obvious that an incident is about to happen.

The final conclusion is simple: never allow incidents to develop beyond the top normal layer of activity. If you find yourself being drawn into the second stage – the emergency – then use all of your training, skill, and experience to extract yourself and your companions from the pit before the sides become too steep!

I don’t think this is applicable to diving only.  Similar incident pits exist in other areas of human activity that involve crisis management – technology, business, politics, and healthcare come to mind.  The circumstances and the time frames might be slightly different, but overall, I think, it’s pretty similar.

Software Engineering at Google

Fergus Henderson, who has been a software engineer at Google for 10 years, published a PDF document entitled “Software Engineering at Google”, in which he collects and describes the key software engineering practices the company uses.

It covers the following:

  • Software development – version control, build system, code review, testing, bug tracking, programming languages, debugging and profiling tools, release engineering, launch approval, post-mortems, and frequent rewrites.
  • Project management – 20% time, objectives and key results (OKRs), project approval, and corporate reorganizations.
  • People management – roles, facilities, training, transfers, performance appraisal, and rewards.

Some of these practices are widely known, some not so much.  There are not a lot of details, but the overall summaries should provide enough food for thought for anyone who works at a software development company or is involved in managing one.

Maintainers Don’t Scale

Daniel Vetter, one of the Linux kernel maintainers, shares some thoughts on why maintainers don’t scale, what it takes to do the job, what has changed recently and what needs to change in the future.

This reminded me of this infographic, which depicts a year (back in 2012 – probably much busier these days) in the life of another kernel maintainer, Greg Kroah-Hartman.  Note that the number of emails does not include the messages on the Linux kernel mailing list (LKML), which is in its own category of busy:

The Linux kernel mailing list (LKML) is the main electronic mailing list for Linux kernel development, where the majority of the announcements, discussions, debates, and flame wars over the kernel take place. Many other mailing lists exist to discuss the different subsystems and ports of the Linux kernel, but LKML is the principal communication channel among Linux kernel developers. It is a very high-volume list, usually receiving about 1,000 messages each day, most of which are kernel code patches.