I am loving GitHub code review so much that people are suggesting that I should change my title to Comparer of Unmerged Neglectable Technicalities. Otherwise known as … nevermind.
Tag: quality assurance
Mozilla Litmus – integrated testcase management and QA tool
Water testing is not a term (for software testing)
I’ve been hearing the term “water testing” for one of the work projects that I am involved in. The term is used to describe the stage of the project when it’s available on the production servers with live data, but open only to a subset of the users. After searching around for a bit, I can’t find a reference to this term anywhere, except in the water industry:
Water testing is a broad description for various procedures used to analyze water quality.
So that, of course, sent me down the path of finding the correct term. The closest analogy I have heard of is “smoke testing”.
The plumbing industry started using the smoke test in 1875.
Later this usage seems to have been forgotten, leading some to believe the term originated in the electronics industry: “The phrase smoke test comes from [electronic] hardware testing. You plug in a new board and turn on the power. If you see smoke coming from the board, turn off the power. You don’t have to do any more testing.”
Specifically for software development and testing:
In computer programming and software testing, smoke testing is preliminary testing to reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical. A subset of test cases that cover the most important functionality of a component or system is selected and run, to ascertain if the most crucial functions of a program work correctly. For example, a smoke test may ask basic questions like “Does the program run?”, “Does it open a window?”, or “Does clicking the main button do anything?” The purpose is to determine whether the application is so badly broken that further testing is unnecessary. As the book “Lessons Learned in Software Testing” puts it, “smoke tests broadly cover product features in a limited time … if key features don’t work or if key bugs haven’t yet been fixed, your team won’t waste further time installing or testing”.
Smoke testing performed on a particular build is also known as a build verification test.
A daily build and smoke test is among industry best practices.
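The idea above can be sketched in a few lines of Python. The names here (`parse_config`, the individual checks) are hypothetical stand-ins for an application’s most crucial entry points; the point is that a smoke test runs a tiny set of make-or-break checks and, if any fail, the build is rejected before any deeper testing happens.

```python
# A minimal smoke-test sketch. `parse_config` is a hypothetical stand-in
# for one of the app's most crucial functions.

def parse_config(text):
    # Stand-in config loader: "key=value" lines into a dict.
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def smoke_test():
    """Answer the basic questions: does it run, does it do anything at all?"""
    checks = {
        "parses a config at all": lambda: parse_config("host=db\nport=5432")["host"] == "db",
        "tolerates empty input": lambda: parse_config("") == {},
    }
    # Any failure here means the build is too broken to test further.
    return [name for name, check in checks.items() if not check()]

print(smoke_test())  # [] means the build passed the smoke test
```

Run on every build, this doubles as the “build verification test” mentioned above: an empty failure list lets the build proceed to fuller testing.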
This sounds very much like “sanity testing”:
A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material’s creator was thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. A rule-of-thumb may be checked to perform the test. The advantage of a sanity test, over performing a complete or rigorous test, is speed.
[…]
In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing.
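A sanity check of that sort might look like the following sketch (the invoice scenario and `sanity_check_invoice` are illustrative assumptions, not from the original post): it applies cheap rules of thumb to rule out obviously false results, without redoing the full calculation rigorously.

```python
# A sanity-check sketch: cheap plausibility rules, not a rigorous test.

def sanity_check_invoice(line_items, reported_total):
    """Rule out obviously false invoice totals with rules of thumb."""
    if reported_total < 0:
        return False  # a total can never be negative
    if line_items and reported_total < max(line_items):
        return False  # total cannot be smaller than the largest single item
    # Rule of thumb: total should not exceed the sum of the items.
    return reported_total <= sum(line_items)

print(sanity_check_invoice([10.0, 25.5, 3.0], 38.5))  # True: plausible
print(sanity_check_invoice([10.0, 25.5, 3.0], -5.0))  # False: cannot possibly be right
```

The advantage, as the quote says, is speed: this catches whole classes of obviously wrong results in constant time, and a more exhaustive check can follow later.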
After reviewing all sorts of testing types, I think the correct term for our scenario is actually “beta testing”:
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
From 15 hours to 15 seconds: reducing a crushing build time
In summary:
- Bad Practice #1: We favoured integration tests over unit tests.
- Bad Practice #2: We had many, many features that were relatively unimportant.
- Bad Practice #3: Our integration tests were actually acceptance tests.
- Bonus tip: run the build entirely on the tmpfs in-memory file system.
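The first bad practice can be made concrete with a sketch (the `add_tax` function is a hypothetical example, not from the linked post): a unit test exercises a single function in memory, with no servers or databases, so thousands of them can run in seconds, which is what makes a 15-second build plausible where an acceptance-test-heavy suite takes hours.

```python
import time

# Hypothetical function under test.
def add_tax(price, rate=0.2):
    return round(price * (1 + rate), 2)

# Unit test: pure in-memory checks, no I/O, no fixtures to set up.
def test_add_tax_unit():
    assert add_tax(100.0) == 120.0
    assert add_tax(0.0) == 0.0

start = time.perf_counter()
test_add_tax_unit()
elapsed = time.perf_counter() - start
print(f"unit test ran in {elapsed:.6f}s")
```

An equivalent acceptance test of the same behaviour would start the application, hit it over HTTP, and query a database, paying seconds of overhead per check; multiplied across a large suite, that is the difference between the two build times in the title.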