faast.js – serverless batch computing made simple

faast.js is a new framework that makes writing serverless functions super easy. Read more about it in this introductory blog post:

Faast.js started as a side project to solve the problem of large scale software testing. Serverless functions seemed like a good fit because they could scale up to perform work in parallel, then scale down to eliminate costs when not being used. Even better, all infrastructure would be managed by the cloud provider. It seemed like a dream come true: a giant computer that could be as big as needed for the job at hand, yet could be rented in 100ms increments.
But trying to build this on AWS Lambda was challenging:
* Complex setup. Lambda throws you into the deep end with IAM roles, permissions, command line tools, web consoles, and special calling conventions. Lambda and other FaaS are oriented towards an event-based processing model, and not optimized for batch processing.
* Primitive package dependency support. Everything has to be packaged up manually in a zip file. Every change to the code or tests requires a manual re-deploy.
* Native packages. Common testing tools like puppeteer are supported only if they are compiled specially for Lambda.
* Persistent infrastructure. Logs, queues, and functions are left behind in the cloud after a job is complete. These incur costs and count towards service limits, so they need to be managed or removed, creating an unnecessary ops burden.
* Developer productivity. Debugging, high quality editor support, and other basic productivity tools are awkward or missing from serverless function development tooling.
Faast.js was born to solve these and many other practical problems, to make serverless batch processing as simple as possible.
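To give a sense of what this looks like in practice, here's a rough TypeScript sketch of the faast.js programming model. The file names and the hello function are purely illustrative, and the exact API surface may differ from the current faast.js docs, so treat this as a sketch rather than copy-paste-ready code:

```typescript
// functions.ts – ordinary exported functions; nothing Lambda-specific here.
export async function hello(name: string): Promise<string> {
  return `hello ${name}`;
}

// index.ts – a minimal sketch, assuming faast.js's faast() entry point.
import { faast } from "faastjs";
import * as funcs from "./functions";

async function main() {
  // Provision the cloud functions (here on AWS Lambda) from the local module.
  const m = await faast("aws", funcs);
  try {
    // Remote invocations look like ordinary async calls and scale out in parallel.
    const results = await Promise.all(
      ["alice", "bob", "carol"].map(name => m.functions.hello(name))
    );
    console.log(results);
  } finally {
    // Tear everything down so no infrastructure is left behind.
    await m.cleanup();
  }
}

main();
```

The appeal is that the functions themselves stay plain Node code, while provisioning, invocation, and teardown collapse into a couple of calls.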

And here’s a quick visualization of the architecture.

PHP in 2019

Here’s a nice overview of “PHP in 2019” for those who are still avoiding the language because of preconceptions or outdated impressions.

TL;DR
* PHP is actively developed with a new release each year
* Performance since the PHP 5 era has doubled, if not tripled
* There’s an extremely active ecosystem of frameworks, packages and platforms
* PHP has had lots of new features added to it over the past few years, and the language keeps evolving
* Tooling like static analysers has matured over the past few years, and keeps improving

And here’s a non-exhaustive list of recent features that have made it into the language:
* Short closures
* Null coalescing operator
* Traits
* Typed properties
* Spread operator
* JIT compiler
* FFI
* Anonymous classes
* Return type declarations
* Contemporary cryptography
* Generators
* Lots more

CSSFX – beautifully simple click-to-copy CSS effects

CSSFX is a collection of CSS effects with previews and very simple implementation instructions. Just click on the effect demo that you like, and a popup with HTML and CSS code snippets will appear, ready to be used on your site.

There’s also a GitHub repository, if you prefer to browse the effects that way.

Debugging in Vim

Personally, I’m not a frequent user of debuggers. Most of the projects and code that I’m involved with are easily debugged with good old “die(‘here’)”. But if you are looking for some help on how to use Vim with a debugger, have a look at the “Debugging in Vim” blog post.

Why software projects take longer than you think – a statistical model

“Why software projects take longer than you think – a statistical model” is an interesting take on the problem of bad estimates in software projects. I’m not that great with math, but even so, the article is very interesting. And there is a lot in it that I agree with.

Here’s a quote for those of you who couldn’t make it through:

Why software tasks always take longer than you think

Assuming this dataset is representative of software development (questionable!), we can infer some more numbers. We have the parameters for the t-distribution, so we can compute the mean time it takes to complete a task, without knowing what the σ for that task is.


While the median blowup factor imputed from this fit is 1x (as before), the 99% percentile blowup factor is 32x, but if you go to 99.99% percentile, it’s a whopping 55 million! One (hand wavy) interpretation is that some tasks end up being essentially impossible to do. In fact, these extreme edge cases have such an outsize impact on the mean, that the mean blowup factor of any task ends up being infinite. This is pretty bad news for people trying to hit deadlines!
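For those wondering how a median of 1x can coexist with an infinite mean, here is a rough sketch of the argument as I read it. The fitted parameters aren’t reproduced in the quote, so this only shows the shape of the reasoning, not the article’s exact numbers.

```latex
% Sketch of the "infinite mean" argument (my reading; fitted parameters assumed, not quoted).
Let $B$ be the blowup factor (actual time / estimated time) and $X = \log B$.
The post models $X$ with a Student-$t$ distribution, whose density has only
polynomial tails:
\[
  f_\nu(x) \propto \Bigl(1 + \tfrac{x^2}{\nu}\Bigr)^{-\frac{\nu+1}{2}}
  \;\sim\; |x|^{-(\nu+1)} \quad \text{as } |x| \to \infty .
\]
The median is untouched by the heavy tails, since
$\operatorname{median}(B) = e^{\operatorname{median}(X)} \approx e^{0} = 1\times$,
and any percentile is just an exponentiated $t$-quantile, $B_p = e^{Q_X(p)}$,
which is where figures like the $32\times$ at the 99\% level come from.
The mean, however, is $\mathbb{E}[B] = \mathbb{E}[e^{X}]$, and an exponential
always beats a polynomial tail:
\[
  \mathbb{E}[e^{X}]
  = \int_{-\infty}^{\infty} e^{x} f_\nu(x)\,dx
  \;\geq\; c \int_{1}^{\infty} e^{x}\, x^{-(\nu+1)}\,dx
  = \infty .
\]
So under this fit the expected blowup of any single task is infinite, even
though the median task comes in roughly on estimate.
```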