RelEng 2015 (Part 2)
This is the second part of my report on the RelEng 2015 conference that I attended a few weeks ago. Don't forget to read part 1!
I've put up links to the talks whenever I've been able to find them.
Defect Prediction
Davide Giacomo Cavezza gave a presentation about his research into defect prediction in rapidly evolving software. The question he was trying to answer: can we predict whether any given commit is defective? He compared static models against dynamic models that adapt over the lifetime of a project. The models look at various metrics about the commit, such as:
Number of files the commit is changing
Entropy of the changes
The amount of time since the files being changed were last modified
Experience of the developer
His simulations showed that a dynamic model yields superior results to a static model, but that even a dynamic model doesn't give sufficiently accurate results.
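To make the idea concrete, here's a minimal sketch of the two pieces: computing per-commit features like those listed above, and updating a model incrementally so it adapts over the project's lifetime. This is not Davide's actual model; the feature definitions and the use of scikit-learn's SGDClassifier are my own assumptions for illustration.

```python
# A minimal sketch (not Cavezza's models): extract per-commit features
# and update a classifier incrementally so it adapts as the project evolves.
import math
from sklearn.linear_model import SGDClassifier

def commit_features(files_changed, lines_per_file, days_since_touched, author_commits):
    """Build a feature vector for one commit.

    files_changed      -- number of files the commit touches
    lines_per_file     -- lines changed in each file (for change entropy)
    days_since_touched -- days since each touched file was last modified
    author_commits     -- prior commits by this author (experience proxy)
    """
    total = sum(lines_per_file) or 1
    # Entropy of how the change is spread across files: a commit that
    # scatters edits over many files is riskier than a focused one.
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in lines_per_file if n)
    return [files_changed, entropy, min(days_since_touched), author_commits]

# The "dynamic" ingredient: partial_fit keeps training as each commit's
# defectiveness becomes known, instead of freezing a one-off static model.
model = SGDClassifier(loss="log_loss")
model.partial_fit([commit_features(3, [10, 5, 1], [2, 40, 365], 120)],
                  [1], classes=[0, 1])
```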
I wonder about adopting something like this at Mozilla. Would a warning that a given commit looks particularly risky be a useful signal from automation?
Continuous Deployment and Schema Evolution
Michael de Jong spoke about a technique for coping with SQL schema evolution in continuously deployed applications. There are a few major issues with changing SQL schemas in production, including:
schema changes typically block other reads/writes from occurring
application logic needs to be synchronized with the schema change. If the schema change takes non-trivial time, which version of the application should be running in the meantime?
Michael's solution was to essentially create forks of all tables whenever the schema changes, and to identify each by a unique version. Applications need to specify which version of the schema they're working with. There's a special DB layer that manages the double writes to both versions of the schema that are live at the same time.
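Here's a toy sketch of what that double-write layer might look like, using SQLite and made-up table names. Michael's actual system is far more sophisticated (it also has to migrate existing rows, for one), so treat this purely as illustration of the idea.

```python
# Toy sketch of the double-write idea: two versions of a "users" table are
# live at once, and a thin layer fans writes out to both while each
# application version reads only the schema version it knows about.
import sqlite3

db = sqlite3.connect(":memory:")
# Version 1: original schema. Version 2: fork with an extra column.
db.execute("CREATE TABLE users_v1 (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def insert_user(user_id, name, email=None):
    # Double write: keep both live schema versions consistent so old and
    # new application versions can run side by side during the migration.
    db.execute("INSERT INTO users_v1 (id, name) VALUES (?, ?)", (user_id, name))
    db.execute("INSERT INTO users_v2 (id, name, email) VALUES (?, ?, ?)",
               (user_id, name, email))

def fetch_user(user_id, schema_version):
    # Each application instance declares which schema version it targets.
    table = {"v1": "users_v1", "v2": "users_v2"}[schema_version]
    return db.execute(f"SELECT * FROM {table} WHERE id = ?", (user_id,)).fetchone()

insert_user(1, "alice", "alice@example.com")
print(fetch_user(1, "v1"))  # old app sees: (1, 'alice')
print(fetch_user(1, "v2"))  # new app sees: (1, 'alice', 'alice@example.com')
```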
Securing a Deployment Pipeline
Paul Rimba spoke about securing deployment pipelines.
One of the areas of focus was the "build server". He started with the observation that a naive implementation of a Jenkins infrastructure has the master and workers on the same machine, so the workers have the opportunity to corrupt the state of the master, or of other job types' workspaces. His approach to hardening this part of the pipeline was to isolate each job type into its own Docker worker. He also mentioned using a microservices architecture to isolate each task into its own discrete component, which is easier to secure and reason about.
We've been moving this direction at Mozilla for our build infrastructure as well, so it's good to see our decision corroborated!
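To give a flavor of the isolation approach, here's a hedged sketch of running each job type in its own throwaway Docker container. The image naming scheme and the little job runner are hypothetical, not Paul's implementation.

```python
# Sketch of per-job-type isolation: each job runs in a fresh, disposable
# container, so a malicious or buggy job can't touch the master's state
# or another job type's workspace. Image names here are hypothetical.
import subprocess

def run_job(job_type, command):
    # --rm gives a clean environment per run; a separate image per job
    # type means jobs can't see each other's toolchains or caches.
    return subprocess.run(
        ["docker", "run", "--rm",
         "--network=none",             # no network unless the job needs it
         f"build-worker-{job_type}",   # hypothetical per-job-type image
         "sh", "-c", command],
        capture_output=True, text=True, check=True)

result = run_job("linux64", "make build")
print(result.stdout)
```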
Makefile analysis
Shurui Zhou presented some data about extracting configuration information from build systems. Her goal was to determine whether all possible configurations of a system were valid.
The major problem here is make (of course!). Expressions such as $(shell uname -r) make static analysis impossible.
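To see why, here's a tiny illustration of the problem: a static reader can resolve literal variable assignments, but a value like $(shell uname -r) only exists at build time, on the machine doing the build. The mini-Makefile below is made up for the example.

```python
# A static reader of a Makefile can resolve literal assignments, but has
# to give up on $(shell ...): its value depends on executing a command.
import re

makefile = """
CC = gcc
KERNEL = $(shell uname -r)
"""

for line in makefile.strip().splitlines():
    name, value = (part.strip() for part in line.split("=", 1))
    if re.search(r"\$\(shell\b", value):
        print(f"{name}: unknown statically (depends on running '{value}')")
    else:
        print(f"{name} = {value}")
```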
We had a very good discussion about build systems following this, and how various organizations have moved themselves away from using make. Somebody described these projects as requiring a large amount of "activation energy". I thought this was a very apt description: lots of upfront cost for little visible change.
However, people unanimously agreed that make was not optimal, and that the benefits of moving to a more modern system were worth the effort.