I noticed a curious symmetry the other day. For several values of X, a dynamic approach has been gaining traction over a static approach, in some cases for several years.
X = Languages
The Ascendency of Dynamic Languages vs. Static Languages
This one is pretty obvious. It’s hard not to notice the resurgent interest in dynamically typed languages, like Ruby, Python, Erlang, and even stalwarts like Lisp and Smalltalk.
There is a healthy debate about the relative merits of dynamic vs. static typing, but the “hotness” factor is undeniable.
X = Correctness Analysis
The Ascendency of Dynamic Correctness Analysis vs. Static Correctness Analysis
Static analysis of code to prove correctness has been a research topic for years, and the tools have become pretty good. If you’re in the Java world, tools like PMD and FindBugs find a lot of real and potential issues.
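To make that concrete, here’s a contrived class (the names are mine, purely for illustration) showing two classic bug patterns that FindBugs flags: comparing strings with == and a possible null dereference.

```java
public class Suspect {

    // FindBugs flags reference comparison of strings: == compares
    // object identity, which is almost never what was intended.
    public boolean isAdmin(String role) {
        return role == "admin";   // should be "admin".equals(role)
    }

    // FindBugs also flags possible null dereferences: if lookup()
    // returns null, the call to trim() throws NullPointerException.
    public String normalize(String key) {
        String value = lookup(key);
        return value.trim();      // no null check on value
    }

    private String lookup(String key) {
        return key.isEmpty() ? null : key;
    }
}
```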
One thing none of these tools has ever been able to do is analyze your code’s conformance to your project’s requirements. I suppose you could build such tools using the same analysis techniques, but the cost would be prohibitive for individual projects.
However, while analyzing the code statically for this kind of correctness is very hard, watching what the code actually does at runtime, using automated tests, is far more tractable and cost-effective.
Test-driving code results in a suite of unit, feature, and acceptance tests that do a good enough job, for most applications, of finding logic and requirements bugs. The way test-first development improves the design helps ensure correctness in the first place.
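As a minimal sketch of what accumulates in such a suite, here’s a unit test in JUnit 4 syntax (the ShoppingCart class is hypothetical):

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Each test pins down one piece of required behavior, so the suite
// doubles as an executable statement of the requirements.
public class ShoppingCartTest {

    @Test
    public void totalIsZeroForAnEmptyCart() {
        ShoppingCart cart = new ShoppingCart();
        assertEquals(0.0, cart.total(), 0.001);
    }

    @Test
    public void totalSumsTheItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("apple", 0.50);
        cart.add("bread", 2.25);
        assertEquals(2.75, cart.total(), 0.001);
    }
}
```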
It’s worth emphasizing that automated tests exercise the code using representative data sets and scenarios, so they don’t constitute a proof of correctness. For most applications, though, that level of assurance is all you need.
X = Optimization
The Ascendency of Dynamic Optimization vs. Static Optimization
Perhaps the least well known of these X’s is optimization. Mature compilers like gcc have sophisticated optimizations based on static analysis of code (you can see where this is going…).
On the other hand, the javac compiler does very little optimization; most of that work happens in the JVM at runtime.
The JVM watches the code execute and performs optimizations that a static compiler could never do safely, such as speculatively inlining polymorphic method calls based on which types actually have their methods invoked at a given call site. It inserts low-overhead guards to confirm that its assumptions remain valid on each invocation; if one fails, the JVM de-optimizes the code.
The JVM can do these optimizations because it sees how the code is really used at runtime, while a static compiler, looking only at the source, has no way to know.
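Here’s a sketch of the kind of call site this enables (the Shape types are hypothetical; the comments describe typical HotSpot behavior):

```java
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class HotLoop {
    public static void main(String[] args) {
        Shape shape = new Circle(2.0);
        double sum = 0.0;
        // shape.area() is a polymorphic call, but if the JVM observes
        // that only Circle ever reaches this call site, it can inline
        // Circle.area() directly, protected by a cheap type-check
        // guard. If another Shape implementation shows up here later,
        // the guard fails and the JVM de-optimizes back to a full
        // virtual dispatch.
        for (int i = 0; i < 10000000; i++) {
            sum += shape.area();
        }
        System.out.println(sum);
    }
}
```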
Just as for correctness analysis, static optimizations can only go so far. Dynamic optimizations simply bypass a lot of the difficulty and often yield better results.
Steve Yegge recently provided a nice overview of JVM optimizations as part of a larger discussion on dynamic languages.
There are other dynamic vs. static things I could cite (think networking), but I’ll leave it at these three for now.