Saturday, September 7, 2013



Inner Pattern to mimic Method Overwriting in Go

Go [1, 2] is a relatively new statically typed programming language developed at Google that compiles to the metal. It feels like a modernized C in that it resembles C a lot, but has closures, garbage collection, communicating sequential processes (CSP) [3, 4, 5] and indeed very fast build times (and no exception handling, no templates, but also no macros). Builds in Go are so fast that they feel almost instant, like working with a dynamically typed scripting language with practically no turnaround times. Performance of Go 1.2 is roughly in the same league as Java [6, 7, 8, 9]. It could be faster, but it is already a lot faster than scripting languages like Ruby, Python, or Lua. Go is also somewhat underpowered at the moment, as there is still room for optimization. Judging from job ads, Go seems to appeal most to people in the Python, PHP, or Ruby camp. Contrary to Python, Ruby, and Lua, Go does multi-threading well and makes thread synchronization easier through the use of CSP [10].


Go relies on delegation, which Go aficionados call embedding (see the chapter "Inheritance" in [2]). There is no inheritance and hence no method overriding. This means that "f.MoreMagic()" in the last line of the code snippet below (sample shamelessly stolen from [2]) does not print "foo magic" to the console as one might expect, but "base magic":

package main

import "fmt"

type Base struct{}

func (Base) Magic() { fmt.Print("base magic") }
func (self Base) MoreMagic() {
    self.Magic()
}

type Foo struct {
    Base
}

func (Foo) Magic() { fmt.Print("foo magic") }

func main() {
    f := new(Foo)
    f.Magic()     //=> foo magic
    f.MoreMagic() //=> base magic
}

So is there a way to mimic method overriding in Go at a reasonable cost? Many Go developers would consider the mere idea of mimicking it in Go as not in line with the language and beside the point. But method overriding offers a great deal of flexibility: being able to redefine default inherited behavior lets an object or algorithm behave as appropriate for its kind in a transparent way.

Inner Pattern

What first came to my mind when looking at the Magic/MoreMagic sample was the inner construct [12] in the Beta programming language, which is a kind of opposite of super. Applying that idea to this case, I came up with a solution which I named the "inner pattern":

package main

import "fmt"

type Animal interface {
    ActInner(inner Animal) // tells developer to define ActInner being called from Act
    makeNoise()
}

type Mamal struct {
}

func (self Mamal) makeNoise() {
    fmt.Println("default noise")
}

func (self Mamal) ActMamal() {
    self.ActInner(self)
}

func (self Mamal) ActInner(inner Animal) {
    inner.makeNoise()
}

type Dog struct {
    Mamal
}

func (self Dog) makeNoise() {
    fmt.Println("woof! woof!")
}

func (self Dog) Act() {
    self.ActInner(self)
}

type Cat struct {
    Mamal
}

func (self Cat) makeNoise() {
    fmt.Println("meow! meow!")
}

func (self Cat) Act() {
    self.ActInner(self)
}

func main() {
    dog := new(Dog)
    dog.ActMamal() // prints "default noise" but not "woof! woof!"
    dog.Act()      // prints "woof! woof!" as expected

    cat := new(Cat)
    cat.ActMamal() // prints "default noise" but not "meow! meow!"
    cat.Act()      // prints "meow! meow!" as expected
}

Note that the function makeNoise would have to be exported (i.e. MakeNoise) if the structs Cat and Dog were placed in separate packages, which for simplicity's sake is not the case in the sample code above. Otherwise, the code would still compile, but at runtime Mamal.makeNoise would always be called instead of Cat.makeNoise or Dog.makeNoise (depending on the type of the receiver object).

So we get "method overriding" this way, at the cost of sticking to some kind of convention: if the struct being delegated to has a method with a parameter named inner, like ActInner(inner Animal), we need to add a method Act() in our "subclass":

func (self Dog) Act() {
    self.ActInner(self)
}

func (self Cat) Act() {
    self.ActInner(self)
}

This solution is not as nicely transparent as, for example, in Java where you would just add a method act() to your subclass that overrides the inherited method act() and that's it. Come to think of it, in C++ you can only override an inherited method if the inherited one is marked as virtual. So in C++ and other languages like Kotlin [13] or Ceylon [14] you also need to "design ahead" and decide whether a method is intended to be overridable. And this solution with ActInner(inner Animal) in Go does not even carry the runtime overhead of dynamically dispatched virtual functions.

Also, in case struct Dog or Cat does not implement the function makeNoise, the function Mamal.makeNoise() will be called at runtime through the inner parameter of ActInner. The Go compiler won't complain about some "subclass" Dog or Cat not implementing the "abstract" method makeNoise, as it would, for instance, in Java or other OO languages that support abstract classes. There is no way around that: in the end, a price has to be paid for not having the performance penalty of dynamically dispatched method calls that OO languages with method overriding incur.
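
For contrast, here is a minimal Java sketch of the transparent overriding described above (class names chosen to mirror the Go sample; this is my illustration, not code from the original post): makeNoise is declared abstract, so the compiler rejects any concrete subclass that fails to implement it, and act() picks up the override without any inner convention:

```java
// makeNoise() is abstract: the compiler enforces that every concrete
// subclass implements it, unlike Go, where a missing makeNoise would
// silently fall back to the embedded default.
abstract class Animal {
    abstract void makeNoise();

    // "Template method": act() dispatches to the subclass dynamically,
    // no inner/ActInner convention required.
    void act() {
        makeNoise();
    }
}

class Dog extends Animal {
    void makeNoise() { System.out.println("woof! woof!"); }

class Cat extends Animal {
    void makeNoise() { System.out.println("meow! meow!"); }

public class AnimalDemo {
    public static void main(String[] args) {
        new Dog().act(); // prints "woof! woof!"
        new Cat().act(); // prints "meow! meow!"
    }
}
```

The price for this transparency is, as discussed, the dynamic dispatch of virtual method calls.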


My preliminary conclusion from all this is to use the "inner pattern" in Go as an ex post refactoring measure when code starts to lack "too much" transparency. Applying it from the start is too much pain for an uncertain gain. Otherwise, only apply it ex ante when it is clear from the beginning that the flexibility will be needed anyway, as for example with templated algorithms.

By the way, I think Rob Pike is right in what he says about CSP [10]. So I had a look at how it can be done in Java. Groovy has it already [15] and for Java there is [16]. When casting CSP into an object-oriented mold you end up with something like actors. The best actor implementation for Java/Scala is probably Akka.


[1] Effective Go
[2] Google Go Primer
[3] Communicating Sequential Processes by Hoare
[4] Goroutines
[5] Race Detector
[6] Benchmarks Game
[7] Performance of Rust and Dart in Sudoku Solving
[8] Benchmarks Round Two: Parallel Go, Rust, D, Scala
[9] Benchmarking Level Generation in Go, Rust, Haskell, and D
[10] Rob Pike in "Origins of Go Concurrency" (YouTube video, from about position 29:00):

"The thing that is hard to understand until you've tried it is that the whole business about finding deadlocks and things like this doesn't come up very much. If you use mutexes and locks and shared memory it comes up about all the time. And that's because it is just too low level. If you program like this (using channels) deadlocks don't happen very much. And if they do it is very clear why, because you have got this high-level state of your program 'expressing this guy is trying to send here and he can't, because he is here'. It is very very clear as opposed to being down on the mutex and shared memory level where you don't know who owns what and why. So I'm not saying it is not a problem, but it is not harder than any other class of bugs you would have."
[11] Go FAQ Overloading
[12] Super and Inner — Together at Last! (PDF)
[13] Inheritance in Kotlin
[14] Inheritance in Ceylon
[15] CSP in Groovy
[16] Go-style Goroutines in Java and Scala using HawtDispatch

Friday, August 30, 2013

Implicits in Scala: Conversion on Steroids

Also published on, see link

With the use of implicits in Scala you can define custom conversions that are applied implicitly by the Scala compiler. Other languages also provide support for conversions, e.g. C++ provides conversion operators. Implicits in Scala go beyond what the C++ conversion operator makes possible. At least I don't know of any other language where implicit conversion goes as far as in Scala. Let's have a look at some sample Scala code to demonstrate this:

class Foo {
    def foo {
        print("foo")
    }
}

class Bar {
    def bar {
        print("bar")
    }
}

object Test {
    implicit def fooToBarConverter(foo: Foo) = {
        print("before ")
        print(" after ")
        new Bar
    }

    def main(args: Array[String]) {
        val foo = new Foo // calling bar on a Foo triggers the implicit conversion
    }
}

Running Test.main will print to the console: "before foo after bar". What is happening here? When Test.main is run, the method bar is invoked on foo, which is an instance of class Foo. However, there is no such method bar defined in class Foo (nor in any superclass). So the compiler looks for an implicit conversion where Foo is converted to some other type. It finds the implicit fooToBarConverter and applies it. Then it tries again to invoke bar, but this time on an instance of class Bar. As class Bar defines a method named bar, the problem is resolved and compilation continues. For a more detailed description of the compilation rules for implicits see this article by Martin Odersky, Lex Spoon, and Bill Venners. Note that the conversion code from Foo to Bar is defined neither in class Foo nor in class Bar. This is what makes Scala implicits so powerful in the given sample (and also a bit dangerous, as we shall see in the following).

If we tried to get something similar accomplished in C++ we would end up with something like this (C++ code courtesy of Paavo Helde, a helpful soul on comp.lang.c++):

#include <iostream>

class Foo {
    void foo() {
        std::cout << "foo\n";

class Bar {
    Bar(Foo foo) {
        std::cout << "before\n";;
        std::cout << "after\n";
    void bar() {
        std::cout << "bar\n";

void bar(Bar bar) {; }

int main() {
    Foo foo;
    bar(foo); // Foo is implicitly converted to Bar via Bar's converting constructor
}

There are also "to" conversion operators, defined with syntax like:

class Foo {
    operator Bar() const { return Bar(...); }
};

Note that such conversions are often considered "too automatic" for robust C++ code; thus the "operator Bar()" style conversion operators are commonly just avoided, and single-argument constructors like Bar(Foo foo) are marked with the explicit keyword, so that code must explicitly mention Bar in the invocation, e.g. bar(Bar(foo)).

The C++ code, including the comment in the paragraph above, is courtesy of Paavo Helde. As can be seen, it is not possible in C++ to achieve the same result as with implicits in Scala: there is no way to move the conversion code completely out of both classes Foo and Bar and still get things to compile. So conversion in C++ is less powerful than in Scala on the one hand. On the other hand, it is also less scary than implicits in Scala, where it might get difficult to maintain a large code base over time if implicits are not handled with care.

Looking for a matching implicit to resolve a compilation error can keep the compiler busy if it repeatedly has to search through a large code base. This is also why the compiler only tries the first matching implicit conversion it can find and aborts compilation if applying that implicit does not resolve the issue. Also, if implicits are overused, you can run into situations where you need to step through your code with the debugger to figure out which conversion produced a different output than expected. This is an issue that made the people developing Kotlin drop implicits from their Scala-like language (see reference). The problem that you can shoot yourself in the foot when overusing implicits is well known in the Scala community; for instance, "Programming Scala" [1] says on page 189: "Implicits can be perilously close to "magic". When used excessively, they obfuscate the code's behavior. (...) In general, implicits can cause mysterious behavior that is hard to debug! (...) Use implicits sparingly and cautiously.".

What remains on the positive side is a powerful language feature that has often proven very useful when applied with care. For instance, Scala implicits do a great job of gluing together disparate APIs transparently or of achieving genericity. This article only dealt with one specific aspect of implicits. Scala implicits have many other applications, see for example this article by Martin Odersky, Lex Spoon, and Bill Venners.

[1] "Programming Scala", Dean Wampler & Alex Payne, O'Reilly, September 2009, 1st Edition.

Sunday, June 9, 2013

Go-style Goroutines in Java and Scala using HawtDispatch

Also published on, see link

In Google Go any function or closure can be run asynchronously when prefixed with the keyword go. According to the documentation "(...) they're called goroutines because the existing terms—threads, coroutines, processes, and so on—convey inaccurate connotations" (see reference). For that reason I also stick to the term goroutine. Goroutines are the recommended method for concurrent programming in Go. They are lightweight and you can easily create thousands of them. To make this efficient, goroutines are multiplexed onto multiple OS threads. Networking and concurrency are really what Go is about.

This blog post describes how to make use of HawtDispatch to achieve a very similar result in Java. HawtDispatch is a thread pooling and NIO event notification framework which does the thread multiplexing that in Go is built into the language. There is also a Scala version of HawtDispatch, so the approach described here for Java can be applied in the same way in Scala. The code shown in this post can be downloaded here from GitHub (includes a Maven pom.xml to get HawtDispatch installed). Go provides channels as a means for goroutines to exchange information. We can model channels in Java through JDK5 BlockingQueues.
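
Before bringing in HawtDispatch, the channel idea itself can be sketched with nothing but the JDK (the class name ChannelSketch is mine): the BlockingQueue plays the role of the Go channel, and a plain thread stands in for the goroutine:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ChannelSketch {
    public static void main(String[] args) throws InterruptedException {
        // The BlockingQueue plays the role of a Go channel: ch := make(chan int)
        BlockingQueue<Integer> channel = new LinkedBlockingQueue<>();

        // A plain thread stands in for the goroutine: go func() { ... }()
        new Thread(() -> {
            int result = 0;
            for (int i = 0; i < 100; i++) {
                result += i;
            }
            channel.offer(result); // channel send: ch <- result
        }).start();

        int sum = channel.take(); // channel receive: sum := <-ch (blocks until sent)
        System.out.println("The sum is: " + sum);
    }
}
```

What HawtDispatch adds on top of this bare-bones version is the efficient multiplexing of many such tasks onto a small thread pool.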

Let's have a look at some Go code that makes use of goroutines and channels (the sample code is shamelessly stolen from this article, see the chapter named "Channels"):

ch := make(chan int)

go func() {
  result := 0
  for i := 0; i < 100000000; i++ {
    result = result + i
  }
  ch <- result
}()

/* Do something for a while */

sum := <-ch // This will block if the calculation is not done yet
fmt.Println("The sum is: ", sum)

Making use of JDK8 default methods we can define something like a go keyword in our Java world. For that purpose I created one named async (with pre-JDK8 we would have to stick to slightly less elegant static methods):

public interface AsyncUtils {
    default public void async(Runnable runnable) {
        Dispatch.getGlobalQueue().execute(runnable);
    }
}

The async method will execute Runnables on a random thread of a fixed-size thread pool. If you wanted to implement something like actors using HawtDispatch, you would use serial dispatch queues. Here is a simplistic actor implemented using HawtDispatch (with queueing being serial through the use of the queue class DispatchQueue):

public class HelloWorldActor {
    private DispatchQueue queue = Dispatch.createQueue();

    public void sayHello() {
        queue.execute(() -> { System.out.println("hello world!"); });
    }

    public static void main(String[] args) {
        HelloWorldActor actor = new HelloWorldActor();
        actor.sayHello(); // asynchronously prints "hello world"
    }
}

To be precise, the HelloWorldActor in the snippet above is more of an active object, as functions are scheduled rather than messages as with actors. This little actor sample was shown to demonstrate that you can do much more with HawtDispatch than just running methods asynchronously. Now it is time to implement the Go sample in Java with what we have built up so far. Here we go:

public class GoroutineTest implements AsyncUtils {
    public void sumAsync() throws InterruptedException {
        BlockingQueue<Integer> channel = new LinkedBlockingQueue<>();

        async(() -> {
            int result = 0;
            for (int i = 0; i < 100000000; i++) {
                result = result + i;
            }
            channel.offer(result); // send the result down the "channel"
        });

        /* Do something for a while */

        int sum = channel.take(); // blocks if the calculation is not done yet
        System.out.println("The sum is: " + sum);
    }

    public void tearDown() throws InterruptedException {
    }
}

The code presented here would also work with pre-JDK8, since JDK8 is not a requirement for HawtDispatch. I just preferred to make use of JDK8 lambdas and defender methods to make the sample code more compact.

Wednesday, January 2, 2013

JDK8 lambdas and anonymous classes

Preview releases of the upcoming JDK8, including the long-awaited lambdas, have meanwhile been available for several months. Time to have a look at lambdas to see what they are and what you can expect from them.

So today I downloaded the latest preview release of the JDK8 to have a look at the upcoming lambdas. To my despair, this code snippet did not compile:

        List<Integer> ints = new ArrayList<>();
        ints.addAll(Arrays.asList(1, 2, 3));

        int sum = 0;
        ints.forEach(i -> { sum += i; });

The compiler error was: "value used in lambda expression should be effectively final". The compiler complains here that the variable sum has not been declared final. Also see this blog post, which is part of the JDK8 lambda FAQ and explains the matter (I perpetually insist on having found the issue independently from this post ;-)). So lambdas in JDK8 carry exactly the same restriction as anonymous classes, and you have to resort to the same kind of workaround:

int sumArray[] = new int[] { 0 };
ints.forEach(i -> { sumArray[0] += i; });
System.out.println(sumArray[0]); // prints 6


This works and prints 6 as expected. Note that the compiler did not complain here about sumArray not being declared final, as it is effectively final: "A variable is effectively final if it is never assigned to after its initialization" (see link). This is a new feature in JDK8, as the code below does not compile with a pre-JDK8 if value is not declared final:

final long[] value = new long[] { 0 };
Runnable runnable = new Runnable() {
    public void run() {
        value[0] = System.currentTimeMillis();
    }
};
However, this means that JDK8 lambdas are not true closures since they cannot refer to free variables, which is a requirement for an expression to be a closure:

"When a function refers to a variable defined outside it, it's called a free variable. A function that refers to a free lexical variable is called a closure.". Paul Graham, ANSI Common Lisp, Prentice Hall, 1996, p.107.

The free variable gives the closure expression access to its environment:

"A closure is a combination of a function and an environment.". Paul Graham, ANSI Common Lisp, Prentice Hall, 1996, p.108.

In the end we can conclude that JDK8 lambdas are less verbose than anonymous classes (and there is no instantiation overhead as with anonymous classes, since lambdas compile to method handles), but they carry the same restrictions. The lambda specification (JSR 335) also says so explicitly: "For both lambda bodies and inner classes, local variables in the enclosing context can only be referenced if they are final or effectively final. A variable is effectively final if it is never assigned to after its initialization.". Here is also a link to an article where Neal Gafter himself (who was a member of the BGGA team) tried to explain why inner classes are not closures (read the comments section). However, all this is only a little teardrop, as the usefulness of closures is preserved to a large extent. An immense amount of pre-JDK8 boilerplate code can now be replaced with much more concise expressions. And in the end, you can anyway still do this:
        int sum = ints.reduce(0, (x, y) -> x + y);

Nevertheless, the difference between JDK8 lambdas and closures is worth noting. There is a nice write-up about many of the things you can do with JDK8 lambdas in this blog post. Here is some sample code from it:

List<String> names = Arrays.asList("Alice", "Bob", "Charlie", "Dave");

names.mapped(e -> { return e.length(); })
     .filter(e -> e.getValue() >= 4)
     .sorted((a, b) -> a.getValue() - b.getValue())
     .forEach(e -> { System.out.println(e.getKey() + '\t' + e.getValue()); });

We can also reference a static method:

private static void sayHello() {
    System.out.println("hello!");
}

Runnable runnable = MyClass::sayHello; // assuming sayHello is defined in some class MyClass

The lambda FAQ says about the restriction on local variable capture explained in this article: "The restriction on local variables helps to direct developers using lambdas away from idioms involving mutation; it does not prevent them. Mutable fields are always a potential source of concurrency problems if sharing is not properly managed; disallowing field capture by lambda expressions would reduce their usefulness without doing anything to solve this general problem.".

The author is making the point here that immutable variables, like those declared final, cannot be changed inadvertently by some other thread. A free variable referenced from within a closure expression (but declared outside the closure) is allocated somewhere on the heap, which means that it is not local to some specific stack (hence it is free). Being allocated on the heap, a free variable can be seen by all other threads as well. This way a free variable can effectively become a variable shared between threads, for which access needs to be synchronized to prevent stale data. So the finalness restriction for JDK8 lambdas helps to avoid that kind of trouble. Note, however, that this is only true for simple types or objects that are not nested, as can be seen in the sample code at the beginning of this text with the single-element array sumArray: the final variable sumArray cannot be reassigned, but the single element it holds can be changed at any time.
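
To make the sharing aspect concrete, here is a small illustrative example (the class name SharedCaptureDemo and the loop bound are mine): the effectively final array below is captured by a lambda executed on another thread, so the element it holds is genuinely shared between threads; the join() provides the synchronization that makes the final value visible:

```java
public class SharedCaptureDemo {
    public static void main(String[] args) throws InterruptedException {
        // Effectively final reference, but the element it holds is mutable,
        // lives on the heap, and is visible to any thread holding the reference.
        final int[] counter = new int[] { 0 };

        Thread t = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                counter[0]++; // mutation of shared state from another thread
            }
        });
        t.start();
        t.join(); // establishes a happens-before edge, so the value below is reliable

        System.out.println(counter[0]); // prints 1000
    }
}
```

Without the join() (or some other synchronization), reading counter[0] from the main thread would be exactly the kind of data race the finalness restriction is trying to steer developers away from.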

Sunday, October 21, 2012

Why I like Eclipse and sometimes not

I learned from the comedy movie Borat that a typical way to turn a statement into a humorous one is to append "not" at the end of it. So I did this as well in the title of this article. Admittedly, the main reason is, though, that no one would otherwise read an article titled "Why I like Eclipse" ... ;-).

I often happen to meet people in projects who are really into NetBeans or IntelliJ IDEA and not into Eclipse at all. These people don't understand why someone like me would work with Eclipse (I also use IntelliJ IDEA quite a bit). The problem is that explaining why I like Eclipse results in a long talk: first about Eclipse background knowledge, which demands a lot of patience and distracts people from their work for too long; secondly about why I feel very productive when good code browsers like those in Eclipse are at my disposal. So I'm trying to explain it in this little article once and for all for the benefit of the world (eventually, you need to append "not" again at this place). Don't worry, it's not going to be one-sided, as I will also talk about the things in Eclipse that are not that amusing. It's more about code browsers and their differences than specifically about Eclipse.

Code Browsing

The real reason I like Eclipse is its powerful code navigation and code browsing capability, only comparable to the code browsing of its big idol, the excellent Smalltalk environment. I'm willing to sacrifice a lot of other things as long as I have that. Let me quote Bertrand Meyer: “Smalltalk is not only a language but a programming environment, covering many of the aspects traditionally addressed by the hardware and the operating system. This environment is Smalltalk’s most famous contribution”. [1] This statement implies that Smalltalk is not just a language with an IDE on top, but a computing environment as such. This coherence has been lost with Eclipse, which has good and bad consequences. But that is a different topic, too long to cover here as well. People who have worked with Smalltalk understand what this means. But people who have not only gaze fixedly at you for a moment and then continue working. So my contribution in this article is aimed at explaining what this is about. Earlier, people had often heard about Smalltalk and were willing to listen for a while. Nowadays, you have to say something like "Smalltalk is the system that already had closures from the beginning in 1980, from which later this clone starting with a 'J' was made". Otherwise people would not even stop coding for a second. Or you have to say something like "Smalltalk is the system Steve Jobs was looking at when visiting Xerox PARC (see also this article about Xerox PARC) when he said that this is the way he wants the user interface to be on the computers he is producing" (a user interface with icons, movable windows that can be collapsed, and a mouse).

What's the catch about excellent code browsing capability then? The problem is that when your code starts to grow, at some point it becomes hard to keep an overview. Well, that is what structured programming is for: you can structure your code, and then there is abstraction, information hiding, modularity, inheritance, polymorphism and more. But at some point you can't remember any more in which class which method was placed, and it is sometimes still hard to keep an overview even with abstraction and all that. I have seen people who are nevertheless able to understand their code very well using only simple development tools. Therefore, I agree that you don't necessarily need an IDE with good code browsers. For some people it's a necessity; for others it's a matter of comfort and maybe also developer productivity.

Eclipse’s heritage from the Smalltalk IDE

Eclipse was developed by the people of a company named OTI in Ottawa, Canada, which used to develop and market the other big Smalltalk development system on the market at that time (besides ParcPlace Smalltalk, now Cincom Smalltalk), which was first Envy/Developer and then OTI Smalltalk (I’m not sure about the name here). When the development of Eclipse started, OTI had already been acquired by IBM, as IBM wanted to sell OTI’s Smalltalk system as IBM Smalltalk, a replacement for their failed CASE tool strategy. The product, named IBM VisualAge for Smalltalk, was also very successful (especially in the finance sector) at a time when there was only C++ and Smalltalk for serious production-quality OO development. Later Java came along and IBM abandoned its Smalltalk system, sold it to Instantiations, and jumped onto the Java train, developing IBM VisualAge for Java. VisualAge for Java was very much like the Smalltalk IDE, only with Java as the programming language: it was an interactive development environment where almost any statement could be selected and executed at development time. You could look at your data in inspectors, in which you could also send messages to objects dynamically at development time. From what I have heard, VisualAge for Java itself was developed in IBM Smalltalk, but I cannot provide evidence for this. This was IMHO a very productive development environment, and everything was fine as long as your application consisted only of the code you were writing. But then web development came along and this was no longer true: now, besides source code files, a plethora of other file types came into play — html, jsp, xml, css, jar, war, ear, and much more — and they all have to be bundled together. This was as much a problem for Smalltalk’s/VA Java’s approach to creating a runtime package as it was for the interactive development style. So VisualAge for Java was abandoned and Eclipse was developed.

If you managed to get to this line, the bits of Eclipse history I had to provide are now behind you ;-).

Code Browsing in Eclipse

So far I have not mentioned why code browsing in Eclipse is so fantastic (let’s say it is better than in many other IDEs at least). There are different browsers for different things. If you are working on code files only, you can use the "Java Browsing" perspective. You see the packages and their classes of your project at a glance, and everything else is removed. You can still have the "Java" perspective, where your Java code and all the other types of files are visible at once. You can have all the browsers you work with side by side, each in a window of its own. Select Window > Preferences > General > Perspectives > Open a new perspective and select "In a new window". From now on, every perspective you open will open in a new window of its own. Most people working with Eclipse I have seen don't know this feature at all. But this is the usual way the Smalltalk IDE was intended to be used. Then Eclipse has an equivalent of the Smalltalk class hierarchy browser. It is also not activated by default. To do so you have to go to Window > Preferences > Java > When opening a type hierarchy and select "Open a new Type Hierarchy Perspective". I always found this browser to be very useful when working on an abstract class and some of its concrete subclasses, because you can really concentrate on just what matters in that regard.

I once had a situation where Eclipse ran out of memory, which was probably caused by the memory demands of the OSGi implementation when building from within Eclipse. But because I was using several browsers at the same time in Eclipse, as I used to do earlier when developing with Smalltalk, some colleague was absolutely sure that having that many browsers open consumed too much memory. When I switch between perspectives that are displayed in the same window, memory remains allocated for all of them, just the same way as when they are opened in their own windows. No way could you switch between perspectives that quickly otherwise. But that argument just didn't fly. Some people are so used to working with a single-window IDE that anything different simply appears weird to them.

And why I sometimes don’t like Eclipse

Eclipse provides a solid base on which to build an IDE for all kinds of things. Its Java plugin is also very useful. But it is not always as good at specific tasks such as code completion (IntelliJ IDEA is IMHO awesome here), refactoring (needless to say that "Refactoring was conceived in Smalltalk circles” [2]), or “intra-file navigation” (jumping from some JSF xhtml statement to the underlying Java code, etc.). It does not have an excellent Swing GUI builder such as NetBeans has. When you develop a web application, all the plugins that come into play are not as nicely integrated and concerted as in IntelliJ IDEA. MyEclipse does not do much about this in the end, either. The weakness of Eclipse, in short, is that it stops after providing a plugin platform and a Java plugin. From then on, everyone is left to their own devices. A lot of nice people have developed very respectable plugins for all kinds of things, but they lack “calibration” with related plugins, rendering them isolated solutions.

Then, Eclipse has become sluggish and sometimes unresponsive. I’m not amused by how often Eclipse is unresponsive and I have to wait till it’s responsive again. I don’t know exactly what the reason is in every case; maybe it's just someone's plugin that is not well written and is causing this. Whatever the case, as already said, other IDEs don’t have this problem, as all the plugins that come into play are inter-coordinated and tastefully furnished.

Last but not least, at the time of writing (21st October 2012) Eclipse still has no support for JDK8 lambdas and default methods. This is because Eclipse’s Java compiler is built into Eclipse and cannot easily be separated (you can define a custom builder for your project which will call the javac of the JDK you defined, but the JDT will still not be able to deal with JDK8-style lambdas). So the whole thing has to be exchanged. This is probably some heritage from Smalltalk as well, where the whole thing was a single system. Back then, this was unmatched coherence (compared to piping together a myriad of little Unix tools); nowadays it's considered inflexible and monolithic. I use IntelliJ IDEA 12 EAP for my current little spare-time JDK8 lambda project. So far there has never been a problem getting any lambda expression compiled and run. Simply amazing.

Last and least, I really wish NetBeans and IntelliJ IDEA also had a class browser like the one in Smalltalk, or something like the “Java Browsing” perspective in Eclipse. When you are working on code only, and no html, xml, css, or whatever files are part of your application, IMHO there is nothing like it. But in today's world there is no way to develop an application without any xml (or json nowadays), for example. Still, I'm convinced there is a way to merge the best of Eclipse/NetBeans/IntelliJ IDEA with the best of the Smalltalk IDE.

1. Bertrand Meyer, Object-oriented Software Construction, Prentice Hall, 1988, p.439.

2. Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts, Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999, p.6.


Samstag, 25. August 2012

Groovy 2.0 Performance compared to Java

Also published on, see link.

At the end of July 2012, Groovy 2.0 was released with support for static type checking and some performance improvements through the use of JDK7 invokedynamic and through type inference, which the type information now available from static typing makes possible.

I was interested in some estimate of how significant the performance improvements in Groovy 2.0 have turned out to be and how Groovy 2.0 now compares to Java in terms of performance. If the performance gap had meanwhile become minor, or at least acceptable, it would certainly be time to take a serious look at Groovy. Groovy has been ready for production for a long time; let's see whether it can compete with Java in terms of performance.

The only performance measurement I could find on the Internet was this little benchmark measurement on jlabgroovy. The measurement consists only of calculating Fibonacci numbers with and without the @CompileStatic annotation. That's it. Certainly not very meaningful for an overall impression, but I was only interested in obtaining a rough estimate of how Groovy now compares to Java performance-wise.

Java performance measurement included

Alas, this little benchmark included no measurement of how much time Java takes to calculate Fibonacci numbers. So I "ported" the Groovy code to Java (here it is) and repeated the measurements. All measurements were done on an Intel Core2 Duo CPU E8400 3.00 GHz using JDK7u6 running on Windows 7 with Service Pack 1. I used Eclipse Juno with the Groovy plugin using Groovy compiler version 2.0.0.xx-20120703-1400-e42-RELEASE. These are the figures I obtained without a warm-up phase:

                   Groovy 2.0                  Groovy 2.0
                   without @CompileStatic      with @CompileStatic     Kotlin    Java
static ternary     4352ms (4.7x)               926ms  (1.0x)           1005ms    924ms
static if          4267ms (4.7x)               911ms  (0.9x)           1828ms    917ms
instance ternary   4577ms (2.7x)               1681ms (1.8x)           994ms     917ms
instance if        4592ms (2.9x)               1604ms (1.7x)           1611ms    969ms

(The factor in parentheses after the times without @CompileStatic is relative to the corresponding @CompileStatic time; the factor after the @CompileStatic times is relative to Java.)
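For reference, the measured code boils down to the naive recursive Fibonacci function. This is a minimal Java sketch of the variants that appear in the measurements, my own reconstruction rather than the linked port; the names and the Fibonacci argument are assumptions:

```java
public class FibBenchmark {
    // "static ternary" variant
    static long fibTernary(int n) {
        return n < 2 ? n : fibTernary(n - 1) + fibTernary(n - 2);
    }

    // "static if" variant
    static long fibIf(int n) {
        if (n < 2) return n;
        return fibIf(n - 1) + fibIf(n - 2);
    }

    // "instance" variants dispatch through an instance method instead
    long fibInstance(int n) {
        return n < 2 ? n : fibInstance(n - 1) + fibInstance(n - 2);
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long result = fibTernary(40); // argument chosen for illustration only
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fib(40) = " + result + " in " + elapsed + "ms");
    }
}
```

The point of having four variants is to see whether the ternary operator versus an if statement, and static versus instance dispatch, make any difference for the respective compilers.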

I also did measurements with warm-up phases of various lengths, concluding that a warm-up benefits neither language, with @CompileStatic or without. Since the Fibonacci algorithm is so heavily recursive, the warm-up phase seems to be "included" for any Fibonacci number that is not very small.
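A warm-up phase in such a micro-benchmark simply means running the measured code a few times before starting the clock, so that the JIT has a chance to compile the hot method. A minimal sketch, with arbitrary iteration counts:

```java
public class WarmupDemo {
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        // warm-up: let the JIT compile fib() before measuring
        for (int i = 0; i < 10; i++) fib(25);

        long start = System.nanoTime();
        long result = fib(30);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("fib(30) = " + result + " took " + elapsedMs + "ms");
    }
}
```

With a deeply recursive call like fib(40), the method is invoked millions of times within a single measurement, which is why a separate warm-up loop makes little difference here.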

We can see that the performance improvements due to static typing have made quite a difference. This little comparison is admittedly not a very ambitious benchmark. But it strongly reinforces my impression that static typing in Groovy, in conjunction with type inference, has led to significant performance improvements, the same way as with Groovy++. With @CompileStatic Groovy is about 1-2 times slower than Java, and without it about 3-5 times slower. Unhappily, the measurements for "instance ternary" and "instance if" are the slowest. Unless we want to create masterpieces in programming with static functions, the measurements for "static ternary" and "static if" are not that relevant for most code with the ambition to be object-oriented (that is, based on instances).


The times when Groovy was some 10-20 times slower than Java (see the benchmark table almost at the end of this article) are definitely over, whether @CompileStatic is used or not. To me this means that Groovy is ready for applications where performance has to be somewhat comparable to Java. Earlier, Groovy (or Ruby, Clojure, etc.) could only serve as a plus on your CV, because of the performance impediment (at least here in Europe).

New JVM kid on the block: Kotlin

I added the figures for Kotlin as well (here is the code). Kotlin is a relatively new statically typed JVM-based Java-compatible programming language. Kotlin is more concise than Java, supporting variable type inference, higher-order functions (closures), extension functions, mixins and first-class delegation, etc. Compared to Groovy, it is geared more towards Scala, but it also integrates well with Java. Kotlin is still under development and not officially released yet, so the figures have to be taken with caution as the people at JetBrains are still working on code optimization (see KT-2687). Ideally, Kotlin should be as fast as Java (see this post). The measurements were done with the current "official" release 0.1.2580.

And what about future performance improvements?

At the time when JDK1.3 was the most recent JDK, I still earned my pay with Smalltalk development. Back then the performance of VisualWorks Smalltalk (now Cincom Smalltalk) and IBM VisualAge for Smalltalk (now owned by Instantiations) was quite comparable to Java. And Smalltalk is a dynamically typed language like pre-2.0 Groovy and Ruby, where the compiler cannot make use of type inference to do optimizations. Because of this, it always seemed strange to me that Groovy, Ruby and other JVM-based dynamic languages had such a big performance penalty compared to Java when Smalltalk had not. Well, come to think of it: HotSpot runtime optimization in Java was taken from Smalltalk anyway (see this article). Nothing beats the arrogance of a Smalltalk developer, not even a Mac enthusiast... It seems that creating a JVM-based language is easier than optimizing its bytecode. From that point of view I think there is still room for Groovy performance improvements beyond @CompileStatic.