Category Archives: Work - Page 2

Regular Encoding of Binary Trees

Encoding of binary trees as a regular language?

A couple of months ago, a question on cstheory.SE asked whether there is a regular encoding of binary trees. My immediate gut feeling was: no way! Binary trees are equivalent to Dyck words, and the language of those is not regular.¹

I was able to back this feeling with an approach using generating functions. I showed that the number of binary trees with \(n\) nodes does not have a rational generating function; as all regular languages have rational structure generating functions, I claimed that this shows that binary trees cannot be encoded with a regular language. This argument turned out to be somewhat fishy, or at least incomplete: an encoding does not have to map all trees of size \(n\) to encodings of size \(f(n)\), and at that point the counting argument breaks down.
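
Concretely, the argument rests on standard facts about Catalan numbers and regular languages; here is a sketch in formulas (the notation is mine, not from the original post):

    % Binary trees with n nodes are counted by the Catalan numbers:
    \[
      C_n = \frac{1}{n+1}\binom{2n}{n},
      \qquad
      C(x) = \sum_{n \ge 0} C_n x^n = \frac{1-\sqrt{1-4x}}{2x}.
    \]
    % C(x) satisfies C(x) = 1 + x\,C(x)^2; the square root gives a branch
    % point at x = 1/4, so C(x) is algebraic but not rational. A regular
    % language, by contrast, always has a rational structure generating
    % function, since the number of words of each length in it satisfies a
    % linear recurrence with constant coefficients (count paths in a DFA).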


  1. Easily shown with the pumping lemma for regular languages.

B. Liskov on the Power of Abstraction

Barbara Liskov giving her Turing lecture

Last week, Barbara Liskov visited Kaiserslautern and gave her talk On the Power of Abstraction. I knew her from the substitution principle named after her¹, which is taught to every computer scientist and programmer, but she is far more distinguished than that: Barbara Liskov is an ACM Fellow, holds the ACM Turing Award and the IEEE John von Neumann Medal, and has received a couple of other awards that testify to her influence on the field.

The talk she gave is actually her Turing lecture, which is obligatory for Turing awardees. The room was so packed that quite a few people had to remain standing, and it was worthwhile to have come. You can watch the lecture in full here (in another recording). For me as a CS novice, the lecture is a small time machine to the seventies: Barbara talks about the history of how abstract datatypes and related concepts came to be. As someone who grew up with languages like Java, it is hard to imagine a world where literally all programming is done at the assembly level. So much of the material presented may feel trivial to the modern mind, but Barbara manages to instill the spirit of the time into her listeners. After her talk, you know why she received awards for things you take for granted today.

Since the lecture itself can be watched online, I just want to share my favorite quotes and anecdotes I wrote down during the talk; I hope I am not misquoting.

[Barbara] did not apply at MIT because she did not want to be a nerd.

I have no idea how I got that idea. [...] It was ready to be discovered.

You never need optimal performance, you need good-enough performance. [...] Programmers are far too hung up with performance.

There were no GOTOs because I believed Dijkstra.

It’s really just common sense.

Why did she get this award? Everybody knows this anyway!

The last quote especially demonstrates, if unintentionally, the impact of Liskov’s and others’ work of that time. If something you conceived is considered common sense 30 years later, you really had a brilliant idea.


  1. She did not name it, of course. Apparently, she received an email in the 1990s from somebody asking her whether he had gotten “her” principle right, which surprised her: she had not known that the principle had borne her name in the community for years.

Coding Scala with Geany and SBT

When I started programming in Scala, I was naturally looking for a suitable IDE. Due to former experience and preference, I disregarded Eclipse and tried Netbeans with its Scala plugin, but could not get it running properly. I had more luck with IntelliJ IDEA, in the sense that it worked. But it performed sluggishly and had a lot of bugs: many compiler errors were not found by IDEA, and some things it marked as errors were none. Scala’s complex type system is not at all well supported, so syntax completion was no help, either. That the Scala compiler itself runs at turtle speed was merely the last drop in my barrel of discomfort. As it was, working productively was impossible. Therefore, I decided to switch to a regular editor and simple-build-tool, a combination I saw mentioned everywhere people asked for good Scala IDEs.
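
For readers who never saw simple-build-tool: in the sbt 0.7.x line that was current back then, a project was described by a small Scala class. A minimal sketch (the class and contents below are placeholders of mine, not a definitive setup):

    // project/build/MyProject.scala: a minimal sbt 0.7.x project definition.
    import sbt._

    // sbt instantiates whatever class extending DefaultProject it finds here.
    class MyProject(info: ProjectInfo) extends DefaultProject(info) {
      // library dependencies and custom tasks would be declared here
    }

Running sbt in the project directory then drops you into an interactive shell, and prefixing a command with ~ (e.g. ~ compile) re-runs it on every source change, a combination that pairs well with a lightweight editor like Geany.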

Sick of Java? Try Scala!

You like Java for its tools and safety, but hate its verbosity? You like strong guarantees and pattern matching in functional programming, but feel constricted by its chains? You like Ruby, Python et al. for their flexibility and conciseness, but hate their unpredictability and sluggishness? Rejoice, because salvation might be at hand. Late last year, I came across the relatively new programming language Scala and immediately became infatuated with it. The official website describes Scala like this:

Scala is a general purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way. It smoothly integrates features of object-oriented and functional languages, enabling Java and other programmers to be more productive. Code sizes are typically reduced by a factor of two to three when compared to an equivalent Java application.

That is, Scala has the full modelling power of object orientation. There is literally nothing you can do in Java that you cannot do in Scala. In fact, Scala arguably does a number of things better than Java: everything is an object (even primitives, functions, and classes), generics are more consistent and powerful, and multiple inheritance is perfectly possible. Additionally, many features you have come to like in functional programming are present in Scala, if not as completely as you might wish; type inference, pattern matching and lazy members are only examples. Also, working with immutable objects (not to say values) is deeply ingrained in both language and library. Type inference together with a number of compiler tricks can give you the illusion of dynamic typing while retaining type safety at all times.
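
Here is a tiny, self-contained sketch of my own (not from the official documentation) showing several of these features at once:

    // Multiple inheritance of behaviour via traits.
    trait Greeter { def greet(name: String): String = "Hello, " + name + "!" }
    trait Logger  { def log(msg: String): Unit = println("[log] " + msg) }

    // An immutable value type; case classes get pattern matching for free.
    case class Point(x: Int, y: Int)

    object Demo extends Greeter with Logger {
      def describe(p: Point): String = p match { // pattern matching
        case Point(0, 0) => "the origin"
        case Point(x, 0) => "on the x-axis at " + x
        case _           => "somewhere else"
      }

      // A lazy member: evaluated at most once, on first access.
      lazy val expensive: Int = { log("computing..."); 42 }

      def main(args: Array[String]): Unit = {
        val p = Point(3, 0)            // the type of p is inferred
        log(greet("Scala"))
        println(describe(p))
        println(expensive + expensive) // "computing..." is printed only once
      }
    }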

In short, Scala lets you model in an object-oriented way, write code (almost) as concisely as in dynamic languages, and stay (almost) as safe as in functional languages.

N. Misra on Polynomial Kernelization

Neeldhara Misra, a PhD student at IMSc (Chennai, India), recently visited our department at Chalmers. I had the opportunity to attend her well-delivered talk about the occasional infeasibility of polynomial kernelization. I liked the ideas she presented, so I want to share them; you can read her own summary and download her complete work on her website.

Idea of Kernelization

Kernelization is about formalising preprocessing for hard problems and figuring out what can and cannot be achieved. In particular, NP-complete problems are studied; for these problems, no fast (i.e. polynomially time-bounded) algorithms have been found yet. The idea is to crunch down a given instance of size \(n\) to a size that is manageable by — more or less — naive algorithms while preserving equivalence, that is, the reduced instance should have a solution if and only if the original instance has one. The notion of manageability is captured by a problem- and instance-dependent parameter \(k\); we say that we have a polynomial kernel if there is an equivalent instance whose size is bounded by a polynomial in \(k\) (i.e. independent of \(n\)). If \(k\) is small — which is often the case in practical scenarios — the kernel can be solved reasonably quickly. Of course, the reduction process itself should then also be fast. It remains to mention that \(k\) has to be chosen wisely; only knowing at least an upper bound on \(k\) enables useful kernelizations.
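
To make this concrete, here is a sketch (my own textbook example, not from the talk) of the classic Buss kernelization for Vertex Cover, parameterised by the solution size \(k\):

    // Buss's kernelization for Vertex Cover: either answer "no" outright or
    // produce an equivalent instance with at most k*k edges.
    object VertexCoverKernel {
      type Edge = (Int, Int)

      sealed trait Result
      case object No                              extends Result // no cover of size <= k
      case class Kernel(edges: Set[Edge], k: Int) extends Result // equivalent small instance

      def kernelize(edges: Set[Edge], k: Int): Result = {
        if (k < 0) return No
        // Rule 1: a vertex of degree > k must be in every cover of size <= k,
        // since otherwise all of its more than k neighbours would have to be.
        val degrees: Map[Int, Int] =
          edges.toSeq.flatMap { case (u, v) => Seq(u, v) }
            .groupBy(identity).map { case (v, occ) => v -> occ.size }
        degrees.find { case (_, d) => d > k } match {
          case Some((v, _)) => // put v into the cover, delete it, recurse
            kernelize(edges.filterNot { case (a, b) => a == v || b == v }, k - 1)
          case None =>
            // Rule 2: every remaining vertex covers at most k edges, so a
            // cover of size k covers at most k*k edges; more edges mean "no".
            if (edges.size > k * k) No else Kernel(edges, k)
        }
      }
    }

The surviving instance has at most \(k^2\) edges, so its size is bounded by a polynomial in \(k\) alone, exactly as in the definition above.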