B. Liskov on the Power of Abstraction (May 30, 2011)

Barbara Liskov giving her Turing lecture

Last week, Barbara Liskov visited Kaiserslautern and gave her talk On the Power of Abstraction. I knew her from the substitution principle named after her [1], which is taught to every computer scientist and programmer, but she is far more distinguished than that: Barbara Liskov is an ACM Fellow and holds the ACM Turing Award, the IEEE John von Neumann Medal, and a couple of other awards, all telltale signs of her influence on the field.

The talk she gave is actually her Turing lecture, which is obligatory for Turing awardees. The room was so packed that quite a few people had to remain standing, and it was worth coming. You can watch another recording of the lecture in full here. For a CS novice like me, the lecture is a small time machine to the seventies: Barbara recounts the history of how abstract data types and related concepts came to be. For someone who grew up with languages like Java, it is hard to imagine a world where literally all programming is done at the assembler level. Much of the material presented should therefore feel trivial to the modern mind, but Barbara manages to instill the spirit of the time into her listeners. After her talk, you know why she received awards for things you take for granted today.

Since the lecture itself can be watched online, I just want to share my favorite quotes and anecdotes I wrote down during the talk; I hope I am not misquoting.

[Barbara] did not apply at MIT because she did not want to be a nerd.

I have no idea how I got that idea. [...] It was ready to be discovered.

You never need optimal performance, you need good-enough performance. [...] Programmers are far too hung up with performance.

There were no GOTOs because I believed Dijkstra.

It’s really just common sense.

Why did she get this award? Everybody knows this anyway!

The last quote in particular demonstrates, if unintentionally, the impact of Liskov's and others' work of that time: if something you conceive is considered common sense 30 years later, you truly had a brilliant idea.


  1. She did not name it herself, of course. Apparently, she received an email in the '90s from somebody asking her whether he got “her” principle right, which surprised her: she had not known that the principle had borne her name in the community for years.

N. Misra on Polynomial Kernelization (October 11, 2010)
Neeldhara Misra, a PhD student from IMSc (Chennai, India), recently visited our department at Chalmers. I had the opportunity to attend her well-delivered talk about the occasional infeasibility of polynomial kernelization. I liked the ideas she presented, so I want to share them; you can read her own summary and download her complete work on her website.

Idea of Kernelization

Kernelization is about formalising preprocessing for hard problems and figuring out what can and cannot be achieved. In particular, NP-complete problems are studied; for these problems, no fast algorithms (i.e. with a polynomial bound on runtime) have been found yet. The idea is to crunch down a given instance of size $n$ to a size that is manageable by more or less naive algorithms while preserving equivalence, that is, the reduced instance should have a solution if and only if the original instance has one. The notion of manageability is captured by a problem- and instance-dependent parameter $k$; we say that we have a polynomial kernel if there is an equivalent instance whose size is bounded by a polynomial in $k$ (i.e. independent of $n$). If $k$ is small, which is often the case in practical scenarios, the kernel can be solved reasonably quickly. Of course, the reduction process itself should then also be fast. It remains to mention that $k$ has to be chosen wisely; only knowing at least an upper bound on $k$ enables useful kernelizations.

Example

Consider the well-known VERTEX COVER problem in its decision version, that is:

Given a graph $G = (V, E)$ and $k \in \mathbb{N}$, check whether there exists a set $C \subseteq V$ such that every edge in $E$ has at least one endpoint in $C$ while $|C| \leq k$.
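Stated in code, the condition is easy to check once a candidate set is given. Here is a tiny verifier as a sketch; the function name and edge representation are my own, not from the talk:

def is_vertex_cover(edges, cover, k):
    """Check the VC condition: the set is small enough and touches every edge."""
    return len(cover) <= k and all(u in cover or v in cover for u, v in edges)

The hard part, of course, is finding such a set among the exponentially many candidates.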

This problem is NP-complete. Now we do some preprocessing, that is, thinking. What about a vertex $v$ with more than $k$ edges? Well, it needs to be included in our set $C$, because if it were not, all of its neighbours would have to be chosen in order to cover all those edges; but there are more than $k$ of them, so that cannot yield a feasible solution. So we remember $v$ as taken and remove it from the graph. We then do the same thing with $k - 1$ as the target size; you can see the recursion.

This process terminates either when we have chosen more than $k$ nodes, or when we arrive at a reduced graph with some remaining parameter $k' \leq k$ for which we cannot eliminate any further vertices. In the former case, we can immediately output NO, after time polynomial in $n$. In the other case, we are left with an instance whose vertices all have at most $k'$ edges. Note that $k'$ such vertices can cover at most $k'^2$ edges. Therefore, by counting edges, we can either immediately output NO, or we know that we have at most $2k'^2$ vertices left after dropping isolated vertices, which are obviously no help at all. The graph that remains is our kernel, with size polynomial in $k$.
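To make the procedure concrete, here is a minimal sketch of this kernelization in Python, under my own naming and with the graph given as a set of edges (neither taken from the talk):

from itertools import combinations  # used by the solver further below

def vc_kernelize(edges, k):
    """Shrink a VERTEX COVER instance with the high-degree rule from above.

    Returns (kernel_edges, remaining_budget), or None if the answer is
    already NO.
    """
    edges = {frozenset(e) for e in edges}
    while True:
        if k < 0:
            return None  # we were forced to take more than k vertices: NO
        # Count degrees in the current graph.
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # A vertex with more than k edges must be in every cover of size <= k.
        v = next((u for u, d in degree.items() if d > k), None)
        if v is None:
            break
        edges = {e for e in edges if v not in e}  # remember v as taken ...
        k -= 1                                    # ... and recurse on k - 1
    # Every remaining vertex has degree <= k, so k of them cover at most
    # k * k edges; more edges than that means NO.
    if len(edges) > k * k:
        return None
    # Isolated vertices are dropped implicitly: only vertices that still
    # occur in an edge are part of the kernel.
    return edges, k

For example, on a star with five leaves and $k = 1$, the centre is taken immediately and the empty graph remains as the kernel.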

Note that $k$ can really be small compared to the number of vertices. Intuitively, if $k$ has some relation to the sparsity of the given graph, the effects can be tremendous, since many graphs out there are very sparse. Think of Facebook, with hundreds of millions of vertices but very small cliques of maybe dozens.

No Free Lunch

In the above example, choosing the classical decision-problem parameter $k$ as the kernelization parameter was both obvious and rewarding. This need not always be the case. Neeldhara assured me that there are problems that have polynomial kernels, but only for far more obscure choices of $k$.

Also, kernelization does not always work. Even if we consider CONNECTED VERTEX COVER, just a simple variation of VC that demands that the chosen cover induce a connected subgraph, no polynomial kernel exists in general. In particular, the above algorithm fails, since we can no longer simply take nodes with degree greater than $k$ and remove them, but must take care of connectivity.

Noteworthy

Looking at the above example, we immediately recognize a strong relation between polynomial kernelizability and fixed-parameter tractability (FPT): after the efficient preprocessing, we get a total runtime bounded by $p(n) + f(k)$, with $p$ a polynomial and $f$ an arbitrary function. This is exactly the kind of runtime that makes a problem tractable, that is, the runtime is bounded polynomially for fixed $k$.
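As an illustration, combining the kernelization sketch from above with an exhaustive search over the kernel yields exactly this runtime shape; again a sketch under my own naming, reusing vc_kernelize and the combinations import from before:

def vc_solve(edges, k):
    """Decide VERTEX COVER: kernelize first, then brute-force the kernel.

    The kernelization runs in time polynomial in n; the search below only
    touches the kernel, whose size is bounded in terms of k alone. Total
    runtime: p(n) + f(k).
    """
    result = vc_kernelize(edges, k)
    if result is None:
        return False
    kernel_edges, budget = result
    vertices = {v for e in kernel_edges for v in e}
    # At most 2 * k^2 kernel vertices, so this search depends on k only.
    for size in range(budget + 1):
        for candidate in combinations(vertices, size):
            chosen = set(candidate)
            if all(e & chosen for e in kernel_edges):
                return True
    return False

For instance, vc_solve([(1, 2), (1, 3), (1, 4)], 1) returns True, with vertex 1 covering all three edges.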

The reverse inclusion does not hold under standard complexity-theoretic assumptions, though; that is, there are FPT problems that have no polynomial kernel; see for example Bodlaender et al. Consequently, a whole theory has been built in order to investigate the precise capabilities of kernelization. There is apparently much to do. Good hunting!
