Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
Interviewing job-seeking candidates is probably the most impactful thing that I do at Microsoft as far as our business is concerned. Sure, the day-to-day work of implementing the compiler is of course what I am specifically there to do. But ultimately nothing impacts the bottom line of our division more than preventing bad hires and encouraging good hires. The dozens of people that I’ve interviewed who got hired will collectively deliver much more value (or do much more damage!) than I can alone. So I think about it a lot.
I find it interesting to notice the common threads that show up in the surprisingly disparate group that is Microsoft interview candidates. A big red flag for me that I see fairly often I jokingly characterize as a form of magical thinking. Now, as someone who values diversity and cherishes the inalienable human right to believe in any old crazy thing you want, I of course do not care at all if candidates engage in magical thinking on their own time. But magical thinking about software concerns me greatly. For example, I should never be allowed anywhere near network drivers, as my beliefs about routing are clearly magical in nature.
The trouble is that I occasionally see this sort of thing in people who are not making silly jokes about it.
The technical interview question I usually ask is a deliberately vague and abstract version of a real problem that I had to solve back when I was working on the VBScript engine. The details aren’t germane to this particular discussion, but suffice to say that every candidate quickly figures out that a necessary step in solving the problem is the automatic generation of a guaranteed-to-be-unique “cookie” value of some sort. The “cookie” is used to track and control the progress of a “task” being performed by a “server”.
You can learn a lot about a candidate from their approach to this sub-problem. Candidates, sensibly enough, always try to solve the unique cookie problem by using an off-the-shelf tool that they are familiar with, like:
- a timestamp
- a global threadsafe counter
- a random number
- a GUID
- an auto-generated primary key field of a database table
There are pros and cons to all of these approaches; none of them is necessarily “right” or “wrong”. (And we explore the pros and cons as the interview continues.) Where I start to see magical thinking though is when I ask the candidate to assume that their chosen solution is simply not yet implemented on their platform. Rather, they are the developer who needs to implement the tool if they want to use it in the overall solution.
I fondly remember the moment of dawning comprehension when I asked a particular candidate “But suppose you had to write the code in the database implementation that auto-generates the primary key on the table you are using to solve this problem. How would that code work?” The candidate was completely taken aback, and just stared at me for a moment before saying “wow, I never before thought about the fact that someone had to write code that does that.” Apparently in his world creating primary keys is done by the primary key pixies who live in the b-tree forest. :-)
Turns out a lot of people think that GUIDs are generated by the GUID goblins, that random numbers are created by the RNG ogres, and so on.
In reality, all of these tools I mentioned were implemented by writing code. And therefore someone had to analyze the problem space, weigh the costs and benefits of possible approaches, design a solution, implement a solution, test it, document it and ship it to customers. No magic involved!
Of course this is not to say that we shouldn’t treat abstractions as abstractions. You don’t want to be relying upon the implementation details of an abstraction, and you don’t want to waste mental cycles understanding precisely how every tool does its job right down to the movement of the electrons. Moreover, there’s nothing wrong at all with being a developer who takes existing tools and strings them together to make something new; we all do that. But what I want to know is can the candidate make their own tools? Because that’s the business my team is in: making tools.
I thought it was obvious that GUIDs were created by goblins. After all, the acronym is Goblin Unique Identifier. My understanding is that they come from Gringotts.
Only Microsoft GUIDs are created by goblins (which is why they are GUIDs). Elsewhere, it is considered better practice to use unicorns for the job to produce UUIDs, as per the corresponding RFC.
As for the story - it reminded me of a TheDailyWTF post the other day where a group of senior developers were convinced that "finally" blocks are magical in the way Eric describes here - so much so that the code in them would execute in _any_ case, even a hard reset or pulling out the plug. Here it is:
Indeed, that's one of my favourites. A finally block only executes if control leaves the try; there are all kinds of ways that control might not leave the try. -- Eric
GUIDs do not come from goblins. I have it on the best authority how this actually works:
When an application requests a new GUID, a web service call is made to a Microsoft server. That server chooses a random installation of Vista somewhere on the Internet and sends a message that crashes the most important application currently running on that system. The remote system then captures the keystrokes generated by the frustrated end-user banging on the keyboard and returns them to Microsoft as part of the "Send Error" functionality.
These random keystrokes are then given to the original requester as a new GUID.
See? No goblins.
(I could have resisted, but chose not to. :D )
> a global threadsafe counter
If that's not implemented on the platform in question, then what is? Maybe the thread-safe part might not already exist, but it can't be too hard to devise one with locks or Interlocked.CompareExchange.
You might be surprised by the number of candidates who have never heard of such things.
If the candidate chooses a counter then the tack I'd take would be to see if the candidate can describe the problems inherent with counters (they wrap around, for example) and find ways to mitigate them. This alone can be made a fairly complex problem -- Eric
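A minimal sketch of such a counter might look like the following. This is purely illustrative Python, not anything from the post; the class name is invented, and the 32-bit masking is my assumption, chosen to make the wraparound problem Eric mentions visible:

```python
import threading

class CookieCounter:
    """Sketch of a global threadsafe counter for cookie generation.

    A lock serializes increments. Python integers never overflow, so to
    model the fixed-width cookies discussed above we mask the value to a
    chosen bit width -- which is exactly what makes wraparound possible."""

    def __init__(self, bits=32):
        self._lock = threading.Lock()
        self._next = 0
        self._mask = (1 << bits) - 1

    def next_cookie(self):
        with self._lock:
            cookie = self._next
            # Wraps back to 0 after 2**bits values have been handed out.
            self._next = (self._next + 1) & self._mask
            return cookie
```

With a tiny bit width the wraparound shows up immediately: a 2-bit counter hands out 0, 1, 2, 3 and then 0 again, at which point cookies are no longer unique.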
I'm also not sure I get the point of your followup question. You can always delve somewhat deeper by assuming whatever you want to use doesn't exist, but you're eventually going to have to assume something, so why not just start at the assumptions you're used to?
A reasonable assumption for most people is that the compiler works according to spec. That's not a reasonable assumption for the people who are implementing new compiler features! The point of the exercise is to push the problem down to the same low level of abstraction that we frequently are immersed in when implementing compiler features. -- Eric
"But magical thinking about software concerns me greatly. "
This bothers me as well. Software is not magic. I had a C# instructor tell the class the .csproj and .sln files were magic and we didn't need to know about them!
Thanks for your replies, Eric, you make good sense.
My first thoughts on the suggested solutions:
>a timestamp
Only works if your timer granularity is good enough to guarantee that two requests will never happen at the same time (according to the timer), which it's often not. There are also massive problems with resetting clocks on your server. If you have more than one server and need to be sure cookies are unique even across different servers, things become really bad. Solving these problems is getting close to implementing a GUID.
>a global threadsafe counter
Seems easy to do for a single server, although indeed wrap-around can be a problem. Using a 64-bit counter allows so many different values before overflow that you don't really have to worry about anything. (Although for thread safety you should take care that reads and writes of values that wide may no longer be atomic.) Using a 128-bit counter should be enough to take care of overflow for anything ever. Synchronising between servers is a problem. Even if you have only one machine, you'd still need to synchronise between processes.
>a random number
A true random number has the advantage of working equally well in all cases, whether you have a single process on a single machine or several processes on several machines. The main disadvantage is that it's possible to generate the same value twice. The probability of this can be made arbitrarily small by generating a larger number of random bits.
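The "arbitrarily small" claim can be quantified with the standard birthday-problem approximation. This helper is illustrative Python, not anything from the post; it estimates the chance of at least one repeat among n uniformly random b-bit values:

```python
import math

def collision_probability(n, bits):
    """Birthday-problem approximation: the probability that n uniformly
    random `bits`-bit values contain at least one repeated value.

    Uses p ~= 1 - exp(-n*(n-1) / 2^(bits+1)); expm1 keeps precision
    when the probability is astronomically small."""
    return -math.expm1(-n * (n - 1) / 2.0 / (2 ** bits))
```

For example, 65,536 random 32-bit values already collide with probability around 0.39, while a million random 122-bit values (the random payload of a version-4 GUID) collide with probability far below 1e-24 -- which is why throwing more bits at the problem works.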
But true random number generators are hard to come by. A pseudorandom generator might have a significant probability of generating the same value twice, even if you generate a lot of bits per value. In fact a pseudorandom generator is guaranteed to generate equal sequences for equal seeds, so you'd need to use a unique seed. How do you get a unique seed? That's back to square one.
I also have no idea how you'd implement a proper pseudorandom number generator.
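For what it's worth, a workable pseudorandom generator can be surprisingly small. Here is a sketch of Marsaglia's xorshift32 in illustrative Python (real cookie generation would want something much stronger); it also demonstrates the equal-seeds-give-equal-sequences problem just described:

```python
class XorShift32:
    """Marsaglia's xorshift32 pseudorandom generator, as a sketch.

    Each step applies three invertible xor-shift operations to a 32-bit
    state. Because the update is deterministic, two generators seeded
    identically produce identical sequences forever."""

    def __init__(self, seed):
        if seed == 0:
            raise ValueError("xorshift32 state must be nonzero")
        self.state = seed & 0xFFFFFFFF

    def next(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        self.state = x
        return x
```

Since the update is a bijection on nonzero 32-bit states, distinct seeds give distinct outputs on the first step too -- but that just moves the uniqueness problem into the seed, as noted above.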
>a GUID
Seems effective, but also overkill. It doesn't need to be globally unique (only inside your system), so why would you bother to make it so?
Obviously personally implementing a GUID is very hard. I'd try something like combining a MAC-address the server owns (to prevent different servers generating the same values), with a timestamp (to prevent the same value being generated at different times on the same server), and a counter (to prevent the same value being generated at the same time on the same server).
I suppose you could then still generate the same GUID by starting the process twice (so that the counter is 0 in both cases) at the same time on the same machine ... and probably a dozen other things I'm missing here. Appending a random number to your GUID is probably also a good idea, just in case everything else is the same by chance or oversight.
This is a very hard solution to implement. Also note that needing to implement a timestamp, a random number generator and a counter makes it at least as hard as those other solutions.
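The scheme sketched above -- MAC address plus timestamp plus counter plus a random tail -- can be put in code roughly like this. This is illustrative Python, not RFC 4122 GUID generation; the function name and field widths are my own choices:

```python
import itertools
import random
import time
import uuid

_counter = itertools.count()

def homemade_guid():
    """Sketch of a homemade unique value: pack a node address, a
    timestamp, a per-process counter, and random bits into one integer.

    Each component covers a failure mode the others miss: the node
    address separates machines, the timestamp separates moments in time,
    the counter separates calls within one moment, and the random bits
    are a last-ditch hedge against everything else colliding."""
    node = uuid.getnode()                    # 48-bit MAC (or a random stand-in)
    stamp = time.time_ns() & ((1 << 60) - 1) # 60-bit timestamp
    count = next(_counter) & 0xFFFF          # 16-bit per-process counter
    noise = random.getrandbits(32)           # 32 random bits
    # The fields occupy disjoint bit ranges, so within one process the
    # counter alone guarantees distinct values until it wraps.
    return (node << 108) | (stamp << 48) | (count << 32) | noise
```

Note how it confirms the commenter's point: to build this you had to implement a timestamp, a counter, and a random number anyway.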
>an auto-generated primary key field of a database table
I have no idea what this means. :)
In summary, all of the proposed solutions are merely different parts of the GUID solution. Out of the proposed solutions, only the GUID solution is strong enough in every case. Every other solution has to be combined with at least some of the others to be complete. (Unless you have a true random number generator, which should do fine by itself.)
If you have requirements that are more lax than strict uniqueness, you could leave out some of the parts of the GUID.
In conclusion, this problem is very hard. How did you solve it for VBScript, Eric?
These are all good explorations of the pros and cons of each approach. Often what I'll do in the interview is say something like "In v1.0 of the product the cookie was a 32 bit integer. Here's the server code that generated it. As you can see, this code is not portable to such and such an architecture. Can you make the server work on that architecture without changing the bit size of the cookie? Because if you change the bit size, v2 server is not going to be binary compatible with v1 client. Customers don't want to upgrade their clients every time the server is upgraded."
By restricting the problem to a particular bit size, all the solutions you proposed about throwing more bits at the problem become invalid and the problem gets harder.
That was the problem we actually faced in scripting -- we had an OS data structure with an unused field that we were using to stash away a 32 bit pointer that was meaningful only to us. On 64 bit Windows the equivalent data structure still only provided 32 bits of private storage, so we needed a way to generate a mapping from a unique-per-script-engine 32 bit integer to a 64 bit pointer. Fortunately in that case the number of objects that would be reasonably allocated over the lifetime of the program was far, far less than four billion, so a straightforward counter was sufficient. (There were three objects per "Property" declared in a VBScript program; clearly no VBScript program has a billion properties.)
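That counter-plus-side-table trick can be sketched like so. This is illustrative Python with invented names (the real engine was of course native code); it just shows the shape of the idea -- hand out small cookies and keep the wide references in a table:

```python
import itertools

class CookieTable:
    """Sketch of mapping a small unique cookie to a wide reference.

    Instead of storing a 64-bit pointer in a 32-bit field, store a
    32-bit cookie and keep the real reference in a side table. A plain
    counter suffices when far fewer than 2**32 objects are ever
    allocated over the lifetime of the program."""

    def __init__(self):
        self._next = itertools.count(1)   # 0 reserved to mean "no cookie"
        self._table = {}

    def register(self, obj):
        cookie = next(self._next) & 0xFFFFFFFF  # the field only holds 32 bits
        self._table[cookie] = obj
        return cookie

    def resolve(self, cookie):
        return self._table[cookie]
```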
You correctly note that a number of the solutions depend on assumptions about the domain of uniqueness and the consequences of accidentally or deliberately re-using a value. That's deliberate on my part. By making the problem far more general and abstract I see what is in candidates' "box of tools", and I also see whether they can ask clarifying questions that remove enough vagueness that they can solve a real problem. Many candidates do not think to ask how many "clients" there are, what the lifetime of the "task" is, what the consequences are of recycling or colliding "cookies", and so on. It makes a big difference. -- Eric
Once long ago (around 1999), one of my friend's friends attempted a job interview at Microsoft. I still remember the question he was asked, and I have never -- and I mean, NEVER -- met anyone yet who would believe the answer Microsoftians ( :-) ) gave the poor bugger before sending him away in shame, let alone answer the question correctly.
The question was: suppose you have a HUGE linked list, which is so big you cannot index it and treat it as an array; how do you prove that it does not have loopbacks? How do you prove that its n-th element does not link you back to the (n-m)-th one, instead of the (n+1)-th one, as a good linked list element should?
And the answer was: you need two iterators; one travels the list element-by-element (i.e. 1, 2, 3...), the other, every second element (2, 4, 6...); IF THE TWO ITERATORS EVER MEET, your list is stuffed up somewhere. There was a mathematical proof that it is indeed the fastest way to go about it (although the poor candidate did not remember a word of it to tell us).
As I said before, I have asked that question many times, and even the most brilliant developers I have met would either snort and dismiss it as yet another 'black magic' from the Evil Microsoft ( :-) ), or launch into an angry expression of total disbelief. One guy simply said, 'But this is a stupid question: they shouldn't ask that during an interview! And what do you mean, you cannot index it? Just load it all in RAM!'
Now, I myself have no idea whether the answer is correct or not, but I really, really enjoy watching the people's anger and denial. We fear something we don't understand. When we feel fear, we get angry. When we don't see the solution, we deny the problem. And most of us need faith: in some 'good magic' of 'guardian spirits' that will not let our car crash when we text someone in the middle of a highway, for example. It's this tendency, I think, that is apparent in those 'magic-thinking' candidates: the tendency to blot out anything that pushes at the boundaries of their comfort zone.
...I've been a developer for 20 years now; I'm thinking about launching research into the psychology of software development. And I'll say this: the only way computers can become smarter than humans is if humans become more stupid than computers, for, while the boundaries of computer intelligence are well known to the engineers, the boundaries of human stupidity still remain unexplored. :-)
The proof is straightforward. Once both iterators are in the cyclic portion of the graph, clearly the slow iterator never makes it around one full cycle. Therefore the total cost is always proportional to the number of nodes in the graph or less.
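For reference, the two-iterator check described above -- usually credited to Floyd -- takes only a few lines. This is an illustrative Python sketch; the Node class is a stand-in for whatever element type the real list uses:

```python
class Node:
    """A minimal singly-linked list node for the sketch below."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's cycle check: `slow` advances one node per step, `fast`
    advances two. If the list is None-terminated, `fast` falls off the
    end; the two iterators can only meet if the list loops back."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The appeal of the trick is that it uses O(1) extra storage, which matters precisely when the list is too huge to mark or copy.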
That particular problem is not a very good interview problem, for several reasons.
First, it is now an extremely well-known interview problem; a candidate who can spit canned answers to canned questions doesn't tell us anything other than that they're well-prepared for the interview.
Second, the canonical answer that solves the problem in O(n) time depends on an "aha!" insight. Most real-world problems that we solve do not have "aha!" solutions. Usually there are many different approaches we could take to a problem, with different pros and cons that need to be weighed in terms of the costs of implementation and the benefits to the customer. That's the skill we want to test for.
And it's really unfair to the candidate to give them an "aha!" problem -- they could be really smart and still just happen to not get the particular insight, especially when under pressure.
Third, the statement of the problem is unrealistic. I've written a lot of linked list code over the years, and a lot of bugs. It is rare to introduce a cycle in a huge linked list, and a strange thing to do to write a detector. Usually when you're writing a cycle detector, it's to detect cycles in a dependency network. (The compiler has a half dozen of these.) Usually a dependency network is small enough to use external storage to track the "I've been here before" bit.
Fourth, the given solution only tells you that there is a cycle. It doesn't tell you other important information, like where the offending node that links back is! There's a cycle, great -- what error do I report? How do I break the cycle? This algorithm doesn't tell you that.
Generating random numbers really is probably best explained as magic. Just about everything that you could explain would only give you pseudorandom numbers (i.e. running the same program from the same initial state with the same inputs will give the same answer).
> I had a C# instructor tell the class the .csproj and .sln files were magic and we didn't need to know about them!
I tell that "lie" in my C# class, but I think what I say is: For purposes of _this_ class, we'll treat them as magical. When you're first learning something, you have to start somewhere.
I am glad you think the Aha questions are not good interview questions. I hope, someday, you will also come to the realization that the current style of interview is about as valid as randomly hiring people of some guaranteed skill level. If you believe that a problem you solved with the luxury of a normal work environment is also solvable by an interview candidate suffering from anxiety, well, you might be passing over a lot of good people. The standard response to this is avoiding bad hires, but please think about how many hires you have made who didn't measure up to your expectations in the end.
I really believe it is very hard to test programmers. The 45 to 60 minutes is too short and artificial to select anything but superficially smart people. In my experience, the correlation between people who got rave interview feedback and really productive people who got things done was never great. So I have dispensed with my list of technical questions and just try to evaluate people for some minimal expertise.
I don't believe in software magic - but I *do* believe in "scary stuff whose implementation I'd never get my head round." Concurrent GCs, the HotSpot JVM and similar technologies are prime examples. When it comes to code which has to understand the .NET or JVM memory model at a really deep level (i.e. Joe Duffy territory) it probably counts as black magic...
I don't think this is limited to implementing actual code, either. I'm savvy enough to know that it's important to have a well-defined language specification (where you can read "well-defined" in a number of different ways; I may mean any or all of them) and I have naive fantasies about being able to come up with nice ideas for what I'd like to see in a language. The gap between "having an idea" and thinking through all the ways it touches the existing language features is massive though. I'm very glad we have language pixies like Anders, Eric and Mads :)
I was once in a university algorithms class and we had the same cycle detector function as a question. Since class was over, I had a whole week to think about it. I didn't have anything else to do that week, so I sat down and thought about the problem for a few hours. I had the solution that night.
I'd never manage to find that solution in an interview. This is the kind of problem that you must take home to think about. Asking that in an interview is just wrong.
That said, it's an interesting problem with an equally interesting solution.
One of the interesting things about _not_ being a real programmer is that over my time in this business, I have now and then attempted to do things like what you describe out of a) ignorance that tools for this already exist and b) ignorance of the problems that I will encounter. ("How hard could it be?") Thus I have, for example, rassled with code to assign the next available primary key. Soon enough my initial optimism about how easy the task "should" be crumbles, of course. But it's always a learning experience; there's nothing like actually trying it to know where-all you're going to run into difficulties.
you can take a person out of the kindergarten but the nonsense just waits to come out of the mind of senior developers :)
one of the best posts ever:)
"we had a OS data structure with an unused field that we were using to stash away a 32 bit pointer that was meaningful only to us" -- so, in other words, you were already living in sin. Some sort of magical exorcism was clearly the right answer.