A colleague and I were chatting the other day and we were talking about STL implementations (in the context of a broader discussion about template meta-programming and how difficult it is).
During our discussion, I described the STL implementation as “read-only” and he instantly knew what I meant. As we dug in further, I realized that you can characterize many computer languages as read-only or write-only.
Of course there’s a huge amount of variation here – it’s always possible to write incomprehensible code, but there are languages that just lend themselves to being read-only or write-only.
A “read-only” language is a language that anyone can understand when reading it, but you wouldn’t begin to know how to write (or modify) code in that language. Read-only languages tend to have very subtle syntax – the code looks like something familiar, but there are magic special characters that change its meaning. As I mentioned above, template meta-programming can be thought of as read-only; if you’ve ever worked with COBOL code, it could also be considered read-only.
Of course, for someone who’s very familiar with a particular language, code written in that language is often understandable. Back when I was coding in Teco on a daily basis (there was a time when I spent weeks working on extensions for Emacs – the original Emacs written by RMS, not the replacement written by Jim Gosling), I could easily read Teco code. But that’s only because I spent all my time living and breathing the code.
 I can’t take credit for the term “read-only”, I first heard the term from Miguel de Icaza at the //Build/ conference a couple of weeks ago.
 “line noise” – that’s the random characters inserted into the character stream received by an acoustic modem. These beasts no longer exist in today’s broadband world, but back in the day, line noise was a real problem.
Hey Larry, this comment relates to one of your previous and locked post (blogs.msdn.com/.../545451.aspx).
I have an audio card that claims to perform sample rate conversion with high fidelity, and the instructions are to set the sample rate in the card's audio center equal to or higher than my source file's. According to the diagram in your post, however, the audio driver sits at the end of the audio chain, while the mix buffer lies before it.
In the following scenario (CD @ 44.1KHz, Control Panel > Sound..> @ 16bit 48KHz, audio card's software setting @ 48KHz), does that mean the audio will be upsampled from 44.1KHz to 48KHz by Windows, then simply passed on to the card, which will output that signal without conversion? Or does the card's "high fidelity audio resampling" occur at the audio application level, so that it performs all the upsampling before passing the data to the Windows buffer, and the data remains unchanged all the way to the output device?
Sorry for the offtopic comment.
I have used "read-only" for programming languages for a long time, but not quite with this meaning. I don't remember how I came up with it, but I don't think I got it from anywhere.
Typical use: "for me, Java is a read-only language." Coming from C++ and .NET, I was able to understand a lot of a Java program, but I was unable to write anything more advanced than a hello-world.
And when I used it, everybody "got it" without explanations.