Additional profile information on Alfred Thompson at Google+
I spent the latter part of last week in Palo Alto, California working on the CS 2013 project. There are some pretty amazing people involved in this project (CS 2013 steering committee) and I learned a lot in the various discussions we had around computer science curriculum. Since we are talking about what absolutely, positively must be part of an undergraduate computer science curriculum, it is not surprising that core concepts are an important part of the discussions. One statement that was made was that “abstraction is at the core of computer science.” This is quite true of course, but it struck me that maybe we don’t talk about this early enough, or often enough, in high school computer science courses. It is an important vocabulary word, but do we really talk about what it means at a deep level? Sometimes perhaps, but always? Not so sure. I’m pretty sure I never spent enough time on it.
What do we mean by abstraction? There are some interesting definitions floating around that can be used for starters.
Making all of this clear to students can be tricky especially when we introduce abstract classes. So what is the simple version? I see abstraction as using symbols to represent real things. In other words, we model the real world using data (numbers mostly but also images and words) so that we can manipulate them using the computer. We can’t generate real wind or real buildings in a computer to see how buildings react to different speeds of wind but we can model the effects. Push buttons on the screen don’t actually depress but we can model that behavior by using the right abstractions. Abstraction is a tool for acting on imaginary objects that represent real things.
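To make that push-button idea concrete, here is a tiny sketch in Java of what modeling a button might look like. The class and member names are my own invention, not from any particular toolkit; the point is that nothing physical moves, we just track the state the real button would have:

```java
// A hypothetical model of a physical push button. Nothing actually
// depresses; we represent the button's state with data we can manipulate.
public class PushButton {
    private boolean pressed = false;  // abstraction of the physical position
    private int pressCount = 0;       // something a real button can't easily tell us

    public void press() {
        pressed = true;
        pressCount++;
    }

    public void release() {
        pressed = false;
    }

    public boolean isPressed() { return pressed; }
    public int getPressCount() { return pressCount; }

    public static void main(String[] args) {
        PushButton b = new PushButton();
        b.press();
        b.release();
        b.press();
        System.out.println(b.isPressed() + " " + b.getPressCount()); // prints "true 2"
    }
}
```

The imaginary object even tells us things the real one cannot (how many times it has been pressed), which is a nice conversation starter about what a model adds as well as what it leaves out.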
In a sense, object oriented programming and graphical user interface programming both simplify and complicate our discussion of abstraction. On one hand the GUI objects make it easy to model real world objects of specific types. At the same time, for some students, it makes it harder to understand models or abstractions of items that are not visible on the screen. Personally I find that properties, as implemented nicely in C# and Visual Basic, do help students picture the abstraction – the way the software object represents the physical object it models. Your mileage, as they say, may vary of course. The fact that objects have properties and methods is what enables us to model real world objects. At the same time, making the transition from physical objects to numbers that somehow represent those objects does not come naturally for everyone. It is important, however, that students do make the transition. This is at the heart of how computer science works.
I’m trying to work out in my mind how to involve abstraction both earlier and more consistently in a first programming course. My gut tells me that in the long run that would make more of the concepts easier to understand. Do any of you out there have particularly good discussion points, resources or lessons learned about teaching abstraction to share? Any textbook that does an especially good job of it? Or perhaps an operational definition that you find works for you and your students?
Abstraction is a core concept in computer science. On the other hand it is a bit abstract for some students. (Sort of pun intended.)
The old joke used to be that “a good FORTRAN programmer can write a good FORTRAN program in any programming language.” The problem, and what makes the joke not all that funny, is that doing so means not taking full advantage of the language in question. It is not a FORTRAN specific problem by any means. I have known many so-called C++ programmers who really wrote C programs using a C++ compiler. Many programming languages are designed with an eye towards changing the way programs are written, the way problems are solved and the way people think about problem solving. Taking on a new programming language without changing one's thinking is a missed opportunity at best and a cause of serious problems at worst. I recently came across two posts inspired by one of Alan Perlis's epigrams: "A language that doesn't affect the way you think about programming is not worth knowing." They come at the same issue from a slightly different point of view.
To some extent programming languages are idiomatic. That is to say, there are particular ways that each language should be used. While it may be possible to use a language the same way you would use other languages – the "FORTRAN program in any language" way – that is not the best or most efficient way to use it. To get the most out of a language you have to think a little differently. A lot of languages are pretty similar – Java and C# for example. But there are still differences. Most obviously those differences involve libraries, but there are subtle differences in the languages themselves. For example, Java has get and set methods, which are a little different from C# properties, which package get and set in a different form. The thinking involved is a little bit different.
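A small sketch of that get/set difference, using a made-up `Thermostat` class of my own rather than any real library. In Java the idiom is an explicit pair of methods; in C# the same idea reads like plain field access while still running code behind the scenes:

```java
// Java idiom: an explicit getter/setter pair around a private field.
public class Thermostat {
    private double temperature;  // internal representation, hidden from callers

    public double getTemperature() {
        return temperature;
    }

    public void setTemperature(double t) {
        // A setter can validate its input, which a bare public field cannot.
        if (t < -40 || t > 50) {
            throw new IllegalArgumentException("temperature out of range");
        }
        temperature = t;
    }
}
// In C# the equivalent property makes the call sites look like assignment:
//   thermostat.Temperature = 21.5;   // a hidden "set" accessor runs here
```

Both forms model the same abstraction; the difference is in how the code at the call site reads, and that small difference nudges how you think about the object.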
Other languages are very different. Scheme and F# are two examples of functional languages which are a completely different paradigm from languages like Java and C#. You’d really run into trouble trying to write FORTRAN programs using one of them.
Visual languages like Scratch, Alice and Kodu are different (from each other and from other more traditional languages) in still more ways. I think we often focus on simple concepts, like loops, that feel the same but miss out on different ways of thinking about things like subroutines and methods. Kodu for example uses pages in much the way that subroutines are used in other languages. They have a different sort of feel to me though. There isn’t a traditional return statement for example. This changes things. It means that leaving a page doesn’t automatically go back to where it was called from. I’m still thinking about how best to take advantage of that. I am finding that Kodu is changing the way I think about programming. This is a surprise to me, but in a good way.
At the high school level there is a tendency to stick with one programming paradigm and often even a single programming language. I wonder if that is too narrow a way to teach programming. Younger students seem to adapt to different languages and paradigms faster than older students – much faster than their teachers, all too often. There isn’t a lot of room in the curriculum at most high schools to cover multiple paradigms in a single course. Over a couple of courses a school might cover a couple of languages perhaps. I recommend at least two and would prefer three. Mostly people tend to concentrate on the similarities between languages rather than the differences. There are advantages to this, and the disadvantages are less obvious, but I am starting to think that there is real value in talking about the differences. A good education widens one's horizons rather than focusing too narrowly. Something to think about. You know, while we are talking about making people think.
The blog posts that inspired this post are both well worth a good read.
Computers are good at games. Well, sort of. Computers are good at following rules and making decisions based on rules that are programmed into them. These rules have to come from somewhere, and that somewhere is people. That computers are good at playing games is due in large part to the fact that people are good at studying games. The hard part is really figuring out how a game is played and how a set of rules can be created that will enable someone (or some thing) to win on a regular basis. Once that happens, a set of rules can be created to feed into a computer.
People can learn these rules and play well themselves of course. For simple games this is easy. When is the last time you lost a game of tic tac toe, for example? OK, so I lose occasionally, but generally it is because I am careless. People are pretty good at making careless errors, at not noticing something they should notice, or at forgetting a rule or guideline for effective play. Computers, on the other hand, are good at NOT making careless errors, at always noticing things, and at never forgetting rules or guidelines. And this lack of human error is the second thing that makes computers good at games.
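To show what "rules fed into a computer" can look like, here is a minimal sketch of tic tac toe move selection in Java. The class name and the exact rule ordering are my own choices for illustration, but the point stands: once the rules are written down, the program applies them every single turn without ever forgetting one:

```java
// Rules encoded in priority order: take a winning cell, block the
// opponent's winning cell, otherwise prefer center, then corners, then edges.
public class TicTacToeRules {
    // The eight lines of a 3x3 board, as indexes into a 9-cell array.
    static final int[][] LINES = {
        {0,1,2}, {3,4,5}, {6,7,8},   // rows
        {0,3,6}, {1,4,7}, {2,5,8},   // columns
        {0,4,8}, {2,4,6}             // diagonals
    };

    // Returns the empty cell that completes a line for 'player', or -1 if none.
    static int winningMove(char[] board, char player) {
        for (int[] line : LINES) {
            int count = 0, empty = -1;
            for (int i : line) {
                if (board[i] == player) count++;
                else if (board[i] == ' ') empty = i;
            }
            if (count == 2 && empty != -1) return empty;
        }
        return -1;
    }

    static int chooseMove(char[] board, char me, char them) {
        int m = winningMove(board, me);       // rule 1: win if you can
        if (m != -1) return m;
        m = winningMove(board, them);         // rule 2: never miss a block
        if (m != -1) return m;
        // rule 3: positional preference - center, corners, edges
        for (int i : new int[]{4, 0, 2, 6, 8, 1, 3, 5, 7}) {
            if (board[i] == ' ') return i;
        }
        return -1;                            // board is full
    }
}
```

A careless human might miss the block in rule 2 after a long day; this loop never will, and that reliability, not cleverness, is most of the computer's advantage.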
I recently came across a research study of the game of Connect Four (thanks to an article in the Washington Post online called Annals of useful computer science research, which someone Tweeted a link to). It’s a 90 page master's thesis from 1988. Think about it – 90 pages of game analysis. Probably more detail than you ever wanted to know about Connect Four. There is also a discussion of a computer program, with a small amount of C code shown, that analyzes specific possible moves. Not a bad place to start for creating your own program to play the game. Or even just to learn to play the game better.
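I won't reproduce the thesis's C code here, but as a starting point of your own, a first building block might be a routine that tries a drop in a column and checks whether it makes four in a row. This is an illustrative sketch in Java with names and layout of my own choosing, not the thesis's program:

```java
// An illustrative move-analysis routine for a standard 7x6 Connect Four board.
public class ConnectFour {
    static final int COLS = 7, ROWS = 6;

    // board[col][row], with row 0 at the bottom; '.' marks an empty cell.
    // Returns true if dropping 'piece' in 'col' wins immediately.
    static boolean winningDrop(char[][] board, int col, char piece) {
        int row = 0;
        while (row < ROWS && board[col][row] != '.') row++;  // where the piece lands
        if (row == ROWS) return false;                       // column is full
        board[col][row] = piece;                             // make the trial move
        boolean win = fourInARow(board, col, row, piece);
        board[col][row] = '.';                               // undo the trial move
        return win;
    }

    // Counts the run through (c, r) in each of the four directions.
    static boolean fourInARow(char[][] b, int c, int r, char p) {
        int[][] dirs = {{1,0}, {0,1}, {1,1}, {1,-1}};  // horizontal, vertical, diagonals
        for (int[] d : dirs) {
            int run = 1;                                // the piece just placed
            for (int s = -1; s <= 1; s += 2) {          // walk both directions
                for (int i = 1; ; i++) {
                    int cc = c + s * d[0] * i, rr = r + s * d[1] * i;
                    if (cc < 0 || cc >= COLS || rr < 0 || rr >= ROWS
                            || b[cc][rr] != p) break;
                    run++;
                }
            }
            if (run >= 4) return true;
        }
        return false;
    }
}
```

Wrap that in "take the winning drop, else block the opponent's winning drop" and you already have a program that plays noticeably better than a careless human, long before you get to the 90 pages of real analysis.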
A couple of questions come to mind with this sort of study though. Does too much analysis take the fun out of the game? Or does it make the game more interesting? How do you feel about playing an opponent who not only knows the game better than you do but never makes a mistake? Would playing the computer be more fun if there was some random error tossed into the mix? Is it even worth playing when you know for sure that the best you can do is a draw, and that if you make the smallest misstep you will lose? How much of game play involves the humanness for you? Just a few random things to ponder on a Tuesday morning.