It goes without saying but, er, let me say Happy New Year, hope you're having a wonderful holiday, and remember that optimism is a learned behaviour, and that humans can learn to do pretty much anything. (It's exactly when the going gets tough that optimism is the best tool in your mental toolbox.)
Now the part of me that teaches software engineering is spinning a bit due to, well, writing full time. So let's get philosophical...
I'm talking about categorizing the world.
One of the things that I mention when Class Diagrams pop up is what I call the Duck-Billed Platypus problem. You know what I mean - when Victorian zoologist types (and their predecessors) categorized living organisms, they created large categories defined by multiple shared characteristics. There was a group with spines, with body temperatures controlled by negative feedback, and with a habit of bringing babies to term inside their bodies, after which the babies were nourished via external maternal glands. There was another group with spines and scaly skin, lacking the thermostat mechanism, which brought its young to term inside external containers with thin calciferous shells.
And then the Duck-Billed Platypus came along...
Creating software systems involves modelling the Real World™ (the requirements) and the new system (which is also a Real Thing™). So we're talking about perceiving and making sense of the world...
And when it comes to designing a new system, one of John's Rules goes like this: every project contains a duck-billed platypus.
<geekTalk>
So what do I teach? In the context of UML and OO, I'll point out the benefits of holding back on inheritance and using aggregation to compose the varying characteristics of classes. There are some useful design patterns here, like separating out behaviour with the Strategy Pattern, or even using the Decorator Pattern (usually used for plugging in varying, cumulative functionality, as in the Lego-like build-your-own-i/o-streams of the core Java i/o API). And most people's first pattern, the Composite, still makes me smile at its recursive simplicity.
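To make that concrete, here's a minimal sketch of the aggregation-plus-Strategy idea, using a toy zoology domain (all the names below are mine, invented for this post, not from any real API):

<pre>
// Vary the reproduction behaviour via a Strategy, rather than baking it
// into an inheritance hierarchy.
interface Reproduction {
    String bringToTerm();
}

class LiveBirth implements Reproduction {
    public String bringToTerm() { return "live young, nourished via maternal glands"; }
}

class EggLaying implements Reproduction {
    public String bringToTerm() { return "eggs in thin calciferous shells"; }
}

// Animal aggregates its characteristics instead of inheriting them,
// so an odd combination is just another object, not a broken hierarchy.
class Animal {
    private final Reproduction reproduction;
    private final boolean warmBlooded;

    Animal(Reproduction reproduction, boolean warmBlooded) {
        this.reproduction = reproduction;
        this.warmBlooded = warmBlooded;
    }

    String describe() {
        return (warmBlooded ? "warm-blooded; " : "cold-blooded; ") + reproduction.bringToTerm();
    }
}

public class Zoo {
    public static void main(String[] args) {
        // The duck-billed platypus: warm-blooded AND egg-laying. No subclass needed.
        Animal platypus = new Animal(new EggLaying(), true);
        System.out.println(platypus.describe());
    }
}
</pre>

The point being: when the platypus turns up, you compose a new combination of parts rather than mangling the class tree.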
</geekTalk>
Even if you think you've got the categorization nailed (for all time? really?), polymorphism and inheritance still contain subtleties. In a Formula One racing game, if your definition of Car allows a start() operation to run when it's (virtually) raining, then the race controller expects this behaviour. A developer should be able to add a new subtype of Car in version 2.0 (if we're using inheritance) without the controller component batting a (virtual) eyelid.
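Here's a hedged sketch of that expectation (RaceController and HoverCar are names of my own invention):

<pre>
import java.util.List;

class Car {
    // The contract: start() succeeds whatever the (virtual) weather.
    public void start() { /* fire up the engine */ }
}

class RaceController {
    // The controller depends only on the published Car contract...
    void startRace(List<Car> grid) {
        for (Car car : grid) {
            car.start();
        }
    }
}

// ...so a v2.0 subtype slots in unnoticed, provided it honours that contract.
class HoverCar extends Car {
    @Override
    public void start() { /* spin up the fans instead, rain or shine */ }
}
</pre>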
In other words, if you're a real software engineer, and you specify the pre-conditions of an operation (what must be true to allow it to execute) and the post-conditions (the results, including the range of values), then any subtype must honour the pre-conditions of the parent (or be more permissive - and be careful how you understand that), and produce a result within the range of results produced by the parent (a narrower or equal range). For example, try creating a new subtype of Car whose start() operation throws an exception of a kind not thrown by the parent. Good luck getting that to compile...
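In Java, the compiler polices part of that contract for you. Reusing the Car from the sketch above, and inventing a WetTrackException for the purpose:

<pre>
class WetTrackException extends Exception {}

class FairWeatherCar extends Car {
    // Won't compile: the override widens the contract by declaring a
    // checked exception that Car.start() never throws.
    @Override
    public void start() throws WetTrackException {
        throw new WetTrackException();
    }
}
</pre>

(The compiler can only enforce the checked-exception half, mind; keeping the post-conditions within the parent's range is still down to you.)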
As an aside, none of this captures the dynamic aspects, which you also need to model. Some people think only in static categories. I have a highly intelligent friend who looks at élite marathon runners and thinks (I believe): marathon running is for thin people; I am not thin; I am not a marathon runner. All of which is true, but not necessarily eternally. (Try adding the word 'yet' to the end of the previous sentence. Or better, think about dynamic processes. If you jog a little today, and carry on doing it 6 days a week for the rest of your life - increasing to the level appropriate for you - then if you get to the stage of running a hundred miles or more every week, just what kind of physique would you have then?)
(Aside the second: what if you thought of optimism as a process, defined by an erect spine, chin held up, a hint of smiling, a steady gaze...)
Now if I'm designing an ATM, I expect it to talk (via many layers of indirection) only to bank accounts. And a classification system of bank accounts is straightforward, surely. Or is it? In the US, having fed my card to the machine, I get asked whether I'm taking money from my checking account or savings account. Huh?
That's a choice of two accounts for one card, which is impossible in the UK, expressed in terminology I have to translate before I can understand it (current account or deposit account, once you realize that 'checking' relates to cheques, not monitoring).
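The modelling fix is small but telling: the card fronts a collection of accounts, and the 'checking or savings?' prompt is just a selection step. A hedged sketch (all names mine):

<pre>
import java.util.Map;

enum AccountType { CURRENT, DEPOSIT }  // a.k.a. checking, savings

class Account {
    final AccountType type;
    Account(AccountType type) { this.type = type; }
}

class Card {
    private final Map<AccountType, Account> accounts;

    Card(Map<AccountType, Account> accounts) { this.accounts = accounts; }

    // The ATM asks the question only when there's a genuine choice.
    boolean needsAccountChoice() { return accounts.size() > 1; }

    Account select(AccountType choice) { return accounts.get(choice); }
}
</pre>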
Then my American friends come to visit, and they can't pay for goods in shops in the normal European way, by sliding their card into the chip reader and then entering their 4-digit PIN... which is why our machines also have to support reading the magnetic strip, an outmoded 20th-century technology.
Still, it's not too hard to make the technology forgiving of ill-defined categories... in such a small conceptual domain.
Now here's the thing. I teach techniques for categorizing reality when things get slippery, but there's a massive question: when should we stop trying?
Or, when do you sit back and let humans do the understanding - individually and cooperatively - instead of trying to build 'awareness' into the software?
That's the subject of the funny and insightful Ontology is Overrated by Clay Shirky. SFnal jokes included. Enjoy!
And enjoy the heck out of 2011...