Archive for the ‘computer’ Category

Language optimized for refactoring

Friday, October 24th, 2008

One important but often overlooked property of computer languages is how easy it is to refactor programs written in them.

The one example that springs immediately to mind is renaming a class. In C++ this is a bit more difficult than in many languages because the constructors and destructors have the same name as the class, so you have to go and change all of those too. PHP wins here for calling them __construct and __destruct respectively.
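A contrived sketch of the difference (the Widget class and its field are invented for illustration):

    <?php
    // Renaming this PHP class is a one-line change (plus call sites):
    // the constructor and destructor keep their names.
    class Widget {
        private $name;
        public function __construct($name) { $this->name = $name; }
        public function __destruct() { /* nothing to rename here either */ }
    }
    // The equivalent C++ class would also need Widget::Widget() and
    // Widget::~Widget() renamed - in both the .h and the .cpp.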

If you are in the school of thought that has C++ method definitions in a separate file (e.g. .cpp) from the class declarations (.h), you have to go and change things in two different files (even if you're just adding a method that nobody calls yet). If that class implements a COM interface defined by a .idl file then there's yet another thing you need to change.

Python's syntactically-significant whitespace is another winner here because if (for example) you put another statement in an "if" clause that currently only has one statement, you don't have to add braces.

I'm sure there are many other, deeper examples.

Once you go OOP, there's no going back

Thursday, October 23rd, 2008

Object Oriented Programming is at least as much a state of mind as a set of programming language facilities. When I learnt C++ it was a bit difficult to get used to writing object-oriented programs, but now that I've been doing it for many years I can no longer think about my programs any other way.

I was writing some PHP code recently and (not knowing about PHP classes) started writing it in a procedural fashion. After a while I noticed that many of the functions I was writing fell naturally into classes (with a first parameter that gave the function context). So it was only natural to rewrite it in object-oriented style once I figured out how to do so.
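The shape of that refactoring looked something like this (the shopping-cart names are invented, not from the actual code):

    <?php
    // Before: procedural style, with the context threaded through by hand.
    function cart_add_item($cart, $item) {
        $cart['items'][] = $item;
        return $cart;
    }

    // After: the context parameter becomes $this.
    class Cart {
        private $items = array();
        public function addItem($item) { $this->items[] = $item; }
    }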

In the process of doing so, I found lots of bugs in my original code (which I had thought was rather nifty). Many functions became much simpler. I also found it was much easier to do various optimizations that would have been very difficult to do without classes (such as minimizing the number of database queries). My code file did become somewhat bigger, but I attribute this to the extra indentation most lines have, and the fact that PHP requires you to write "$this->" everywhere.

A while ago I also tried writing a C program from scratch for the first time in years. I found myself using an object-oriented style, implementing vtables as structs.

Javascript exchange site

Wednesday, October 22nd, 2008

Back in the 80s, most home computers booted into a dialect of BASIC. This made it very obvious how to start learning to program - just type things in and see what works.

Modern computers are much richer in many ways but have the disadvantage that it's less obvious how to start programming. One could even be forgiven for assuming that the typical off-the-shelf Windows Vista machine doesn't come with a built-in programming language. Actually there are at least three - the Windows command shell language, VBScript and JScript (JavaScript). The Windows command shell language (the descendant of the MS-DOS batch language) is ugly, badly documented and almost impossible to debug, so let's skip that one. Between VBScript and JScript, the latter is the better one to learn because it's cross-platform, whereas VBScript is Windows-only. There are at least two ways to run JScript in Windows - one is through the Windows Script Host (wscript.exe or cscript.exe) and the other is through the web browser. The browser is a graphically rich, interactive and familiar environment, so I think that's the way to go.

JavaScript is a much nicer language than the 8-bit BASIC dialects of the 80s, but it's still not very discoverable. The tutorials and reference guides are all out there, but you have to have a text editor open in one window, one browser window for your program and at least one other browser window containing your reading material. I think this is a problem that could be solved with a website.

I'd like to see a site which does for JavaScript what computers booting into a BASIC interpreter did for BASIC - a one-stop shop for (at least beginner-level) JavaScript development. It would allow you to type JavaScript code right into a web page and see its output right there on the page immediately (perhaps with separate divs within the page for the JavaScript code, the program's output and tutorials).

The code editor might have syntax highlighting, intellisense, a built-in debugger - whatever can be provided to make programs as easy as possible to develop.

Once you've written some code you can save it on the website and access it from anywhere. You can also share it with friends. If one person defines an object someone else can use that object in their programs. In this way, a rich ecosystem of scripts can develop.

Another possible refinement would be for the web server itself to provide some abilities that scripts can use. Perhaps just storing a small amount of data per script per user so that scripts can do some persistent stuff, or perhaps allowing some server-side JavaScript as well as the client-side scripts, to enable the writing of rich AJAX web applications.
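A minimal sketch of what the per-script, per-user storage might look like on the server, in PHP (the parameter names and the file-per-user layout are all invented):

    <?php
    session_start();
    $user   = isset($_SESSION['user']) ? $_SESSION['user'] : 'anonymous';
    $script = basename($_GET['script']);       // basename() keeps us inside data/
    $file   = "data/$user-$script.json";
    if ($_SERVER['REQUEST_METHOD'] == 'POST') {
        file_put_contents($file, $_POST['data']);   // store (quota checks omitted)
    } else {
        echo file_exists($file) ? file_get_contents($file) : '{}';
    }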

PHP could be more secure

Monday, October 20th, 2008

Given that PHP is designed to be used to write applications that run on web servers, you'd think it would have been designed rather more with security in mind.

In particular, PHP's dynamic typing seems to be a source of security weaknesses. Dynamic typing has advantages in rapid development and code malleability but is not particularly helpful for writing secure code - security is greatly helped by being able to restrict each variable to a specific set of values and having the compiler enforce this.

Similarly with the SQL API - because the interface is all just strings instead of strongly typed objects, SQL injection vulnerabilities become all too easy to write.
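For example, the difference between concatenating a query and parameterizing it (the users table and connection details are invented):

    <?php
    // Vulnerable: the query is assembled by string concatenation.
    $result = mysql_query("SELECT * FROM users WHERE name = '" . $_GET['name'] . "'");

    // Safer: a parameterized query (here via PDO) keeps data out of the SQL text.
    $db = new PDO('mysql:host=localhost;dbname=example', 'user', 'password');
    $stmt = $db->prepare('SELECT * FROM users WHERE name = ?');
    $stmt->execute(array($_GET['name']));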

Variable scope is another one - because there are no variable declarations, it's not obvious where variables are introduced, so one could be using a variable set earlier (or, worse, supplied by an attacker) without realizing it - this is why register_globals went from default-on to default-off, then deprecated, and finally removed.
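The classic register_globals hole looked something like this (check_password and show_admin_page are hypothetical):

    <?php
    // With register_globals on, requesting page.php?authorized=1 pre-populates
    // $authorized before any of this code runs.
    if (check_password($_POST['password'])) {
        $authorized = true;
    }
    if ($authorized) {   // never explicitly initialized - attacker-controlled
        show_admin_page();
    }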

Then there are ill-conceived features like magic quotes, and missing features like cryptographically secure random number generation.
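Pending a proper API, one workaround on Unix hosts is to read the kernel's random device directly - a minimal sketch:

    <?php
    // rand() and mt_rand() are not cryptographically secure; /dev/urandom is.
    function secure_random_bytes($count) {
        $f = fopen('/dev/urandom', 'rb');
        $bytes = fread($f, $count);
        fclose($f);
        return $bytes;
    }
    $token = bin2hex(secure_random_bytes(16));   // e.g. a session token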

A well-designed language for web development would be secure by default when doing the most obvious thing - one shouldn't have to go out of one's way to learn all the security pitfalls and write code to explicitly address each of them (then update that code when the next such pitfall is discovered).

Algorithm for finding "hot" records in a database

Wednesday, October 15th, 2008

Suppose you have a database, and (as often happens with databases) records change from time to time. Suppose also that you'd like to maintain a list of the "hottest" records, that is the ones which have been changing a lot lately.

The first thing you have to determine is whether you want to put the emphasis more on "a lot" or on "lately" - i.e. you need to pick a characteristic time t_c such that n changes made t_c ago carry the same weight as n/e changes made just now. This time determines how quickly changes "decay" into irrelevance. Depending on your application, it might be a day or so.

The next thing you might try is to keep a table of all the changes made, along with a time for each. Then you can just weight the change times according to how long ago they are and add them up. That's going to be a big table and an expensive operation, though.

A clever trick is to use a running average and "last changed" timestamp in each row of the original table. The running average starts off at 0. Each time the row is modified, calculate the number of characteristic times since the last change, N = (t_now - t_last)/t_c, update the average by multiplying it by e^(-N) and adding 1, and then update the old "last changed" timestamp to t_now for the next change.

To show that this works, suppose the running average was a = 1 + e^(-N1) + e^(-N1-N2) + e^(-N1-N2-N3) + ... (one term for each change, weighted by how long ago it happened). When we update the running average it becomes 1 + e^(-N)·(1 + e^(-N1) + e^(-N1-N2) + e^(-N1-N2-N3) + ...) = 1 + e^(-N) + e^(-N-N1) + e^(-N-N1-N2) + ..., which is just what we want.
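In code, the per-change update might look like this (the records table and its columns are invented; $db is assumed to be a PDO connection):

    <?php
    function record_change($db, $id, $t_c) {
        $now = time();
        $stmt = $db->prepare('SELECT activity, last_changed FROM records WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch();
        $n = ($now - $row['last_changed']) / $t_c;   // characteristic times elapsed
        $activity = $row['activity'] * exp(-$n) + 1; // decay, then add this change
        $stmt = $db->prepare('UPDATE records SET activity = ?, last_changed = ? WHERE id = ?');
        $stmt->execute(array($activity, $now, $id));
    }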

That isn't quite the end of the story, though, because the running averages in the table are not directly comparable to each other - a record that had a burst of activity a long time ago and hasn't been touched since will have a similar stored value to a record whose equally large burst of activity has only just ended. To compute the "current" value of the running average we need to multiply a by the factor e^(-N) corresponding to the time since it was last updated (without adding one this time, since we haven't added another unit of activity). This requires looking at all the records in the table, though, which will be faster than the table-of-changes approach but might still be rather slow for a big database.

If we only care about the top (say 10) "hottest" records, we can speed things up by caching the results in a small table, and noting that scaling all the activity values by the same factor doesn't affect the ordering of the list. Suppose we have a singleton value t_update, which is the time we last updated the small table, and a_10, which is the activity of the 10th hottest record the last time it was changed. Whenever we change a record, take its new activity value a, multiply it by e^(N) (note: no minus sign here) where N = (t_now - t_update)/t_c, and compare it to a_10. If it's larger, the record should be inserted into the "top ten" table and the old 10th hottest record shuffled out (if the new record wasn't already in the table) - think of a high score table for a game. When this happens, set t_update = t_now, multiply all the activity values in the small table by e^(-N) and update a_10 with the new value. Then when you need to display the hottest records, just display this table.
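A sketch of that bookkeeping (all names invented; the small table is represented as a PHP array of id => activity for brevity):

    <?php
    function maybe_promote(&$top, &$a10, &$t_update, $id, $activity, $now, $t_c) {
        $n = ($now - $t_update) / $t_c;
        $scaled = $activity * exp($n);         // the new value, in the old frame
        if ($scaled <= $a10 && !isset($top[$id]))
            return;                            // not hot enough for the table
        foreach ($top as $k => $v)             // re-base the table on $now...
            $top[$k] = $v * exp(-$n);
        $top[$id] = $activity;                 // ...and insert the new value
        arsort($top);                          // sort by activity, descending
        $top = array_slice($top, 0, 10, true); // shuffle out the old 10th record
        $a10 = end($top);                      // activity of the new 10th record
        $t_update = $now;
    }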

There is one more complication, which comes about from deleting records. If you delete a record it probably shouldn't appear in the "hottest" records list, even if it was updated quite recently. But if you delete a record from the small table whenever it is deleted from the big table, you will only have 9 records in the small table, and you'd have to go searching through the entire big table to find the new 10th record.

If records don't get deleted from your database too often, a simple workaround to this problem is to keep maybe 20 records instead of 10 in the small table so that there are plenty of "substitutes" around, and only display the top 10 of them.

The algorithms used by Digg, Reddit, StackOverflow etc. are a little more complicated than this because the records of those sites also have a "rating" which is factored in (higher rated records are considered "hotter") but which can change with time. There might be a way to deal with this by scaling the running average according to the rating and updating the hot records table when the rating changes.

JavaScript vs PHP

Monday, October 13th, 2008

In order to implement Tet4 I learnt two new languages - JavaScript (or JScript, or ECMAScript - the language has a bit of an identity crisis) and PHP. Why PHP? It's installed on my web hosting server and seems to have a huge community of people writing code in it and pre-written scripts. It may not be the ideal language for writing web server apps, but it does seem to be the most well-supported.

JavaScript seems to be a very clean, pretty language. The whole closure thing seemed a bit weird at first, but once I understood that "class" is spelled "function" and "public" is spelled "this." I came to rather like it. I especially like how each scope has access to the variables from all the outer scopes - that saves a lot of messing about. It's very well integrated with the browser - manipulating the DOM feels very natural and not tacked on.

PHP, on the other hand, is a bit of a mess. It is as if its designers had a little spinner marked "C, C++, Perl" which they spun each morning to decide which language's features to copy that day. If JavaScript was sent by God, surely PHP was sent by the devil.

W3Schools has been an excellent reference for learning all this.

I have to say, though, that automatically promoting integers to double-precision floating point numbers on overflow is weird. On IE7, computing the value of 1111111111*1111111111 gives 1234567900987654400 (you can easily see this is wrong because it's even - the correct value is 1234567900987654321). This caused a rather hard-to-debug problem with my random number generator, which assumed that when multiplying two 32-bit integers together, at least the low 32 bits of the result would be correct. If you're going to automatically promote numbers, at least have the decency to use a multiple-precision integer library - there are lots around.
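One way to get correct low bits without a bignum library is to split each operand into 16-bit halves so that every intermediate fits exactly in a double. A sketch in PHP (which promotes overflowing integers to floats in just the same way on 32-bit builds) - this is the general trick, not what Tet4 actually does:

    <?php
    // Low 32 bits of a 32-bit x 32-bit product, using only operations that
    // are exact in an IEEE double (every intermediate stays below 2^53).
    function mul32lo($a, $b) {
        $al = fmod($a, 65536);        // low 16 bits of $a
        $ah = ($a - $al) / 65536;     // high 16 bits of $a
        // a*b mod 2^32 = (al*b + (ah*b mod 2^16) * 2^16) mod 2^32
        return fmod($al * $b + fmod($ah * $b, 65536) * 65536, 4294967296);
    }
    printf("%.0f\n", mul32lo(1111111111, 1111111111));  // exact low 32 bits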

Touch typing is irrelevant for programmers

Saturday, October 11th, 2008

Steve Yegge argues that touch typing is a fundamental skill that all programmers should have.

I disagree. Being able to type at a reasonable speed is important, but touch typing isn't. I never learnt to do "proper" touch typing, but I type pretty fast and I don't look at the keyboard when I type. I also don't always use the same finger for each key - I use the finger which happens to be nearest to the key I want to press. I'm not overly terse when writing comments, emails and documentation. Improving my typing speed wouldn't make me more productive - the bottleneck is my brain, not my fingers.

Touch typing was invented for typing dictated text on mechanical typewriters, which required a lot more force to operate than a computer keyboard. Programming is very different: working on code involves much more punctuation, much more moving around the document and much more reading. Keeping your hands in the touch-typing position means they are not in good places for punctuation and cursor movement, which is slower and leads to RSI.

Touch typists also dislike laptop keyboards. I prefer laptop keyboards to full-size keyboards - the keys are closer together and travel less far, which makes typing faster.

Programs to write programs

Wednesday, October 8th, 2008

When one needs to write a computer program that would be particularly difficult and (especially) repetitive to write, a useful pattern is to instead write a program that writes the program for you. Programming in assembly language is very tedious, so we have compilers. Configure scripts are fiddly and tend to be very similar, so we have autoconf. Makefiles are hard to get right, so we have automake. When building GCC, many files written in domain-specific languages (such as machine description files and opt files) are used to generate the C source code that is compiled to produce the final GCC. Even the C preprocessor is best thought of as a separate language from C itself, allowing some limited compile-time code generation.
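A toy example of the pattern in PHP (the User class and its field list are invented) - a script that writes the repetitive accessor code so a human doesn't have to:

    <?php
    $fields = array('name', 'email', 'created');
    $code = "class User {\n";
    foreach ($fields as $f) {
        $code .= "    public \$$f;\n";
        $code .= "    public function get" . ucfirst($f)
               . "() { return \$this->$f; }\n";
    }
    $code .= "}\n";
    file_put_contents('User.php', "<?php\n" . $code);   // the generated program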

I wonder how far it is sensible to take this. If the program that writes the program is itself difficult to write, can we write a program to write that one too? At some point it seems that you will reach a point of diminishing returns - adding layers of code generation makes things more complicated, not less. Perhaps the kinds of programs we can understand (and therefore write and debug) are limited by the number of layers of code generation which need to be used. Perhaps with better tools for understanding these layers (such as this), we can write programs that are more sophisticated and less buggy.

Another factor is that in many languages (especially compiled languages) it is quite difficult to write code that writes code - you have to make sure that a suitable compiler is installed, invoke it, etc. I hope that future compiled languages will include code generation libraries which make this easier.

Stupid image resizing

Wednesday, October 1st, 2008

It greatly annoys me when web pages don't resize images properly. In particular, one thing that there seems to be an epidemic of at the moment is web pages that embed images at one resolution and then use styles like "max-width: 100%" in a column narrower than the image. This makes the images look horrible because most browsers (at least IE7 and Firefox 2) resize the images by just dropping columns, causing diagonal lines and curves to be all wavy. At least Firefox 3 gets this right and resamples the image but it's still a waste of bandwidth.

Along similar lines, here is an interesting article about the interactions between gamma correction and image scaling. I hadn't thought about that before (my own image resizing routines always just assumed a linear brightness scale) but this has definitely opened my eyes.
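To make the point concrete: averaging two pixels in gamma space gives the wrong brightness. A minimal sketch (assuming a pure power-law gamma of 2.2 rather than the exact sRGB curve):

    <?php
    // Convert to linear light, average, convert back.
    function average_gamma_correct($a, $b, $gamma = 2.2) {
        $lin = (pow($a / 255, $gamma) + pow($b / 255, $gamma)) / 2;
        return (int)round(255 * pow($lin, 1 / $gamma));
    }
    echo average_gamma_correct(0, 255);   // 186 - not the naive 128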

How to fix Microsoft

Monday, September 29th, 2008

My post yesterday devolved a bit into ranting about how Microsoft's processes have the unintended consequence of increasing the complexity of the code in the long term. This raises the question: how could Microsoft change to avoid this?

Back in the 90s, Microsoft was generally considered to be an excellent place to work (much as Google is now). It still is to some extent, but today that "excellent" means more "sensible" than "exciting". In the 90s, Microsoft was often described as having a culture that resembled a collection of many small startup companies rather than one large corporation. I think that was pretty much gone by the time I started working there in 2001 - by then it was very corporate (sometimes embarrassingly so). Meanwhile, startup companies are doing the kinds of development that Microsoft can only dream about, on tiny budgets.

Reading Paul Graham's essays about startups has given me some hints about how these startups can be so effective in the way that Microsoft can't, as well as why the ones that fail do.

There is a whole spectrum of corporate cultures, from two or three guys in an apartment at one end (let's call it the left end for the sake of argument) to Microsoft at the other (the right end). The left (when successful) is great at motivating their employees to do great things, rapidly prototyping and turning over problems quickly. The right is great at (eventually) producing very large, complicated, highly polished pieces of software, being consistent and avoiding risk.

To really be successful you need to work at both ends of the spectrum (and everywhere in between). The ecosystem of people founding startups, writing some great software and then getting bought by Microsoft (or Yahoo, Google, whoever) has done some pretty great things. But does Microsoft need all those startups to exist externally to itself in order to do the startup-y things? Perhaps not, if it can return to the culture it had in the 90s.

Startups need only a couple of things to exist - talented people and money. Both of these Microsoft has in spades. But they also need those people to be very motivated, and Microsoft isn't a great motivator of people anymore - the difference in rewards between those putting in a reasonable minimum of effort and those going all out is (in the grand scheme of things) small, and it takes years of consistently hard work to rise above the crowd. The average employee knows it is very unlikely that they will ever make a big difference to anything. Certain philosophical positions adopted by Microsoft mean that it is no longer highly regarded by the geeks it needs the most.

One thing Microsoft could do to revitalize itself would be to incubate startups inside itself: allow employees to form small teams to solve some specific problem, and reward them with lots of money if they succeed. Even if the reward approaches what they might get from a successful startup, this would probably be much cheaper for Microsoft than buying the equivalent startup, assuming a reasonably high success rate. These internal startups could even compete with each other (especially if there are multiple approaches that seem promising).

These startups should be autonomous for the most part - they should be able to choose the software and programming languages that make the most sense for them. They should not have to coordinate with other teams who might be doing similar things, or worry about treading on the toes of others. They should be allowed to concentrate on writing great software. Unlike startups in the real world, they won't have to worry about getting funding (just getting a first version working in the agreed-upon time). They can have office space on campus or work in somebody's apartment if they prefer. If Paul Graham is to be believed, these factors will make the success rates of these internal startups much greater than those of startups in the real world.

While the resulting software might not meet Microsoft's standards for security, localization etc., these things are probably better done at the more corporate end of the spectrum, by those employees who prefer the stability and "sensibleness" that you get with that corporate culture. The left will quickly produce imaginative, highly competitive software and the right will polish it to corporate standards - the best of both worlds.

The downside for Microsoft of this approach would be the drain of talent from the "classic" product units, and possibly from the company altogether (if successful "startup" employees were rewarded with what Neal Stephenson describes as something rhyming with "luck you money"). But if that drain is happening already, maybe there's nothing to lose.