Archive for the 'computer' Category
Code complexity editor feedback
Monday, July 27th, 2009
It would be nice to have a text editor that shows (along with information like the line and column number the cursor is on) a metric for how complex the code in the buffer is. Then refactoring would give immediate feedback - a change (hopefully a decrease) in this number.
Game browser
Friday, July 24th, 2009
I was thinking some more about convergence of game engines recently, and started wondering what a cross between a web browser and a game engine would look like.
I think the real value in this would be lowering the barrier to entry for 3D game creation, much as the appearance of HTML and web browsers made it easy for anyone to create rich documents and publish them to the world.
The first thing we need is a very high level way of specifying simple 3D environments. I think the best interface for such a task ever conceived is that of the holodeck in Star Trek: The Next Generation. Captain Picard walks into the holodeck, which initially is an empty room. He says "Computer, create me a table" and a generic table appears. Next he says "make it pine, 2 inches taller, rotate it 45 degrees clockwise and move it 6 feet to the left". Iterating in this way, he converges on the design he had in mind. Seeing the intermediate results immediately allows him to determine what's "most wrong" and therefore needs fixing first, and may also provide inspiration in the event that he doesn't really know what he wants just yet.
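To make that concrete, here is a very rough sketch of what an iterative, holodeck-style scene API might look like - everything here (the Scene and Object types, the method names, the unit conversions) is invented purely for illustration:

```
// Hypothetical sketch of an iterative, holodeck-style scene API.
// All names here are invented for illustration.
#include <memory>
#include <string>
#include <vector>

struct Object {
    std::string kind;             // e.g. "table", looked up in a shared model database
    std::string material;         // e.g. "pine"
    double height = 1.0;          // metres
    double rotationDegrees = 0.0;
    double x = 0.0, y = 0.0, z = 0.0;
};

class Scene {
public:
    // "Computer, create me a table" - start from a generic object.
    Object& create(const std::string& kind) {
        objects_.push_back(std::make_unique<Object>());
        objects_.back()->kind = kind;
        return *objects_.back();
    }
private:
    std::vector<std::unique_ptr<Object>> objects_;
};

int main() {
    Scene holodeck;
    // Iteratively refine, seeing the result after each step.
    Object& table = holodeck.create("table");   // "Computer, create me a table"
    table.material = "pine";                    // "make it pine"
    table.height += 0.05;                       // "2 inches taller" (~0.05 m)
    table.rotationDegrees += 45.0;              // "rotate it 45 degrees clockwise"
    table.x -= 1.8;                             // "move it 6 feet to the left" (~1.8 m)
}
```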
The difficult thing about this interface is that one needs to have a big database of objects to start with - tables, trees, bees, books and so on. One also needs a big database of textures, a big database of transformations and so on. In fact, there are all sorts of databases which would come in handy - animations, AI routines, material properties, object behaviours. The obvious "Web 2.0" way to populate these databases is to encourage people to publish the things they create for their own games in such a form that they can be used by other people for their games. I don't think the software should necessarily go out of its way to forbid people from making content that can only be used in their own game, but making the default be "this is free to use" would probably help a lot.
If you're creating a website today you can easily find lots of free backgrounds, buttons, menus, applets and so on that have been created by the community. With the right encouragement, a similar community could form around creating game things. Put a search engine on top of this community's effort so that when you search for "chair" you get a gallery of models to choose from, and you're well on your way to the holodeck interface.
To create compelling games, one needs more than just a decorated 3D space to wander around in - there need to be challenges, there needs to be something at stake. The web model breaks down a bit here - since you can get from anywhere to anywhere else just by typing in the appropriate URL, what's to stop players from just transporting themselves to the holy grail room?
I think that any non-trivial game would generally involve writing some code (possibly in an ECMAScript-ish sort of language). That code would need to have certain powers over the player, including associating information with them (like "has the player been through this obstacle yet?") and the ability to move the player (which could be done by moving a portal through the space that the player's avatar is occupying) to send them back to the beginning if they haven't completed a required obstacle or if they are "killed" by the minotaur. In computer games, death is of course never permanent, so it can be effectively emulated by teleportation.
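Here is a very rough sketch of the kind of per-player state and forced teleportation such code would deal with - the post imagines an ECMAScript-ish scripting language, but for concreteness this is C++, with every name invented for illustration:

```
// Hypothetical sketch of per-player state and forced teleportation.
// Types and names are invented for illustration.
#include <map>
#include <string>

struct Vec3 { double x, y, z; };

struct PlayerState {
    std::map<std::string, bool> flags;   // e.g. "passed_obstacle_1"
    Vec3 position;
};

const Vec3 kStartOfLevel{0, 0, 0};

// Imagined hook, called by the engine when a player reaches the grail room.
void onEnterGrailRoom(PlayerState& player) {
    if (!player.flags["passed_obstacle_1"]) {
        // Not allowed in yet: "kill" the player by teleporting them back.
        player.position = kStartOfLevel;
    }
}
```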
Another web principle that I think this software should embody is decentralization. Someone who wants to create a website has many options for hosting it (depending on their needs), one of which is to run their own web server. A major problem with a system like Second Life is that there is a central server, which is a single point of failure and a monopoly of sorts over the game universe. Virtual "land" is not free in Second Life; it is leased from the central authority. And if that central authority decides to raise its prices, censor content it finds objectionable or has a server failure, there is nothing that the users can do about it. I suspect that this is a limiting factor in SL's growth.
If no-one "owns" this universe, who decides how it "fits together" (more concretely, what's to stop everyone saying "our site is directly north of Yahoo!")? I think in this scheme one has to give up having a single Hausdorff space and instead have many worlds connected by portals. The owner of a space decides what portals go in it, how large those portals are, what their positions and orientations are, how they move and where you end up when you go through them. Portals are not necessarily bi-directional - on going through one, one cannot necessarily get back to where one was just by retracing one's steps. They are more like links on a website than the portals in Portal. Mutual portals could be constructed though, if two gamemasters cooperate with each other to do so.
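A portal description might then be something like the following sketch (field names invented for illustration; the destination is just a URL, in keeping with the web analogy):

```
// Hypothetical sketch of how a one-way portal might be described.
#include <string>

struct Vec3 { double x, y, z; };

struct Portal {
    // Where the portal sits in the owner's space.
    Vec3 position;
    Vec3 orientation;             // e.g. Euler angles, in degrees
    double width, height;
    // Destination: another world (identified by URL) and a landing point in it.
    std::string destinationUrl;   // e.g. "http://example.com/worlds/maze"
    Vec3 landingPosition;
    Vec3 landingOrientation;
    // One-way by default; a return portal would be a separate Portal owned by
    // the destination world's gamemaster.
};
```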
Ideally a portal should be rendered in such a way that you see what's on the other side - this just means that the destination "world" would also be loaded by the client software and rendered from the correct perspective (though it should not be able to affect the player).
I think it would be great fun to wander around and explore different game worlds this way. It might be confusing sometimes, but if a place is too confusing people probably won't create many portals to it, so the system would be self-governing to some extent.
The system I've described could run with a standard HTTP server to implement a wide variety of single-player games. But where the internet really comes into its own is real-time interaction with other people - (massively) multiplayer games. Here, things become more complicated because the movements and actions of each player need to be sent to all the other players, and conflicts need to be resolved (who actually got to that loot first?). These problems have all been solved in existing multiplayer games - one just needs a server which does some of this "real time" stuff as well as serving the content.
While games would probably be the initial "killer application" for this, I'm sure all sorts of other interesting applications would emerge.
Shortly after I originally wrote this, Google announced their O3D API which seems like it has at least some of the same aims, though at the moment I think it's really just an in-browser 3D API rather than a 3D version of the web. However, it'll be interesting to see what it evolves into.
Vista without Aero
Thursday, July 23rd, 2009
A while ago, I switched my Vista laptop from using Aero glass (which it is capable of) to the classic Windows 98 theme that I've been using (with few modifications) for over a decade. That one simple change made the machine so much faster that I decided to keep glass off, despite glass being that much prettier. I think part of the reason is that I tend to keep a lot of windows open at once, many of them maximized. Because the Desktop Window Manager keeps the bitmap for each open window, this uses up quite a lot of memory.
Unfortunately disabling DWM still doesn't give me back the ability to use full-screen DOS applications, which is something I occasionally miss. Still, I can use DOSBox for those tasks now.
Next time I replace my laptop, I'll have to try this experiment again and see if the performance difference has become negligible.
Steps to fix a bug properly
Wednesday, November 5th, 2008
Fixing a bug in a computer program isn't always easy, but even when it seems easy there are actually a lot of steps one needs to go through to make sure it's fixed properly.
First of all, you need to make sure you have the conceptual framework to understand whether there is actually a bug or not. This isn't usually a problem, but there have been a few times in my career when I've started working on a completely new and unfamiliar piece of software and haven't been sure what it's supposed to do, how it's supposed to work or whether any given piece of behaviour is a bug or not.
Secondly, you actually need to determine if the reported problem is really a bug. While we would like it if software always followed the principle of least surprise, sometimes it's unavoidable that there are things which seem like bugs at first glance but which are really by design.
Thirdly, you need to find the defect that actually caused the problem. Just fixing up the symptoms is usually not the best way, because the defect might manifest again in a different way. Even if it doesn't, there may be performance and maintainability implications in having a problem that occurs internally and is suppressed. This is often the most difficult step to do correctly.
Fourthly, you need to determine what the correct fix is. For most bugs this is pretty easy once you've found the defect - it's often just a localized typo or obvious omission. But occasionally a bug crops up for which the correct fix requires substantial rewriting or even architectural redesign. Often (especially at places like Microsoft) in such a case the correct fix will be avoided in favour of something less impactful. This isn't necessarily a criticism - just an acknowledgement that software quality sometimes must be traded off against meeting deadlines.
Fifthly, one should determine how the defect was created in the first place. This is where the programmers who just fix bugs diverge from the programmers who really improve software quality. This step is usually just a matter of spelunking in the source code history for a while, and good tools can make the difference between this being a simple operation or a search for a needle in a haystack. Unfortunately such good tools are not universal, and this use case isn't always high priority for the authors of revision control software.
Sixthly, one should determine if there were other defects with the same root cause. For example, if a particular programmer some time ago got the wrong idea about (for example) the right way to call a particular function, they might have made the same mistake in other places. Those places will also need to be fixed. This step is especially important for security bugs, because if an attacker sees a patch which fixes one defect, they can reverse engineer it to look for unfixed similar defects.
Seventhly, one should actually fix any such similar defects that are found.
The eighth and final step is to close the loop by putting a process in place which prevents other defects with the same root cause. This may or may not be worth doing, depending on the cost of that process and the expected cost of a bug in the software. When lives are at stake, such as in life support systems and space shuttle control software, this step is really critical, but if you're just writing software for fun you'll probably only do it if finding and fixing those bugs is less fun than creating and following that process.
Non-local control structures
Monday, November 3rd, 2008
Most of the time in computer programming, causes are linked to effects by code at the "cause" point - i.e. if A should cause B then the routine that implements A should call the routine that implements B.
Sometimes, however, there is a need for effects to happen as a result of causes which don't know about those effects. The obvious example is COMEFROM, but there are serious examples as well. Breakpoints and watchpoints when you're debugging are one example; database triggers are another.
A more subtle example is the humble destructor in C++ (which I have written about before) - its effect is non-local in the sense that if you construct an automatic object in a scope, code will automatically run when control passes out of that scope. It's still a local effect in that the cause and the effect are in the same scope, but it's non-local in the sense that there is no explicit code at the point where the destructor is run.
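Here is a minimal illustration of that non-local effect:

```
// A destructor's effect is non-local: no code at the end of the scope
// mentions the cleanup, yet it runs when control leaves the scope.
#include <cstdio>

struct LogScope {
    const char* name;
    explicit LogScope(const char* n) : name(n) { std::printf("enter %s\n", name); }
    ~LogScope() { std::printf("leave %s\n", name); }
};

void example(bool earlyReturn) {
    LogScope guard("example");
    if (earlyReturn)
        return;                  // "leave example" still prints here...
    std::printf("doing work\n");
}                                // ...and here, with no explicit call at either point.

int main() {
    example(true);
    example(false);
}
```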
Why writing web applications is fiddly
Sunday, November 2nd, 2008
When you want to add some functionality to a web application, there are many pieces you have to change. First you have to add some interface element (a button maybe) to a page. Then you have to add some client-side code to get this button to do something. Most of the time you'll want that action to have some server-side effect as well, so you have to make that button send a request and implement that request in the server-side code. The server will generally want to send something back to the client based on the result of that request, so you have to figure out what that something is and make sure the client does the right thing with it (especially fiddly if the client is AJAX-based). That response may itself contain another possible user action, so each new feature can end up creating a whole new request/response conversation.
As well as just writing this conversation, one has to consider all the things that can go wrong (both accidentally and deliberately). Requests and responses might not reach their destinations. If they do get there, they might be reordered by the network along the way. Requests might be fraudulent, and so on.
Complexity metrics for computer programs
Saturday, November 1st, 2008
Trying to measure the complexity of computer programs is really difficult, because just about any metric you come up with can be gamed in some way.
Cyclomatic complexity is one possible metric, but it only counts loops and branches - it doesn't tell you anything about how complex the linear parts of your code are. Since expressions like "a ? b : c" can be rewritten as "(!!a)*b + (!a)*c", one can also game this metric.
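For example, the following two functions compute the same thing, but the second hides the decision inside arithmetic and so scores a lower cyclomatic complexity (the function names are mine, just for illustration):

```
// Same behaviour, different cyclomatic complexity.
#include <cstdio>

int pickBranching(bool a, int b, int c) {
    return a ? b : c;              // one decision point
}

int pickBranchless(bool a, int b, int c) {
    return (!!a) * b + (!a) * c;   // no decision points, same result
}

int main() {
    std::printf("%d %d\n", pickBranching(true, 10, 20), pickBranchless(true, 10, 20));
    std::printf("%d %d\n", pickBranching(false, 10, 20), pickBranchless(false, 10, 20));
}
```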
An often-used one is the number of lines of source code. But most languages let you arrange your source code independently of the number of lines, so you can put it all on one line or put one token on each line if you're feeling particularly perverse.
Number of characters of source code is a little better but there is still scope for variation in spaces, comments and length of variable names.
We can eliminate those things the same way the compiler or interpreter does and just count the number of tokens - i.e. add up the total number of instances of identifiers, literals, keywords and operators.
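As a rough sketch of what such a count might look like - a deliberately crude lexer invented here for illustration, not taken from any real compiler:

```
// Rough sketch of a token-count metric: a crude lexer that counts
// identifiers, literals, keywords and operators while skipping
// whitespace and line comments.
#include <cctype>
#include <cstdio>
#include <string>

int countTokens(const std::string& src) {
    int tokens = 0;
    size_t i = 0;
    while (i < src.size()) {
        char c = src[i];
        if (std::isspace(static_cast<unsigned char>(c))) { ++i; continue; }
        if (c == '/' && i + 1 < src.size() && src[i + 1] == '/') {     // line comment
            while (i < src.size() && src[i] != '\n') ++i;
            continue;
        }
        if (std::isalpha(static_cast<unsigned char>(c)) || c == '_') { // identifier or keyword
            while (i < src.size() && (std::isalnum(static_cast<unsigned char>(src[i])) || src[i] == '_')) ++i;
        } else if (std::isdigit(static_cast<unsigned char>(c))) {      // numeric literal
            while (i < src.size() && std::isalnum(static_cast<unsigned char>(src[i]))) ++i;
        } else {                                                       // operator or punctuation
            ++i;
        }
        ++tokens;
    }
    return tokens;
}

int main() {
    std::printf("%d\n", countTokens("int x = a ? b : c; // choose"));  // prints 9
}
```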
This metric can still be gamed, but only at the expense of the quality of the code itself. For instance you could manually unroll loops, or sprinkle in branches that are never taken.
An interesting refinement might be to run some kind of compression algorithm over this lexed code to eliminate redundancy. Such a compression algorithm would essentially automatically refactor the code by finding and extracting common sequences. I'm not sure if it would generally be desirable to use such an algorithm to automatically refactor one's source code, but it would certainly be interesting to see its suggestions - I'm sure many programs have repeated sequences that their authors never spotted. If there are sections that should be identical but aren't because there is a bug in one of them, such a tool might even help to uncover such bugs.
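A very crude approximation of this idea - nothing like a real compression or refactoring algorithm, just an illustrative sketch - is to count repeated n-grams in the token stream and report those that occur more than once:

```
// Crude sketch: find token sequences of a given length that appear more than
// once, as candidates for extraction into a shared routine.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

std::map<std::string, int> repeatedSequences(const std::vector<std::string>& tokens, size_t n) {
    std::map<std::string, int> counts;
    if (tokens.size() < n) return counts;
    for (size_t i = 0; i + n <= tokens.size(); ++i) {
        std::string key;
        for (size_t j = 0; j < n; ++j) key += tokens[i + j] + " ";
        ++counts[key];
    }
    // Keep only sequences seen more than once.
    for (auto it = counts.begin(); it != counts.end();) {
        if (it->second < 2) it = counts.erase(it);
        else ++it;
    }
    return counts;
}

int main() {
    std::vector<std::string> tokens = {"x", "=", "a", "+", "b", ";",
                                       "y", "=", "a", "+", "b", ";"};
    for (const auto& [seq, count] : repeatedSequences(tokens, 4))
        std::printf("%d x  %s\n", count, seq.c_str());
    // Prints, among others: 2 x  = a + b
}
```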
It's hard to buy a wifi card that works with Linux
Friday, October 31st, 2008
I recently reorganized my home wireless network a bit, and the AP that I had been using connected to my Linux box stopped working. I wanted to replace it with an internal card but it's annoyingly difficult to find a wifi card that works well with Linux.
Various chipsets are supported with Free drivers but the trouble is that you can't buy a card by chipset - you have to pick a card, research it to try to figure out what the chipset is and then see if it is supported. Even then there's no guarantee because many manufacturers make several completely different cards with different chipsets and give them the same model number (which kind of defeats the point of a model number if you ask me). And the online shopping places don't tell you the revision number of the card you're buying.
Eventually I gave up trying to find one with Free drivers and settled on this one which people seemed to be having success with. Indeed Ubuntu 8.04 recognized it straight away and connected to my network. Still, it's annoying that it's so difficult to buy a card for which Free drivers exist.
A stack of refactorings
Wednesday, October 29th, 2008
I'm not sure if this is a bad habit of mine or if other programmers do this too. Sometimes after having partially written a program I'll decide I need to make some change which touches most of the code. So I'll start at the top of the program and work my way down, making that change wherever I see it needed. Partway through doing this, however, I'll notice some other similarly impactful change I want to make. Rather than adding the second refactoring to my TODO list and continuing with the first, I'll go right back up to the top of the program and work my way down again, this time making changes wherever I see either of the refactorings. I reckon I've had as many as 5 refactorings going on at once sometimes (depending on how you count them - sometimes an earlier refactoring might supersede a later one).
Keeping all these refactorings in my head at once isn't as big a problem as it might sound, since looking at the code will jog my memory about what they are once I come across a site that needs to be changed. And all this reading of the code uncovers lots of bugs.
The downside is that I end up reading (and therefore fixing) the code at the top of my program much more than the code further down.
Game engines will converge
Saturday, October 25th, 2008
At the moment, just about every computer game published has its own code for rendering graphics, simulating physics and so on. Sometimes this code is at least partially reused from game to game (e.g. Source), but each game still comes with its own tuned and updated version of it.
I think at some point the games industry will reach the point where game engines are independent of the games themselves. In other words, there will be a common "language" that game designers will use to specify how the game will work and to store assets (graphics, models, sound, music etc.) and there will be multiple client programs that can interpret this data and be used to actually play the game. Some engines will naturally be more advanced than others - these may be able to give extra realism to games not specifically written to take advantage of it. And games written for later engines may be able to run on earlier ones with some features switched off.
Many classes of software have evolved this way. For example, in the 80s and early 90s there were many different ways of having rich documents and many such documents came with their own proprietary readers. Nowadays everybody just uses HTML and the documents are almost always independent of the browsers. As far as 2D games are concerned, this convergence is already happening to some extent with Flash.
3D games have always pushed hardware to its limits, so the overhead of having a game engine not tuned for a particular game has always been unacceptable. But as computer power increases, this overhead vanishes. Also, game engines are becoming more difficult to write (since there is so much technology involved in making realistic real-time images) so there are economies of scale in having a common engine. Finally, I think people will increasingly expect games to be multi-platform, which is most easily done if games are written in a portable way.
If game design does go this way, I think it will be a positive thing for the games industry - it will mean that more of the resources of the game can be devoted to art, music and story-telling. This may in turn open up whole new audiences for games.