RTF to HTML converter 1.01

Stupid me, I didn’t anchor the label at the bottom-right correctly, causing it to overlap the text areas when the app window was resized. New version is up.

Rtf2Html Version 1.01
(Requires .NET Framework 2.0)


RTF to HTML converter

I tried to find a lightweight app that converts RTF text to HTML, mainly so that I could post formatted code from Visual Studio on the web. Unfortunately, I couldn’t find one, and doing the conversion through MS Word is way too time-consuming, as I would have to save to an HTML page and then go extract the relevant HTML. So I ended up making my own …

Rtf2Html Version 1.00
(Requires .NET Framework 2.0)

Note: The only formatting options handled are: text color, bold, italic, underline, and strikethrough.


The namespace-enum trick

Enums in C++ can be a pain in the ass once a project starts to grow. Enums are typically defined globally or within a fairly wide scope, so there’s a pretty high probability that you’ll get name clashes between enums (remember that in C++ the enum block is not a scope, so enum members must be unique across multiple enum declarations) unless, as with the DirectX or Win32 APIs, you give your enum members names which are detailed, prefixed, and very ugly looking. For example, here are some of the enum members for Direct3D render states:

typedef enum _D3DRENDERSTATETYPE {
    D3DRS_ZENABLE = 7,
    D3DRS_FILLMODE = 8,
    D3DRS_SHADEMODE = 9,
    D3DRS_ZWRITEENABLE = 14,
    D3DRS_ALPHATESTENABLE = 15,
    ...

Truthfully, the DirectX enums aren’t that bad, and most programmers (including myself) have seen worse, but is there a better way?

Recently, I’ve discovered a cool trick that I’ve started using in most of my C++ code. Just wrap the enum declaration inside of a namespace. So, as an example, you might have something like:

namespace Color
{
    enum Color
    {
        Red,
        Green,
        Blue,
        Orange,
        Purple
    };
}

… and, because of the namespace wrapper, it’s also perfectly valid to have the following within the same scope as the code above (note the Orange enum member in both declarations):

namespace Fruit
{
    enum Fruit
    {
        Apple,
        Orange,
        Purple
    };
}

You access an enum member by simply resolving the namespace first:

Color::Red
Color::Orange
Fruit::Orange

Simple and elegant!

The only oddity is that declaring a variable of the enum type looks like:

Fruit::Fruit

or

Color::Color

… but overall I think that’s a small price to pay.
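For completeness, here’s a small sketch of what end-to-end usage might look like (the typedef is just one optional way to tidy up the verbose type name, not something the trick requires):

Color::Color shirt = Color::Red;      // a variable of the enum type
Fruit::Fruit snack = Fruit::Orange;   // no clash with Color::Orange

// Reopening the namespace to add a typedef hides the Color::Color oddity:
namespace Color
{
    typedef Color Type;   // callers can now write Color::Type
}

void Paint(Color::Color c);   // the enum works as a parameter type as usual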

I am very surprised that this little trick isn’t mentioned much, and in the few places where I have seen it mentioned (such as here), it is glossed over and its significance is not really highlighted.

EDIT: Note that C++11 introduces scoped enums which renders this trick unnecessary, if you have a compliant compiler.
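For reference, a C++11 version of the examples above would look like this:

enum class Color { Red, Green, Blue, Orange, Purple };
enum class Fruit { Apple, Orange, Purple };

Color c = Color::Red;     // scoped access, just like the namespace trick
Fruit f = Fruit::Orange;  // no clash, and no Color::Color-style declarations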


LCD monitors, resolution dependence, and aspect ratio issues

A few days ago I started thinking about 2d game engines and games that run at fixed (and typically lower) resolutions (e.g. StarCraft). Many older 2d games have no choice when it comes to running at a fixed resolution, because if the game ran at a higher resolution one of two things would have to occur:

(1) A larger portion of the game world is rendered. This can break many of the gameplay mechanisms and/or decrease the difficulty of the game, as the player is able to see more of the game world without moving the camera. This can be especially disastrous for multiplayer games where different players may be running the game at different resolutions. Another downside here is that units (which are 2d sprites), although still in proportion with the game world, appear smaller. It should be noted, however, that some games (typically strategy games where you have a fog-of-war, such as Age of Empires) have supported multiple resolutions in this way.

(2) The game’s rendered output is scaled up from its original resolution. The higher the new resolution and the lower the original resolution, the worse the game looks. This is what LCD screens do themselves, since they have a fixed native resolution.

Thinking about 2d games and the fixed-resolution problem got me wondering how a modern “2d” game engine (2d only in terms of gameplay), targeting contemporary hardware, could be designed to run at any resolution. If all the units are 3d models, the sprites are reserved for things such as particle fx (which typically scale well), and the environment is either a 3d terrain or scalable 2d tiles, the resolution issue seems somewhat solved, as you can go to a higher or lower resolution gracefully … well, yes and no; there’s one other thing to consider that will affect any type of rendering engine …

As LCD monitors continue to grow in popularity, the new models have a certain oddity about them: they don’t conform to the historically popular 4:3 aspect ratio that many developers have depended upon. The popular resolutions now seem to be 1280×1024 (5:4) and wide-screen resolutions such as 1920×1200 (16:10). Of course, if you want to target, or eventually target, output to an HDTV, you also have to worry about the HD resolutions, 1920×1080 or 1280×720 (16:9). The problem with multiple aspect ratios is that, unlike multiple resolutions at a given aspect ratio, you can’t gracefully go from one aspect ratio to another and retain the look of what you’re rendering: you have to letterbox/pillarbox the output, stretch the output, or modify the field-of-view (FOV).
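To make the letterbox/pillarbox option concrete, here’s a small C++ sketch (the function and names are mine, not from any particular engine) that fits fixed-aspect-ratio content into an arbitrary screen:

struct Viewport { int x, y, width, height; };

// Fit content with aspect ratio contentAspect (width / height) into a
// screenW x screenH screen. The result is letterboxed (bars on top and
// bottom) when the screen is too tall for the content, and pillarboxed
// (bars on the sides) when the screen is too wide.
Viewport FitViewport(int screenW, int screenH, float contentAspect)
{
    int w = screenW;
    int h = static_cast<int>(screenW / contentAspect);
    if (h > screenH)   // content too tall at full width: fit to height
    {
        h = screenH;
        w = static_cast<int>(screenH * contentAspect);
    }
    Viewport vp;
    vp.x = (screenW - w) / 2;   // center horizontally
    vp.y = (screenH - h) / 2;   // center vertically
    vp.width  = w;
    vp.height = h;
    return vp;
}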

Modifying the FOV seems to be the “right thing” to do, but this can lead to problems similar to (1) above, where players with wide-screen monitors can see more of the game world than those with normal monitors, giving them an unfair advantage.

So what’s the solution? I’m not sure. I came across this forum thread a while back (the only one I could find through Google) where two posters reply that the stretching really isn’t noticeable (I don’t have a wide-screen monitor, so I can’t give an opinion). FOV may or may not be an issue. Some people find letterboxing and pillarboxing annoying (pillarboxing does annoy me, but I’m comfortable with letterboxing).

Finally, one solution not mentioned is smart stretching. The HDTV my parents have does a smart-stretch on standard-definition content where, I think, it stretches the pixels farther from the center more than the pixels closer to the center. It actually looks really good and greatly minimizes (to the point where it’s not noticeable) the “squashed-down” look you’d get from running a standard stretch algorithm on a 4:3 picture.
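I have no idea what algorithm the TV actually runs, but here’s a C++ sketch of the general idea as I understand it (the quadratic weighting is my guess and would need tuning):

// Map a normalized horizontal coordinate x in [-1, 1] (0 = screen
// center) to a stretched coordinate. totalStretch is the overall
// widening factor, e.g. (16/9) / (4/3) ~= 1.33 for a 4:3 picture on a
// 16:9 screen. Pixels near the center barely move; pixels near the
// edges absorb most of the stretch.
float SmartStretch(float x, float totalStretch)
{
    float weight = x * x;   // 0 at the center, 1 at the edges
    float scale  = 1.0f + (totalStretch - 1.0f) * weight;
    return x * scale;       // output spans [-totalStretch, totalStretch]
}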

That’s all for now 🙂


firesync and the daylight savings time bug

Seems like I’m talking too much about firesync in this blog, but anyway, another rare but serious bug popped up in firesync on this special day. However, unlike the previous bug (see previous blog entry), this one doesn’t seem to be unique to firesync; it’s the daylight savings time bug (note: this article is informative, but some parts are not written too clearly and can be difficult to understand).

The bug popped up when I noticed firesync seemed to think that all of my files were out of sync. I looked at the file modification times of a few files on the two PCs and noticed that the modification, creation, and access times were off by 1 hour. The reason for this, as I discovered, is that the filesystem of one system was FAT32 while the other was NTFS. FAT32 time stamps are stored as local time (so, inherently, DST information is contained within the time stamp). NTFS time stamps are based on UTC time, and the local time is calculated as an offset from the UTC time. The problem (i.e. the daylight savings bug) results from the fact that Microsoft chose a somewhat odd way of handling the UTC-to-local time conversion: instead of using the DST state in effect when the file was stamped (which would enable a correct conversion), NTFS systems add 1 hour to the UTC time during the conversion if daylight savings is currently in effect!

So, the solution seems to be to do all time stamp comparisons based on UTC time. Ahh, more work for firesync 2.
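A minimal Win32 sketch of what that might look like (the helper names are mine): FILETIME values returned by the file attribute APIs are UTC-based, so comparing them directly sidesteps the local-time/DST conversion entirely.

#include <windows.h>

// Fetch a file's last-write time as a UTC FILETIME.
bool GetLastWriteTimeUtc(const wchar_t* path, FILETIME* out)
{
    WIN32_FILE_ATTRIBUTE_DATA data;
    if (!GetFileAttributesExW(path, GetFileExInfoStandard, &data))
        return false;
    *out = data.ftLastWriteTime;
    return true;
}

// CompareFileTime returns -1, 0, or +1, like strcmp.
bool IsNewer(const FILETIME& a, const FILETIME& b)
{
    return CompareFileTime(&a, &b) > 0;
}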


firesync and the copy-delete-rename problem

An interesting problem with firesync popped up a while ago. I was syncing files on my laptop and a file didn’t get updated. Thinking this was odd, I tried to sync again, and got the same problem. So I looked at the file modification times of the files on the 2 computers and noticed the problem. What happened was…

1. I had a file (we’ll call it fileA) and made a copy of it (we’ll call the copy fileB).
2. I deleted a file (we’ll call it fileC).
3. I renamed fileB to the name of fileC (thus replacing fileC with fileB).

Unfortunately, when fileB was made, Windows set fileB’s modification time to that of fileA, and fileA had a modification time <= the modification time of fileC. So when firesync compared the two sides, it looked like fileC didn’t need to be updated.

It’s a weird and complex little problem, but the good news is that when the file copy was done, Windows gave fileB a newer creation time. So it’s a somewhat easy fix that’ll be implemented in the next version of firesync.
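I haven’t written it yet, but the fix will presumably look something like this sketch: treat a file’s “effective” time as the newer of its creation and last-write times.

#include <windows.h>

// The copy-delete-rename sequence leaves the new file with an old
// last-write time but a fresh creation time, so comparing on whichever
// of the two stamps is most recent catches this case.
FILETIME EffectiveTime(const WIN32_FILE_ATTRIBUTE_DATA& data)
{
    const FILETIME& created = data.ftCreationTime;
    const FILETIME& written = data.ftLastWriteTime;
    return (CompareFileTime(&created, &written) > 0) ? created : written;
}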


firesync

I finally got around to creating firesync, a file synchronization utility. I’ve been using (and testing) it over the past few days and so far I’m really happy with how it’s turned out.

I previously messed around with file synchronization when I created Doppler. Doppler didn’t quite live up to my expectations. Its architecture and functionality were overly complex, and the result was a fairly unstable and bug-ridden piece of software. In contrast, firesync has a much simpler, or I guess I should say more structured, architecture, similar functionality, and is much more stable and usable.

One of my goals for Doppler that didn’t translate over to firesync was the idea that files should automatically be sync’d while, at the same time, the application remains invisible and doesn’t interrupt the user’s workflow. The way I attempted to do this in Doppler was to compute an md5 checksum of each file and compare it to a checksum computed earlier (that was stored in memory). Putting the thread responsible for this to sleep for a few ms after each checksum was computed was my way of trying to maintain low CPU usage. However, there are two major problems with this approach. The first is that computing checksums requires reading a ton of data off the hard drive, and there was a very large, very noticeable slowdown in system performance (since the disk cache gets churned as you’re jumping around the disk computing checksums). The second problem is that, even for a handful of files, it takes an incredibly long time before all the files are sync’d, since the thread is sleeping so much in order to keep CPU usage low.

firesync doesn’t compute checksums and only uses the file’s last modification date/time to figure out if one file is older than another. I was actually surprised at how fast I was able to get the last modification date/time, as well as other file attributes (it looks like Windows caches them), and it makes the sync operation very fast. In addition, I threw the idea of automatic sync’ing and, consequently, low CPU usage out the window. However, given how fast you can get the last modification date/time, automatic file sync’ing might be something fun to try in the future.
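For the curious, part of why it’s so fast is that a plain directory scan hands back each file’s timestamps without ever opening the files. A minimal sketch (error handling omitted):

#include <windows.h>
#include <stdio.h>

// FindFirstFile/FindNextFile return timestamps and sizes straight out
// of the directory scan -- no file handles needed.
void ListLastWriteTimes(const wchar_t* pattern)   // e.g. L"C:\\src\\*"
{
    WIN32_FIND_DATAW fd;
    HANDLE h = FindFirstFileW(pattern, &fd);
    if (h == INVALID_HANDLE_VALUE)
        return;
    do
    {
        SYSTEMTIME st;
        FileTimeToSystemTime(&fd.ftLastWriteTime, &st);   // UTC
        wprintf(L"%s  %04u-%02u-%02u %02u:%02u\n", fd.cFileName,
                st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute);
    } while (FindNextFileW(h, &fd));
    FindClose(h);
}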

Anyway, so far I’m really happy with firesync in almost every respect – interface, functionality, performance, etc. – and I’m considering taking the lessons I’ve learned and doing a new backup utility, as Shoebox Backup hasn’t quite lived up to my expectations either; but that’s a while away, it’s back to game development work for now.

oh, and of course, a screen shot…



Zerospace lighting model

Work on zerospace has finally picked up in the past few weeks, and a few days ago I posted the first screenshot in the zerospace blog. Not much to see yet, just a background, a starfield, and an untextured model. However, I did put in some major work on the lighting system, which is visible in the screenshot (to a certain extent; the specular highlights are dull and, although they’re there, they’re only apparent when the model rotates).

Anyway, what I wanted to discuss in this post is the lighting model, since I think it’s fairly unique and it gives some amazing results.

Ambient
The ambient component is done per-vertex (i.e. all computations are done in the vertex shader and the color is interpolated over the face of the triangle) and consists of 7 directional lights hitting the model from various directions. It’s sort of an ultra-simplified version of ambient occlusion mapping.
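A minimal C++ sketch of the idea (the seven light directions and colors here are placeholders, not my actual values):

#include <algorithm>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Per-vertex ambient: accumulate N dot L for a small fixed set of
// directional lights (directions normalized, pointing toward each
// light), approximating light arriving from all around the model.
Vec3 AmbientTerm(const Vec3& normal, const Vec3 dirs[7], const Vec3 colors[7])
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 7; ++i)
    {
        float ndotl = std::max(0.0f, Dot(normal, dirs[i]));
        sum.x += colors[i].x * ndotl;
        sum.y += colors[i].y * ndotl;
        sum.z += colors[i].z * ndotl;
    }
    return sum;
}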

Diffuse
The diffuse component is sampled from a 2d texture. This texture is a scaled-down and heavily blurred version of the background texture (done offline; a heavy blur such as the one required would be too expensive in the pixel shader). Scaling it down is simply a matter of performance, as the background texture is large (2048×2048). Blurring (I do a Gaussian blur) is a trick to get the diffuse lighting from the scene (this is a bit difficult to explain; I’ll try to do so in another post).

So how is the texture mapped onto the 3d model? Spherical texture mapping! (See this article for an explanation.) Note that the normal vector used to compute the texture coordinates is the normal vector transformed by the world matrix (I don’t want to map the diffuse lighting onto the model in model space; that would make the lighting “static” – i.e. when the model rotates, the lighting values wouldn’t change to reflect the change in orientation).
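For reference, one common spherical-mapping formulation (not necessarily the exact one in the article) derives the coordinates from the world-space normal like so:

#include <cmath>

// Derive (u, v) in [0, 1] from a normalized world-space normal, so the
// sampled diffuse lighting follows the model as it rotates.
void SphereMapUV(float nx, float ny, float* u, float* v)
{
    const float PI = 3.14159265f;
    *u = std::asin(nx) / PI + 0.5f;   // asin returns [-PI/2, PI/2]
    *v = std::asin(ny) / PI + 0.5f;
}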

Specular
The specular component is done per-pixel, because per-vertex specular highlights usually look terrible (there were some other issues in general, but this was the main one for zerospace). For the specular lighting there is really only 1 light, a single directional light pointing down the z-axis. However, there are 8 view vectors, and the specular computation is done between the single light and each of the 8 view vectors. I came up with this hack because I found experimenting with multiple lights more difficult than experimenting with multiple view vectors (different light vectors would make the specular highlight either too extreme or too weak). Anyway, I do the specular computation with an exponent of 7, and I then multiply the results by 0.075 to dull the highlights.
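In C++-as-pseudocode, the hack looks roughly like this (the eight view vectors are placeholders; the real set is hand-tuned):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Per-pixel specular: a single light, but the Phong-style term is
// evaluated against 8 view vectors and summed. 'reflected' is the
// light direction reflected about the surface normal.
float SpecularTerm(const Vec3& reflected, const Vec3 views[8])
{
    float total = 0.0f;
    for (int i = 0; i < 8; ++i)
    {
        float rdotv = std::max(0.0f, Dot(reflected, views[i]));
        total += std::pow(rdotv, 7.0f);   // specular exponent of 7
    }
    return total * 0.075f;                // dull the highlights
}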

One final note: the screenshot is no longer 100% representative of the lighting system. I just found a major (and stupid!) mistake where I was multiplying the ambient and diffuse components together instead of adding them. However, what I described above should stay the same; I just have to tweak some values to prevent the lighting from being too bright or too dark.

Also, on a more final note, this is not the complete lighting model for zerospace. Lighting from weapons, particles, etc. will also be taken into account.


The six types of gamers

I came across an interesting article on Next Generation yesterday that talks about a study that divides gamers into 6 classes instead of the standard 2 (hardcore and casual).
  • Power gamers: This group represents 11 percent of the gamer market, but accounts for 30 cents of every dollar spent on retail and online games.
  • Social gamers: This group enjoys gaming as a way to interact with friends.
  • Leisure gamers: This group spends 58 hours per month playing games but mainly on casual titles. Nevertheless, they prefer challenging titles and show high interest in new gaming services.
  • Dormant gamers: This group loves gaming, but spends little time because of family, work, or school. They like to play with friends and family and prefer complex and challenging games.
  • Incidental gamers: This group lacks motivation, and plays games mainly out of boredom. However, they spend more than 20 hours a month playing online games.
  • Occasional gamers: This group plays puzzle, word, and board games almost exclusively.
Not really sure where I fit in here. I guess I’d like to think of myself as a power gamer, but given the amount of time I usually spend writing code instead of playing games, I’m probably more of a dormant gamer. I’ve found that when I have to (or want to) focus more on coding, school, etc., I tend to play games where I don’t need to spend hours to finish a level. So I usually play an FPS or some type of action game (Quake 4, UT2004, or, recently, Freespace 2) instead of an RPG (I still haven’t finished Guild Wars: Factions or Planescape: Torment, and I haven’t touched either lately) or an RTS (I’m half-way through Homeworld 2 and I’ve had it for over a year; however, to my credit, it is a very difficult game).

I really hope that one result of this study is that more is spent on advertising games. Analyst Michael Cai states that “…Dormant gamers who are not heavy on gaming time actually have fairly good gaming motivations and spend a high dollar/gaming hour ratio. The key is to design games/services that fit these peoples’ lifestyles, maybe snack- or bite-sized games. On the other hand, the leisure gamers spend a lot of time playing casual games yet pay little money. They are ripe for game-advertising solutions.” I’m not sure I agree with him about making “bite-sized” games; while I don’t always have time to play an RTS or RPG, I’d hate for the experience to be cheapened and/or shortened when I do get the time to sit down and play. However, I think advertising is something that is sorely lacking in the video game world. It’s a fair assumption that power gamers read gaming websites, magazines, etc. and as such are exposed to previews and ads for upcoming and released games. For everyone else, it becomes a chore to find out what the latest releases are; there are virtually no TV commercials, no magazine ads outside of gaming mags (even tech mags like Wired, which occasionally runs features on gaming, don’t carry game ads), no billboards, no radio commercials, hell … outside of gaming websites you’d be hard pressed to find a game ad. There are exceptions of course – New Super Mario Bros., Grand Theft Auto (all of them after and including 3), and Halo 1/2 come to mind, as each had quite a bit of TV coverage – but I think more games need to be given the same (or better) treatment.


Helix Preview 2

I think I’ve finally settled on what the interface will look like. All the standard geometry manipulation stuff is already implemented (translate, scale, rotate). I’ve added a resize tool, which is like the scaling tool but allows you to stretch/shrink the geometry by using handles on the sides of a bounding rectangle/box that encompasses the geometry. Next up I’ll probably add more primitives, begin working on the 3d preview, add support for light entities, and implement file saving/loading (so I can have some data to work with in Curve).

I’ve also been trying to find some info on CSG (constructive solid geometry); this is what allows you to carve one primitive out of another (the boolean difference operation). You can also add primitives together (boolean add) or keep only the volume common to two primitives (boolean intersection); however, carving is probably the most popular and important operation for a level editor. So far, the only useful information I’ve found is this QA on Flipcode. It’s really helpful, but I think I need something that goes into more detail. I’ll have to see if I can get my hands on either of the two papers mentioned in the QA.
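In the meantime, here’s one compact way to pin down the semantics of the three operations, using implicit (signed-distance-style) functions rather than the polygon splitting a brush-based editor actually needs:

// Convention: a solid's function is negative inside it, positive
// outside. These one-liners capture the set semantics of CSG; a level
// editor implements the same semantics by splitting polygons.
float CsgAdd(float dA, float dB)       { return dA < dB ? dA : dB; }    // union
float CsgIntersect(float dA, float dB) { return dA > dB ? dA : dB; }    // overlap only
float CsgCarve(float dA, float dB)     { return dA > -dB ? dA : -dB; }  // A minus B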





oh, some of the icons used are from http://www.famfamfam.com/. They’re the work of Mark James. The icons in the lower left (the cyan-ish colored ones) are ones that I created (just messing around with the Wingdings font and Photoshop).