Master disks for Doom
Avishkar Autar · Sep 15 2012 · Random
Caught this tweet from @idSoftware showing the original master disks for Doom:
Avishkar Autar · Sep 12 2012 · .NET Platform
Shell extensions (at least in Windows; I can’t speak to other platforms) are not easy to write and require some pretty ugly native Win32 and COM code. For most applications, the .NET Framework can usually provide a nice wrapper for such code, but not for shell extensions. Prior to .NET 4, using the .NET Framework was dangerous because, as explained in this forum post, an extension would attempt to inject a version of the .NET Framework (the version the shell extension was written in) into every application affected by the extension (i.e. every application making use of the extended shell component). Since only one version of the .NET Framework can be loaded in a process, this could create conflicts with other applications using different versions of .NET.
.NET 4 can be loaded side-by-side with other versions of the framework, so shell extensions are technically possible, and I was happy to find an article + code sample detailing how to write an extension as part of Microsoft’s All-In-One Code Framework. Unfortunately, the code sample has since disappeared and, regarding .NET 4 shell extensions, Jialiang Ge of the All-In-One Code Framework team explains:
In .NET 4, [there is] the ability to have multiple runtimes in process with any other runtime. However, writing managed shell extensions is still not supported. Microsoft recommends against writing them.
Not much of an explanation, to say the least.
Avishkar Autar · Aug 25 2012 · Graphics and Rendering
I wanted to play around a bit with stipple patterns after seeing stippling done with photos on the LinkedIn news feed. However, what I’m going to present is not what LinkedIn does. LinkedIn applies the stipple pattern as a background-image on a DOM element above an <img> element with a (fairly low-resolution) JPEG – the stippling may help to alleviate the negative visual impact of the low-resolution image. What I’m going to show is how to do stippling on an HTML5 canvas, which allows for a much greater degree of freedom in terms of what’s possible, but is also slower and requires a modern browser.
I’m going to make use of the GraphicsCore and FXController classes from a previous post, Gaussian blur on an HTML5 canvas. In that post I presented the concept of writing shaders as plug-ins to the FXController class to apply different per-pixel effects. Here I’m going to present shaders for a few simple stipple patterns. Applying a shader is simply a matter of passing it into the constructor of the FXController class, e.g.
var theShader = Shader.crossStippleShader;
var fxCtrlr = new FXController(ctxSource, ctxDest, theShader, width, height, 100, 1);
fxCtrlr.init();
The first shader, checkerboardStippleShader, creates a checkerboard pattern: the source pixel is preserved if (x+y) % 2 == 0; otherwise the pixel’s alpha is reduced to 66.
Shader.checkerboardStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    if( (x+y)%2 == 0) {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 66);
    }
}
Shader.checkerboardStippleShader.numPassesRequired = 1;
The next shader, dotStippleShader, blends a white pixel into the source image where x%2 == 0 && y%2 == 0, in effect creating a dotted grid pattern.
The alpha blending code is a straightforward implementation of alpha compositing, but since we’re blending with white (where r=1.0, g=1.0, and b=1.0) the equation is simplified and there is no second color value; we’re just biasing the source color by the alpha value. Also note that this is different from simply changing the alpha of the source pixel (as was done in the Checkerboard Stipple shader): here we’re always blending with white, whereas in the previous shader we’re blending with whatever the background of the DOM element happens to be.
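For reference, here’s a minimal sketch of how the general source-over blend collapses to the simplified form when the second color is white (the helper name blendWithWhite is purely illustrative; it isn’t part of GraphicsCore or the Shader object):

// General source-over compositing of a source channel onto a destination channel,
// with channel values normalized to [0, 1]:
//     out = alpha*src + (1 - alpha)*dst
// When the destination is white, dst = 1.0 for every channel, so this reduces to:
//     out = alpha*src + (1 - alpha)
function blendWithWhite(channel, alpha)
{
    var c = channel / 255.0;                               // normalize a 0-255 value to 0.0-1.0
    return Math.floor((alpha*c + (1.0 - alpha)) * 255.0);  // simplified blend, mapped back to 0-255
}

The dotStippleShader below applies exactly this per-channel computation to red, green, and blue.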
Shader.dotStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.8;
    var r1 = r / 255.0;
    var rF = Math.floor((alpha*r1 + (1.0-alpha)) * 255.0);
    var g1 = g / 255.0;
    var gF = Math.floor((alpha*g1 + (1.0-alpha)) * 255.0);
    var b1 = b / 255.0;
    var bF = Math.floor((alpha*b1 + (1.0-alpha)) * 255.0);

    if( x%2 == 0 && y%2 == 0) {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    } else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.dotStippleShader.numPassesRequired = 1;
With this shader we blend in a white pixel at every 4 pixels (x%4 == 0 && y%4 == 0, the target pixel) and also at the 4 orthogonally adjacent pixels around the target, creating a quincunx pattern.
Shader.quincunxStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.78;
    var r1 = r / 255.0;
    var rF = Math.floor((alpha*r1 + (1.0-alpha)) * 255.0);
    var g1 = g / 255.0;
    var gF = Math.floor((alpha*g1 + (1.0-alpha)) * 255.0);
    var b1 = b / 255.0;
    var bF = Math.floor((alpha*b1 + (1.0-alpha)) * 255.0);

    if( (x%4 == 0 && y%4 == 0) ||
        ((x+1)%4 == 0 && y%4 == 0) ||
        ((x-1)%4 == 0 && y%4 == 0) ||
        (x%4 == 0 && (y+1)%4 == 0) ||
        (x%4 == 0 && (y-1)%4 == 0) )
    {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else
    {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.quincunxStippleShader.numPassesRequired = 1;
Similar to the quincunx stipple, but we blend in a white pixel at every 6 pixels (x%6 == 0 && y%6 == 0, the target pixel), the 4 orthogonally adjacent pixels around the target, and 4 additional pixels extending beyond the orthogonals, creating a cross (“+”) pattern.
Shader.crossStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.78;
    var r1 = r / 255.0;
    var rF = Math.floor((alpha*r1 + (1.0-alpha)) * 255.0);
    var g1 = g / 255.0;
    var gF = Math.floor((alpha*g1 + (1.0-alpha)) * 255.0);
    var b1 = b / 255.0;
    var bF = Math.floor((alpha*b1 + (1.0-alpha)) * 255.0);

    if( (x%6 == 0 && y%6 == 0) ||
        ((x+1)%6 == 0 && y%6 == 0) ||
        ((x-1)%6 == 0 && y%6 == 0) ||
        ((x+2)%6 == 0 && y%6 == 0) ||
        ((x-2)%6 == 0 && y%6 == 0) ||
        (x%6 == 0 && (y+1)%6 == 0) ||
        (x%6 == 0 && (y-1)%6 == 0) ||
        (x%6 == 0 && (y+2)%6 == 0) ||
        (x%6 == 0 && (y-2)%6 == 0) )
    {
        GraphicsCore.setPixel(bufWrite, index, rF, gF, bF, 255);
    }
    else
    {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.crossStippleShader.numPassesRequired = 1;
That’s all for now. There are tons of variations possible with only minor code changes to alter the blending, the color, and the shape of the stipple pattern; one such variation is sketched below.
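For example, here’s a sketch of a diagonal-stripe stipple that blends toward black instead of white (an illustrative example, not part of the original set of shaders, though it uses the same plug-in signature). With black as the blend target, every channel of the second color is 0, so the simplified blend is just the source channel scaled by alpha.

Shader.diagonalStippleShader = function (imageData, bufWrite, index, x, y, r, g, b, a, passNum, frameNum, maxFrames)
{
    var alpha = 0.8;

    // darken pixels lying on diagonals spaced 4 pixels apart; leave everything else untouched
    if( (x+y)%4 == 0 ) {
        GraphicsCore.setPixel(bufWrite, index,
            Math.floor(alpha * r),
            Math.floor(alpha * g),
            Math.floor(alpha * b),
            255);
    }
    else {
        GraphicsCore.setPixel(bufWrite, index, r, g, b, 255);
    }
}
Shader.diagonalStippleShader.numPassesRequired = 1;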
Avishkar Autar · Aug 24 2012 · Random
Facebook recently released a new version of its iOS app in order to fix a number of issues with the previous releases. The Facebook app has been notorious for being slow, buggy, and simply lackluster overall. The problem touted by many was the fact that it was not a native iOS app, in that it relied on UIWebView containers to display content; this post brings such speculation front and center. However, I’m not sure I agree that the blame lies with UIWebView. The problems with the Facebook app, as far as I could tell, centered around lag in loading content, or simply not loading content due to connection timeouts. Two comments in the post, I think, point out the real issues:
This article is ridiculous. Not caching data is [the] fault of the developers, not Uiwebview. And seriously, blaming the slowness on HTML? What does the format of the data have to do with anything? There is no reason their dev cycle isn’t compatible with a decent iOS app. iOS development at Facebook is not a priority. Period. That is the only reason the app sucks. It sucks because they wrote a shitty app. Not because of Uiwebview. Not because they use HTML. Because they wrote a shitty app.
Late to the conversation here but Robert Jacobson makes an excellent point: your criticisms are specific to the implementation of the FB app and not the concept of hybrid HTML5 apps. You specifically point out inefficient and inconsistent web service calls that cause many of the problems. Even a native app will be at the mercy of a poorly written web service. I appreciate all the research you’ve done for this article but it is disappointing that it is getting referenced elsewhere on the web for an example of why HTML hybrid apps are bad. You’ve got a good critique of the Facebook iOS app implementation here. This does not even come close to a fair analysis of HTML5 hybrid mobile applications as an implementation strategy.
The last quote brings up an important point: many are now soured on the idea of an HTML5 hybrid mobile application, which is unfortunate because it’s an architecture that would work well for many developers and alleviates the burden (at least to some degree) of supporting multiple mobile platforms.
Looking ahead, I’d bet on HTML5 and web technologies in general. You’ll never get native performance, but as mobile devices become more powerful the tradeoff (platform flexibility and ease of development in exchange for lower performance) will become a non-issue for all but the most intensive applications (e.g. games). I toyed around with PhoneGap (before it was bought by Adobe and split off into PhoneGap and Apache Cordova) last year and was pretty happy with the results. I did have a number of frustrations, but those weren’t due to performance; they were due to bugs and rendering issues in a number of HTML5 mobile frameworks (jQuery Mobile, jQTouch) and in mobile WebKit itself (support for position:fixed was not available in iOS 4 at the time; it was supported in iOS 5, which had just hit the market). I’m not pessimistic here; these are things that will improve (or have already improved) as mobile software development matures.
Avishkar Autar · Aug 7 2012 · Languages
From Professor Niklaus Wirth, the father of Pascal, the Modulas, and Oberon, on OOP:
“Many people tend to look at programming styles and languages like religions: if you belong to one, you cannot belong to others. But this analogy is another fallacy. It is maintained for commercial reasons only. Object-oriented programming (OOP) solidly rests on the principles and concepts of traditional procedural programming (PP). OOP has not added a single novel concept, but it emphasizes two concepts much more strongly than was done with procedural programming. The first such concept is that of the procedure bound to a composite variable called object. (The binding of the procedure is the justification for it being called a method). The means for this binding is the procedure variable (or record field), available in languages since the mid 1970s. The second concept is that of constructing a new data type (called subclass) by extending a given type (the superclass).
It is worthwhile to note that along with the OOP paradigm came an entirely new terminology with the purpose of mystifying the roots of OOP. Thus, whereas you used to be able to activate a procedure by calling it, one now sends a message to the method. A new type is no longer built by extending a given type, but by defining a subclass which inherits its superclass. An interesting phenomenon is that many people learned for the first time about the important notions of data type, of encapsulation, and (perhaps) of information hiding when introduced to OOP. This alone would have made the introduction to OOP worthwhile, even if one didn’t actually make use of its essence later on.
Nevertheless, I consider OOP as an aspect of programming in the large; that is, as an aspect that logically follows programming in the small and requires sound knowledge of procedural programming. Static modularization is the first step towards OOP. It is much easier to understand and master than full OOP, it’s sufficient in most cases for writing good software, and is sadly neglected in most common languages (with the exception of Ada).
In a way, OOP falls short of its promises. Our ultimate goal is extensible programming (EP). By this, we mean the construction of hierarchies of modules, each module adding new functionality to the system. EP implies that the addition of a module is possible without any change in the existing modules. They need not even be recompiled. New modules not only add new procedures, but – more importantly – also new (extended) data types. We have demonstrated the practicality and economy of this approach with the design of the Oberon System.”
Forum post by user Rails on lazarus.freepascal.org
Avishkar Autar · Aug 1 2012 · Random
Absolutely stunning,
View from the ISS at Night from Knate Myers on Vimeo.
Avishkar Autar · Aug 1 2012 · User Interface Design
I’ve admittedly never thought much about accessibility when it comes to web development. While I’ve come across efforts to increase awareness of accessibility needs from time to time, such efforts never speak to actionable items that can be taken by developers.
Take, for example, the recent article Designing for Everyone in UX Magazine, from which my only takeaway was a need to be aware of the accessibility needs of your audience. In theory that sounds great, but such information is not trivial to gather; you can’t simply open up Google Analytics and find out what percentage of your user base has limited motor skills. For individuals and smaller companies, I can’t see where the time or money would come from to do such user studies.
So I was pleasantly surprised when I came across the post Accessibility and web developers on Paul Irish’s blog, and saw the following:
During a Nicholas Zakas & Victor Tsaran talk years ago I finally grokked the easiest rule for a first step towards accessibility. For such a long time, we conflated functionality while JavaScript-disabled with “being accessible”. It took me years to learn that making it keyboard-navigable was the top priority.
This is personally eye-opening for me, but it’s also, from a developer’s perspective, a task that can be readily tackled.
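For instance (a hypothetical example, not something from Paul Irish’s post), making a custom clickable element keyboard-navigable largely comes down to putting it in the tab order and handling key presses alongside clicks:

// A <div> styled to look like a button is invisible to keyboard users by default.
// Giving it a tabindex makes it focusable; handling Enter/Space alongside click makes it operable.
// (#save-button is an assumed element id, purely for illustration)
var btn = $('#save-button');
btn.attr('tabindex', 0);     // include the element in the page's tab order
btn.attr('role', 'button');  // hint to assistive technology that this acts as a button

btn.on('keydown', function (e) {
    // 13 = Enter, 32 = Space; trigger the same handler a mouse click would
    if (e.which === 13 || e.which === 32) {
        e.preventDefault();
        $(this).trigger('click');
    }
});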
Avishkar Autar · Jul 29 2012 · Random
Gotham City by Anton Furst,
From The Architecture of the Comic Book City:
Of Gotham City’s later iterations, the city undergoes a great transformation into fully-realized dystopia, as evidenced by the Gotham of Batman: The Destroyer Series and Anton Furst’s conceptual drawings for Tim Burton’s Batman. In the Destroyer comics, which themselves acted as a tie-in to the Burton film, Batman navigates a dying metropolis, a monochromatic world of crumbling infrastructure and derelict monuments.
Avishkar Autar · Jun 9 2012 · Random
A significant number of projects I’ve worked on have, at some point, involved truncating text and tacking on an ellipsis. The problem is, it’s difficult to do precisely, as doing so requires exact metrics for the fonts used and exact dimensions of whatever area the text is contained within; information usually not readily available. So it’s typically necessary to do some sort of approximation of the maximum number of characters that can fit within the container. However, this method only works well for fixed-width fonts; with variable-width fonts the result is a significant error/overestimate (so the string is cut off well before it needs to be), as the widths of characters can differ significantly (e.g. “i” vs. “m”).
One idea I had to reduce this error was to specify a relative width (i.e. a weight) for certain characters. In general, most characters would have a width of 1.0, but whitespace, the period, the comma, lowercase ‘i’, the semicolon, etc. would have a width less than 1.0. With these widths, a length function can be defined as the sum of the character widths, and the result is a value that indicates the length of the string relative not only to the number of characters rendered but also to the width of the characters rendered. A maximum threshold is still needed, but it no longer specifies an absolute maximum number of characters; instead, it’s the maximum number of full-width (width = 1.0) characters that can be rendered in the given area.
Of course, someone else had the same idea, but I didn’t like the code presented – the String.replaceAll() call every iteration made me cringe. Even with immutable strings, this is something that can be done in O(n) time as long as you can reference individual characters and grab a substring. However, one issue presented in the post that I hadn’t thought about was that strings should only be truncated at certain points (a space, a hyphen), not simply cut off in the middle of a word.
My solution is in Javascript, but the operations should carry over easily to any language.
Ellipsizer = function(_maxLength) {
    this.maxLength = _maxLength;

    // relative (approximate) width of a character; most characters are full-width (1.0)
    this.getLetterWidth = function(letter)
    {
        if(letter == '.' || letter == ' ' || letter == ',' || letter == '|' || letter == '\'' || letter == ':' || letter == ';' || letter == '!')
            return 0.25;

        if(letter == 'j' || letter == 'l' || letter == 'i' || letter == '^' || letter == '(' || letter == ')' || letter == '[' || letter == ']' || letter == '"')
            return 0.5;

        return 1;
    }

    // characters at which it's acceptable to cut the string
    this.isCharCutpoint = function(ch)
    {
        if(ch == ' ' || ch == '-' || ch == '.')
            return true;

        return false;
    }

    this.cut = function(str)
    {
        var strWeightedLength = 0;
        var lastCutpoint = 0; // index of last whitespace, hyphen, etc. (point where we can cut the string)

        for(var i=0; i<str.length; i++)
        {
            var letter = str.charAt(i);
            strWeightedLength += this.getLetterWidth(letter);

            if(this.isCharCutpoint(letter))
            {
                lastCutpoint = i;
            }

            if(strWeightedLength >= this.maxLength)
            {
                // threshold reached: truncate at the last cutpoint and tack on an ellipsis
                return str.substr(0, lastCutpoint) + "…";
            }
        }

        // the weighted length never reached the maximum; return the string as-is
        return str;
    }
}
$(document).ready(function() {
    $('#ellipsize').click(function() {
        $('#text-to-cut p').each(function() {
            var str = $(this).text();
            var ellipsizer = new Ellipsizer(64);
            $(this).text( ellipsizer.cut(str) );
        });

        return false;
    });
});
Maximum length of 64 to provide for the following constraints:
There’s quite a bit that can be improved here, mainly in recognizing more characters as quarter- or half-width characters, or in having more classes of widths (0.3, 0.75, etc.) and assigning characters to them; one way to go about that is sketched below. However, as a first pass, this seems to work pretty well, and it’s important to remember that this is still a pretty rough approximation.
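For example, the if-chain in getLetterWidth could be replaced with a lookup table keyed by character, which makes adding width classes trivial (a sketch, not part of the code above; the widths are illustrative guesses rather than measured font metrics):

// Table-driven replacement for getLetterWidth; unknown characters default to full width (1.0).
var letterWidths = {
    '.': 0.25, ',': 0.25, ' ': 0.25, ';': 0.25, ':': 0.25, '!': 0.25, '|': 0.25, '\'': 0.25,
    'i': 0.5,  'j': 0.5,  'l': 0.5,  '(': 0.5,  ')': 0.5,  '[': 0.5,  ']': 0.5,
    't': 0.75, 'f': 0.75, 'r': 0.75,
    'm': 1.25, 'w': 1.25
};

function getLetterWidth(letter)
{
    var w = letterWidths[letter];
    return (w === undefined) ? 1 : w;
}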
Avishkar Autar · May 15 2012 · Random
Browsing Wikipedia and came across this interesting tidbit,
In human–computer interaction, baby duck syndrome denotes the tendency for computer users to “imprint” on the first system they learn, then judge other systems by their similarity to that first system. The result is that “users generally prefer systems similar to those they learned on and dislike unfamiliar systems.” The issue may present itself relatively early in a computer user’s experience, and has been observed to impede education of students in new software systems.