Although I'm glad to see OpenID is picking up steam, I'm kind of worried about the fact that Microsoft is jumping on the bandwagon this early. It almost certainly means they're going to try to extend and extinguish it, which kind of sucks. Hopefully there are enough other companies out there supporting the official OpenID standard that they won't have much effect. Only time will tell, I guess.
[ 4 Comments... ]
Microsoft on Standards.
[ 3 Comments... ]
An interesting article on the future of Google Book Search over on Ars Technica. It definitely has potential, I think, mostly because Google already has the critical mass to pull it off.
The problem with other e-book formats seems to be that publishers aren't willing to put in the effort to convert their books into new formats unless there's a sizable number of people using the format, and people aren't willing to go out and buy the e-book readers unless there's a sizable number of books available. Basically a chicken-and-egg problem, except Google managed to crack the egg by ignoring publishers and converting all the books anyway. They also cracked the chicken by delivering the books in a format that most people can read already (i.e. over the web).
My guess is that once this starts picking up steam, Google (or possibly somebody else) will release a mobile book-viewing client, not unlike Google Maps for Mobile or Gmail for Mobile. As for Sony, well, they're going to be left in the dust wondering what went wrong. Seems like they're ending up in that position quite often these days.
[ 2 Comments... ]
Various things have happened since my last post: it started snowing, the iPhone was announced, I finished the first real-time assignment twice (the first attempt used blocking I/O, which was bad), and started learning/using Lisp for compilers. I'm sure other stuff happened in between, but meh.
Lisp is a pretty cool language. I'm not convinced it's useful for a lot of things in today's software world, but it's definitely handy for writing a compiler. In fact, my only real problem with it is indentation. It's hard to come up with a consistent Lisp style that is readable. If I use a 4-space indent, the code shoots off to the right side of the screen almost immediately, but a 2-space indent makes it hard to visually line up things at the same indentation level.
I found myself wishing vim came with a built-in ruler thing so that it would be easier to line up the code, but it doesn't. So I went for the low-tech solution instead: I wrote a little app that throws up a screen with a black background and bright vertical lines every 12 pixels. I maximize the window, then open up my Lisp code in an OS X Terminal in front. Turning on the Terminal transparency allows me to see through to my vertical-line app, which effectively gives me vertical lines behind my code, spaced at a width corresponding to 2 characters. Problem solved. :)
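For the curious, here is a rough sketch of what that vertical-line app could look like. This is my own reconstruction, not the original code: I'm assuming SDL2 for the window and drawing (the real thing was presumably a native OS X app), and the line color, window size, and fullscreen setup are guesses. It just clears the screen to black and draws a bright line every 12 pixels.

#include <SDL.h>

int main( int argc, char ** argv ) {
    SDL_Init( SDL_INIT_VIDEO );

    /* a fullscreen window that will sit behind the transparent Terminal */
    SDL_Window *win = SDL_CreateWindow( "guides", SDL_WINDOWPOS_CENTERED,
            SDL_WINDOWPOS_CENTERED, 1024, 768, SDL_WINDOW_FULLSCREEN_DESKTOP );
    SDL_Renderer *ren = SDL_CreateRenderer( win, -1, 0 );

    int w, h;
    SDL_GetRendererOutputSize( ren, &w, &h );

    /* black background */
    SDL_SetRenderDrawColor( ren, 0, 0, 0, 255 );
    SDL_RenderClear( ren );

    /* bright vertical lines every 12 pixels (2 characters at 6px per character) */
    SDL_SetRenderDrawColor( ren, 0, 255, 0, 255 );
    for ( int x = 0; x < w; x += 12 ) {
        SDL_RenderDrawLine( ren, x, 0, x, h - 1 );
    }
    SDL_RenderPresent( ren );

    /* sit there until the window is closed */
    SDL_Event e;
    while ( SDL_WaitEvent( &e ) && e.type != SDL_QUIT )
        ;

    SDL_Quit();
    return 0;
}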
Now if I could just figure out a way to do this programmatically every time I open up a .lisp file in vim...
[ 0 Comments... ]
I find it odd that my memory works differently from that of most people I know. Most people can remember day-to-day events in a lot of detail; if you were to ask them what they did that day, they could probably describe their actions fairly thoroughly. It's a lot harder for me to do that sort of thing. I find that stuff filed away in my memory only pops up when triggered by a similar or related action/event.. my memory just seems more "associative" than most people's. On the one hand, this makes it a lot harder to have conversations or tell stories, because those require a "linear" sort of memory. On the other hand, I tend to remember things that are more functionally useful to the situation at hand, which is handy for problem-solving. A mixed bag, as always.
It's interesting to look at this from a historical perspective.. back before paper was invented, storytelling was the primary means of passing on history. Memory was encoded into stories, or linear narratives. Of course, people still had associative memory, which is pretty essential to survival. (If you happened to run into a bear, would you rather (a) immediately remember to scare it away by standing tall, or (b) remember a story about a guy who faced a bear once, and then seek through it until you found the part where he sca... too late.)
Once we got paper/other forms of permanent storage, the emphasis on storytelling was reduced. It was no longer absolutely required from a survival standpoint, but was still useful (telling stories is a great method of communication in general). The trend seems to have continued - now that we have asynchronous forms of communication (with a time delay between responses, like email), the need for immediate storytelling has diminished even further. For instance, most of my blog entries take a while to write - I go back and edit sentences all the time. I don't have to spew it all out correctly the first time, like I would have to during a face-to-face conversation, so my sub-par linear memory gets the job done.
I'm just glad I didn't live a hundred years ago, or I'd have been b0rked. Then again, it might be more a result of environment than genetics, so maybe not. I do tend to prefer asynchronous communication in a lot of cases, so maybe my brain has automatically adapted away from a more linear memory. It's hard to distinguish cause from effect.
I'm curious as to how other people remember things.. more linear? or more associative? or some other method?
[ 0 Comments... ]
So, it's the new year (happy new year, all!) and classes are back in half swing. As opposed to full swing, because I haven't yet had a class that has run the full allotted time. Started with CLAS 202 yesterday, which was just a 10-minute intro that was enough to convince me to drop the course. After that was SE 465 (first lecture was cancelled, possibly because they haven't yet found a prof to teach it - that would be amusing :)). Today was CS 444, GEOG 101 (which I'm taking instead of CLAS 202), and CS 452 (aka "the trains course"). The first lecture for PSYCH 101 will be on Tuesday.
Compilers and Real-time are on track to be the most useful courses I'll have taken in university. CLAS 202 was supposed to be a bird course with only multiple-choice tests/exams, but that turned out not to be the case. Also, the prof sounded pretty boring, so I decided to drop it. After a quick download of the Schedule of Classes database and a couple of shell scripts, I had a list of all classes with over 300 students. With classes that big, the profs can't really assign a significant amount of work, or they'll never have time to mark it all. Anyway, GEOG 101 seemed like a good choice, and the prof was pretty decent in lecture today, so it looks like that's what I'm sticking with.
[ 4 Comments... ]
iTunes goes down due to excess demand. This kind of sucks, in more ways than the obvious. It's a sign that Apple is getting too big for its boots.
One of the things that I dislike about the modern economy is the implication that a bigger company is a more successful one. By total revenue or profits, that may be the case, but in terms of product quality and satisfaction delivered, it definitely is not. This idea goes hand-in-hand with one of my previous posts about how, in order to grab more users, a product has to devolve into a pile of mediocrely implemented features. This, in turn, kills overall satisfaction and brand loyalty. (Brand loyalty itself borders on irrational behavior, so I'm not such a big fan of it; loss of brand loyalty, however, is a definite indicator that the brand is doing something wrong.)
One of the concepts we covered in SE 362 was that of conceptual integrity (from the MMM). Although Brooks had a slightly different meaning in mind, I think conceptual integrity applies to a lot of things, and that loss of conceptual integrity is the single most common reason why great things turn into crap. Conceptual integrity, at least the way I mean it, is basically a measure of how much of the original inventor/visionary's concept makes it into the final product undiluted.
[ Add comment here ] Ideas are extremely difficult to pass on losslessly; trying to do so is much like a game of telephone, where every link in the chain modifies the idea as it's passed along. Everybody has their own vision of what they would like the end product to be, and when things are designed by a committee, they always turn out poorly. But that's old news. The problem here is that as companies get larger and larger, it becomes harder and harder to avoid the design-by-committee trap. In a sense, that's the only real difference between something like Windows, where it takes 24 people to do the "off" button on the Start menu, and something like SkyOS, written almost entirely by one person. One of these is a great OS, the other is not. I'll let you figure out which is which :)
Of course, there are always a few people with the rare ability to take a vision, use it to inspire others, and get everybody on the same page. That only works up to a certain point, though. Microsoft got too big for Bill Gates a while ago; I think Apple is starting to get too big for Steve Jobs. It's certainly too big for anybody else, which is probably why Apple was floundering until Jobs came back.
[ 0 Comments... ]
Another blog modification, this one stolen from Steve Yegge's gripes about how blogging software sucks in general. For longer posts, I can now add spots for inline commenting. So for example, at the end of this paragraph, there is a point where you can add comments. The inline comments work more or less the same way as the regular comments (yay! for code reuse). The main difference is that when viewing the entry, you can show/hide the inline comments (note this only applies on the entry page where you can actually see the comments, not on the main blog page or in the RSS/Atom feeds). Also, as much as I hated to do it, the show/hide functionality required using Javascript. Ugh, I know. However, if you don't have Javascript, the page should degrade well and you'll still be able to see the inline comments; you just won't be able to hide them.
[ 3 comments here ] And now, on to the main topic: disappearing islands (yoinked from Slashdot). It's like The Day After Tomorrow, except not. Well, they're both caused by global warming. Those of you in Waterloo/Toronto may have noticed the lack of snow on the ground outside. Guess what? That snow is occupied elsewhere, causing world sea levels to rise, and kicking people off their beloved islands. Why? Watch An Inconvenient Truth. Although it's probably too late now to do much except start building bunkers. And dikes. Lots of dikes.
[ 3 Comments... ]
the much-desired feature is now available: you can edit your comments!
you can only edit comments that you made while you were logged in (otherwise they're stored as anonymous comments, so there's no proof you made them), and you must be logged in to edit your comments. more incentive to get/use OpenID! :)
to edit a comment, just go to the comment, and there should be an "Edit" link in the title bar of the comment. the timestamp on the edited comment will NOT change. i considered having it change the timestamp, but figured that most edits would be relatively minor (or should be, anyway), so there's no point pushing it back up to the top of the RSS feed by editing the timestamp. if y'all want the timestamp to be updated too, let me know.
[ 0 Comments... ]
Usually you can solve just about any problem by adding more layers of abstraction. Here's my recently-discovered exception to that rule:
#include <stdio.h>
#include <setjmp.h>

/* comment this out to build the direct setjmp/longjmp version, which works */
#define DOESNT_WORK

/* hides jmp_buf behind a void* so callers don't need to know about it */
int setjmpWrapper( void *jmpbuf ) {
    return setjmp( *(jmp_buf *)jmpbuf );
}

void longjmpWrapper( void *jmpbuf, int retVal ) {
    longjmp( *(jmp_buf *)jmpbuf, retVal );
}

int main( int argc, char ** argv ) {
    jmp_buf jmpBuffer;
#ifdef DOESNT_WORK
    /* broken: by the time we longjmp, setjmpWrapper's stack frame is gone */
    if (setjmpWrapper( &jmpBuffer )) {
        printf( "Success!\n" );
        return 0;
    }
    longjmpWrapper( &jmpBuffer, 1 );
#else
    /* works: the environment setjmp saves belongs to main, which is still live */
    if (setjmp( jmpBuffer )) {
        printf( "Success!\n" );
        return 0;
    }
    longjmp( jmpBuffer, 1 );
#endif
    return 1;
}
Wrappers around setjmp/longjmp don't really work. If you think about it, it makes sense - setjmp copies the current registers (including the stack pointer) into the jump buffer, so that longjmp can roll back the machine state later. The problem is that with the wrapper, the state setjmp saves belongs to the wrapper's stack frame, and that frame is thrown out as soon as the wrapper returns. By the time you call the longjmp wrapper, the stack has changed, and restoring the stack pointer to what it used to be causes Bad Things (TM) to happen, like returning into random memory. Fortunately, it's not too hard to debug, since starting to execute random memory locations almost always throws some hardware exception, and tracing back from the last valid instruction gets you to the longjmp code pretty fast.
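For what it's worth, one workaround (my addition, not something from the original post) is to make the setjmp wrapper a macro instead of a function: the macro expands directly in the caller's stack frame, so no frame is discarded between the setjmp and the longjmp. The longjmp wrapper can stay a real function, since longjmp never returns anyway. A minimal sketch:

#include <stdio.h>
#include <setjmp.h>

/* expands in the caller's frame, so the saved environment stays valid */
#define SETJMP_WRAPPER( jmpbuf ) setjmp( *(jmp_buf *)(jmpbuf) )

void longjmpWrapper( void *jmpbuf, int retVal ) {
    longjmp( *(jmp_buf *)jmpbuf, retVal );
}

int main( int argc, char ** argv ) {
    jmp_buf jmpBuffer;

    if (SETJMP_WRAPPER( &jmpBuffer )) {
        printf( "Success!\n" );  /* reached after the longjmp */
        return 0;
    }
    longjmpWrapper( &jmpBuffer, 1 );
    return 1;
}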
[ 0 Comments... ]