Blog



Viewport madness (2012-04-26 22:32:41)

One of the things I've been working on for the last while is the integration between the Fennec front-end and the core graphics APIs. One of the main problems in this kind of work is having to deal with multiple different coordinate systems and units of measurement. For coordinate systems, we have:

  • the page, which is the area determined by the content being rendered
  • the screen, which is determined by the physical hardware the user has
  • the viewport, which is the part of the page the user sees on the screen
  • the displayport, which is the area Gecko is actually rendering to graphics buffers
  • the CSS viewport, which is the base relative to which some content dimensions are calculated

For units of measurement, we have:

  • device pixels, whose size is fixed by the hardware
  • CSS pixels, which can grow or shrink relative to device pixels depending on the zoom
  • app units, which are what Gecko uses internally for layout calculations, and are 1/60 of a CSS pixel
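To make the unit relationships concrete, here's a small shell sketch. The zoom factor and page width are made-up numbers for illustration; only the 60-app-units-per-CSS-pixel ratio comes from Gecko.

```shell
#!/bin/sh
# Hypothetical example: a page 980 CSS pixels wide, viewed at 2.0x zoom.
css_px=980
zoom=2.0

# CSS pixels scale relative to device pixels by the zoom factor.
awk -v c="$css_px" -v z="$zoom" 'BEGIN { printf "device pixels: %d\n", c * z }'
# prints "device pixels: 1960"

# App units are fixed at 1/60 of a CSS pixel.
awk -v c="$css_px" 'BEGIN { printf "app units: %d\n", c * 60 }'
# prints "app units: 58800"
```

The same page area thus has three different numeric descriptions depending on which unit you're in, which is exactly why it's easy to get conversions wrong.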

With all these different frames of reference and units of measurement, it sometimes gets pretty hard to keep it all straight. As a tool to help me visualize this, I wrote a simple SVG file that lets me quickly simulate certain behaviours. The SVG file shows a device overlaid on top of a screenshot of CNN.com, and allows you to drag around the device so that you can see how the viewport and displayport might change. It also allows you to simulate zooming in and out (using the '-' and '+' keys) and asynchronous panning (use 'p' to pin/unpin the displayport).

You can find the SVG file here (warning: it loads a ~1.5MB CNN screenshot). Feel free to play around with it, or hack it up to suit your own needs!

[ 7 Comments... ]

Managing bugmail overload (2012-04-01 03:13:42)

Do you suffer from bugmail overload? Do you often declare bugmail bankruptcy? Have a set of bugmail filters so complex they break every time you change them? If so, this post might be for you!

A few months ago, I too was a chronic sufferer of bugmail overload. The best approaches to handling bugmail I could glean from my fellow Mozillians revolved around using Gmail, tuning email preferences in Bugzilla, and filtering heavily. The problem, however, is that I like getting bugmail. I like being able to watch a component and stay on top of what's going on. It's not so much a case of getting too much bugmail, as it is a case of not being able to process it fast enough.

So I decided to write a bugmail dashboard that would solve this problem. In a nutshell, it scrapes incoming bugmail and extracts all the relevant pieces of information, stuffing them into a MySQL database. A web-based front-end then displays that information in a very compact representation that allows me to view large volumes of bugmail efficiently. Here is a screenshot of what it looks like. And here is a random subset of features:

  • Presents information from multiple bugs on the screen simultaneously, so you don't have to wade through different emails to read everything.
  • Prioritizes bugs into columns; the leftmost column is most important and rightmost column is least important.
  • Marking stuff as viewed is done on a per-bug basis (using XMLHttpRequest, so it's nice and responsive).
  • Marking one bug as viewed moves the next bug up so you don't even have to move your mouse, kinda like Firefox's tab close buttons!
  • Interface is very mobile-friendly so you can easily process bugmail on your phone during idle moments.


I've been using and improving this dashboard for a couple of months now, and it's getting stable enough that others may want to try it out. I still consider it "alpha" quality though, largely because it's non-trivial to set up and get going. If you're interested in trying it out, grab the source from github.com/staktrace/bugmash and take a look at the README.html file for instructions on how to set it up.

If you have any feedback, please send me email and/or file bugs in github and/or submit pull requests!

[ 0 Comments... ]

Principles (2012-03-27 22:04:59)

Since you're reading this blog, I assume you have nothing better to do. So, you should watch this video: Inventing on Principle. It's an hour long but would be worth the time investment even if it took up a whole day. (If you're not a software developer you should watch it anyway; the guts of the talk aren't software-specific.)

[ 0 Comments... ]

Compile-time JS syntax checking (2012-03-21 23:42:28)

Something that's bitten me more than once during Fennec development is making some changes to browser.js, going through the super-long build cycle, and running my code, only to find out there was a syntax error of some sort in my change, causing Fennec to not start up correctly. When this happens it is a huge waste of time, and quite frustrating to boot.

Although I've filed bug 715242 to track this issue and fix it properly as part of the build process, I've also thrown together a quick way to do this in my local builds.

First, build the js engine as a standalone binary for your machine. If you have a host build of mozilla-central you might already have a working copy in your <mozilla-central-objdir>/js/src directory. If not, do the following (adapted from the SpiderMonkey build instructions):

mkdir -p ~/tmp/js-build
pushd ~/tmp/js-build
<mozilla-central-srcdir>/js/src/configure
make
popd


This will build a "js" binary that you can run with the -c flag to syntax-check JS files. Put this on your $PATH somewhere. Unfortunately, some of the .js files you'll want to syntax-check (such as browser.js) are also preprocessed. Figuring out where in the build tree to find the preprocessed versions of these files was too much work, so I went with the simple and easy approach:

#!/bin/sh
# Strip preprocessor directives (lines starting with '#'), then syntax-check the rest.
grep -v "^#" "$1" > ~/tmp/check-this.js
js -c ~/tmp/check-this.js


If you throw the above into a script and run it with a .js file as an argument, it will exit with zero on success and nonzero on failure, so you can conditionally run the rest of your build. I do this in my build scripts, which you can see in my github repo at https://github.com/staktrace/moz-scripts/ - look at the jscheck and build-android.sh files in particular.
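As a minimal sketch of what that conditional hookup might look like: the invocation below is illustrative, not taken verbatim from the actual scripts in the repo, and the browser.js path is an assumption.

```shell
#!/bin/sh
# Hypothetical build wrapper: syntax-check browser.js first; '&&'
# short-circuits on a nonzero exit status, so the long Android build
# only starts if the check passed.
./jscheck mobile/android/chrome/content/browser.js && ./build-android.sh
```

The point is simply that the check script's exit status gates the expensive step, so a typo is caught in seconds instead of after a full build cycle.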

[ 0 Comments... ]

Mozilla (2012-03-17 19:17:30)

So it's been about five and a half months since I started working at Mozilla. Kind of ironic that I haven't yet blogged about it, considering they encourage their employees to blog early and often, and being able to blog about work stuff was definitely one of the things that attracted me to Mozilla in the first place.

Officially I'm on the mobile platform team, working on the parts of the code that deal with the mobile browser's (i.e. Firefox for Android, aka Fennec) interaction with the core Gecko rendering engine. However, since we've been focused on a rewrite of the entire Fennec front-end, everybody on the mobile team has been working on whatever needs to be done, and the lines are pretty blurry anyway. Currently I'm working on the pan/zoom behaviour of Fennec, ensuring the user can navigate around the page smoothly and efficiently, encountering as little "checkerboarding" as possible.

Working at Mozilla so far has been pretty interesting. In the world of software, it's almost the polar opposite of RIM (particularly RIM as it was when I left). Working on open source software in an open development process is obviously a large part of it. All our code is publicly available, and a lot of the discussion that goes on happens on Bugzilla and IRC, where anybody can see and participate.

But even more important, and quite surprising to me, is the emphasis on community. Mozilla has a huge focus on building a community around the web - to the extent that every office has a dedicated community space where they host events. For example, during the last week at the Toronto office, we had a "Girls learning code" event where 11-14 year old girls could come and learn about the web and technology in general. This is an aspect of Mozilla that I think a lot of people aren't really aware of (I was only vaguely aware of it before I started here), but it is a core part of the company and mission.

I don't want to ramble on right now, but I plan to blog on Mozilla-specific things, both technical and non-technical, in the future. I suspect some of those posts will not be of general interest, so I'll keep them off the main blog, but you can get at them by going to https://staktrace.com/spout/?tag=mozilla. The RSS feed will likewise be at https://staktrace.com/spout/getrss.php?tag=mozilla. Feel free to subscribe to that if you want to see Mozilla-specific blog content.

[ 4 Comments... ]

Meta-nationals, revisited (2012-03-12 23:25:53)

One of my first posts on this blog was about the idea of a meta-national, or a company that encompasses a country. I specifically cited IBM as an example of a company getting so large it might become one. Interestingly, IBM now has a high school in New York, so it hasn't let me down on that front.

But more interesting is what NASA says. I hadn't really thought about the implications of this before, but it hit me that if space exploration and colonization is going to be done by companies rather than governments, they're likely to want to establish their own laws rather than abide by existing governmental legal structures. In fact, I could see companies becoming the new "countries" as we expand out from Earth, and meta-national companies expanding to meta-planetary companies.

Thoughts on what the next millennium holds with respect to clustering of people? Will we still have countries, or companies, or anything else?

[ 7 Comments... ]

RetroShare (2012-03-05 08:57:11)

RetroShare. I've been looking for something like this for a long time, and I'm amazed I hadn't heard about it until yesterday (via Slashdot, no less). It heartily gets my stamp of approval. It is a truly decentralized communication platform. If anybody is interested in trying it out and wants to add me to their network, grab my public key from this page.

[ 1 Comment... ]

Subjectivity (2012-02-21 19:52:51)

As an undergraduate, it's easy to believe that your assignments and exams are marked based on some objective criteria and your mark in a course is strictly based on your knowledge of the material. As a teaching assistant, you come to realize the truth, which is that it's nearly impossible to come up with a set of objective criteria to mark free-form (i.e. essay-type) responses from students. Back when I was a TA, the marking schemes we got for evaluating assignments and exams were never good enough. There were always students who would put down something that was unspecified or ambiguous in the marking scheme, and we would have to use our best judgement when marking those. This was particularly true on exams.

In fact, when marking some of the larger problems on exams, one of the approaches many TAs (myself included) adopted was to first decide whether or not the student actually knows the answer. If yes, then we assume the student starts with full marks, and we deduct marks for each significant error. If the student doesn't know the answer, we do the opposite: assume the student starts at zero, and add part marks for things they got right. This approach works pretty well in achieving the ultimate goal of the whole process, which is to separate the students who know the material from those who don't.

The part of the process I want to focus on is the first part: where the TA uses his or her best judgement to decide if the student knows the answer to the question or not. This is probably the single most important factor in the student's mark, and it is essentially a subjective decision. Having a more objective marking scheme, which covered every possible student response unambiguously, would eliminate this subjectiveness. However, it would also be impractical as it would take too much time for the instructor to create such a marking scheme, and there is no guarantee that it would actually produce better results.

I'm pretty sure that this subjectivization of supposedly objective processes happens in a lot of places. For instance, consider driving exams. Sure, the examiner has a checklist of items and takes away points for things like not checking your blind spot, but I'd bet that the driving exam (at least in Ontario) is a fundamentally subjective test. If the examiner feels that you are confident, safe, and in control while behind the wheel, you pass.

A key point here is that somebody who passes the subjective test might very well fail a strictly objective test, and vice-versa. You could test a driver using video equipment to ensure they check the rearview mirror every n seconds, or turn their head x degrees when checking their blind spot. But somebody who does all that may still be unsafe on the road. The subjective test allows the examiner to ensure that the driver is following the "spirit of the law" rather than the "letter of the law" when it comes to safe driving.

I further hypothesize that this disparity in the results of subjective and objective tests applies to other domains as well. One that comes to mind is smartphone purchasing. Users buy smartphones based on subjective decisions (whichever happens to appeal to them more). If asked "why?", they will then justify their subjective decision with faked objective criteria like "this phone has feature X" or "this phone is faster". Although those statements may be true, they are like scores on a driving test: made up on the spot to justify the subjective decision.

Based on my experience, many people involved in the production of said smartphones (in engineering disciplines at least) focus on the objective metrics. They think that if they implement "feature X" or make their response time faster, they will win over those users. I don't think it works that way, though. Implementing "feature X" will just make users complain about missing "feature Y" instead, because implementing "feature X" doesn't change the outcome of their subjective decision. To do that you have to resort to psychology or other such disciplines.

[ 4 Comments... ]

De-google-ifying (2012-01-26 01:47:36)

With Google changing their privacy policies on March 1st, I finally had a deadline that I could use to combat my procrastination. So today I took a large step in extricating myself from Google dependencies: switching away from GMail and wiping as much data as possible from my Google account.

I've been using customized emails for all my online accounts (banks, etc.) for a while now, so I didn't have to update anything on that front. Downloading all of my email out of GMail and wiping it clean was also trivial with the mail downloader I wrote a while back. I also set up the GMail account to forward (and then delete from Google's servers) any stray email that comes in.

The main problem, which took me nearly a couple of hours, was extracting the list of people I needed to notify of my email address change. To do this I grepped all of my downloaded GMail messages for the "From" header, and then used a combination of sed, sort, and uniq until I got a frequency-sorted list of all the email addresses that had sent me email. After a few more greps to get rid of obvious stuff (tech support, mailing lists, etc.) I was down to ~1400 addresses. I then sifted through the results manually and extracted the 100 or so people from whom I could expect to receive email in the future, and to whom I sent out a change-of-email notification. I'll need to make sure I keep enough metadata in my new email/contact list to make this process less painful if I ever have to do it again.
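A sketch of the pipeline described above. The ~/mail-archive path and .eml extension are assumptions about how the downloaded mail is stored, and the sed pattern only handles the common "Name <address>" header form.

```shell
#!/bin/sh
# Pull every From: header out of the downloaded mail, reduce each one to a
# bare lowercase address, then emit a frequency-sorted list of senders
# (most frequent first, via "uniq -c | sort -rn").
grep -h "^From:" ~/mail-archive/*.eml \
  | sed 's/^From:.*<\([^>]*\)>.*/\1/' \
  | tr 'A-Z' 'a-z' \
  | sort \
  | uniq -c \
  | sort -rn
```

Running something like this gets you the frequency-sorted address list in one shot; the manual sifting afterwards is the part that still takes the time.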

Other than GMail, I didn't have very much in my Google account. The Google dashboard made it pretty easy to find it all - I exported a copy of whatever was in Google Docs and then unsubscribed myself from those documents, unsubscribed from the Google Groups I was a member of, deleted the one calendar event and the analytics account I had from long ago, and erased the mostly empty orkut and YouTube profiles.

The one problem I now face is that I do actually want to remain subscribed to one of the groups I was in, and it seems to be hard (if not impossible) to do that without a Google account. If it comes to that I can use a secondary Google account for this purpose; I'm still satisfied by how much I was able to reduce the surface area exposed to Google.

[ 0 Comments... ]

Blog comment spam (2012-01-22 23:18:55)

Since spambots/humans-for-hire seem to be getting through my captcha and posting spam comments on this blog more frequently, I've added another layer of security to deal with it. If a comment is posted by a user that is not logged in, then the comment will not appear in any RSS/Atom feeds until it has been approved by me. If it's spam, I'll just delete it so it will never show up in the feeds. It will still show up on the site as part of the comments under a blog post, though.

This should prevent the most common annoyance - spam comments showing up for users following this site via RSS - while not affecting anything else. In particular, if there are a lot of comments coming in on a particular post, some of which are spam and some not, and I happen to be away from a computer and cannot approve the comments, people posting in the thread will still be able to see the full comment thread and reply to the unapproved comments.

[ 0 Comments... ]


 
 
(c) Kartikaya Gupta, 2004-2025. User comments owned by their respective posters. All rights reserved.