Blog




Asynchronous scrolling in Firefox (2015-11-30 13:32:51)

In the Firefox family of products, we've had asynchronous scrolling (aka async pan/zoom, aka APZ, aka compositor-thread scrolling) in Firefox OS and Firefox for Android for a while, although the two had different implementations with different behaviors. We are now in the process of taking the Firefox OS implementation and bringing it to all our other platforms - including desktop and Android. After much hard work by many people, including but not limited to :botond, :dvander, :mattwoodrow, :mstange, :rbarker, :roc, :snorp, and :tn, we finally have APZ enabled on the nightly channel for both desktop and Android. We're working hard on fixing outstanding bugs and getting the quality up before we let it ride the trains out to DevEdition, Beta, and the release channel.

If you want to try it on desktop, note that APZ requires e10s to be enabled, and is currently only enabled for mousewheel/trackpad scrolling. We do have plans to implement it for other input types as well, although that may not happen in the initial release.

Although getting the basic machinery working took some effort, we're now mostly done with that and are facing a different but equally challenging aspect of this change - the fallout on web content. Modern web pages have access to many different APIs via JS and CSS, and implement many interesting scroll-linked effects, often triggered by the scroll event or driven by a loop on the main thread. With APZ these approaches don't work quite so well, because the user-visible scrolling is inherently asynchronous from the main thread where JS runs, and we generally avoid blocking the compositor on main-thread JS. This can result in jank or jitter for some of these effects, even though the main page scrolling itself remains smooth. I picked a few of the simpler scroll effects to discuss in a bit more detail below - not a comprehensive list by any means, but hopefully enough to help you get a feel for some of the nuances here.

Smooth scrolling

Smooth scrolling - that is, animating the scroll to a particular scroll offset - is fairly common on web pages. Many pages do this using a JS loop to animate the scroll position. Without taking advantage of APZ this will still work, but it can result in less-than-optimal smoothness and framerate, because the main thread can be busy doing other things.

Since Firefox 36, we've had support for the scroll-behavior CSS property, which allows content to achieve the same effect without the JS loop. Our implementation of scroll-behavior without APZ still runs on the main thread, though, and so can still end up janky if the main thread is busy. With APZ enabled, the scroll-behavior implementation triggers the scroll animation on the compositor thread, so it should be smooth regardless of load on the main thread. Polyfills for scroll-behavior, or old-school implementations in JS, will remain synchronous, so for best performance we recommend switching to the CSS property where possible. That way, as APZ rolls out to release, you'll get the benefits automatically.
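
Here's roughly what that looks like in practice (a minimal sketch - the selector and scroll offset are made up, and the exact easing and duration of the native animation are up to the browser):

/* CSS: any programmatic scroll on this element animates smoothly */
.scroller {
  scroll-behavior: smooth;
}

// JS: the CSSOM View form of scrollTo; with APZ the resulting animation
// runs on the compositor, so a busy main thread won't make it janky.
document.querySelector('.scroller').scrollTo({ top: 1200, behavior: 'smooth' });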

Here is a simple example page that has a spinloop to block the main thread for 500ms at a time. Without APZ, clicking on the buttons results in a very janky/abrupt scroll, but with APZ it should be smooth.

position:sticky

Another common paradigm seen on the web is "sticky" elements - they scroll with the page for a bit, and then turn into position:fixed elements after a point. Again, this is usually implemented with JS listening for scroll events and updating the styles on the elements based on the scroll offset. With APZ, scroll events are going to be delayed relative to what the user is seeing, since the scroll events arrive on the main thread while scrolling is happening on the compositor thread. This will result in glitches as the user scrolls.

Our recommended approach here is to use position:sticky when possible, which we have supported since Firefox 32 and which our compositor knows how to handle even with APZ enabled. This CSS property allows the element to scroll normally but take on the behavior of position:fixed beyond a threshold. It isn't supported across all browsers yet, but there are a number of polyfills available - see the resources tab on the Can I Use position:sticky page for some options.
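
The CSS version needs no script at all; a minimal sketch (the class name and offset are illustrative):

.toolbar {
  position: sticky;
  top: 0; /* scrolls normally, then pins to the top of the scrollport */
}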

Again, here is a simple example page that has a spinloop to block the main thread for 500ms at a time. With APZ, the JS version will be laggy but the position:sticky version should always remain in the right place.

Parallax

Parallax. Oh boy. There are a lot of different ways to do this, but almost all of them rely on listening to scroll events and updating element styles based on that. For the same reasons as described in the previous section, implementations of parallax scrolling that are based on scroll events are going to lag behind the user's actual scroll position. Until recently, we didn't have a solution for this problem.

However, a few days ago :mattwoodrow landed compositor support for asynchronous scroll adjustments of 3D transforms, which allows a pure CSS parallax implementation to work smoothly with APZ. Keith Clark has a good writeup on how to do this, so I'm just going to point you there. All of his demo pages should scroll smoothly in Nightly with APZ enabled.
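
The gist of the technique is to let the compositor-side scroll adjustment of a 3D transform do the work (a rough sketch in the spirit of Keith Clark's writeup; the class names and depth/scale numbers are illustrative):

.parallax-viewport {
  height: 100vh;
  overflow-y: scroll;
  perspective: 1px; /* establishes the 3D space that async scrolling adjusts */
}

.parallax-layer-back {
  /* pushed back 1px with perspective 1px, so it scrolls at half speed;
     scale(2) compensates for the perspective shrink */
  transform: translateZ(-1px) scale(2);
}

.parallax-layer-base {
  transform: translateZ(0); /* scrolls at normal speed */
}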

Unfortunately, it looks like this CSS-based approach may not work well across all browsers, so please make sure to test carefully if you want to try it out. Also, if you have suggestions for other methods of implementing parallax that don't rely on a responsive main thread, please let us know. For example, :mstange created one at http://tests.themasta.com/transform-fixed-parallax.html which we should be able to support in the compositor without too much difficulty.

Other features

I know that there are other interesting scroll-linked effects that people are doing or want to do on the web, and we'd really like to support them with asynchronous scrolling. The Blink team has a bunch of different proposals for browser APIs that can help with these sorts of things, including things like CompositorWorker and scroll customization. For more information and to join the discussion on these, please see the public-houdini mailing list. We'd love to get your feedback!

(Thanks to :botond and :mstange for reading a draft of this post and providing feedback.)


Management, TRIBE, and other thoughts (2015-05-31 13:07:46)

At the start of 2014, I became a "manager". At least in the sense that I had a couple of people reporting to me. Like most developers-turned-managers I was unsure if management was something I wanted to do but I figured it was worth trying at least. Somebody recommended the book First, Break All The Rules to me as a good book on management, so I picked up a copy and read it.

The book is based on data from many thousands of interviews and surveys that the Gallup organization did, across all sorts of organizations. There were lots of interesting points in the book, but the main takeaway relevant here was that people who build on their strengths instead of trying to correct their weaknesses are generally happier and more successful. This leads to some obvious follow-up questions: how do you know what your strengths are? What does it mean to "build on your strengths"?

To answer the first question I got the sequel, Now, Discover Your Strengths, which includes a single-use code for the online StrengthsFinder assessment. I read the book, took the assessment, and got a list of my top 5 strengths. While interesting, the list was kind of disappointing, mostly because I didn't really know what to do with it. Perhaps the next book in the series, Go Put Your Strengths To Work, would have explained but at this point I was disillusioned and didn't bother reading it.

Fast-forward to a month ago, when I finally got to attend the first TRIBE session. I'd heard good things about it without really knowing anything specific about it. Shortly before it started, though, they sent us a copy of Strengths Based Leadership, a book based on the same Gallup data as the aforementioned books, which includes a code for the 2.0 version of the same online StrengthsFinder assessment. I read the book and took the new assessment (3 of the 5 strengths I got matched my initial results; the variance is explained on their FAQ page) but didn't really end up with much more information than I had before.

However, the TRIBE session changed that. It was during the session that I learned the answer to my earlier question about what it means to "build on strengths". If you're familiar with the 4 stages of competence, that TRIBE session took me from "unconscious incompetence" to "conscious incompetence" with regard to using my strengths - it made me aware of when I'm using my strengths and when I'm not, and to be more purposeful about when to use them. (Two asides: (1) the TRIBE session also included other useful things, so I do recommend attending and (2) being able to give something a name is incredibly powerful, but perhaps that's worth a whole 'nother blog post).

At this point, I'm still not 100% sure if being a manager is really for me. On the one hand, the strengths I have are not really aligned with the strengths needed to be a good manager. On the other hand, the Strengths Based Leadership book does provide some useful tips on how to leverage whatever strengths you do have to help you fulfill the basic leadership functions. I'm also not really sold on the idea that your strengths are roughly constant over your lifetime. Having read about neuroplasticity I think your strengths might change over time just based on how you live and view your life. That's not really a case for or against being a manager or leader, it just means that you'd have to be ready to adapt to an evolving set of strengths.

Thankfully, at Mozilla, unlike at many other companies, it is possible to "grow" without getting pushed into management. The Mozilla staff engineer level descriptions provide two tracks - one as an individual contributor and one as a manager (assuming these descriptions are still current - since the page was last touched almost 2 years ago, they might very well not be!). At many companies this is not even an option.

For now I'm going to try to level up to "conscious competence" with respect to using my strengths and see where that gets me. Probably by then the path ahead will be more clear.


Firewalling for fun and safety (2015-01-04 21:17:41)

TL;DR: If you have a home wi-fi network, think about setting up multiple separate VLANs as a "defense in depth" technique to protect hosts from malware.

The long version: A few years ago when I last needed to get a router, I got one which came with DD-WRT out of the box (made by Buffalo). I got it because DD-WRT (and Tomato) were all the rage back then and I wanted to try it out. While I was setting it up I noticed I could set up multiple Wi-Fi SSIDs on my home network, each with different authentication parameters. So I decided to create two - one for my own use (WPA2 encrypted) and one for guests (with a hidden SSID and no encryption). That way when somebody came over and wanted to use my Wi-Fi I could just give them the (hidden) SSID name and they would be able to connect without a password.

This turned out to be a pretty good idea and served me well. Since then, though, I've acquired many more devices that also need Wi-Fi access, and in the interest of security I've made my setup a little more complex. Consider the webcam I bought a few months ago. It shipped from somewhere in China and comes with software that I totally don't trust. Not only is it not open-source, it's not upgradeable, and it regularly tries to talk to some Amazon EC2 server. It would be pretty bad if malware managed to infect the webcam and not only used it to spy on me, but also used it as a staging area to attack other devices on my network.

(Aside: most people with home Wi-Fi networks implicitly treat the router as a firewall, in that random devices outside the network can't directly connect to devices inside the network. For the most part this is true, but of course it's not hard for a persistent attacker to do periodic port scans to see if there are any hosts inside your network listening for connections via UPnP or whatever, and use that as an entrance vector if the service has vulnerabilities.)

Anyway, back to the webcam. I ended up only allowing it to connect to an isolated Wi-Fi network, and used firewall rules on the router to prevent all access to or from it, except for a single server which could access a single port on it. That server basically extracted the webcam feed and exposed it to the rest of my network. Doing this isn't a perfect solution, but it adds a layer of security that makes it harder for malware to penetrate.
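
On a DD-WRT box this kind of isolation comes down to a handful of iptables rules; a minimal sketch (all of the addresses, and the choice of port 80 for the camera feed, are made up for illustration):

# block all traffic between the camera and the main LAN
iptables -I FORWARD -s 192.168.2.10 -d 192.168.1.0/24 -j DROP
iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.2.10 -j DROP

# ...but let one trusted server reach one port on the camera
# (-I prepends, so these rules are evaluated before the DROPs above)
iptables -I FORWARD -s 192.168.1.5 -d 192.168.2.10 -p tcp --dport 80 -j ACCEPT
iptables -I FORWARD -s 192.168.2.10 -d 192.168.1.5 -m state --state ESTABLISHED -j ACCEPT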

There's a ton of other Wi-Fi devices on my network - a printer, various smartphones, a couple of Sonos devices, and so on. As the "Internet of Things" grows, this list is bound to grow as well. If you care about ensuring the security of the machines on your network, and not letting them become part of some random hacker's botnet, knowing how to turn your router into a full-fledged firewall is a very useful tool indeed. Even if you choose not to lock things down to the extent that I do, simply monitoring connections between devices inside your network and hosts outside your network can be a huge help.


Killing the office suite (2014-11-15 11:44:20)

Have you ever had the experience of trying to write a document in MS Word (or Open/LibreOffice) and it keeps "correcting" your formatting to something you don't want? The last time I experienced that was about a year ago, and that was when I decided "screw it, I'll just write this in HTML instead". That was a good decision.

Pretty much anything you might want to use a word processor for, you can do in HTML - and oftentimes it's simpler. Sure, there's a bit of a learning curve if you don't know HTML, but that's true for anything. Now anytime I need to create "a document" (a letter, random notes or signs to print, etc.) I always do it in HTML rather than LibreOffice, and I'm the happier for it. I keep all my data in git repositories, and so it's a bonus that these documents are now in a plaintext format rather than a binary blob.
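
For a sense of how little boilerplate is involved, here's about all a simple printable document needs (a sketch; the title and styling are whatever suits the document):

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Some letter</title>
<style>
  body { max-width: 40em; margin: 2em auto; font-family: serif; }
</style>
</head>
<body>
<p>Dear so-and-so,</p>
<p>...</p>
</body>
</html>

Print-to-PDF from the browser covers the cases where a paper or PDF copy is needed.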

I realized that this is probably part of a trend - a lot of people I know nowadays do "powerpoint" presentations using web technologies such as reveal.js. I haven't seen many people comment on using web tech to do word processing, but I know I do it. The only big "office suite" thing left is the spreadsheet. It would be awesome if somebody wrote a drop-in JS spreadsheet library that you could include in a HTML page to instantly turn a table element into a spreadsheet.

I'm reminded of this old blog post by Joel Spolsky: How Trello is different. He talks about how most of the people who use Excel really just use it because it provides a table format for entering things, rather than for its computational ability. HTML already provides that, but whenever I've tried doing it that way I find the markup/content ratio too high, so it always seemed like a pain. It would be nice to have a WYSIWYG tool that let you build a table (or spreadsheet) and import/export it as raw HTML that you can publish, print, share, etc.

As an addendum, that blog post by Joel also introduced me to the concept of unshipped code as "inventory", which is one of the reasons I really hate finding old bugs sitting around in Bugzilla with perfectly good patches that never landed!


Building a NAS (2014-10-28 21:24:15)

I've been wanting to build a NAS (network-attached storage) box for a while now, and the ominous creaking noises from the laptop I was previously using as a file server prompted me to finally take action. I wanted to build rather than buy because (a) I wanted more control over the machine and OS, (b) I figured I'd learn something along the way, and (c) I thought it might be cheaper. This blog post documents the decisions and mistakes I made and the problems I ran into.

First step was figuring out the level of data redundancy and storage space I wanted. After reading up on the different RAID levels I figured 4 drives with 3 TB each in a RAID5 configuration would suit my needs for the next few years. RAID5 gives one drive's worth of space to parity, which works out to ~9TB of usable space here; that sounded fine since I don't have a huge amount of data, and being able to survive single-drive failures sounded sufficient to me. For all critical data I keep a copy on a separate machine as well.

I chose to go with software RAID rather than hardware because I've read horror stories of hardware RAID controllers going obsolete and being unable to find a replacement, rendering the data unreadable. That didn't sound good. With an open-source software RAID controller at least you can get the source code and have a shot at recovering your data if things go bad.

With this in mind I started looking at software options - a bit of searching took me to FreeNAS which sounded exactly like what I wanted. However after reading through random threads in the user forums it seemed like the FreeNAS people are very focused on using ZFS and hardware setups with ECC RAM. From what I gleaned, using ZFS without ECC RAM is a bad idea, because errors in the RAM can cause ZFS to corrupt your data silently and unrecoverably (and worse, it causes propagation of the corruption). A system that makes bad situations worse didn't sound so good to me.

I could have still gone with ZFS with ECC RAM but from some rudimentary searching it sounded like it would increase the cost significantly, and frankly I didn't see the point. So instead I decided to go with NAS4Free (which actually was the original FreeNAS before iXsystems bought the trademark and forked the code) which allows using a UFS file system in a software RAID5 configuration.

So with the software decisions made, it was time to pick hardware. I used this guide by Sam Kear as a starting point and modified a few things here and there. I ended up with this parts list that I mostly ordered from canadadirect.com. (Aside: I wish I had discovered pcpartpicker.com earlier in the process as it would have saved me a lot of time). They shipped things to me in 5 different packages which arrived on 4 different days using 3 different shipping services. Woo! The parts I didn't get from canadadirect.com I picked up at a local Canada Computers store. Then, last weekend, I put it all together.

It's been a while since I've built a box so I screwed up a few things and had to rewind (twice) to fix them. It took about 3 hours in total for assembly; somebody who knew what they were doing could have done it in less than one. I mostly blame the lack of documentation that came with the chassis, since there were a bunch of different screws and it wasn't obvious which ones to use for what. All of them worked for mounting the motherboard, but only one kind was actually correct, and using the wrong ones meant trouble later.

In terms of hardware compatibility I think my choices were mostly sound, but there were a few hitches. The case and motherboard both support up to 6 SATA drives (I'm using 4, giving me some room to grow). However, the PSU only came with 4 SATA power connectors, which means I'll need to get some adaptors (or maybe a different PSU) if I ever add drives. The other problem was that the chassis comes with three fans (two small ones at the front, one big one at the back) but there was only one chassis fan power connector on the motherboard. I plugged the big fan in, and so far the machine seems to be staying pretty cool, so I'm not too worried. It does seem like a waste to have those extra unused fans though.

Finally, I booted it up using a monitor/keyboard borrowed from another machine, and ran memtest86 to make sure the RAM was good. It was, so I flashed the NAS4Free LiveUSB onto a USB drive and booted it up. Unfortunately after booting into NAS4Free my keyboard stopped working. I had to disable the USB 3.0 stuff in the BIOS to get around that. I don't really care about having USB 3.0 support on this machine so not a big deal. It took me some time to figure out what installation mode I wanted to use NAS4Free in. I decided to do a full install onto a second USB drive and not have a swap partition (figured hosting swap over USB would be slow and probably unnecessary).

So installing that was easy enough, and I was able to boot into the full NAS4Free install and configure it to have a software RAID5 on the four disks. Things generally seemed OK and I started copying stuff over... and then the box rebooted. It also managed to corrupt my installation somehow, so I had to start over from the LiveUSB stick and re-install. I had saved the config from the first time so it was easy to get it back up again, and once again I started putting data on there. Again it rebooted, although this time it didn't corrupt my installation. This was getting worrying, particularly since the system log files provided no indication as to what went wrong.

My first suspicion was that the RAID wasn't fully initialized and so copying data onto it resulted in badness. The array was still "rebuilding", and although it's supposed to be usable during the rebuild, I figured I might as well wait until it was done. Turns out it was going to keep rebuilding for the next ~20 days, because initializing RAID5 means reading and writing every sector of every disk to compute the parity, and with multi-terabyte disks that takes forever. So in retrospect perhaps RAID5 was a poor choice for such large disks.

Anyway in order to debug the rebooting, I looked up the FreeBSD kernel debugging documentation, and that requires having a swap partition that the kernel can dump a crash report to. So I reinstalled and set up a swap partition this time. This seemed to magically fix the rebooting problem entirely, so I suspect the RAID drivers just don't deal well when there's no swap, or something. Not an easy situation to debug if it only happens with no swap partition but you need a swap partition to get a kernel dump.

So, things were good, and I started copying more data over and configuring more stuff and so on. The next problem I ran into was the USB drive to which I had installed NAS4Free started crapping out with read/write errors. This wasn't so great but by this point I'd already reinstalled it about 6 or 7 times, so I reinstalled again onto a different USB stick. The one that was crapping out seems to still work fine in other machines, so I'm not sure what the problem was there. The new one that I used, however, was extremely slow. Things that took seconds on the previous drive took minutes on this one. So I switched again to yet another drive, this time an old 2.5" internal drive that I have mounted in an enclosure through USB.

And finally, after installing the OS at least I've-lost-count-how-many times, I have a NAS that seems stable and appears to work well. To be fair, reinstalling the OS is a pretty painless process and by the end I could do it in less than 10 minutes from sticking in the LiveUSB to a fully-configured working system. Being able to download the config file (which includes not just the NAS config but also user accounts and so on) makes it pretty painless to restore your system to exactly the way it was. The only additional things I had to do were install a few FreeBSD packages and unpack a tarball into my home directory to get some stuff I wanted. At no point was any of the data on the RAID array itself lost or corrupted, so I'm pretty happy about that.

In conclusion, setup was a bit of a pain, mostly due to unclear documentation and flaky USB drives (or drivers) but now that I have it set up it seems to be working well. If I ever have to do it over I might go for something other than RAID5 just because of the long rebuild time but so far it hasn't been an actual problem.


Google-free android usage (2014-10-18 22:42:19)

When I switched from using a BlackBerry to an Android phone a few years ago, it really irked me that the only way to keep my contacts info on the phone was to also let Google sync it into their cloud. This may not be true universally (I think some Samsung phones will let you store contacts to the SD card) but it was true for the phone I was using then, and is true on the Nexus 4 I'm using now. It took a lot of painful digging through Android source and googling, but I successfully ended up writing a bunch of code to get around this.

I've been meaning to put up the code and post this for a while, but kept procrastinating because the code wasn't generic/pretty enough to publish. It still isn't but it's better to post it anyway in case somebody finds it useful, so that's what I'm doing.

In a nutshell, what I wrote is an Android app that includes (a) an account authenticator, (b) a contacts sync adapter and (c) a calendar sync adapter. On a stock Android phone this will allow you to create an "account" on the device and add contacts/calendar entries to it.
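
For anyone wanting to build something similar, the contacts piece centers on subclassing Android's AbstractThreadedSyncAdapter; a minimal sketch (the class name is made up, and all the actual fetching/merging logic is elided):

import android.accounts.Account;
import android.content.AbstractThreadedSyncAdapter;
import android.content.ContentProviderClient;
import android.content.Context;
import android.content.SyncResult;
import android.os.Bundle;

public class ContactsSyncAdapter extends AbstractThreadedSyncAdapter {
    public ContactsSyncAdapter(Context context, boolean autoInitialize) {
        super(context, autoInitialize);
    }

    @Override
    public void onPerformSync(Account account, Bundle extras, String authority,
                              ContentProviderClient provider, SyncResult syncResult) {
        // Runs on a background thread: fetch the contact data from the remote
        // server here and write it into the local database via |provider|.
    }
}

The adapter then gets exposed to the system through a bound Service that returns getSyncAdapterBinder() from onBind(), plus the corresponding sync-adapter XML and manifest declarations.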

Note that I wrote this to interface with the way I already have my data stored, so the account creation process actually tries to validate the entered credentials against a webhost, and the contacts sync adapter is a working one-way sync adapter that will download contact info from a remote server in vcard format and update the local database. The calendar sync adapter, though, is just a dummy. You're encouraged to rip out the parts that you don't want and use the rest as you see fit. It's mostly meant to be a working example of how this can be accomplished.

The net effect is that you can store contacts and calendar entries on the device so they don't get synced to Google, but you can still use the built-in contacts and calendar apps to manipulate them. This benefits from much better integration with the rest of the OS than if you were to use a third-party contacts or calendar app.

Source code is on Github: staktrace/pimple-android.


Maker Party shout-out (2014-09-18 10:48:00)

I've blogged before about the power of web scale; about how important it is to ensure that everybody can use the web and to keep it as level a playing field as possible. That's why I love hearing about announcements like this one: 127K Makers, 2513 Events, 86 Countries, and One Party That Just Won't Quit. Getting more people all around the world to learn about how the web works and keeping that playing field level is one of the reasons I love working at Mozilla. Even though I'm not directly involved in Maker Party, it's great to see projects like this having such a huge impact!


Cracking libxul (2014-05-22 09:02:20)

For a while now I've been wanting to take a look inside libxul to see why it's so big. In particular I wanted to know what the impact of using templates so heavily in our code was - things like nsTArray and nsRefPtr are probably instantiated on hundreds of different types throughout our codebase. Last night I was having trouble sleeping so I decided to crack open libxul and see if I could figure it out. I didn't persist enough to get the exact answers I wanted, but I got close enough. It was also kind of fun, and I figured I'd post about it partly as an educational thing and partly to inspire others to dig deeper into this.

First step: build libxul. I had a debug build with a recent gecko on my Linux machine, so I just used the libxul.so from that.

Second step: disassemble libxul.

objdump -d libxul.so > libxul.disasm


Although I've looked at disassemblies before, I had to look at the file in vim a little bit to figure out the best way to parse it to get what I wanted, which was the size of every function defined in the library. This turned out to be a fairly simple awk script.

Third step: get function sizes. (snippet below is reformatted for easier reading)

awk 'BEGIN { addr=0; label="";}
     /:$/ && !/Disassembly of section/ { naddr = sprintf("%d", "0x" $1);
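                                         # (converting "0x" $1 this way relies on the awk using strtod, which parses hex; with gawk, strtonum("0x" $1) is the reliable spelling)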
                                         print (naddr-addr), label;
                                         addr=naddr;
                                         label=$2 }'
    libxul.disasm > libxul.sizes


For those of you unfamiliar with awk, this identifies every line that ends in a colon but doesn't contain the text "Disassembly of section" (I determined this would be sufficient to match the line that starts off every function disassembly). It then takes the address (which is in hex in the dump), converts it to decimal, and subtracts from it the address of the previous matching line, which gives the size of the previous function. Finally it dumps out the size/name pairs. I inspected the file to make sure it looked ok, and removed a bad line at the top of the file (easier to fix it manually than fix the awk script).

Now that I had the size of each function, I did a quick sanity check to make sure it added up to a reasonable number:

awk '{ total += $1 } END { print total }' libxul.sizes
40263032


The value spit out is around 40 megs. This seemed to be in the right order of magnitude for code in libxul so I proceeded further.

Fourth step: see what's biggest!

sort -rn libxul.sizes | head -n 20
57984 <_ZL9InterpretP9JSContextRN2js8RunStateE>:
43798 <_ZN20nsHtml5AttributeName17initializeStaticsEv>:
41614 <_ZN22nsWindowMemoryReporter14CollectReportsEP25nsIMemoryReporterCallbackP11nsISupports>:
39792 <_Z7JS_Initv>:
32722 <vp9_fdct32x32_sse2>:
28674 <encode_mcu_huff>:
24365 <_Z7yyparseP13TParseContext>:
21800 <_ZN18nsHtml5ElementName17initializeStaticsEv>:
20558 <_ZN7mozilla3dom14PContentParent17OnMessageReceivedERKN3IPC7MessageE.part.1247>:
20302 <_ZN16nsHtml5Tokenizer9stateLoopI23nsHtml5ViewSourcePolicyEEiiDsiPDsbii>:
18367 <sctp_setopt>:
17900 <vp9_find_best_sub_pixel_comp_tree>:
16952 <_ZN7mozilla3dom13PBrowserChild17OnMessageReceivedERKN3IPC7MessageE>:
16096 <vp9_sad64x64x4d_sse2>:
15996 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE17EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
15594 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE16EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14963 <vp9_idct32x32_1024_add_sse2>:
14838 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE4EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14792 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE21EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14740 <_ZN16nsHtml5Tokenizer9stateLoopI19nsHtml5SilentPolicyEEiiDsiPDsbii>:


That output looks reasonable. Top of the list is something to do with interpreting JS, followed by some HTML name static initializer thing. Guessing from the symbol names it seems like everything there would be pretty big. So far so good.

Fifth step: see how much space nsTArray takes up. As you can see above, the function names in the disassembly are mangled, and while I could spend some time trying to figure out how to demangle them it didn't seem particularly worth the time. Instead I just looked for symbols that started with nsTArray_Impl which by visual inspection seemed to match what I was looking for, and would at least give me a ballpark figure.

grep "<_ZN13nsTArray_Impl" libxul.sizes | awk '{ total += $1 } END { print total }'
377522


That's around 377k of stuff just to deal with nsTArray_Impl functions. You can compare that to the total libxul number and the largest functions listed above to get a sense of how much that is. I did the same for nsRefPtr and got 92k. Looking for ZNSt6vector, which I presume is the std::vector class, returned 101k.

That more or less answered the questions I had and gave me an idea of how much space was being used by a particular template class. I tried a few more things like grouping by the first 20 characters of the function name and summing up the sizes, but it didn't give particularly useful results. I had hoped it would approximate the total size taken up by each class but because of the variability in name lengths I would really need a demangler before being able to get that.
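
(If I were doing it again, binutils' c++filt would probably close that gap - it demangles any mangled names it finds in its input, so something along these lines should allow grouping by demangled class name; an untested sketch:)

c++filt < libxul.sizes > libxul.demangled.sizes
grep 'nsTArray_Impl<' libxul.demangled.sizes | awk '{ total += $1 } END { print total }'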


Brendan as CEO (2014-03-31 16:37:55)

I would not vote for Brendan if he were running for president. However I fully support him as CEO of Mozilla.

Why the difference? Simply because as Mozilla's CEO, his personal views on LGBT rights (at least what one can infer from his monetary support of Prop 8) do not have any measurable chance of making any difference in what Mozilla does or in Mozilla's mission. It's not like we're going to ship Firefox OS phones to everybody... except LGBT individuals. There's a zero chance of that happening.

From what I've read so far (and I would love to be corrected) it seems like people who are asking Brendan to step down are doing so as a matter of principle rather than a matter of possible consequence. They feel very strongly about LGBT equality, and rightly so. And therefore they do not want to see any person who is at all opposed to that cause take any position of power, as a general principle. This totally makes sense, and given two CEO candidates who are identical except for their views on LGBT issues, I too would pick the pro-LGBT one.

But that's not the situation we have. I don't know who the other CEO candidates are or were, but I can say with confidence that there's nobody else in the world who can match Brendan in some areas that are very relevant to Mozilla's mission. I don't know exactly what qualities we need in a CEO right now but I'm pretty sure that dedication and commitment to Mozilla's mission, as well as technical expertise, are going to be pretty high on that list. That's why I support Brendan as CEO despite his views.

If you're reading this, you are probably a strong supporter of Mozilla's mission. If you don't want Brendan as CEO because of his views, it's because you are being forced into making a tough choice - you have to choose between the "open web" affiliation on your personal identity and the "LGBT" affiliation on your personal identity. That's a hard choice for anybody, and I don't think anybody can fault you regardless of what you choose.

If you choose to go further and boycott Mozilla and Mozilla's products because of the CEO's views, you have a right to do that too. However I would like to understand how you think this will help with either the open web or LGBT rights. I believe that switching from Firefox to Chrome will not change Brendan or anybody else's views on LGBT rights, and will actively harm the open web. The only winner there is Google's revenue stream. If you disagree with this I would love to know why. You may wish to boycott Mozilla products as a matter of principle, and I can't argue with that. But please make sure that the benefit you gain from doing so outweighs the cost.


Javascript login shell (2014-03-23 10:46:29)

I was playing around with node.js this weekend, and I realized that it's not that hard to end up with a Javascript-based login shell. A basic one can be obtained by simply installing node.js and ShellJS on top of it. Add CoffeeScript to get a slightly less verbose syntax. For example:

kats@kgupta-pc shelljs$ coffee
coffee> require './global.js'
{}
coffee> ls()
[ 'LICENSE',
  'README.md',
  'bin',
  'global.js',
  'make.js',
  'package.json',
  'scripts',
  'shell.js',
  'src',
  'test' ]
coffee> cat 'global.js'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> cp('global.js', 'foo.tmp')
undefined
coffee> cat 'foo.tmp'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> rm 'foo.tmp'
undefined
coffee> 


Basically, if you're in a JS REPL (node.js or coffee) and you have access to functions that wrap shell utilities (which is what ShellJS provides), then you can use that setup as your login shell instead of bash or zsh or whatever else you might be using.

I'm a big fan of bash, but I am sometimes frustrated with some things in it, such as hard-to-use variable manipulation and the fact that loops in pipelines run in subshells, making state manipulation hard. Being able to write scripts in JS instead of bash would solve that quite nicely. There are probably other use cases in which having a JS shell as your login shell would be quite handy.
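
To illustrate the subshell annoyance (a sketch - files.txt is a stand-in for whatever you're iterating over, and the JS half assumes a ShellJS-style global setup like the one above):

# bash: the piped-into while loop runs in a subshell, so the counter updates are lost
count=0
cat files.txt | while read f; do count=$((count+1)); done
echo $count    # prints 0

// JS: plain variables, no subshell surprises
var count = 0;
cat('files.txt').split('\n').forEach(function (line) { if (line) count++; });
console.log(count);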

