The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."
(From Thinking, Fast and Slow, by Daniel Kahneman)
When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.
In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each other's worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).
Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.
[ 2 Comments... ]
My girlfriend was writing a blog post and wanted to put some tables in. Unfortunately the WordPress interface wasn't being very helpful with adding tables, so she took a PNG screenshot of the tables in Word and inserted those in. While the solution worked fine for her, I was aware that turning text into images is generally not a great solution for the web, for a number of reasons (bad for accessibility, less dynamic layout, slower page load, etc.).
I realized this was a perfect opportunity for a little webmaking experience. I fired up Thimble on my laptop, and with her looking over my shoulder, created a simple two-row table with the style she wanted, showing her some of the different style attributes that could be used. Once I had the start of her table, I published it as a "make". She was then able to view it on her laptop and remix it, putting in the rest of the data she wanted. She also created a second table (complete with bullet points and red text), and pasted the HTML back into the WordPress HTML editor.
I think the part I like best about tools like Thimble is that they are very low-overhead to use. You don't need to sign in to start using it, and the interface is simple and obvious. For publishing the make, I did have to sign in, but with Persona, so I didn't have to create a new password for it. (If it required creating a more heavyweight account I would have just copied the HTML into pastebin or something, which is not nearly as cool.) And of course, the live preview of the HTML in Thimble is pretty neat too :)
Oh, and you should definitely read her blog post: Get Moving!
[ 1 Comment... ]
Please read Unraveling coordinate systems, part 1 for the backstory. This post describes how CSS transforms fit into the world of coordinate systems I described there.
But first, a quick status update: since I wrote that post, a fair amount of code has been converted to use the new typed classes (CSSPoint, LayoutDevicePoint, etc.). Special thanks to Ms2ger who has been incrementally converting various bits of layout code to use these where appropriate. The conversion has turned up some type mismatches and flushed out bugs on B2G, Fennec, and recently even Metro.
And now, on to the main event: CSS transforms. CSS transforms make things complicated because they are not really applied in layout code, but saved as a transform on the layer and applied at composition time. (Recall that the layer can be thought of as the rendered texture that gets uploaded to the graphics hardware). From a quick reading of the CSS transforms spec this makes sense, because the transforms are not meant to affect the layout properties of elements. However it complicates coordinate system work because a "transform: scale(2)" CSS style, for instance, doesn't change the size of the element in LayoutDevicePixels but does affect the ScreenPixel size.
If you recall from part 1, I defined LayoutDevicePixels as taking into account the device DPI and full zoom amounts; they're basically the coordinate system that the layout code outputs positioning data in. If there is an element with a 2x scale CSS transform applied, then the ScreenPixel height of the element must be twice the LayoutDevicePixel height of the element (for the moment let us ignore any OMTC transforms and resolution changes that might also be applied). However, there are no restrictions on the size of the layer, which means the LayerPixel that sits in between the LayoutDevicePixel and ScreenPixel can be anything.
So, we could, for example, have the layer be the same size as the untransformed element (1 LayoutDevicePixel == 1 LayerPixel) and then do the entire 2x scale in hardware (1 LayerPixel == 2 ScreenPixels). Or we could do something like have gecko render at higher density (1 LayoutDevicePixel == 2 LayerPixels) and then not scale it in hardware (1 LayerPixel == 1 ScreenPixel). We could even render it 4x in gecko and then scale it down in hardware by 2x, if we wanted. Rendering it small in gecko and scaling it up in hardware results in a blurry image, and rendering it large in gecko and scaling it down in hardware is a waste of memory/CPU. In theory, the scaling that results in the best visual effect without wasting resources is to have no hardware scaling and have gecko render exactly at the specified CSS transform. However, in practice, we use some heuristics to figure out the best tradeoff here because the CSS transform can change over time.
And while we're on the subject, a similar thing actually happens with the OMTC transform. If we take a regular page and pinch-zoom it to 2x, what actually happens on the layout side is that the presShell resolution is set to 2x, the CSS transform on the page is updated to be 0.5x, and the OMTC transform is again 2x. So the net effect is that the page is scaled up by 2x, with 1 LayoutDevicePixel == 2 LayerPixels == 2 ScreenPixels, but the fact that the CSS transform on the layer is modified is important from an implementation point of view.
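The arithmetic in that pinch-zoom example can be sketched with a hypothetical helper (the function name and the use of single scale factors instead of full transform matrices are illustrative simplifications):

```cpp
#include <cassert>

// Net LayoutDevicePixel -> ScreenPixel scale for a layer: the
// presShell resolution takes us to LayerPixels, and the product of
// the layer's CSS transform and OMTC transform takes us to the screen.
float NetScale(float presShellResolution,
               float cssTransformScale,
               float omtcTransformScale) {
    return presShellResolution * cssTransformScale * omtcTransformScale;
}
```

With the values from the example above, NetScale(2.0, 0.5, 2.0) gives the expected net 2x scale-up of the page.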
Ok, so that's all well and good, and gives us nice-looking CSS transforms efficiently, at the expense of some complicated coordinate transforms. Now we can flesh out our summary from last time to be a little more precise:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution x OMTC-driven resolution = LayerPixel
LayerPixel x CSS transforms x OMTC transforms = ScreenPixel
But that's not all. Note that CSS transforms apply to elements in the DOM rather than the entire document. This means that different layers in the layer tree can have different transforms, and these transforms accumulate when mapping a particular layer up to the screen. Madness! That means there's no longer just one set of LayerPixels but many! And with the work that I've been doing to make subframes asynchronously scrollable, many of the layers can have their own individual OMTC-driven resolutions and OMTC transforms as well! More madness! (In practice only top-level layers for PresShells will have OMTC resolutions but I wouldn't count on that always remaining the case.)
So to update the summary again, I need to introduce some layers. Assume that layer L is some random layer in the tree, it has a parent layer M, grandparent layer N, and so on up until the root layer R. Then:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution (for R) x OMTC-driven resolution (for R) = LayerPixel (for R)
LayerPixel (for R) x CSS-driven resolution (for Q) x OMTC-driven resolution (for Q) = LayerPixel (for Q)
LayerPixel (for Q) x CSS-driven resolution (for P) x OMTC-driven resolution (for P) = LayerPixel (for P)
...
LayerPixel (for M) x CSS-driven resolution (for L) x OMTC-driven resolution (for L) = LayerPixel (for L)
LayerPixel (for L) x CSS transform (for L) x OMTC transform (for L) x CSS transform (for M) x OMTC transform (for M) x ... x CSS transform (for R) x OMTC transform (for R) = ScreenPixel
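The accumulation from a layer up to the screen can be sketched with some hypothetical simplified code. Real transforms are matrices and the Layer type here is made up for illustration; only the accumulation structure matches the summary above:

```cpp
#include <cassert>
#include <vector>

// A toy layer node: each layer carries its own CSS-transform scale
// and OMTC-transform scale (in reality these are full matrices).
struct Layer {
    float cssScale;
    float omtcScale;
};

// Map a length in L's LayerPixels to ScreenPixels by accumulating the
// transforms from L up to the root R (chain[0] = L, chain.back() = R).
float LayerToScreen(float layerPixels, const std::vector<Layer>& chain) {
    float value = layerPixels;
    for (const Layer& layer : chain) {
        value = value * layer.cssScale * layer.omtcScale;
    }
    return value;
}
```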
Simple, right? :) If I get around to it, the next post will cover how input events are mapped back from screen space to layout space. (It's not as straightforward as you might think.)
Thanks to Robert O'Callahan for reading a draft of this post and pointing out some things to fix.
[ 1 Comment... ]
For the last few weeks I've been hacking away at a jungle of coordinate systems in the graphics code, trying to make the code easier to understand and to work with. This work is technically part of the effort to make subframes asynchronously scrollable in OMTC, but it has helped us to fix other bugs as well. This post is a braindump of the coordinate systems I've uncovered and the mental model I have.
As of this writing, I have defined four "pixel" types in layout/base/Units.h. These are CSSPixel, LayoutDevicePixel, LayerPixel, and ScreenPixel. CSSPixel is the simplest - it represents a CSS pixel, which is what web content authors use to specify dimensions of things. Almost every Web API [1] deals with CSS pixels.
In the days of yore, 1 CSS pixel corresponded exactly to one screen pixel. That is, when you created a div that was 50 CSS pixels wide, it would show up as 50 pixels wide on your screen. If you had a desktop monitor that was 1024 pixels wide [2] and you created a div that was 1024 CSS pixels wide, it would take up your entire monitor width. Makes sense, right?
Now, let us enter the world of HiDPI display devices. On HiDPI devices, the screen pixels are so small and packed together so tightly that mapping 1 CSS pixel to 1 screen pixel doesn't make sense any more. Sure, you can fit more on the screen, but it's too tiny to be readable or useful. So we have this concept of a widget scale. nsIWidget::GetDefaultScale() returns a scaling factor to account for HiDPI displays. For example, see the gonk implementation, which makes B2G scale by an amount based on the actual DPI of the device. This introduces a new coordinate system, which layout refers to as "device pixels" and which I've called LayoutDevicePixels.
In the layout code, the widget scale affects two main things. One is the size of the CSS viewport [3]. If we have a widget scale of 2.0, then each CSS pixel is 2 screen pixels wide, which means that on a 1024x768 screen with a maximized browser window, you can only fit 512 CSS pixels across. So the CSS viewport width is adjusted to account for this. The other is the scale factor from app units [4] to screen pixels. Instead of the usual 60 app units being converted to 1 screen pixel you would normally get, setting a 2.0 widget scale results in 30 app units being converted to 1 screen pixel. This effectively makes each CSS pixel twice as wide when being rendered to the screen, which is the point of the whole exercise.
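Those two effects of the widget scale can be sketched with a couple of hypothetical helper functions (the names are made up for illustration; the constant 60 is the app-units-per-CSS-pixel ratio mentioned below):

```cpp
#include <cassert>

// Assumed constant from the post: 60 app units per CSS pixel.
const int kAppUnitsPerCSSPixel = 60;

// Effect 1: with a widget scale of 2.0, a 1024-screen-pixel-wide
// window only fits 512 CSS pixels across.
int ViewportWidthInCSSPixels(int screenPixels, float widgetScale) {
    return static_cast<int>(screenPixels / widgetScale);
}

// Effect 2: with a widget scale of 2.0, only 30 app units are
// converted to one screen pixel instead of the usual 60.
int AppUnitsPerScreenPixel(float widgetScale) {
    return static_cast<int>(kAppUnitsPerCSSPixel / widgetScale);
}
```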
But that's not all! There is another factor that comes into play between CSS pixels and LayoutDevicePixels, and that is the "full zoom". This is when, on desktop Firefox, you perform the "Zoom In" or "Zoom Out" actions. Doing a "Zoom In" action increases the full zoom, which works almost identically to increasing the widget scale. That is, setting a zoom of 110% is pretty much equivalent to having a widget scale of 1.1. The only difference I'm aware of is that if you specify your CSS dimensions in real-world units such as inches, then they are affected by the widget scale but not by the "full zoom". Tricksy!
So to summarize what we have so far: in a world with just CSSPixels and LayoutDevicePixels, dimensions are specified in CSSPixels by content authors, and the browser maps them to LayoutDevicePixels based on the widget scale and full zoom. These LayoutDevicePixels are then displayed 1:1 on screen pixels as defined by the underlying platform.
But what about mobile? Welcome to the land of OMTC and pinch-zoom. OMTC stands for off-main-thread compositing, and is what allows you to pinch a page on Fennec and have it instantly zoom. What's happening here is the painted page is transformed in OpenGL [5], without Gecko really knowing what's going on. Since Gecko isn't repainting anything, this is super fast, and allows us to animate pinch-zoom at 60 frames per second (or close to it).
Unfortunately, it also introduces a new coordinate system, because the LayoutDevicePixels that Gecko produced are no longer displayed 1:1 on the screen. Gecko could have painted something 10 LayoutDevicePixels wide (and so the texture uploaded to the graphics hardware would be 10 pixels wide) but then the user does a pinch-zoom and BAM! now it's taking up 20 pixels on the screen, because we told OpenGL to scale it up by 2x. So here we have our third pixel type defined: the ScreenPixel. In the preceding example the pinch-zoom produced a scale factor of 2.0 and so 10 LayoutDevicePixels would get mapped to 20 ScreenPixels.
Now say you're viewing a page in Fennec and zoom in using pinch-zoom. And then you zoom in some more. And then some more. If all we did was take the LayoutDevicePixels and tell OpenGL to render them bigger by scaling it in hardware, you would end up with a very pixellated and blurry view of the page. In order to make it look good again, we have to go back to Gecko and tell it to repaint the visible area of the page at a higher density, allowing us to remove the OpenGL scaling. For example, instead of rendering a paragraph of text into a texture and scaling that up in OpenGL to display a single word really big, we can tell Gecko to just render that one word really big, and to use up the entire texture to do it. This is done by a call to nsIPresShell::SetResolution().
Setting the resolution doesn't change our definition of CSSPixels or LayoutDevicePixels, but it does change something. To describe this change, we need to introduce a new coordinate system. This is the LayerPixel [6], and it sits between LayoutDevicePixel and ScreenPixel. That is, the resolution changes how many LayerPixels are produced for each LayoutDevicePixel, and now pinch-zooming affects the scale between LayerPixels and ScreenPixels. This is a little cyclical because as you perform a pinch-zoom, LayerPixels and ScreenPixels get farther apart in size. Then, once you finish the zoom, we tell Gecko to re-render the content at a new resolution such that LayerPixels and ScreenPixels are the same size once again, and we render the new LayerPixels at a 1:1 scale on the screen. When this happens the visible content goes from being blurry back to being sharp, which you can see in Fennec if you pay close attention when zooming in.
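That end-of-zoom repaint can be sketched with some hypothetical helpers (the names and the use of a single scale factor are made up for illustration):

```cpp
#include <cassert>

// While the user is pinching, the compositor scales the existing
// layer by (zoom / resolution); if the content was rendered at 1x
// and the user zooms to 2x, the hardware is doing a blurry 2x scale.
float CompositorScale(float zoom, float resolution) {
    return zoom / resolution;
}

// Once the gesture ends, we ask Gecko to repaint at the zoomed-in
// density, bringing the compositor scale back to 1:1 (sharp again).
float ResolutionAfterZoom(float zoom) {
    return zoom;
}
```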
So to summarize, here is what we have now:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x resolution = LayerPixel
LayerPixel x OMTC transforms = ScreenPixel
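The summary above can be illustrated with a small numeric sketch (the function names and example values are made up; real transforms are matrices, not single floats):

```cpp
#include <cassert>

// CSSPixel -> LayoutDevicePixel: widget scale and full zoom.
float CSSToLayoutDevice(float cssPixels, float widgetScale, float fullZoom) {
    return cssPixels * widgetScale * fullZoom;
}

// LayoutDevicePixel -> LayerPixel: the presShell resolution.
float LayoutDeviceToLayer(float layoutDevicePixels, float resolution) {
    return layoutDevicePixels * resolution;
}

// LayerPixel -> ScreenPixel: the OMTC (e.g. pinch-zoom) transform.
float LayerToScreen(float layerPixels, float omtcScale) {
    return layerPixels * omtcScale;
}
```

For example, a 100-CSS-pixel div on a device with a 2.0 widget scale, no full zoom, resolution 1.0, and a 2x pinch-zoom in flight ends up 400 screen pixels wide.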
And so ends my braindump. Hopefully what I've written above will not change going forward, and the terms I've used can become part of the Gecko developer vocabulary, so that when dealing with code in different coordinate systems it's much easier to agree on what we mean.
(Update Aug 5, 2013: I now have a part 2 that describes how CSS transforms fit into this picture.)
Notes:
[1] The only exception I can think of is window.outerWidth, which is in screen pixels. (Edit 2013-06-27: I was wrong, I think outerWidth is also in CSS pixels.)
[2] Note that the number of screen pixels displayed on a physical device may be modified by the operating system. For example, you can change your screen resolution from 1024x768 to 800x600, which changes the size and number of screen pixels. That's fine - we don't need to account for that in our code as the OS takes care of it.
[3] The CSS viewport affects how wide the page is laid out by the layout code. One way of visualizing this is that if you have a page with just plain text, the text will be wrapped so that it doesn't exceed the CSS viewport width.
[4] App units are what layout does all of its calculations in. One app unit is exactly one-sixtieth of a CSS pixel. The "60" was chosen because it has many integer factors and so allows representing common fractions losslessly.
[5] The actual graphics system in use depends on the platform. OpenGL is just an example.
[6] When Gecko paints the various elements on a page, it flattens them into "layers" that it hands off to the graphics stack. This is where the term LayerPixel comes from.
[ 10 Comments... ]
For a while now we've had problems stemming from having to deal with too many coordinate systems. I blogged about this before and the problem has only gotten worse. The AsyncPanZoomController class in particular deals with a variety of coordinate systems and it's often not clear which coordinate system a particular variable is in.
To try and deal with these code complexity issues, Anthony Jones suggested adding template parameters to some of the gfx classes so that we can enforce unit conversions at compile-time and annotate variables with which units they're in. I filed bug 865735 for this, and after some discussion with roc, Bas, jrmuizel, BenWa, tn, Cwiiis and some others, we agreed on a way to do this. I landed that patch and it was merged to m-c today.
The patch allows for incrementally adding units to uses of gfx::Point, gfx::Size, and gfx::Rect (and their Int variations). By default all instances of these classes are tagged as UnknownUnits. As we update bits of code they will be changed to things like CSSPixels, LayerPixels, ScreenPixels, and so on, depending on what they are. I've been working on a couple of patches that start adding this extra information to pieces of code; these patches can be found on bug 877726 and bug 877728. Doing this is slow work, but very parallelizable, so I would appreciate any help I can get. If you know of some graphics code that uses these classes, and you know what units the data they hold are in, please file a bug with patches to convert them. Feel free to CC me and/or make it depend on bug 865735.
The key files that define the templates and units are located at gfx/2d/Point.h, gfx/2d/Rect.h, and layout/base/Units.h. gfx::Point is now just a typedef to gfx::PointTyped&lt;UnknownUnits&gt;. Replacing UnknownUnits with CSSPixel gives us the new type gfx::PointTyped&lt;CSSPixel&gt; to represent points in CSS pixel coordinate systems. Since this is long and unwieldy to type, we have typedef'd this to CSSPoint. Similar changes will be done for the other classes (e.g. CSSIntPoint, CSSRect) and units (e.g. LayerPoint) as we start propagating them. The neat thing about the templating structure we came up with is that gfx::PointTyped&lt;CSSPixel&gt; actually extends from CSSPixel, so we can add methods (e.g. conversion to app units) there that are available on all CSSPoint instances but not other unit classes.
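The templating idea can be sketched in simplified form. This is an illustration of the technique, not Gecko's actual Units.h code (in particular, it omits the inheritance from the unit class):

```cpp
#include <cassert>

// Empty tag types naming the coordinate systems.
struct CSSPixel {};
struct LayerPixel {};

template <class Units>
struct PointTyped {
    float x, y;
    PointTyped(float aX, float aY) : x(aX), y(aY) {}

    // Explicit escape hatch for converting from another unit tag,
    // analogous to FromUnknownUnits(): use only where the conversion
    // has been manually audited.
    template <class OtherUnits>
    static PointTyped FromUnknownUnits(const PointTyped<OtherUnits>& aPoint) {
        return PointTyped(aPoint.x, aPoint.y);
    }
};

typedef PointTyped<CSSPixel> CSSPoint;
typedef PointTyped<LayerPixel> LayerPoint;

// Arithmetic is only defined for matching unit tags, so adding a
// CSSPoint to a LayerPoint is a compile error rather than a bug.
template <class Units>
PointTyped<Units> operator+(const PointTyped<Units>& a,
                            const PointTyped<Units>& b) {
    return PointTyped<Units>(a.x + b.x, a.y + b.y);
}
```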
As far as I understand things, layout code currently always keeps nsPoint instances in app units (1/60 of a CSS pixel). nsIntPoint, however, can be used for a variety of units, and many of these (particularly the ones outside layout/) should be converted to gfx::IntPointTyped<something>. What layout code refers to as "device pixels" is generally not the same thing that graphics code refers to as "device pixels", so I'm trying to avoid using the term "device pixels" entirely in graphics code - I prefer to use "layer pixels", "display pixels", and "screen pixels" as needed. For non-OMTC platforms "screen pixels" and "layer pixels" should be equivalent, and for platforms without hidpi adjustments, "display pixels" and "screen pixels" should be the same.
While converting code I've run into many places where new point variables are constructed using the .x and .y members of a different point variable. It's important to ensure that such code is carefully audited to make sure that the old and new variables are in the same units. If it's not clear, it's best to stick in a call to FromUnknownUnits(...) so that it is implicitly flagged for follow-up when other nearby code gets converted.
[ 0 Comments... ]
Something I've wanted for a while is a way to receive notifications of commits to specific folders in mozilla-central, with some reasonable amount of diff included. Well, turns out there are now a bunch of ways to do this, so here's a quick rundown.
- hgweb's Atom feed - I'm only including this for completeness, but you can get an Atom feed of all changes to the repository by using the RSS icon at the bottom of any page in the repo. Unfortunately, the feed is for all changes to the repo (can't filter by particular folders/files) and doesn't include diffs, so it's limited in usefulness.
- Dave Townsend's Hg Change Feed - Dave Townsend recently set up a more comprehensive feed system and blogged about this (see blog post or jump straight to the tree navigator to subscribe). He has it set up for comm-central, mozilla-central, and mozilla-inbound, and you can watch any folder/file in any of these repos. Pretty nifty! However, the RSS feeds don't include diffs.
- RSS to email options - Both of the above options give you RSS or Atom feeds, but some people prefer to get this as email. IFTTT recipes are a simple way to do this; Dave Townsend set up one for the toolkit folder and Margaret created one for the mobile folder. If you use Zimbra to manage your mail, you can also use that to get RSS as email; you can set this up in the web interface when you create a new folder. As is to be expected, these options just convert the RSS item to email, and so also don't contain diffs.
- My mailing lists - Since I really really wanted to get diffs in the email notifications, I decided to roll my own solution. I already have an Amazon EC2 instance running for ZNC, so I cloned mozilla-central there, and wrote a quick script to pull the tree, look for changes in specific folders, and send an email using Amazon SES (free for the volume I'm sending at). The email goes to a mailman instance I also set up on this domain, so anybody who is interested can sign up. Right now I have mailing lists for mobile/ and widget/android/, since that's what I'm interested in, but I can add other folders if there is sufficient demand. With this approach it requires some work on my part to set it up for specific files and folders, but I get diffs in the emails and can customize it further as I find things that would be useful. [UPDATE 2014-04: Due to general unreliability and lack of widespread use I have decommissioned these lists]
Did I miss any options? Post in the comments.
[ 3 Comments... ]
Here's my hypothesis: Twitter has subsumed RSS.
Pretty much any news site or blogger you would usually want to follow has both RSS and Twitter, and Twitter is just much simpler to use. It's a lot easier to follow somebody on Twitter than it is to add an RSS feed to a web-based aggregator like Google Reader. If you don't use a web-based aggregator you have portability/syncing issues where you have to set up your feeds on individual devices and keep them all in sync. Most importantly, Twitter serves as a single notification stream that includes simple messages (tweets) and links to other longer/larger content.
So really, it's Twitter's fault that Google Reader is going away. But they're not the only one to blame. Firefox 4 (IIRC) removed the RSS button, making it that much harder to use RSS. Sure, you can argue very few people used it anyway, so removing it was a good idea, but that doesn't mean it didn't accelerate the downfall of RSS. I think other browsers have also made the RSS subscription flow less easy to use over the last few years.
Ordinarily I wouldn't really care about this, except that I kind of like the open web. Twitter is not the open web. I don't want Twitter to end up as the only way for me to subscribe to content. I fear that Google killing Reader is a sign that RSS use is already dwindling, and unless we act to save it, it will die completely.
Thankfully, the fine folks at Digg are building a Reader replacement, which I think is great. Not because I'll use it, but because it'll help slow (and hopefully reverse) the death of RSS.
It would be cool to see their reader clone (or any other web-based aggregators) take on Twitter directly, by making it as easy to reply/comment on articles as possible, and by having a lightweight way to "tweet" new content directly from the aggregator. For the former, it might be necessary to extend RSS to include things like URLs that accept HTTP POST replies or comments, and to specify the format of those POSTs (yay open standards). The latter will take a bit more architecting, probably requiring your feed aggregator to also be your feed publisher, so that it can insert these "tweets" into your feed along with your blog and/or other content.
I don't exactly know what would work and what wouldn't, and I'm probably the wrong person to be asking anyway since I don't use Twitter. But I do use RSS, and I hope that the only thing that kills RSS is another, even better, open standard.
[ 16 Comments... ]
Last week I landed bug 726335 which adds support for sharing the currently visible tab URL via Android Beam in Fennec. I also had two follow-up patches because I forgot to check the API levels in which the relevant APIs were added... doh! But anyway, if you have two Android NFC-enabled devices running ICS or higher, and Nightly (FF 22) running on one or both of them, you can hold the devices back-to-back until the Android Beam UI shows up, and then tap the screen to share the current tab's URL.
We should be Beam-compatible with the stock browser, so if the device you're sending the URL to doesn't have Fennec installed they can still open the URL in the stock browser, and you should be able to receive URLs sent from the stock browser in Fennec. Note also that if you are currently on a private browsing tab, this feature is disabled.
While sharing the current tab's URL is an obvious starting point, there are probably other things that could be shared over NFC. What else would you like to be able to share from Fennec?
[ 7 Comments... ]
I recently finished reading Quiet: The Power of Introverts in a World That Can't Stop Talking by Susan Cain, recommended to me by my girlfriend. We're both introverts and, as a result, often have to deal with some friction in the extrovert-centric society we live in. I've mostly made my peace with this, and to be honest, didn't think the book would be particularly useful. However, I underestimated the amount of insight and value that comes from thinking about and researching a topic for many years, as Susan Cain has done here. The book is most definitely a worthwhile read.
The book goes into some depth about research that has been done about the biological and psychological differences between introverts and extroverts. According to the book, somewhere between 20 and 50 percent of humans are introverts (although of course introversion is more of a gradient than a sharp distinction), and so the chances are good that you deal with both kinds of people on a regular basis. Dealing with them effectively, and in particular harnessing their productivity (e.g. at work) requires understanding what makes them tick and what turns them off. The book covers this really well and has plenty of advice for both introverts and extroverts.
The proportion of introversion is higher in disciplines like software engineering just because the nature of the task involves long periods of focus and concentration and some degree of "living inside your head". Speaking from personal experience, I know that many of the Mozillians I interact with daily are introverted to a significant degree, although of course there are many who are not as well.
Culturally, Mozilla provides a lot of freedom for contributors to shape their own environment and work in the way that suits them best. However, there are still times when an extroverted streak comes into play. Work weeks, for example, are usually a source of many extrovert-centric activities. The book has some useful advice on strategies for introverts to make the most of, and recover from, such situations.
Susan Cain also discusses some of the larger effects of western society's bias towards extroversion. For example, schools are increasingly designed to reward extroversion at the expense of the introverted schoolchildren who have just as much (or more) potential and talent. I found the discussion of that topic to also be pretty interesting and I suspect it might be one of the reasons that online learning solutions like the Khan Academy are becoming so popular.
Overall it's a pretty good book and I would recommend it to anybody.
[ 0 Comments... ]
The MemShrink team (and John Schoenick in particular) have been maintaining Are we Slim Yet (AWSY), a memory usage tracker for Firefox. It has been quite useful in monitoring Firefox's memory usage over time and catching regressions. I recently got a similar benchmark working for Fennec. It's a little more sketchy because it runs on a device plugged into the box under my desk that freezes every once in a while, but the data it's generating still seems to be quite reliable, and lets us find memory usage regressions in Fennec.
You can find the data at areweslimyet.com/mobile. It gets run on all working inbound builds, so the regression ranges tend to be pretty tight. For the most part regressions in the "Start" and "StartSettled" data lines are the most interesting since those indicate startup baseline memory regressions.
If you have ideas on how to improve AWSY, please file an issue (or even better, pull requests!) in the github repository at Nephyrin/MozAreWeSlimYet.
[ 4 Comments... ]