
Javascript login shell
2014-03-23 10:46:29

I was playing around with node.js this weekend, and I realized that it's not that hard to end up with a Javascript-based login shell. A basic one can be obtained by simply installing node.js and ShellJS on top of it. Add CoffeeScript to get a slightly less verbose syntax. For example:

kats@kgupta-pc shelljs$ coffee
coffee> require './global.js'
coffee> ls()
  'test' ]
coffee> cat 'global.js'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> cp('global.js', 'foo.tmp')
coffee> cat 'foo.tmp'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> rm 'foo.tmp'

Basically, if you're in a JS REPL (node.js or coffee) and you have access to functions that wrap shell utilities (which is what ShellJS provides), then you can use that setup as your login shell instead of bash or zsh or whatever else you might be using.

I'm a big fan of bash but I am sometimes frustrated with some things in it, such as hard-to-use variable manipulation and the fact that loops sometimes create subshells and make state manipulation hard. Being able to write scripts in JS instead of bash would solve that quite nicely. There are probably other use cases in which having a JS shell as your login shell would be quite handy.
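To illustrate the appeal, here is a minimal sketch (plain Node, no ShellJS required; the file names are made up) of the kind of loop-with-state logic that trips people up in bash but is ordinary code in JS. In bash, a `cat file | while read line` loop runs in a subshell, so variables modified inside it are silently lost; in JS there is no such trap:

```javascript
// Toy example: count the .tmp files in a list and collect their names.
// In bash this kind of state mutation inside a piped loop is lost to a
// subshell; in JS it just works.
const names = ['foo.tmp', 'bar.js', 'baz.tmp'];

let count = 0;
const tmpFiles = [];
for (const name of names) {
  if (name.endsWith('.tmp')) {
    count++;
    tmpFiles.push(name);
  }
}

console.log(count);              // 2
console.log(tmpFiles.join(',')); // foo.tmp,baz.tmp
```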


The Project Premortem
2014-03-14 21:33:23

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."

(From Thinking, Fast and Slow, by Daniel Kahneman)

When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.

In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each other's worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).

Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.


More books
2014-02-11 21:58:38

It's been a while since I've posted, but I've been getting through a lot of books that have been queued up on my reading list for a while. Quick rundown:

  • The Old Way (Elizabeth Marshall Thomas) - An account of the Kalahari bushmen, written by one of the first outsiders to live and interact with them. At a high level the book is similar to The World Until Yesterday, in that it relates how a pre-agricultural civilization used to live. I found it pretty interesting, but it's probably not everybody's cup of tea. Books like these always make me more aware of how so many things we assume are "normal" really aren't.
  • Nothing to Envy (Barbara Demick) - The book follows the lives of a few people who lived in and escaped from North Korea. Quite well written. There were definitely some parts that took me by surprise - one of those "fact is stranger than fiction" things. If anybody ever does a detailed psychoanalysis of Kim Jong Il I would like to read it.
  • Mindset: The New Psychology of Success (Carol Dweck) - This is one of those books everybody should read, ideally before they have children. Life-changing in some ways. I think this book has been popular enough that some of its messages have seeped out into "general knowledge" but there's still a lot of stuff there that I hadn't encountered before.
  • Moonwalking with Einstein (Joshua Foer) - Ehh. It was certainly an entertaining read, but of little practical value. He describes how to create memory palaces so that you can rapidly memorize things like decks of cards, but that sort of stuff doesn't help me with being absent-minded and forgetting where I left my phone. There's some good discussion in the book about the pros and cons (and history) of developing your memory which I found interesting.
  • Revelation Space (Alastair Reynolds) - Science fiction book. Pretty good overall although I was unsatisfied with the ending.
  • Your Money or Your Life (Vicki Robin, Joe Dominguez) - Pretty comprehensive book on personal finance management. I only skimmed this because there wasn't much in here that I didn't already know, either from reading The Wealthy Barber or my own experiments. But a good book if you're looking for something in this category.
  • Influence: The Psychology of Persuasion (Robert Cialdini) - Another must-read book. All about the subtle ways people exert influence on you, and what you can do to defend against it. What surprises me here is how easy it is to drastically improve the odds that somebody will agree to do something they fundamentally don't want to just by using a few of these tricks. (You can also use this knowledge to influence others, although the book is not written from that standpoint.)
  • Drawing on the Right Side of the Brain (Betty Edwards) - This book teaches you how to draw, and more importantly, how to see things differently. I haven't finished this book yet but I have gotten through enough to know it's good. If you're looking for a hobby I suggest picking up this book. Note that my best drawings prior to starting this book are in the form of stick figures, but I'm already confident that I will be able to draw well after finishing this book and practicing some.
  • Dogfight (Fred Vogelstein) - I started this book recently but abandoned it. I don't know why I even started to read it, but it wasn't worth the time.

That is all.


A little Webmaker success story
2013-10-12 15:51:23

My girlfriend was writing a blog post and wanted to put some tables in. Unfortunately the WordPress interface wasn't being very helpful with adding tables, so she took a PNG screenshot of the tables in Word and inserted those instead. While the solution worked fine for her, I was aware that turning text into images is generally not a great solution for the web, for a number of reasons (bad for accessibility, less dynamic layout, slower page load, etc.).

I realized this was a perfect opportunity for a little webmaking experience. I fired up Thimble on my laptop, and with her looking over my shoulder, created a simple two-row table with the style she wanted, showing her some of the different style attributes that could be used. Once I had the start of her table, I published it as a "make". She was then able to view it on her laptop and remix it, putting in the rest of the data she wanted. She also created a second table (complete with bullet points and red text), and pasted the HTML back into the WordPress HTML editor.

I think the part I like best about tools like Thimble is that they are very low-overhead to use. You don't need to sign in to start using it, and the interface is simple and obvious. For publishing the make, I did have to sign in, but with Persona, so I didn't have to create a new password for it. (If it required creating a more heavyweight account I would have just copied the HTML into pastebin or something, which is not nearly as cool.) And of course, the live preview of the HTML in Thimble is pretty neat too :)

Oh, and you should definitely read her blog post: Get Moving!


Collapse
2013-08-09 21:13:16

Following up to Guns, Germs, and Steel and The World Until Yesterday, I also read Collapse: How Societies Choose to Fail or Succeed by Jared Diamond. This one was a lot harder to read, and much less interesting to me. There were certainly some good lessons here, but I felt like he repeated himself too much, and that the book could have been 30% of the size it actually was.

It was interesting to learn about how some past societies (e.g. on Easter Island) behaved and why they eventually died out. It does feel a bit like the world as a whole is on a course to repeat those mistakes.


Unraveling coordinate systems, part 2
2013-08-05 19:47:56

Please read Unraveling coordinate systems, part 1 for the backstory. This post describes how CSS transforms fit into the world of coordinate systems I described there.

But first, a quick status update: since I wrote that post, a fair amount of code has been converted to use the new typed classes (CSSPoint, LayoutDevicePoint, etc.). Special thanks to Ms2ger who has been incrementally converting various bits of layout code to use these where appropriate. The conversion has turned up some type mismatches and flushed out bugs on B2G, Fennec, and recently even Metro.

And now, on to the main event: CSS transforms. CSS transforms make things complicated because they are not really applied in layout code, but saved as a transform on the layer and applied at composition time. (Recall that the layer can be thought of as the rendered texture that gets uploaded to the graphics hardware). From a quick reading of the CSS transforms spec this makes sense, because the transforms are not meant to affect the layout properties of elements. However it complicates coordinate system work because a "transform: scale(2)" CSS style, for instance, doesn't change the size of the element in LayoutDevicePixels but does affect the ScreenPixel size.

If you recall from part 1, I defined LayoutDevicePixels as taking into account the device DPI and full zoom amounts; they're basically the coordinate system that the layout code outputs positioning data in. If there is an element with a 2x scale CSS transform applied, then the ScreenPixel height of the element must be twice the LayoutDevicePixel height of the element (for the moment let us ignore any OMTC transforms and resolution changes that might also be applied). However, there are no restrictions on the size of the layer, which means the LayerPixel that sits in between the LayoutDevicePixel and ScreenPixel can be anything.

So, we could, for example, have the layer be the same size as the untransformed element (1 LayoutDevicePixel == 1 LayerPixel) and then do the entire 2x scale in hardware (1 LayerPixel == 2 ScreenPixels). Or we could do something like have Gecko render at higher density (1 LayoutDevicePixel == 2 LayerPixels) and then not scale it in hardware (1 LayerPixel == 1 ScreenPixel). We could even render it 4x in Gecko and then scale it down in hardware by 2x, if we wanted. Rendering it small in Gecko and scaling it up in hardware results in a blurry image, and rendering it large in Gecko and scaling it down in hardware is a waste of memory/CPU. In theory, the scaling that results in the best visual effect without wasting resources is to have no hardware scaling and have Gecko render exactly at the specified CSS transform. However, in practice, we use some heuristics to figure out the best tradeoff here because the CSS transform can change over time.

And while we're on the subject, a similar thing actually happens with the OMTC transform. If we take a regular page and pinch-zoom it to 2x, what actually happens on the layout side is that the presShell resolution is set to 2x, the CSS transform on the page is updated to be 0.5x, and the OMTC transform is again 2x. So the net effect is that the page is scaled up by 2x, with 1 LayoutDevicePixel == 2 LayerPixels == 2 ScreenPixels, but the fact that the CSS transform on the layer is modified is important from an implementation point of view.

Ok, so that's all well and good, and gives us nice-looking CSS transforms efficiently, at the expense of some complicated coordinate transforms. Now we can flesh out our summary from last time to be a little more precise:

CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution x OMTC-driven resolution = LayerPixel
LayerPixel x CSS transforms x OMTC transforms = ScreenPixel

But that's not all. Note that CSS transforms apply to elements in the DOM rather than the entire document. This means that different layers in the layer tree can have different transforms, and these transforms accumulate when mapping a particular layer up to the screen. Madness! That means there's no longer just one set of LayerPixels but many! And with the work that I've been doing to make subframes asynchronously scrollable, many of the layers can have their own individual OMTC-driven resolutions and OMTC transforms as well! More madness! (In practice only top-level layers for PresShells will have OMTC resolutions but I wouldn't count on that always remaining the case.)

So to update the summary again, I need to introduce some layers. Assume that layer L is some random layer in the tree, it has a parent layer M, grandparent layer N, and so on up until the root layer R. Then:

CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution (for R) x OMTC-driven resolution (for R) = LayerPixel (for R)
LayerPixel (for R) x CSS-driven resolution (for Q) x OMTC-driven resolution (for Q) = LayerPixel (for Q)
LayerPixel (for Q) x CSS-driven resolution (for P) x OMTC-driven resolution (for P) = LayerPixel (for P)
...
LayerPixel (for M) x CSS-driven resolution (for L) x OMTC-driven resolution (for L) = LayerPixel (for L)
LayerPixel (for L) x CSS transform (for L) x OMTC transform (for L) x CSS transform (for M) x OMTC transform (for M) x ... x CSS transform (for R) x OMTC transform (for R) = ScreenPixel
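As a toy illustration of the accumulation (the names and numbers are mine, not Gecko's, and I've reduced the transforms to plain scale factors), mapping a leaf layer to the screen just multiplies each layer's factors from the leaf up to the root:

```javascript
// Walk from a layer up to the root, accumulating each layer's CSS
// transform and OMTC transform (scale factors only, for simplicity).
function layerToScreenScale(layer) {
  let scale = 1.0;
  for (let l = layer; l !== null; l = l.parent) {
    scale *= l.cssTransform * l.omtcTransform;
  }
  return scale;
}

// Example tree: root R, child M, grandchild L. This mimics the
// pinch-zoom case from the post: the page layer gets a 0.5x CSS
// transform while the root carries a 2x OMTC transform.
const R = { cssTransform: 1.0, omtcTransform: 2.0, parent: null };
const M = { cssTransform: 0.5, omtcTransform: 1.0, parent: R };
const L = { cssTransform: 2.0, omtcTransform: 1.0, parent: M };

console.log(layerToScreenScale(L)); // 2
```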

Simple, right? :) If I get around to it, the next post will cover how input events are mapped back from screen space to layout space. (It's not as straightforward as you might think.)

Thanks to Robert O'Callahan for reading a draft of this post and pointing out some things to fix.


Unraveling coordinate systems
2013-06-24 22:28:21

For the last few weeks I've been hacking away at a jungle of coordinate systems in the graphics code, trying to make code easier to understand and to work with. This work is technically part of the effort to make subframes asynchronously scrollable in OMTC, but it has helped us to fix other bugs as well. This post is a braindump of the coordinate systems I've uncovered and the mental model I have.

As of this writing, I have defined four "pixel" types in layout/base/Units.h. These are CSSPixel, LayoutDevicePixel, LayerPixel, and ScreenPixel. CSSPixel is the simplest - it represents a CSS pixel, which is what web content authors use to specify dimensions of things. Almost every Web API [1] deals with CSS pixels.

In the days of yore, 1 CSS pixel corresponded exactly to one screen pixel. That is, when you created a div that was 50 CSS pixels wide, it would show up as 50 pixels wide on your screen. If you have a desktop monitor that is 1024 pixels wide [2] and you create a div that is 1024 CSS pixels wide, it takes up your entire monitor width. Makes sense, right?

Now, let us enter the world of HiDPI display devices. On HiDPI devices, the screen pixels are so small and packed together so tightly that mapping 1 CSS pixel to 1 screen pixel doesn't make sense any more. Sure, you can fit more on the screen, but it's too tiny to be readable or useful. So we have the concept of a widget scale: nsIWidget::GetDefaultScale() returns a scaling factor to account for HiDPI displays. For example, see the gonk implementation, which makes B2G pick a scale factor based on the actual DPI of the device. This introduces a new coordinate system that layout refers to as "device pixels" and what I've called LayoutDevicePixels.

In the layout code, the widget scale affects two main things. One is the size of the CSS viewport [3]. If we have a widget scale of 2.0, then each CSS pixel is 2 screen pixels wide, which means that on a 1024x768 screen with a maximized browser window, you can only fit 512 CSS pixels across. So the CSS viewport width is adjusted to account for this. The other is the scale factor from app units [4] to screen pixels. Instead of the usual 60 app units being converted to 1 screen pixel you would normally get, setting a 2.0 widget scale results in 30 app units being converted to 1 screen pixel. This effectively makes each CSS pixel twice as wide when being rendered to the screen, which is the point of the whole exercise.
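As a quick sketch of that second effect (the 60 app units per CSS pixel constant is real; the function name is mine), the widget scale just divides into the app-units-per-screen-pixel ratio:

```javascript
// Layout does its math in app units: 60 per CSS pixel. The widget
// scale determines how many app units end up in one screen pixel.
const APP_UNITS_PER_CSS_PIXEL = 60;

function appUnitsPerScreenPixel(widgetScale) {
  return APP_UNITS_PER_CSS_PIXEL / widgetScale;
}

console.log(appUnitsPerScreenPixel(1.0)); // 60
console.log(appUnitsPerScreenPixel(2.0)); // 30, i.e. each CSS pixel is twice as wide on screen
```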

But that's not all! There is another factor that comes into play between CSS pixels and LayoutDevicePixels, and that is the "full zoom". This is when, on desktop Firefox, you perform the "Zoom In" or "Zoom Out" actions. Doing a "Zoom In" action increases the full zoom, which works almost identically to increasing the widget scale. That is, setting a zoom of 110% is pretty much equivalent to having a widget scale of 1.1. The only difference I'm aware of is that if you specify your CSS dimensions in real-world units such as inches, then they are affected by the widget scale but not by the "full zoom". Tricksy!

So to summarize what we have so far: in a world with just CSSPixels and LayoutDevicePixels, dimensions are specified in CSSPixels by content authors, and the browser maps them to LayoutDevicePixels based on the widget scale and full zoom. These LayoutDevicePixels are then displayed 1:1 on screen pixels as defined by the underlying platform.

But what about mobile? Welcome to the land of OMTC and pinch-zoom. OMTC stands for off-main-thread compositor, and is what allows you to pinch a page on Fennec and have it instantly zoom. What's happening here is the painted page is transformed in OpenGL [5], without Gecko really knowing about what's going on. Since Gecko isn't repainting anything, this is super fast, and allows us to animate pinch-zoom at 60 frames per second (or close to it).

Unfortunately, it also introduces a new coordinate system, because the LayoutDevicePixels that Gecko produced are no longer displayed 1:1 on the screen. Gecko could have painted something 10 LayoutDevicePixels wide (and so the texture uploaded to the graphics hardware would be 10 pixels wide) but then the user does a pinch-zoom and BAM! now it's taking up 20 pixels on the screen, because we told OpenGL to scale it up by 2x. So here we have our third pixel type defined: the ScreenPixel. In the preceding example the pinch-zoom produced a scale factor of 2.0 and so 10 LayoutDevicePixels would get mapped to 20 ScreenPixels.

Now say you're viewing a page in Fennec and zoom in using pinch-zoom. And then you zoom in some more. And then some more. If all we did was take the LayoutDevicePixels and tell OpenGL to render them bigger by scaling it in hardware, you would end up with a very pixellated and blurry view of the page. In order to make it look good again, we have to go back to Gecko and tell it to repaint the visible area of the page at a higher density, allowing us to remove the OpenGL scaling. For example, instead of rendering a paragraph of text into a texture and scaling that up in OpenGL to display a single word really big, we can tell Gecko to just render that one word really big, and to use up the entire texture to do it. This is done by a call to nsIPresShell::SetResolution().

Setting the resolution doesn't change our definition of CSSPixels or LayoutDevicePixels, but it does change something. To describe this change, we need to introduce a new coordinate system. This is the LayerPixel [6], and it sits between LayoutDevicePixel and ScreenPixel. That is, the resolution changes how many LayerPixels are produced for each LayoutDevicePixel, and now pinch-zooming affects the scale between LayerPixels and ScreenPixels. This is a little cyclical because as you perform a pinch-zoom, LayerPixels and ScreenPixels get farther apart in size. Then, once you finish the zoom, we tell Gecko to re-render the content at a new resolution such that LayerPixels and ScreenPixels are the same size once again, and we render the new LayerPixels at a 1:1 scale on the screen. When this happens the visible content goes from being blurry back to being sharp, which you can see in Fennec if you pay close attention when zooming in.

So to summarize, here is what we have now:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x resolution = LayerPixel
LayerPixel x OMTC transforms = ScreenPixel
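The whole chain can be sketched as toy arithmetic (function names and numbers are illustrative, not Gecko APIs):

```javascript
// The three conversions summarized above, as plain multiplications.
function cssToLayoutDevice(cssPx, widgetScale, fullZoom) {
  return cssPx * widgetScale * fullZoom;
}
function layoutDeviceToLayer(layoutDevicePx, resolution) {
  return layoutDevicePx * resolution;
}
function layerToScreen(layerPx, omtcScale) {
  return layerPx * omtcScale;
}

// Example: HiDPI widget scale 2.0, no full zoom. The user pinch-zoomed
// to 2x and Gecko has re-rendered at resolution 2.0, so the OMTC scale
// is back to 1.0 and layer pixels map 1:1 to screen pixels.
const css = 100;
const layoutDevice = cssToLayoutDevice(css, 2.0, 1.0); // 200
const layer = layoutDeviceToLayer(layoutDevice, 2.0);  // 400
const screen = layerToScreen(layer, 1.0);              // 400
console.log(layoutDevice, layer, screen);
```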

And so ends my braindump. Hopefully what I've written above will not change going forward, and the terms I've used can become part of the Gecko developer vocabulary, so that when dealing with code in different coordinate systems it's much easier to agree on what we mean.

(Update Aug 5, 2013: I now have a part 2 that describes how CSS transforms fit into this picture.)

[1] The only exception I can think of is window.outerWidth, which is in screen pixels. (Edit 2013-06-27: I was wrong, I think outerWidth is also in CSS pixels.)
[2] Note that the number of screen pixels displayed on a physical device may be modified by the operating system. For example, you can change your screen resolution from 1024x768 to 800x600, which changes the size and number of screen pixels. That's fine - we don't need to account for that in our code as the OS takes care of it.
[3] The CSS viewport affects how wide the page is laid out by the layout code. One way of visualizing this is that if you have a page with just plain text, the text will be wrapped so that it doesn't exceed the CSS viewport width.
[4] App units are what layout does all of its calculations in. One app unit is exactly one-sixtieth of a CSS pixel. The "60" was chosen because it has many integer factors and so allows representing common fractions losslessly.
[5] The actual graphics system in use depends on the platform. OpenGL is just an example.
[6] When Gecko paints the various elements on a page, it flattens them into "layers" that it hands off to the graphics stack. This is where the term LayerPixel comes from.


Taming coordinate systems
2013-05-30 12:31:31

For a while now we've had problems stemming from having to deal with too many coordinate systems. I blogged about this before and the problem has only gotten worse. The AsyncPanZoomController class in particular deals with a variety of coordinate systems and it's often not clear which coordinate system a particular variable is in.

To try and deal with these code complexity issues, Anthony Jones suggested adding template parameters to some of the gfx classes so that we can enforce unit conversions at compile-time and annotate variables with which units they're in. I filed bug 865735 for this, and after some discussion with roc, Bas, jrmuizel, BenWa, tn, Cwiiis and some others, we agreed on a way to do this. I landed that patch and it was merged to m-c today.

The patch allows for incrementally adding units to uses of gfx::Point, gfx::Size, and gfx::Rect (and their Int variations). By default all instances of these classes are tagged as UnknownUnits. As we update bits of code they will be changed to things like CSSPixels, LayerPixels, ScreenPixels, and so on, depending on what they are. I've been working on a couple of patches that start adding this extra information to pieces of code; these patches can be found on bug 877726 and bug 877728. Doing this is slow work, but very parallelizable, so I would appreciate any help I can get. If you know of some graphics code that uses these classes, and you know what units the data they hold is in, please file a bug with patches to convert them. Feel free to CC me and/or make it depend on bug 865735.

The key files that define the templates and units are located at gfx/2d/Point.h, gfx/2d/Rect.h, and layout/base/Units.h. gfx::Point is now just a typedef to gfx::PointTyped&lt;UnknownUnits&gt;. Replacing UnknownUnits with CSSPixel gives us the new type gfx::PointTyped&lt;CSSPixel&gt; to represent points in CSS pixel coordinate systems. Since this is long and unwieldy to type, we have typedef'd this to CSSPoint. Similar changes will be done for the other classes (e.g. CSSIntPoint, CSSRect) and units (e.g. LayerPoint) as we start propagating them. The neat thing about the templating structure we came up with is that gfx::PointTyped&lt;CSSPixel&gt; actually extends from CSSPixel, so we can add methods (e.g. conversion to app units) there that are available on all CSSPoint instances but not other unit classes.
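JS obviously can't replicate the compile-time checking the C++ templates give us, but a runtime analogue conveys the flavor of the idea (everything here is illustrative, not the actual Gecko API): tag each point with its unit, and make conversions name both units so a mismatch is a loud error instead of a silent bug.

```javascript
// Toy runtime analogue of unit-tagged point types.
function makePoint(unit, x, y) {
  return { unit, x, y };
}

// Converting requires naming the source and destination units, so
// accidentally mixing coordinate systems throws instead of producing
// garbage coordinates.
function scalePoint(point, fromUnit, toUnit, scale) {
  if (point.unit !== fromUnit) {
    throw new Error(`expected ${fromUnit} point, got ${point.unit}`);
  }
  return makePoint(toUnit, point.x * scale, point.y * scale);
}

const cssPoint = makePoint('CSSPixel', 10, 20);
const layerPoint = scalePoint(cssPoint, 'CSSPixel', 'LayerPixel', 2.0);
console.log(layerPoint.unit, layerPoint.x, layerPoint.y); // LayerPixel 20 40
```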

As far as I understand things, layout code currently always keeps nsPoint instances in app units (1/60 of a CSS pixel). nsIntPoint, however, can be used for a variety of units, and many of these (particularly the ones outside layout/) should be converted to gfx::IntPointTyped&lt;something&gt;. What layout code refers to as "device pixels" is generally not the same thing that graphics code refers to as "device pixels", so I'm trying to avoid using the term "device pixels" entirely in graphics code - I prefer to use "layer pixels", "display pixels", and "screen pixels" as needed. For non-OMTC platforms "screen pixels" and "layer pixels" should be equivalent, and for platforms without HiDPI adjustments, "display pixels" and "screen pixels" should be the same.

While converting code I've run into many places where new point variables are constructed using the .x and .y members of a different point variable. It's important to ensure that such code is carefully audited to make sure that the old and new variables are in the same units. If it's not clear, it's best to stick in a call to FromUnknownUnits(...) so that it is implicitly flagged for follow-up when other nearby code gets converted.


The Inner Game of Tennis
2013-05-12 16:07:18

The Inner Game of Tennis, by Tim Gallwey, despite the title, is not really about tennis. Well, maybe about 3% of the book is actually about tennis, but for the rest of the book he only uses tennis as a concrete example of how to apply the principles of the "inner game" described in the book. The "inner game" here refers to a number of things, but generally speaking covers the ability to unlock your full potential without being held back by your fears, doubts, anxieties, and other insecurities.

Although the book is not really specific to tennis, it is geared towards people who have tried to excel at some physical discipline. It describes the mental obstacles that such people will be able to identify with, and describes in good detail how to overcome those obstacles. That being said, the book does also apply to non-physical activities and daily life, but if those are your primary interests then perhaps one of the other Inner Game books by the same author will be more suitable (I haven't read them so I can't comment further).

Personally I read this book because I thought it would help me with the mental obstacles I encounter while training parkour. It certainly does address exactly the problems I've experienced, but goes above and beyond that. I found the book hugely insightful and would recommend it to anybody who practices a physical discipline (and if you don't, you should). I haven't yet actually tried applying the techniques described in the book so I can't say how effective they are, but even if they are completely useless to me the book was still well worth it for the additional insights it provided.


Watching commits to mozilla-central
2013-04-22 09:11:47

Something I've wanted for a while is a way to receive notifications of commits to specific folders in mozilla-central, with some reasonable amount of diff included. Well, turns out there are now a bunch of ways to do this, so here's a quick rundown.

  • hgweb's Atom feed - I'm only including this for completeness, but you can get an Atom feed of all changes to the repository by using the RSS icon at the bottom of any page in the repo. Unfortunately, the feed is for all changes to the repo (can't filter by particular folders/files) and doesn't include diffs, so it's limited in usefulness.
  • Dave Townsend's Hg Change Feed - Dave Townsend recently set up a more comprehensive feed system and blogged about this (see blog post or jump straight to the tree navigator to subscribe). He has it set up for comm-central, mozilla-central, and mozilla-inbound, and you can watch any folder/file in any of these repos. Pretty nifty! However, the RSS feeds don't include diffs.
  • RSS to email options - Both of the above options give you RSS or Atom feeds, but some people prefer to get this as email. IFTTT recipes are a simple way to do this; Dave Townsend set up one for the toolkit folder and Margaret created one for the mobile folder. If you use Zimbra to manage your mail, you can also use that to get RSS as email; you can set this up in the web interface when you create a new folder. As is to be expected, these options just convert the RSS item to email, and so also don't contain diffs.
  • My mailing lists - Since I really really wanted to get diffs in the email notifications, I decided to roll my own solution. I already have an Amazon EC2 instance running for ZNC, so I cloned mozilla-central there, and wrote a quick script to pull the tree, look for changes in specific folders, and send an email using Amazon SES (free for the volume I'm sending at). The email goes to a mailman instance I also set up on this domain, so anybody who is interested can sign up. Right now I have mailing lists for mobile/ and widget/android/, since that's what I'm interested in, but I can add other folders if there is sufficient demand. With this approach it requires some work on my part to set it up for specific files and folders, but I get diffs in the emails and can customize it further as I find things that would be useful. [UPDATE 2014-04: Due to general unreliability and lack of widespread use I have decommissioned these lists]

Did I miss any options? Post in the comments.



(c) Kartikaya Gupta, 2004-2019. User comments owned by their respective posters. All rights reserved.