My girlfriend was writing a blog post and wanted to put some tables in. Unfortunately the Wordpress interface wasn't being very helpful with adding tables so she took a PNG screenshot of the tables in Word and inserted those in. While the solution worked fine for her, I was aware that turning text into images is generally not a great solution for the web, for a number of reasons (bad for accessibility, less dynamic layout, slower page load, etc.).
I realized this was a perfect opportunity for a little webmaking experience. I fired up Thimble on my laptop, and with her looking over my shoulder, created a simple two-row table with the style she wanted, showing her some of the different style attributes that could be used. Once I had the start of her table, I published it as a "make". She was then able to view it on her laptop and remix it, putting in the rest of the data she wanted. She also created a second table (complete with bullet points and red text), and pasted the HTML back into the Wordpress HTML editor.
I think the part I like best about tools like Thimble is that they are very low-overhead to use. You don't need to sign in to start using it, and the interface is simple and obvious. For publishing the make, I did have to sign in, but with Persona, so I didn't have to create a new password for it. (If it required creating a more heavyweight account I would have just copied the HTML into pastebin or something, which is not nearly as cool.) And of course, the live preview of the HTML in Thimble is pretty neat too :)
Oh, and you should definitely read her blog post: Get Moving!
[ 1 Comment... ]
Following up to Guns, Germs, and Steel and The World Until Yesterday, I also read Collapse: How Societies Choose to Fail or Succeed by Jared Diamond. This one was a lot harder to read, and much less interesting to me. There were certainly some good lessons here, but I felt like he repeated himself too much, and that the book could have been 30% of the size it actually was.
It was interesting to learn about how some past societies (e.g. on Easter Island) behaved and why they eventually died out. It does feel a bit like the world as a whole is on a course to repeat those mistakes.
[ 0 Comments... ]
Please read Unraveling coordinate systems, part 1 for the backstory. This post describes how CSS transforms fit into the world of coordinate systems I described there.
But first, a quick status update: since I wrote that post, a fair amount of code has been converted to use the new typed classes (CSSPoint, LayoutDevicePoint, etc.). Special thanks to Ms2ger who has been incrementally converting various bits of layout code to use these where appropriate. The conversion has turned up some type mismatches and flushed out bugs on B2G, Fennec, and recently even Metro.
And now, on to the main event: CSS transforms. CSS transforms make things complicated because they are not really applied in layout code, but saved as a transform on the layer and applied at composition time. (Recall that the layer can be thought of as the rendered texture that gets uploaded to the graphics hardware). From a quick reading of the CSS transforms spec this makes sense, because the transforms are not meant to affect the layout properties of elements. However it complicates coordinate system work because a "transform: scale(2)" CSS style, for instance, doesn't change the size of the element in LayoutDevicePixels but does affect the ScreenPixel size.
If you recall from part 1, I defined LayoutDevicePixels as taking into account the device DPI and full zoom amounts; they're basically the coordinate system that the layout code outputs positioning data in. If there is an element with a 2x scale CSS transform applied, then the ScreenPixel height of the element must be twice the LayoutDevicePixel height of the element (for the moment let us ignore any OMTC transforms and resolution changes that might also be applied). However, there are no restrictions on the size of the layer, which means the LayerPixel that sits in between the LayoutDevicePixel and ScreenPixel can be anything.
So, we could, for example, have the layer be the same size as the untransformed element (1 LayoutDevicePixel == 1 LayerPixel) and then do the entire 2x scale in hardware (1 LayerPixel == 2 ScreenPixels). Or we could do something like have gecko render at higher density (1 LayoutDevicePixel == 2 LayerPixels) and then not scale it in hardware (1 LayerPixel == 1 ScreenPixel). We could even render it 4x in gecko and then scale it down in hardware by 2x, if we wanted. Rendering it small in gecko and scaling it up in hardware results in a blurry image, and rendering it large in gecko and scaling it down in hardware is a waste of memory/CPU. In theory, the scaling that results in the best visual effect without wasting resources is to have no hardware scaling and have gecko render exactly at the specified CSS transform. However, in practice, we use some heuristics to figure out the best tradeoff here because the CSS transform can change over time.
And while we're on the subject, a similar thing actually happens with the OMTC transform. If we take a regular page and pinch-zoom it to 2x, what actually happens on the layout side is that the presShell resolution is set to 2x, the CSS transform on the page is updated to be 0.5x, and the OMTC transform is again 2x. So the net effect is that the page is scaled up by 2x, with 1 LayoutDevicePixel == 2 LayerPixels == 2 ScreenPixels, but the fact that the CSS transform on the layer is modified is important from an implementation point of view.
Ok, so that's all well and good, and gives us nice-looking CSS transforms efficiently, at the expense of some complicated coordinate transforms. Now we can flesh out our summary from last time to be a little more precise:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution x OMTC-driven resolution = LayerPixel
LayerPixel x CSS transforms x OMTC transforms = ScreenPixel
But that's not all. Note that CSS transforms apply to elements in the DOM rather than the entire document. This means that different layers in the layer tree can have different transforms, and these transforms accumulate when mapping a particular layer up to the screen. Madness! That means there's no longer just one set of LayerPixels but many! And with the work that I've been doing to make subframes asynchronously scrollable, many of the layers can have their own individual OMTC-driven resolutions and OMTC transforms as well! More madness! (In practice only top-level layers for PresShells will have OMTC resolutions but I wouldn't count on that always remaining the case.)
So to update the summary again, I need to introduce some layers. Assume that layer L is some random layer in the tree, it has a parent layer M, grandparent layer N, and so on up until the root layer R. Then:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x CSS-driven resolution (for R) x OMTC-driven resolution (for R) = LayerPixel (for R)
LayerPixel (for R) x CSS-driven resolution (for Q) x OMTC-driven resolution (for Q) = LayerPixel (for Q)
LayerPixel (for Q) x CSS-driven resolution (for P) x OMTC-driven resolution (for P) = LayerPixel (for P)
...
LayerPixel (for M) x CSS-driven resolution (for L) x OMTC-driven resolution (for L) = LayerPixel (for L)
LayerPixel (for L) x CSS transform (for L) x OMTC transform (for L) x CSS transform (for M) x OMTC transform (for M) x ... x CSS transform (for R) x OMTC transform (for R) = ScreenPixel
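The accumulation described above can be sketched in code. The following is an illustrative Python sketch (hypothetical structure, not actual Gecko code, and using scalar scale factors in place of full transform matrices): it walks a layer's ancestor chain and multiplies the CSS and OMTC transforms together to get the layer-to-screen scale.

```python
class Layer:
    """A toy stand-in for a layer in the layer tree (names are hypothetical)."""
    def __init__(self, css_transform=1.0, omtc_transform=1.0, parent=None):
        self.css_transform = css_transform    # CSS transform scale on this layer
        self.omtc_transform = omtc_transform  # async (OMTC) transform scale
        self.parent = parent

def layer_to_screen_scale(layer):
    """Accumulate CSS and OMTC transforms from `layer` up to the root."""
    scale = 1.0
    while layer is not None:
        scale *= layer.css_transform * layer.omtc_transform
        layer = layer.parent
    return scale

# Example: root R with no transforms, child M with a 2x CSS transform,
# grandchild L pinch-zoomed (OMTC transform) by 1.5x.
R = Layer()
M = Layer(css_transform=2.0, parent=R)
L = Layer(omtc_transform=1.5, parent=M)
print(layer_to_screen_scale(L))  # 3.0
```

Real code has to compose matrices rather than multiply scalars, but the traversal structure is the same: each layer contributes its own transforms on the way up to the root.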
Simple, right? :) If I get around to it, the next post will cover how input events are mapped back from screen space to layout space. (It's not as straightforward as you might think.)
Thanks to Robert O'Callahan for reading a draft of this post and pointing out some things to fix.
[ 1 Comment... ]
For the last few weeks I've been hacking away at a jungle of coordinate systems in the graphics code, trying to make code easier to understand and to work with. This work is technically part of the effort to make subframes asynchronously scrollable in OMTC, but it has helped us to fix other bugs as well. This post is a braindump of the coordinate systems I've uncovered and the mental model I have.
As of this writing, I have defined four "pixel" types in layout/base/Units.h. These are CSSPixel, LayoutDevicePixel, LayerPixel, and ScreenPixel. CSSPixel is the simplest - it represents a CSS pixel, which is what web content authors use to specify dimensions of things. Almost every Web API [1] deals with CSS pixels.
In the days of yore, 1 CSS pixel corresponded exactly to one screen pixel. That is, when you created a div that was 50 CSS pixels wide, it would show up as 50 pixels wide on your screen. If you have a desktop monitor that is 1024 pixels wide [2] and you create a div that is 1024 CSS pixels wide, it takes up your entire monitor width. Makes sense, right?
Now, let us enter the world of HiDPI display devices. On HiDPI devices, the screen pixels are so small and packed together so tightly that mapping 1 CSS pixel to 1 screen pixel doesn't make sense any more. Sure, you can fit more on the screen, but it's too tiny to be readable or useful. So we have this concept of a widget scale. nsIWidget::GetDefaultScale() returns a scaling factor to account for HiDPI displays. For example, see the gonk implementation, which makes B2G compute a scale factor based on the actual DPI of the device. This introduces a new coordinate system that layout refers to as "device pixels" and what I've called LayoutDevicePixels.
In the layout code, the widget scale affects two main things. One is the size of the CSS viewport [3]. If we have a widget scale of 2.0, then each CSS pixel is 2 screen pixels wide, which means that on a 1024x768 screen with a maximized browser window, you can only fit 512 CSS pixels across. So the CSS viewport width is adjusted to account for this. The other is the scale factor from app units [4] to screen pixels. Instead of the usual 60 app units being converted to 1 screen pixel you would normally get, setting a 2.0 widget scale results in 30 app units being converted to 1 screen pixel. This effectively makes each CSS pixel twice as wide when being rendered to the screen, which is the point of the whole exercise.
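The arithmetic above can be made concrete with a tiny sketch (illustrative only; the actual constants and conversions live in Gecko's app-unit code):

```python
APP_UNITS_PER_CSS_PIXEL = 60  # one app unit is 1/60 of a CSS pixel

def app_units_per_dev_pixel(widget_scale):
    # With a 2.0 widget scale, each device pixel covers half a CSS pixel,
    # so only 30 app units map to one device pixel.
    return APP_UNITS_PER_CSS_PIXEL / widget_scale

print(app_units_per_dev_pixel(1.0))  # 60.0
print(app_units_per_dev_pixel(2.0))  # 30.0

# CSS viewport width on a 1024-pixel-wide screen at widget scale 2.0:
print(1024 / 2.0)  # 512.0 CSS pixels
```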
But that's not all! There is another factor that comes into play between CSS pixels and LayoutDevicePixels, and that is the "full zoom". This is when, on desktop Firefox, you perform the "Zoom In" or "Zoom Out" actions. Doing a "Zoom In" action increases the full zoom, which works almost identically to increasing the widget scale. That is, setting a zoom of 110% is pretty much equivalent to having a widget scale of 1.1. The only difference I'm aware of is that if you specify your CSS dimensions in real-world units such as inches, then they are affected by the widget scale but not by the "full zoom". Tricksy!
So to summarize what we have so far: in a world with just CSSPixels and LayoutDevicePixels, dimensions are specified in CSSPixels by content authors, and the browser maps them to LayoutDevicePixels based on the widget scale and full zoom. These LayoutDevicePixels are then displayed 1:1 on screen pixels as defined by the underlying platform.
But what about mobile? Welcome to the land of OMTC and pinch-zoom. OMTC stands for off-main-thread compositing, and is what allows you to pinch a page on Fennec and have it instantly zoom. What's happening here is that the painted page is transformed in OpenGL [5], without Gecko really knowing what's going on. Since Gecko isn't repainting anything, this is super fast, and allows us to animate pinch-zoom at 60 frames per second (or close to it).
Unfortunately, it also introduces a new coordinate system, because the LayoutDevicePixels that Gecko produced are no longer displayed 1:1 on the screen. Gecko could have painted something 10 LayoutDevicePixels wide (and so the texture uploaded to the graphics hardware would be 10 pixels wide) but then the user does a pinch-zoom and BAM! now it's taking up 20 pixels on the screen, because we told OpenGL to scale it up by 2x. So here we have our third pixel type defined: the ScreenPixel. In the preceding example the pinch-zoom produced a scale factor of 2.0 and so 10 LayoutDevicePixels would get mapped to 20 ScreenPixels.
Now say you're viewing a page in Fennec and zoom in using pinch-zoom. And then you zoom in some more. And then some more. If all we did was take the LayoutDevicePixels and tell OpenGL to render them bigger by scaling it in hardware, you would end up with a very pixellated and blurry view of the page. In order to make it look good again, we have to go back to Gecko and tell it to repaint the visible area of the page at a higher density, allowing us to remove the OpenGL scaling. For example, instead of rendering a paragraph of text into a texture and scaling that up in OpenGL to display a single word really big, we can tell Gecko to just render that one word really big, and to use up the entire texture to do it. This is done by a call to nsIPresShell::SetResolution().
Setting the resolution doesn't change our definition of CSSPixels or LayoutDevicePixels, but it does change something. To describe this change, we need to introduce a new coordinate system. This is the LayerPixel [6], and it sits between LayoutDevicePixel and ScreenPixel. That is, the resolution changes how many LayerPixels are produced for each LayoutDevicePixel, and now pinch-zooming affects the scale between LayerPixels and ScreenPixels. This is a little cyclical because as you perform a pinch-zoom, LayerPixels and ScreenPixels get farther apart in size. Then, once you finish the zoom, we tell Gecko to re-render the content at a new resolution such that LayerPixels and ScreenPixels are the same size once again, and we render the new LayerPixels at a 1:1 scale on the screen. When this happens the visible content goes from being blurry back to being sharp, which you can see in Fennec if you pay close attention when zooming in.
So to summarize, here is what we have now:
CSSPixel x widget scale x full zoom = LayoutDevicePixel
LayoutDevicePixel x resolution = LayerPixel
LayerPixel x OMTC transforms = ScreenPixel
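The summary above can be expressed as a toy calculation. This is a hedged sketch using scalar scale factors (real OMTC transforms are matrices, and the function name here is made up for illustration):

```python
def css_to_screen(css_px, widget_scale, full_zoom, resolution, omtc_transform):
    """Chain a CSS-pixel length through the three conversions summarized above."""
    layout_device_px = css_px * widget_scale * full_zoom
    layer_px = layout_device_px * resolution
    screen_px = layer_px * omtc_transform
    return screen_px

# A 100 CSS-pixel div on a 2x HiDPI device, with no full zoom, rendered at
# 2x resolution, mid-pinch with a residual 1.5x OMTC transform:
print(css_to_screen(100, 2.0, 1.0, 2.0, 1.5))  # 600.0
```

Once the pinch settles, Gecko re-renders at a resolution that folds the 1.5x in, the OMTC transform returns to 1.0, and the same 600 screen pixels are covered by 600 sharp layer pixels instead of 400 stretched ones.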
And so ends my braindump. Hopefully what I've written above will not change going forward, and the terms I've used can become part of the Gecko developer vocabulary, so that when dealing with code in different coordinate systems it's much easier to agree on what we mean.
(Update Aug 5, 2013: I now have a part 2 that describes how CSS transforms fit into this picture.)
Notes:
[1] The only exception I can think of is window.outerWidth, which is in screen pixels. (Edit 2013-06-27: I was wrong, I think outerWidth is also in CSS pixels.)
[2] Note that the number of screen pixels displayed on a physical device may be modified by the operating system. For example, you can change your screen resolution from 1024x768 to 800x600, which changes the size and number of screen pixels. That's fine - we don't need to account for that in our code as the OS takes care of it.
[3] The CSS viewport affects how wide the page is laid out by the layout code. One way of visualizing this is that if you have a page with just plain text, the text will be wrapped so that it doesn't exceed the CSS viewport width.
[4] App units are what layout does all of its calculations in. One app unit is exactly one-sixtieth of a CSS pixel. The "60" was chosen because it has many integer factors and so allows representing common fractions losslessly.
[5] The actual graphics system in use depends on the platform. OpenGL is just an example.
[6] When Gecko paints the various elements on a page, it flattens them into "layers" that it hands off to the graphics stack. This is where the term LayerPixel comes from.
[ 10 Comments... ]
For a while now we've had problems stemming from having to deal with too many coordinate systems. I blogged about this before and the problem has only gotten worse. The AsyncPanZoomController class in particular deals with a variety of coordinate systems and it's often not clear which coordinate system a particular variable is in.
To try and deal with these code complexity issues, Anthony Jones suggested adding template parameters to some of the gfx classes so that we can enforce unit conversions at compile-time and annotate variables with which units they're in. I filed bug 865735 for this, and after some discussion with roc, Bas, jrmuizel, BenWa, tn, Cwiiis and some others, we agreed on a way to do this. I landed that patch and it was merged to m-c today.
The patch allows for incrementally adding units to uses of gfx::Point, gfx::Size, and gfx::Rect (and their Int variations). By default all instances of these classes are tagged as UnknownUnits. As we update bits of code they will be changed to things like CSSPixels, LayerPixels, ScreenPixels, and so on, depending on what they are. I've been working on a couple of patches that start adding this extra information to pieces of code; these patches can be found on bug 877726 and bug 877728. Doing this is slow work, but very parallelizable, so I would appreciate any help I can get. If you know of some graphics code that uses these classes, and you know what units the data they hold are in, please file a bug with patches to convert them. Feel free to CC me and/or make it depend on bug 865735.
The key files that define the templates and units are located at gfx/2d/Point.h, gfx/2d/Rect.h, and layout/base/Units.h. gfx::Point is now just a typedef for gfx::PointTyped<UnknownUnits>. Replacing UnknownUnits with CSSPixel gives us the new type gfx::PointTyped<CSSPixel> to represent points in CSS pixel coordinate systems. Since this is long and unwieldy to type, we have typedef'd this to CSSPoint. Similar changes will be done for the other classes (e.g. CSSIntPoint, CSSRect) and units (e.g. LayerPoint) as we start propagating them. The neat thing about the templating structure we came up with is that gfx::PointTyped<CSSPixel> actually extends from CSSPixel, so we can add methods (e.g. conversion to app units) there that are available on all CSSPoint instances but not other unit classes.
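The C++ templates catch unit mix-ups at compile time. As a rough runtime analogue of the same idea (hypothetical illustrative code, not part of Gecko), points can carry a unit tag and refuse to participate in arithmetic across different units:

```python
class TypedPoint:
    """Runtime analogue of gfx::PointTyped<Units>: each point carries a unit
    tag, and mixing units in arithmetic raises an error."""
    def __init__(self, unit, x, y):
        self.unit, self.x, self.y = unit, x, y

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot mix {self.unit} and {other.unit}")
        return TypedPoint(self.unit, self.x + other.x, self.y + other.y)

def scale_to(point, new_unit, factor):
    # Explicit conversions are the only sanctioned way across unit boundaries.
    return TypedPoint(new_unit, point.x * factor, point.y * factor)

css = TypedPoint("CSSPixel", 10, 20)
layout = scale_to(css, "LayoutDevicePixel", 2.0)  # 2x widget scale
try:
    css + layout  # mixing units is an error
except TypeError as e:
    print(e)
```

The C++ version does this check at compile time rather than runtime, so unit mismatches become build failures instead of latent bugs.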
As far as I understand things, layout code currently always keeps nsPoint instances in app units (1/60 of a CSS pixel). nsIntPoint, however, can be used for a variety of units, and many of these (particularly the ones outside layout/) should be converted to gfx::IntPointTyped<something>. What layout code refers to as "device pixels" is generally not the same thing that graphics code refers to as "device pixels", so I'm trying to avoid using the term "device pixels" entirely in graphics code - I prefer to use "layer pixels", "display pixels", and "screen pixels" as needed. For non-OMTC platforms "screen pixels" and "layer pixels" should be equivalent, and for platforms without hidpi adjustments, "display pixels" and "screen pixels" should be the same.
While converting code I've run into many places where new point variables are constructed using the .x and .y members of a different point variable. It's important to ensure that such code is carefully audited to make sure that the old and new variables are in the same units. If it's not clear, it's best to stick in a call to FromUnknownUnits(...) so that it is implicitly flagged for follow-up when other nearby code gets converted.
[ 0 Comments... ]
The Inner Game of Tennis, by Tim Gallwey, despite the title, is not really about tennis. Well, maybe about 3% of the book is actually about tennis, but for the rest of the book he only uses tennis as a concrete example of how to apply the principles of the "inner game" described in the book. The "inner game" here refers to a number of things, but generally speaking covers the ability to unlock your full potential without being held back by your fears, doubts, anxieties, and other insecurities.
Although the book is not really specific to tennis, it is geared towards people who have tried to excel at some physical discipline. It describes the mental obstacles that such people will be able to identify with, and describes in good detail how to overcome those obstacles. That being said, the book does also apply to non-physical activities and daily life, but if those are your primary interests then perhaps one of the other Inner Game books by the same author will be more suitable (I haven't read them so I can't comment further).
Personally I read this book because I thought it would help me with the mental obstacles I encounter while training parkour. It certainly does address exactly the problems I've experienced, but goes above and beyond that. I found the book hugely insightful and would recommend it to anybody who practices a physical discipline (and if you don't, you should). I haven't yet actually tried applying the techniques described in the book so I can't say how effective they are, but even if they are completely useless to me the book was still well worth it for the additional insights it provided.
[ 0 Comments... ]
Something I've wanted for a while is a way to receive notifications of commits to specific folders in mozilla-central, with some reasonable amount of diff included. Well, turns out there are now a bunch of ways to do this, so here's a quick rundown.
- hgweb's Atom feed - I'm only including this for completeness, but you can get an Atom feed of all changes to the repository by using the RSS icon at the bottom of any page in the repo. Unfortunately, the feed is for all changes to the repo (can't filter by particular folders/files) and doesn't include diffs, so it's limited in usefulness.
- Dave Townsend's Hg Change Feed - Dave Townsend recently set up a more comprehensive feed system and blogged about this (see blog post or jump straight to the tree navigator to subscribe). He has it set up for comm-central, mozilla-central, and mozilla-inbound, and you can watch any folder/file in any of these repos. Pretty nifty! However, the RSS feeds don't include diffs.
- RSS to email options - Both of the above options give you RSS or Atom feeds, but some people prefer to get this as email. IFTTT recipes are a simple way to do this; Dave Townsend set up one for the toolkit folder and Margaret created one for the mobile folder. If you use Zimbra to manage your mail, you can also use that to get RSS as email; you can set this up in the web interface when you create a new folder. As is to be expected, these options just convert the RSS item to email, and so also don't contain diffs.
- My mailing lists - Since I really really wanted to get diffs in the email notifications, I decided to roll my own solution. I already have an Amazon EC2 instance running for ZNC, so I cloned mozilla-central there, and wrote a quick script to pull the tree, look for changes in specific folders, and send an email using Amazon SES (free for the volume I'm sending at). The email goes to a mailman instance I also set up on this domain, so anybody who is interested can sign up. Right now I have mailing lists for mobile/ and widget/android/, since that's what I'm interested in, but I can add other folders if there is sufficient demand. With this approach it requires some work on my part to set it up for specific files and folders, but I get diffs in the emails and can customize it further as I find things that would be useful. [UPDATE 2014-04: Due to general unreliability and lack of widespread use I have decommissioned these lists]
Did I miss any options? Post in the comments.
[ 3 Comments... ]
Here's my hypothesis: Twitter has subsumed RSS.
Pretty much any news site or blogger you would usually want to follow has both RSS and Twitter, and Twitter is just much simpler to use. It's a lot easier to follow somebody on Twitter than it is to add an RSS feed to a web-based aggregator like Google Reader. If you don't use a web-based aggregator you have portability/syncing issues where you have to set up your feeds on individual devices and keep them all in sync. Most importantly, Twitter serves as a single notification stream that includes simple messages (tweets) and links to other longer/larger content.
So really, it's Twitter's fault that Google Reader is going away. But they're not the only one to blame. Firefox 4 (IIRC) removed the RSS button, making it that much harder to use RSS. Sure, you can argue very few people used it anyway, so removing it was a good idea, but that doesn't mean it didn't accelerate the downfall of RSS. I think other browsers have also made the RSS subscription flow less easy to use over the last few years.
Ordinarily I wouldn't really care about this, except that I kind of like the open web. Twitter is not the open web. I don't want Twitter to end up as the only way for me to subscribe to content. I fear that Google killing Reader is a sign that RSS use is already dwindling, and unless we act to save it, it will die completely.
Thankfully, the fine folks at Digg are building a Reader replacement, which I think is great. Not because I'll use it, but because it'll help slow (and hopefully reverse) the death of RSS.
It would be cool to see their reader clone (or any other web-based aggregators) take on Twitter directly, by making it as easy to reply/comment on articles as possible, and by having a lightweight way to "tweet" new content directly from the aggregator. For the former, it might be necessary to extend RSS to include things like URLs that accept HTTP POST replies or comments, and to specify the format of those POSTs (yay open standards). The latter will take a bit more architecting, probably requiring your feed aggregator to also be your feed publisher, so that it can insert these "tweets" into your feed along with your blog and/or other content.
I don't exactly know what would work and what wouldn't, and I'm probably the wrong person to be asking anyway since I don't use Twitter. But I do use RSS, and I hope that the only thing that kills RSS is another, even better, open standard.
[ 16 Comments... ]
A lot of people believe in some science. Some people believe in a lot of science. And a few people believe exclusively in science. Some common non-scientific beliefs include things like paranormal activity, an afterlife, intelligent design - things that are generally mutually exclusive with science as we know it today.
People who believe strongly in science sometimes have a hard time understanding how other people can not believe in science. The gap between people who believe in evolution and those who believe in intelligent design, for example, is huge, with many uncompromising extremists on either side. I have a theory for how this comes to be, or at least a theory that helps me make sense of the situation.
Take the Monty Hall problem. In this problem, there is a prize hidden behind one of three doors, and you have to guess which one. After you pick a door, one of the other doors is shown to be a losing door. You then have the option of switching doors to the other unopened door or sticking with your original pick. For the vast majority of people, intuition suggests that the probability remains unchanged, and there is no advantage to switching. However, statistics says otherwise: the probability of winning is higher if you switch.
This is one of the simplest examples I can think of where human intuition is demonstrably wrong. If people are accepting of statistics, then upon being shown the logic behind this, they will realize that their intuition is wrong, and choose to discard it. However, I expect that some people place so much value in their own intuition that they refuse to believe the statistics and continue to believe their intuition. There is no real practical fallout from this - most people never run into this problem in their daily lives, and even if they did, they'd end up choosing the right door slightly less frequently. Big deal. Choosing to believe your intuition here is something that is easily done, because it doesn't noticeably impact your life for the worse, and is not fundamentally incompatible with other beliefs that you might hold.
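For the skeptical, the statistics here are easy to verify empirically. A quick simulation sketch (illustrative only) plays the game many times with and without switching:

```python
import random

def monty_hall(switch, trials=100_000, seed=42):
    """Simulate the Monty Hall game; return the fraction of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a losing door that isn't the player's pick.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```

Sticking wins about a third of the time; switching wins about two-thirds of the time, exactly as the statistics predict and contrary to most people's intuition.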
Of course, the Monty Hall problem is just one small example. There are many examples that can be pulled from many scientific fields where common human intuition is just plain wrong. And human intuition varies from person to person, too. For some people, intuition strongly suggests creatures as complex as humans must have been designed and created by some other entity. Science says otherwise. As with the Monty Hall problem, some people, upon being shown the scientific evidence, will choose to discard their intuition and believe in evolution instead of intelligent design. But other people will not. And critically, choosing to believe intelligent design doesn't noticeably impact your life for the worse (except for having to constantly engage in debate with scientists, which is more of a meta-problem), and is not fundamentally incompatible with other beliefs that you might hold.
Some of you may disagree with that last bit - to science-believers, intelligent design is fundamentally incompatible with other beliefs. But of course, this all depends on what you believe. You don't need to believe in an all-powerful god to believe in intelligent design; it could just as easily have been an alien race that did the designing. You don't have to reject fossils; the alien race might have planted those on purpose to disguise the truth. And so on - there are many ways to make a particular belief compatible with other things that you believe. To a scientist, such beliefs violate Occam's razor (and generally sound increasingly outrageous) but to somebody who doesn't believe in Occam's razor or in making hypotheses falsifiable it's not a problem in the least. It's pretty easy to come up with a set of internally-consistent beliefs that includes intelligent design, and many people have done exactly that.
The thing that's important to me is whether or not people are forced into such belief sets, or whether they choose them. It's one thing to consciously choose to reject evolution and choose intelligent design - there are valid reasons for doing so, and I have no problem with that. Even implicitly rejecting evolution because it sounds too hare-brained and unconsciously choosing intelligent design is fine by me - it's still a choice on the part of the individual. However, I dislike it when people are forced into a choice because of their ancestors or society. Legislation that bans evolution from schools is one way this choice is forced upon people, and that seems wrong to me. On the other hand, discriminating against people who believe in religion or intelligent design is another way that this choice is forced upon people, and that too, seems wrong to me.
Let people believe what they want. If you believe in evolution, then you should realize that there are selection pressures already at work that will, in the long run, weed out the incorrect beliefs. If the world were filled with Monty Hall problems, people who trusted statistics over their intuition would thrive - that's just survival of the fittest. Who knows - over time, human intuition might even evolve to match the science!
[ 0 Comments... ]
Yet another book review: The World Until Yesterday, also by Jared Diamond (author of Guns, Germs and Steel which I reviewed previously). Personally I found this book even better than GGS because it provides more practical knowledge about small-scale societies and specific things that they do better than we do in our large-scale societies (as well as things they do worse). As he keeps saying, the history of the world is filled with natural experiments - different societies tried different things; some failed and some succeeded, and we should try to combine the best attributes of these societies into our own. In some cases adopting changes is hard because it requires large changes to how our society is structured, but there are lots of cases where we can adopt the changes on an individual basis as well.
The book covers different aspects of societies in different chapters - personally I found some chapters (e.g. the ones on food and raising children) more interesting than others (e.g. the ones on war and peace). However I'm glad I struggled through the early uninteresting chapters as they do provide some useful context for the more interesting chapters later. Overall I found this to be a very well-balanced book, and just like GGS, it gives me a larger perspective on how the world works, which is really cool.
[ 0 Comments... ]