World IPv6 Day is coming soon, on June 8. For those of you who don't know what it is, it's when a bunch of major websites (Google, Facebook, etc.) will go dual-stack (serving their websites via both IPv4 and IPv6 simultaneously). The reason this hasn't really been done yet is that, according to previous measurements, it will break things for around 0.1% of clients.
If you haven't yet done so, you should go to test-ipv6.com from every browser/machine that you use to access the web, to see if you have an incompatible client. If you do, follow the instructions there to fix things; if you don't fix them, then a good chunk of the Internet will not work for you on June 8. More importantly, all the websites participating in IPv6 Day will see a drop in users that day, and will revert to IPv4-only. On the other hand, if you (and everybody else) fix your setup so that you can successfully browse dual-stack sites, then dual-stack is more likely to gain widespread adoption, thereby breaking the chicken-and-egg adoption problem IPv6 has been having so far.
Note that being compatible with dual-stack doesn't mean you have to do the more complicated setup to get yourself an IPv6 address. It just means that you can connect over IPv4 to a site that supports both IPv4 and IPv6. That said, if you want to start using IPv6 now, you're quite welcome to do that as well.
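If you're curious what dual-stack looks like under the hood, here's a minimal sketch (in Java, with www.example.com as a stand-in hostname) that lists the IPv4 and IPv6 addresses a host publishes in DNS. A dual-stack site returns addresses from both families, and an IPv4-only client simply keeps using the IPv4 ones, which is why you don't need an IPv6 address of your own to stay compatible.

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class DualStackCheck {
    public static void main(String[] args) throws Exception {
        // www.example.com is just a placeholder; pass the host you care about instead.
        String host = args.length > 0 ? args[0] : "www.example.com";

        // getAllByName returns every address the resolver finds for the host:
        // A records come back as Inet4Address, AAAA records as Inet6Address.
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            String family = (addr instanceof Inet6Address) ? "IPv6" : "IPv4";
            System.out.println(family + ": " + addr.getHostAddress());
        }
    }
}
```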
[ 0 Comments... ]
And so begins the battle over 3D printing copyright. This is going to get interesting. 3D printing in general is one of those disruptive technologies that will have wide-ranging repercussions on society and how we live, so I really hope the right decisions are made early on.
[ 0 Comments... ]
It occurred to me today that one of the reasons religion is important is because it lets you practice faith. Faith, as I see it, is believing in something even if there is no rational reason to believe it. Even though I'm not a religious person, and generally dislike religion, I do think that being able to have faith is important.
The reason for this is that faith lets you bootstrap virtuous cycles. If you believe something good will happen, then it could become a self-fulfilling prophecy because your behavior changes so as to make it happen. The key is that you have to believe it will happen even if there is no rational reason to believe it. This requires faith.
If you've never experienced this before, or if you're an extreme rationalist, you may strongly disagree with the previous paragraph. And there's probably nothing I can say that will convince you otherwise, so I'm not going to try. You should probably stop reading now and go do something else.
Getting back to religion - the one fundamental thing that is common to all religions is faith. Every religion that I can think of requires you to have faith in the existence of some being or process, and to maintain that belief regardless of external evidence. While this has some rather obvious disadvantages, it does also have the advantage of giving you a lot of practice with maintaining faith. And if you have a lot of practice with faith, then it becomes easier to pull it out of your bag of mental tricks and use it when the need arises.
There's also the caveat that just because something could become a self-fulfilling prophecy doesn't mean it will; it won't always work. Again, having a lot of practice (think 10,000 hours) can improve your ability to determine when it will and when it won't.
[ 4 Comments... ]
Danny Hillis TED talk. Awesome because it's the first time I've seen somebody tackle this problem from an analytical point of view rather than a statistical one. As I've complained about before, I don't like how statistics are currently being used in complex human fields like biology. Instead of doing statistical analyses of how many people get heart disease after increasing their cholesterol intake, we should find the exact chemical pathway that starts at cholesterol and ends at heart disease. (Not that I believe cholesterol specifically causes heart disease; it's just an example.) This is a step in the right direction.
RIM recently got pwned by Jamie Murai in a scathing blog post. They responded with promises to do something. If you ask me, one of the first things they should do is copy Microsoft (I can't believe I'm saying this) and dogfood their developer program. There's a huge gap between the APIs that are available to RIM employees and those that are exposed to third-party developers. Unless RIM-internal devs start feeling the pain that third-party devs have to go through, the situation isn't going to improve. I recently discovered, while trying to write a BB app, that there's no publicly available StringTokenizer class in the API. I'm pretty sure I remember there being one in net.rim.device.api.util, but it just isn't exposed.
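For the record, this is roughly the kind of helper a third-party dev ends up writing instead. The class name and behavior below are my own sketch (kept to CLDC-friendly classes like Vector), not anything from the RIM API:

```java
import java.util.Vector;

// A bare-bones stand-in for the missing StringTokenizer; the name and behavior
// here are my own invention, not part of any RIM-provided API.
public final class SimpleTokenizer {
    private SimpleTokenizer() {}

    // Splits 'input' on a single delimiter character, skipping empty tokens.
    public static String[] tokenize(String input, char delimiter) {
        Vector tokens = new Vector();
        int start = 0;
        while (start <= input.length()) {
            int end = input.indexOf(delimiter, start);
            if (end < 0) {
                end = input.length();
            }
            if (end > start) {
                tokens.addElement(input.substring(start, end));
            }
            start = end + 1;
        }
        String[] result = new String[tokens.size()];
        tokens.copyInto(result);
        return result;
    }
}
```

Calling SimpleTokenizer.tokenize("a,b,,c", ',') would give back {"a", "b", "c"} — hardly rocket science, which is exactly why it's annoying to have to write it yourself.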
GeoHot is fighting Sony in court, and asking for donations. Donate. Sony sold people PS3s which could run Linux, and then removed that functionality via an update months later. This is the equivalent of selling you a couch, and then coming in a couple of months later and taking all the cushions back. If this sort of crap is legal then we're all doomed. It's one thing to license software with restrictions on distribution; it's entirely another to sell hardware and then destroy its value.
[ 0 Comments... ]
A couple of days ago I experienced the first significant data loss event I've had in a while: the hard drive on my MacBook Pro died. The last full backup I had was from December, although I did also have a partial backup from last week. Most of the data I lost wasn't super-critical, but it was still quite annoying to lose. After diagnosing the issue (complete hard drive failure, since the drive didn't even show up in Disk Utility when booting from the recovery DVD) I googled for potential fixes that didn't involve spending tons of money. I got a set of screwdrivers, took out the drive, cleaned off the connectors, and put it back in. Somehow this made things worse, since now the machine won't even boot from the DVD. I am, evidently, not gifted with hardware. In retrospect this may have been a good thing, since the next step would have been to just get a replacement hard drive for a laptop that was otherwise possibly still OK; had the laptop then failed after I bought the replacement drive, I would have been even more frustrated.
Anyway, I resigned myself to getting a new laptop. As with the last time I was in the market for a laptop, I would prefer to get a non-Apple laptop, assuming it fits my needs. And my needs aren't all that extravagant, considering I'm increasingly becoming hardware- and OS-agnostic. But as with last time, it seems to be ridiculously hard to find somebody who will actually sell me a laptop. Last time I tried this I wasn't in a hurry, so I looked around online, only to find out that most computer companies have non-functional websites that don't actually allow you to successfully place an order. This time around I was in more of a hurry to get a replacement, so I visited stores in the area. As it turns out, physical stores are no better at selling things.
First I went to Canada Computers. Of the 10-15 laptops they had on display, I narrowed it down to two, and after some contemplation, picked one. The sales guy checked his inventory and said that there were two somewhere in the back. He went off to find one, coming back 10 minutes later to inform me that a bunch of stuff back there was mislabeled and he would have to dig around some more. After another 10 minutes he came back with a box that looked promising, but unfortunately contained the wrong laptop. Finally he admitted defeat and told me that he could sell me the display model, which I didn't really want.
Next stop was Computer XS, where I got my desktop machine eons ago. They had a pretty dismal selection of laptops; the only sub-15" models were a bunch of underpowered Lenovo Thinkpads (I don't have anything against the new Thinkpads, but these were positively ancient - they had memory measured in MB instead of GB. Imagine that!). That was a quick in-and-out.
Final stop was FutureShop. Now they had a whole slew of laptops out, and unlike Canada Computers, some of them were even on! I could check out the screen brightness (albeit in their artificially-lit environment) and everything! They also had a bunch of HP laptops that were pretty good, and it took me a good 10 minutes to make a decision. But I finally settled on the one that best matched what I wanted, and asked to buy it. And then it was déjà vu all over again: they couldn't actually produce the one I'd picked. Inventory tracking my ass. He said they'd have more coming in tomorrow, so I'll be going back to see if they have it, but given my experiences so far I'm not very hopeful. I'm this close to just getting another Macbook.
I think the main reason that Apple is selling so many laptops is that, well, they don't go out of their way to stop you from buying them. How hard is it to put up a sign saying "out of stock" next to the display model? Or to keep track of how many you actually have in the back? Or to create a website that allows you to buy a laptop? Seriously. Given all the advertising and the inherent human need to accumulate crap (er, own stuff), people really want to buy shiny new laptops. Apple's are shinier than most, sure, but the others have their advantages too. And I, for one, would love to fork over my hard-earned money and actually buy one. If only they'd let me.
[ 9 Comments... ]
So I was thinking more about my earlier post on the music industry. The scheme probably won't work for any number of reasons, but it made me realize the fundamental difference between music and everything else: copying it is free. Of course this applies to anything that can be digitized, not just music; software, movies, etc. are all in the same boat. Making and distributing copies is considered "piracy", but from a moral standpoint, is it wrong?
The answer depends on how you define theft. If you think of it as depriving somebody else of the ability to use the resource, then the answer is no, since the copy leaves the original intact. But if you think of it as obtaining something without paying for it (or in general, without contributing to the cost of production of the object), then the answer is yes. With physical objects theft does both of these things, so we never needed laws before that distinguished between the two facets. But for the digital domain, we definitely do. And as 3-D printers (Thing-O-Matic, RepRap, etc.) get more common and refined, this problem will apply to physical objects as well.
I believe that there is a "right" answer to this question, but I'm not sure what it is. There are strong arguments for both sides, and I think some of the implications of "free" copying are as yet unknown.
[ 0 Comments... ]
Interesting article about how Tunisia started keylogging passwords for anybody logging in to Facebook through a Tunisian ISP. The article praises Facebook for implementing countermeasures, but really Facebook is just stupid for not using SSL to begin with, especially given the existence of Firesheep, which lets you trivially hijack unencrypted browsing sessions on unsecured Wi-Fi networks.
The article doesn't go into the technical details, but according to this page Tunisia was getting their ISPs to inject a script on the login page to steal the password before the login form was submitted. So even though the form submit itself was encrypted, they were still able to grab the password. Facebook's response was to change the page with the login form to be https, so that the ISPs wouldn't be able to inject the script. That stopped the Tunisian government, but not for technical reasons: Facebook is still vulnerable to exactly the same problem, because an ISP can simply rewrite the pages pointing to the login page to use http links instead of https. In fact, if you access any insecure page on the domain, the ISP can pretty much rewrite all the links to keep you insecure, and the average user wouldn't know that all their data was being snatched.
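To make that last point concrete, here's a toy sketch (using www.example.com as a placeholder domain) of the kind of downgrade an on-path ISP can perform on any page served over plain http. Nothing done on the login page itself can protect against this, because the rewriting happens before the user ever reaches the secure page:

```java
// A toy illustration of why securing only the login form isn't enough: anyone
// who controls the plain-HTTP pages that *link* to the login form can rewrite
// those links before the browser ever reaches the secure page.
public class LinkRewriteDemo {
    public static void main(String[] args) {
        String pageSeenByIsp =
            "<a href=\"https://www.example.com/login\">Log in</a>";

        // The man-in-the-middle just downgrades the link; the user still lands
        // on a login page, but now over plain HTTP where scripts can be injected.
        String pageSeenByUser = pageSeenByIsp.replace("https://", "http://");

        System.out.println(pageSeenByUser);
    }
}
```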
(On a related note, this sort of attack is exactly why my site forces you to type in the https URL directly into the address bar).
[ 0 Comments... ]
[This blog post is an extension from part of the presentation I gave in class a few days ago.]
There's no one true definition of engineering, but a lot of the definitions you'll find are variants on the same theme: an engineering discipline consists of the application of a science to solve a problem. The problem with all of these definitions is that they downplay the "application" part - that is, the human component of engineering.
The way I see it, engineering is a combination of two main factors. One is the principles and properties from the underlying science. The other is the mind of the human putting the principles and properties together. Different engineering disciplines have different amounts of these two factors, and that has all sorts of implications.
It's probably easier to see with an example. Let's say we stumble through a wormhole into a different universe where our current rules of physics don't apply. Naturally we start a new scientific discipline dedicated to figuring out how that universe operates. After some investigation, we discover three laws. Law one is "if two objects collide, their masses automatically double". Law two is "if an object reaches a mass of 100kg or greater, it splits into three equal pieces". Law three is "if you have an object of 64kg, it turns into a wormhole that sucks you back to our normal universe". Now, you want to return to our normal universe, so you grab all the rocks you can find and weigh them. You find that you have three rocks with masses 1kg, 32kg and 96kg.
The science in this alternate universe provides us with three laws. As engineers, we have different ways in which we can combine these laws in order to solve a problem (getting home). For example, we could collide the 1kg and 32kg rocks together, resulting in 2kg and 64kg rocks. We can then use the 64kg rock to get home. Or we can collide the 1kg and 96kg rocks together, resulting in 2kg and 192kg rocks. The 192kg rock would then split into three 64kg rocks, which we could use to get home. Or we could collide the 32kg and 96kg rocks, which would also get us home. So with these three rocks, given our stated goal, there are three distinct solutions. If we sent a bunch of engineers over to this alternate universe, it is highly unlikely that they would all use the same solution to get back.
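If you prefer the toy universe mechanized, here's a minimal sketch that just encodes the three laws and checks which single collision opens a wormhole. The names are mine, and it only covers the one-collision case described above; it prints all three solutions.

```java
public class WormholeSearch {
    // Applies law 1 (colliding rocks double their masses), then law 2
    // (anything at or above 100kg splits into three equal pieces), and
    // reports whether law 3 (a 64kg rock becomes a wormhole) is triggered.
    static boolean collisionGetsUsHome(int a, int b) {
        int[] afterCollision = { a * 2, b * 2 };
        for (int mass : afterCollision) {
            if (mass >= 100) {
                mass = mass / 3; // each of the three equal pieces
            }
            if (mass == 64) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] rocks = { 1, 32, 96 };
        // Try every pair of rocks and see which collisions open a wormhole.
        for (int i = 0; i < rocks.length; i++) {
            for (int j = i + 1; j < rocks.length; j++) {
                if (collisionGetsUsHome(rocks[i], rocks[j])) {
                    System.out.println(rocks[i] + "kg + " + rocks[j] + "kg gets us home");
                }
            }
        }
    }
}
```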
If you view an engineering discipline as a combination of a science and a human mind, then it makes sense to try to figure out how much of the engineering is science and how much of it is the human component. With the example above, for that one problem, the science and the solution space are both bounded. That is, there are only three laws in the science, and only three solutions. But if we added another rock of 1kg, there would be 31 different solutions instead of just three. In this scenario, the solution that actually gets selected is more a function of the human mind than of the science. On the other hand, if you started out with two rocks of 32kg each, then there's only one possible solution to the problem. In this case, the human component plays a negligible role and the science plays a much bigger role in the engineering solution.
If we take this a step further, and look at the space of all possible problems that can be solved in this universe, and aggregate the science/human ratio over them all, we can figure out how much of the overall engineering discipline depends on the science and how much depends on the human component. (Note: I don't actually know what metrics and aggregation operations would be best to use here. I'm just doing thought experiments.)
So where does this take us? Well, different engineering disciplines are going to have different science/human ratios based on what kind of problems they have and what kind of properties the underlying science has. I think that the greater the human component, the more "complex" we consider the engineering discipline to be. Software engineering, in particular, has a huge number of possible solutions to any problem, and I think that's related to why it is so complex. The human variable in any software engineering project has a huge impact on how the final solution turns out. This is why I claimed in my previous post on experimentation that the human mind is such a HUGE variable in any software program, and that any experimentation that leaves this variable uncontrolled produces nonsense results.
[ 0 Comments... ]
[This blog post is adapted from part of a presentation I gave in class a few days ago, on a topic I've been thinking about for a while.]
Recently there was a big brouhaha because of some research by Dr. Bem at Cornell. He's a psychologist who did a series of experiments, and 8 out of 9 of his experiments showed that precognition is possible. He's a respected researcher, and the paper was peer-reviewed and is scheduled to be published soon in a prominent journal. A lot of people disagree with his conclusions for obvious reasons, and there's been a lot of discussion about how to interpret his results and whether his methodology or analysis was flawed or biased. I particularly like this rebuttal (PDF) of his approach.
Another example of a similar nature is the book Good Calories, Bad Calories, which I was reading not too long ago (but didn't finish). It looks at a lot of studies in the field of nutrition and rips apart many of them. Most of these studies are not reproducible or even contradict each other, and often have conclusions that are not supported by the data.
The point that I'm trying to make is that when it comes to using statistics to analyze data, there is almost no consensus on how to do it correctly, despite the fact that we've been doing it for decades. It's pretty absurd, if you ask me. There are all sorts of pitfalls that people regularly fall into, such as Simpson's Paradox, simply because it's unclear which of the variables that were changed in the experiment are relevant to the outcome and which are not.
Take a simple example - that of the boiling point of water. The value of the boiling point is a function of a number of factors, like the atmospheric pressure and salinity of the water. However, it's not a function of other things, such as the heat source that's used to heat up the water. If you aren't aware of which variables affect the results and which do not, you might do something like run a few trials at sea level and run a few trials on top of a mountain, and then average (or more generally, statistically analyze) the results to get a final answer.
But of course, if you average measurements taken at different atmospheric pressures, you get a value that's garbage. It reflects neither the boiling point at sea level nor the boiling point on the mountain. It's the boiling point at some pressure in between, but only because the boiling point is a monotonic function of pressure. If it were some other kind of function, the average would truly be just a nonsense number, even though it looks like a real result.
This is a trivial example and has very few variables. But a lot of the sciences that deal with human subjects do this all the time. Examples abound in psychology, medicine, nutrition, and of course, software engineering. For example, consider the classic software experiment to find out if technology A is better than technology B. You get a bunch of programmers, make sure they are trained equally on A and B, and have them sit down and do a task. Then you average the results from A and average the results from B and compare the two, and conclude that A (or B) is better. But the huge flaw in any experiment of this kind is that the thing you're measuring (the final code produced) is a function of both the technology (A or B) and the mind of the programmer. And the programmer's mind is a HUGE variable, a function of all sorts of things like education and experience and social influence and genetics.
In the boiling water example, it doesn't make sense to average two measurements from different pressures. Instead, it's better to state the result as a function that takes pressure as input and returns the boiling point as the output. Similarly, I think that for the software experiments, it doesn't make sense to just average the results from different programmers. Instead, a better (although currently infeasible) approach would be to represent the programmer as a vector of traits, and to give a function that takes as input such a vector and returns as output whether A or B is better. The vector would have to include every trait that we determine to be relevant to the software engineering process (that is, whether it affects the code that programmers write), so determining exactly what traits should be included is probably impossible. However, if even a few of the main traits can be isolated, we can start getting results that approximate something meaningful, rather than just being nonsense that looks like a real result.
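To make the "vector of traits" idea slightly more concrete, here's a minimal sketch. The particular traits, names, and the verdict rule below are purely illustrative assumptions on my part, not a claim about which variables actually matter:

```java
// Illustrative only: the traits and the rule below are made-up assumptions,
// not a claim about which variables actually matter in software engineering.
public class TraitModel {
    // A (very incomplete) vector of programmer traits.
    static class ProgrammerTraits {
        final int yearsOfExperience;
        final boolean prefersStaticTyping;

        ProgrammerTraits(int yearsOfExperience, boolean prefersStaticTyping) {
            this.yearsOfExperience = yearsOfExperience;
            this.prefersStaticTyping = prefersStaticTyping;
        }
    }

    // The experimental result is expressed as a function from the trait vector
    // to a verdict, rather than as a single averaged-over-everyone verdict.
    interface Verdict {
        String betterTechnologyFor(ProgrammerTraits traits);
    }

    public static void main(String[] args) {
        // A made-up verdict function, standing in for whatever a (hypothetical)
        // well-controlled experiment would actually produce.
        Verdict verdict = traits ->
                (traits.yearsOfExperience > 5 || traits.prefersStaticTyping) ? "A" : "B";

        System.out.println(verdict.betterTechnologyFor(new ProgrammerTraits(8, false))); // A
        System.out.println(verdict.betterTechnologyFor(new ProgrammerTraits(1, false))); // B
    }
}
```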
[ 0 Comments... ]
I've moved my website from its previous home at stakface.com here to staktrace.com. I've also switched from BlueHost, whose level of service has been declining significantly for a while now, to DreamHost, which seems to be much better. In addition, all pages on this site are now HTTPS. If you try to access any URL on this domain via HTTP instead of HTTPS, you'll get an error page telling you to use an HTTPS URL. As the page explains, auto-redirecting to HTTPS is still vulnerable to certain classes of man-in-the-middle attacks, so I prefer to take the approach where you have to fix your URL manually and learn not to do it again.
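For the curious, the idea boils down to refusing plain-HTTP requests outright instead of redirecting them. This site doesn't actually run on Java, but here's a minimal servlet-filter sketch of the same approach, just to illustrate it:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Refuses plain-HTTP requests with an explanatory error instead of silently
// redirecting, so the user learns to type the https:// URL themselves.
public class HttpsOnlyFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (!request.isSecure()) {
            HttpServletResponse http = (HttpServletResponse) response;
            http.setStatus(HttpServletResponse.SC_FORBIDDEN);
            http.setContentType("text/plain");
            http.getWriter().println("Please use the https:// version of this URL.");
            return;
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}
}
```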
For the most part everything here should work as it did before, but let me know if you run into any problems, either by commenting on this post or via the contact form. If you have any links/bookmarks/RSS readers pointing to the old site, you should update them to point to the new site; just replace "http://stakface.com" with "https://staktrace.com" and leave the rest of the URL the same.
[ 0 Comments... ]