Blog




The design of design, part 3 (2011-01-04 22:26:34)

A few final notes on The Design of Design. In Chapter 10, Brooks talks about the "budgeted resource" (i.e. the resource that is the most constrained) when designing something. One of the things he mentions here is the use of "surrogates" - something used as an approximation for the actual budgeted resource when that resource is hard to measure directly. This is basically the same thing I called an "indicator" in an earlier post; he says the same thing I said, but I like the way he says it better. I also like the word "surrogate" more than "indicator", since its connotations are more in line with the idea it expresses.

In Chapter 11, on constraints, he talks about how adding constraints can help narrow the scope of the project and make it easier to come up with a good design. He specifically cites programming language design as an example - a special-purpose language is easier to design well than a general-purpose language. This reminded me of an article on language-oriented programming that I read a while ago. The article claims that LOP allows the programmer more freedom of expression, which makes sense. Since LOP lets the programmer express their mental model with less loss, it allows for a better design and implementation. (This ties into yesterday's post on mental models.)

Another interesting point in the book is in Chapter 15, where he talks about how disciplines have become increasingly specialized, resulting in a greater separation between design, implementation, and use. As an example, he cites how Henry Ford built his own car, but today no computer engineer can physically make his own chips. I think this same thing happens on a smaller scale for individual products as well. When I first worked at RIM as a co-op in 2003, all the employees had a BlackBerry, and used the devices extensively. The designers and implementers were also users, and the process of dogfooding was one of the reasons the devices worked so well.

Today, all the employees still have and use their devices, but something has changed. Specifically, the user market RIM is targeting is no longer the same, and so the employees are no longer representative of the user population. Dogfooding still helps, but not nearly as much as it used to, since the features the users care about the most are not necessarily the features that get exercised the most internally. This has practical, noticeable consequences. Since RIM's development culture grew up on the concept of using dogfooding to ensure quality, they didn't develop a strong culture for other forms of quality control, such as automated testing. This has resulted in a gradual decline in overall device quality, something that a lot of people have been complaining about lately. The good news is that it's not hard to fix - they just have to rely less on dogfooding and invest in other, more robust methods of quality control (which is something they're working on).

In general, I think this is a problem for most software projects that follow the Bazaar model of software development. The first guideline put forward by Raymond in his essay is that "every good work of software starts by scratching a developer's personal itch". I don't know if that's always true, but I think that as any such software grows, it will reach a point where it has features not really used by the developer. This also results in a "progressive divorce of the designer from ... the user", as Brooks puts it. Therefore projects that grow past this point have to face the same issues of miscommunication that Brooks talks about in this section.

In Chapter 16, Brooks mentions in passing that "complete modularity also has drawbacks ... optimized designs have components that achieve multiple goals." It occurred to me when reading this that there is a distinction between modular goals and modular components. A component that satisfies multiple goals is good, but a goal that is satisfied by multiple components is bad. He seems to define "complete modularity" as a one-to-one mapping between a goal and a component, whereas I would just leave it at components that don't interact with each other. It might just be a semantic definition issue, but it's something to think more about.

Anyway, that's all I have. It's a pretty thought-provoking book, and full of lots of good advice and insight, so if you're interested in designing stuff, I definitely recommend reading it.


The design of design, part 2 (2011-01-04 01:15:14)

This post is about conceptual integrity and mental models. I've mentioned this topic before, but in the years since I've come to appreciate much more just how important it is. Brooks talks about conceptual integrity in Chapter 6 of The Design of Design, in the context of collaboration and teamwork. He says that "the solo designer or artist usually produces works with this integrity subconsciously; he tends to make each microdecision the same way each time he encounters it" and that this results in a distinctive design style and conceptual integrity.

I hadn't really thought about this aspect of it before reading this book, but it makes perfect sense because the best designs are simply expressions of a mental model held by somebody. It starts off when the designer has a complete model that is simple enough to completely hold in her head. In order to do this, a lot of the details need to be omitted from the mental model; the simplified essence of the design is what is imagined. That mental model is then used to generate the design, resolving the details as the design is created. The details are basically the microdecisions that Brooks refers to in the quote above. Each detail is resolved so that it is aligned with the overall goal of the project and the mental model.

Note that being able to hold the complete model in your head is essential for this. If the mental model is not simple enough, then the details will get resolved while only considering a subset of the model, and not the entire project. This results in microdecisions that are not perfectly aligned with the goals of the project, inconsistent style, and finally, a poor design.

In the context of team design, the mental model must be shared among all members of the team in order for the design to maintain a consistent style and remain coherent. I'm pretty sure that this is impossible to do perfectly. With more than two people on a team, the difference in mental models is easily noticeable. Teams of size two are something of a special case - I still think that their mental models are not perfectly consistent, but it's much harder to detect. Brooks has a section in the book titled "Two-Person Teams Are Magical" which describes how the interaction between the members of a two-person team can lead to synchronization of effort and mental models. I think that makes sense, but only for as long as the two people are working together and freely exchanging ideas. As soon as they separate and start thinking about the model apart from one another, their mental models quickly diverge and the coherence becomes harder to maintain.

One of the reasons for this is that transferring a mental model from one person to another is always lossy (at least with current non-telepathic communication channels). If you think about it, this transmission of mental models is something we've been trying to perfect for millennia - the transfer of a mental model from teacher to student is the essence of teaching in any domain. The reason it works better in a two-person team than in a three-person (or larger) team is basically an issue of psychology. If you are brainstorming and trying to explain your ideas to a number of listeners, and one of those listeners understands them before the others do, you will tend to favor communication with that listener over the others. This results in the other listeners being "left behind" as the brainstorming proceeds. In the case of a two-person team, there are no other listeners to get left behind, so this situation doesn't arise. There's probably a whole raft of other human psychology factors that come into play here, but I'm pretty sure that the majority of them favor coherence in two-person teams over larger teams.

As an aside, Brooks also says that "any product ... must nevertheless be conceptually coherent to the single mind of the user." This is another point that I didn't pay much attention to before, but that illustrates exactly why a divide-and-conquer approach doesn't result in user-friendly designs. That is, if you have a team of designers, and each of them handles the design for a separate component of the project rather than sharing the same mental model, then each component will have conceptual integrity within itself, but the project as a whole will lack it. This means a user, who needs to use the entire system, will have to replicate the models from each member of the design team, which is much harder to do.

This last point reminds me of the git user manual, currently the single best piece of documentation I have ever read. Very few other tutorials on git (or any other RCS, or any other system, for that matter) try to help the user build a mental model of what is happening under the covers. They mostly give the user a bunch of commands for specific tasks (e.g. creating a new code branch) and expect them to figure out the rest.

The git manual, on the other hand, has a section on "Understanding history" early on, which allows the user to start building a mental model of what git is doing. The rest of the manual then explains the git commands in terms of that mental model, so that git becomes perfectly predictable and completely intuitive to the user. By "the user" here I really mean "me", since this is what happened when I read the manual. Literally overnight, I went from being a git n00b to being completely unafraid of git, and able to carry out any operation I wanted. If you haven't read the git user manual, and especially if you've never used git before, I strongly suggest you do so. I'm curious to see if it works the same for others as it did for me.
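To give a rough sense of what that mental model looks like, here's a toy sketch of my own (it's an illustration, not code or an example from the manual): history is a directed acyclic graph of commits, each pointing at its parent(s), and a branch is nothing more than a named pointer to one of those commits.

```python
# My own toy illustration of the mental model (not from the git manual):
# history is a DAG of commits, and a branch is just a movable label on a commit.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Commit:
    message: str
    parents: List["Commit"] = field(default_factory=list)  # 0 for the root, 2+ for merges

# A tiny history: two commits on master, then one more on an experimental branch.
root = Commit("initial commit")
second = Commit("add feature", parents=[root])
tip = Commit("experimental work", parents=[second])

# Branches are nothing more than names pointing at commits.
refs = {"master": second, "experiment": tip}

def log(commit: Commit) -> None:
    """Walk back through first parents, like a much-simplified `git log`."""
    while commit is not None:
        print(commit.message)
        commit = commit.parents[0] if commit.parents else None

log(refs["experiment"])
# Checking out, merging, rebasing, etc. are all just manipulations of this graph
# and these pointers, which is why they become predictable once the model is in
# your head.
```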

I'm convinced that Linus Torvalds had the kind of mental model I'm talking about when he designed git, and that is the reason why the design is so coherent. Since he was also the one who wrote the original version of the user manual, it's no surprise that it allows the reader to (approximately) re-create that original mental model and use that to understand git operations. I'm also convinced that the design coherence and simplicity of git, a direct result of the mental model Torvalds had, is the reason git is so widely popular today, despite alternatives that arguably offer more features with no significant disadvantage.


The design of design, part 1 (2011-01-02 20:55:13)

I recently read The Design of Design - a book by Fred Brooks (he of The Mythical Man-Month). It was pretty interesting, and I would recommend it to any designers in the computer industry. Although Brooks tries to abstract the design concepts away from specific domains, a lot of the examples he uses are from software/hardware, so people working in other domains might find it harder to follow. There are also some case studies near the end of the book that look at design projects in other domains, such as home renovations, book writing, etc., but I didn't find the case studies all that useful. Anyway, as I was reading I had some thoughts about stuff he said. This is the first in a series of posts on this topic.

(Warning: this post turned out to be more rambling than I expected, since I didn't really have a clear point to make. But composing this post helped me think about this stuff more clearly, so I'm going to post it anyway instead of just deleting it.)

In Chapter 4, he talks about how, because of imperfect communication and human "sins", we need to create contracts that identify the deliverables and lock down requirements and constraints. In software, the problem is usually that the contracts are created too early, before a full understanding of the project and its design can be reached. This is partly because with software, it's often hard to tell where the design stops and the implementation starts. However, when I think about my own experiences, I've noticed there's usually a point where I know I've hammered out all the unknowns for a particular component - I have a fairly concrete idea of what I need to do, what data structures and algorithms I'm going to use, and the overall architecture of the code that I'm working on. Although I haven't considered all the details, and resolving those details may affect the component architecture I have in mind, it's highly unlikely that those details will cause a change in the overall design of the project. This is the point where I would consider the "design" done.

The problem is that this point occurs pretty late in the project. For one thing, I tend to favor top-down designs and work on components sequentially. Within each component, I would estimate that the "design is done" point occurs after 60%-70% of the total time I spend on the component. So if I have a project with four equal-sized components, with my usual programming style, I would only be finished with all the design work after completely finishing the first three components and finishing ~65% of the work on the fourth component. This would put me at ~91% done for the whole project. Even if I shifted things around and did all the design work for the components first, it would still take me 60%-70% of the total time spent on the project just to complete the design.
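Just to spell out the arithmetic, using exactly the numbers from the paragraph above:

```python
# Restating the arithmetic above: four equal-sized components, worked on
# sequentially, with the "design is done" point hit ~65% of the way through each.
components = 4
design_done_fraction = 0.65

# Design for the whole project is only finished partway through the last component:
project_fraction = (components - 1 + design_done_fraction) / components
print(f"{project_fraction:.0%}")  # prints 91%
```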

Now if the project is something that I'm doing for my own needs, then this isn't a problem. But in industry, the design might be one of the factors used in coming up with a time/cost estimate for the whole project. Taking ~65% of the total time to provide this estimate is a little unreasonable. But if that's how long it takes to do the design, then that's how long it takes, and I can't think of a way around that. The other option for reducing the estimation time is to find ways of coming up with an estimate that don't require a full design of the project. This is what often happens in industry now - estimates are created based on the amount of time it took for previous projects of similar scope. What I'm afraid of is that by using alternate estimation techniques, people might think there's no longer a need to do any design work* at all, and go straight to haphazardly attempting an implementation that will end up poorly designed.

* In this context, I'm using a specific meaning of the word "design" from the book - an interactive process that is used more for helping the customer discover the requirements than for helping the developer determine an implementation strategy. I guess this could largely be called "requirements elicitation" instead of "design work", but the former term doesn't cover everything the latter implies.

