

David Benjamin in conversation with DH, LV and LW. Recorded March 3rd, 2014.

LV: We looked at the history of the word “Parametricism,” coined by Patrik Schumacher. Regardless of its origin, we are interested in knowing what parametric design is for you.

DB: Parametric is an overused term. I sometimes quote my computer science colleague: “that is a ridiculous notion. Every software is parametric.” I have had a similar attitude to yours, where I try to demystify the term, to get beyond it, so that ideas and their unfolding into techniques can be evaluated more precisely. That was the motivation behind a series of events called “Post-Parametric” that I organized along with a friend and Computer Science faculty member, Michael Reed. We had a series of presentations distributed among five or six events aiming to question, broaden, and re-frame computation in design. We wanted to understand what could be possible in the coming five to ten years, skipping the word “parametric” because it’s meaningless at this point.

LW: What is your definition of post-parametric?

DB: It’s just to say, let’s get beyond parametric as a term.

LW: What constitutes a post-parametric operation?

DB: There is no specific operation or tool. We are asking in the series, what are the possibilities of computation in design? Any possibility should be considered in the light of the post-parametric, starting from scratch and thinking more precisely about terms, tools, and techniques. One of the presenters, from IBM’s Watson, proposed that maybe an interesting future would be to feed massive amounts of data into a machine that can be taught to be intelligent in new circumstances. There is no term for that yet, but it is a pretty sophisticated idea that we can debate.

Another Computer Science faculty member, Eitan Grinspun, is working on what he sometimes calls “directed simulation.” We all know that we can take a computer model and digitally simulate its structural performance, the flow of air around it, its environmental impact, etc. Until very recently, those studies required a huge amount of computational resources. Grinspun and others have been working very hard to give users real-time feedback from simulation. If you change the shape of the building slightly, you see the impact it has on the street below. You change it back and see a different impact. If you could see, in real time, the performance ranges of multiple criteria, such as structure, wind, etc., and read those results simultaneously, it could change the way we design things.

DH: This seems to deal with dynamic processes specifically.

What you describe is a system where you adjust something and watch the result, as opposed to a process in which you set up something that grows into a beautiful form that looks dynamic but is perfectly still. What were the origins of this idea of real-time dynamics processing for you as it pertains to architecture? Where else do you see it happening?

DB: It relates to something that I don’t typically think about in the same conversation as parametrics. For a long time, I have been personally interested in what I call “Living Architecture,” bringing architecture to life. This is related to the Living Architecture Lab that I direct here at Columbia. Some of the earliest experiments involved sensors and actuators. Can we use simple versions of electronics and computers to do things like sense air quality and open up a breathing envelope based on the results? It is interesting to me that buildings are already alive in some ways, changeable over time, and adaptive to different people and the environment. Because of recent technological developments in biology, it is now possible to use living organisms as a way to make architecture more alive. This has the potential to change the way we design and think about architecture. Computation can be a way to bridge that. Our understanding of biology is advancing incredibly fast. We know a lot more about biological systems like genomes, humans, and other organisms than we did ten years ago.

Also, it has become increasingly possible to do things like cut and paste DNA in order to change living organisms. Part of the way science is advancing is through computation. If biologists can take this advanced understanding of these complex systems and put it into software, then designers and other non-experts can start using it. These systems are so complex that it’s unlikely that architects and designers, at least in the near future, are going to make progress on actually manipulating biology. However, if it can be encapsulated in a computer model, architects can take advantage of it. In other words, I think there is an interesting potential for what we’ve called “bio-computation,” which could be another future direction of new developments in computation and design.

LW: Computation requires quantification. In your research and your work, have you encountered anything that can’t really be quantified? How do you deal with that?

DB: That’s a great question. The short answer is: of course. I would never want to pretend that everything about design, or architecture, could be quantified, or could be encapsulated in a computer model. The question is, what are some good techniques to deal with that? I’ve had a lot of great experience in exploring that with my students in the past eight years or so. I’ve always made it very clear with my students that it’s important for them to take a position on what they do in a computer workflow, like in the C-BIP workflow, or like in the multi-objective optimization workflow I’ve explored in other studios.

It’s important to understand that those workflows are only accomplishing part of your desires and responsibilities as an architect. There are still some interesting debates and discussion around whether you can take things that are on the surface qualitative and try to compute with them. I wouldn’t want anyone to pretend that you could put everything in the computer. If you did, you would have to recognize that as your position.

One of the speakers at “Post-parametric,” an amazing researcher named Kevin Slavin, gave the famous TED talk about how algorithms are ruling our lives without us quite knowing about it. It’s a similar idea in concept. We need to become more aware of it. What are the assumptions going on that are affecting us? There are thousands of assumptions built into the actual software application, whether it’s Grasshopper, CATIA, or whatever. Someone programmed those, and those have rules and assumptions that are affecting your design. Plus, everyone knows that if you put points and lines into a parametric model and allow the parameters to change, you’re never going to get anything out that’s not points and lines. Who made that decision? You did. You set up the model. There are hundreds of decisions like that in any design. It seems like it is not even worth asking whether there is anything outside of the computer. Of course the computer isn’t doing everything for you. You made hundreds of decisions and the computer application made hundreds of decisions. It’s important to be aware of those decisions and control them.

DH: The computer compresses them. There’s a great many happening at a greater speed.

DB: What I think is important about the idea of parametrics right now comes down to a series of assumptions. What most people mean by parametric is that you have a series of inputs in a 3D model that you can change. You can change the values of those inputs and all of a sudden you get the ability to generate hundreds, or thousands, or tens of thousands, or hundreds of thousands of possible models. Very easily, you can get into a situation where there are literally billions of combinatorial possibilities. That can be interesting, but in itself it’s not necessarily great. Parametric is not about some image of a gradient of variation of an object in a field, which I think is the image everyone has in their heads, which is a problem. Any model can have a lot of variations. It doesn’t have to be a smooth gradient, and it doesn’t even have to be an array. Endless variation for variation’s sake is not that helpful. How can we be smart about using those variations for something that we desire, and ideally to enhance our creativity?
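The combinatorial explosion Benjamin describes is easy to see in miniature. The sketch below uses a hypothetical parametric model with five invented inputs (the names and ranges are illustrative assumptions, not from any real project); the size of the design space is simply the product of the option counts per input, and even this toy model yields tens of millions of variants.

```python
# Hypothetical parametric model: each input is just a range of allowed values.
# All names and ranges here are invented for illustration.
inputs = {
    "floor_count": range(10, 60),             # 50 options
    "bay_width_m": range(3, 12),              # 9 options
    "facade_rotation_deg": range(0, 360, 5),  # 72 options
    "core_offset_m": range(-10, 11),          # 21 options
    "setback_floor": range(0, 50),            # 50 options
}

# The design space is the product of the option counts per input.
combinations = 1
for options in inputs.values():
    combinations *= len(options)

print(combinations)  # 34,020,000 variants from just five inputs
```

Adding a sixth input with even a handful of options multiplies the total again, which is why brute-force enumeration gives way to sampling and optimization in practice.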

Another thing to be concerned about, which I think parametric design is actually being used for with interesting effects, is the ability to generate a lot of possibilities, evaluate them for things like cost, revenue, or other measures of what I call “cold-blooded efficiency,” and then home in on only the things which are helping to achieve some bottom line.

I think that’s a more real, and long-lasting, and scary possibility of parametric design. Once you can generate buildings of combinatorial possibilities and evaluate them for things like rentable floor area and cost of materials, then all of this power of computation is going to be used to make boring buildings, or—possibly worse than boring—buildings whose values don’t resonate with society. One really interesting possibility that has been underexplored is using the exact same tools not to generate patterns, not to generate cold-blooded efficiency, but to discover things that you want that wouldn’t have occurred to you otherwise. It’s basically a way to enhance your creativity and to show you something new.

LW: Right, it’s more of a tool, rather than a style or any formal representation that signifies complexity.

DH: It seems like the architect becomes an assessor of values, or translator of values. Is the role of the architect to carve out or identify values, and then produce something new that is unexpected, and then assess it against those values? Do you think that’s where architecture circles back and regains agency from the computer as a monolithic terror that takes away creativity? What do you assess? How do you create an aesthetic and moral position?

DB: The same framework used by the “cold-blooded efficiency” model could be used by a more creative model.

If you were to say, “Okay, I don’t just want to value things like revenue and efficiency. I want to value public space, environmental impact, and a new aesthetic,” you could pick those and try to measure them as you’re exploring these combinatorial possibilities. I think that it can get really interesting if the parametric model itself can be used to frame a public discussion and public debate about values. We could get one version of Hudson Yards that has a high public space score but a low environmental score. We could get another version that’s vice versa, and we should debate about them. Which one is more important? Is it important to have a medium score on both? When do we allow one score to go higher? As opposed to taking away our ability to discuss, to debate, to have agency, to exercise judgment, these models almost demand it. I love this, and I teach this all the time with my studios that use optimization software in the framework of numbers. As soon as you have two objectives or more, there is not a single result that’s mathematically best. You have a whole set of results. You can say some are better than others, but even the best the computer can ever do is give you a set of designs. You have to choose between them, according to your judgment and values. It’s not like the numbers are a totally different framework. You still have to debate and decide. It’s not like there is one world of automated objectivity and one world of just sketching on paper.
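The point that two or more objectives never yield a single “mathematically best” design is the standard notion of Pareto optimality in multi-objective optimization. The sketch below, a minimal illustration with invented designs and scores (the two objectives echo Benjamin’s public-space and environmental example), filters a randomly scored candidate set down to its Pareto front: the designs no other design beats on both objectives at once. Choosing among those survivors is exactly the judgment call the computer cannot make.

```python
import random

random.seed(0)

# Hypothetical candidates: each design scored on two objectives (higher is better).
# Names and scores are invented for illustration, not from any real project.
designs = [
    {"name": f"option_{i}",
     "public_space": random.uniform(0, 10),
     "environment": random.uniform(0, 10)}
    for i in range(200)
]

def dominates(a, b):
    """a dominates b if a is at least as good on both objectives and better on one."""
    return (a["public_space"] >= b["public_space"]
            and a["environment"] >= b["environment"]
            and (a["public_space"] > b["public_space"]
                 or a["environment"] > b["environment"]))

# The Pareto front: designs that no other design dominates. The computer can
# narrow the field this far; choosing among these is a matter of values.
pareto = [d for d in designs if not any(dominates(other, d) for other in designs)]

for d in sorted(pareto, key=lambda d: d["public_space"]):
    print(d["name"], round(d["public_space"], 1), round(d["environment"], 1))
```

Every design on the front trades one score against the other: moving along it raises the public-space score only by lowering the environmental one, which is precisely the debate the model frames rather than settles.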

LV: Right now, we know that you have been teaching experimental studios, like the bio-computation studios or C-BIP.

Do you think there are possibilities for a more radical and alternative pedagogical framework?

DB: I think there are many possible ways. Who knows? In five years we may have some better pedagogical frameworks. I do think that it’s a good time to question the idea of the single visionary as the studio critic who trains twelve solitary geniuses to work entirely on their own and pretend that they’re inventing their whole world. It seems like a good time to do that for a number of reasons. First, the technology allows for the transfer of computer models with embedded intelligence. That enables a transfer of knowledge that wasn’t as easy before. In C-BIP, using a building element from a previous studio or from another person in the studio is a transfer of knowledge that is of a slightly different type than would be possible before this kind of technology. For a variety of reasons this generation of students, including you guys, is ready for that, almost demanding that. You’re leading the way more than the faculty right now. Even if there were geniuses among us—which there aren’t—they wouldn’t possibly be able to perform every role necessary to create a building the way they could have fifty years ago. It’s fitting because it adds immediate hooks into the profession right now, which is more collaborative and interdisciplinary than ever. It also has hooks into academic traditions, because schools are redefining themselves right now. It’s a good time for questioning authorship and collaboration more.

[Image: Google image search, “simulation”]