CAAD futures Digital Proceedings 1991

31. Design Tools: Future Design Environments for Visualizing Building Performance

Murray Milne
Graduate School of Architecture and Urban Planning
University of California
Los Angeles, California, 90024-1467, USA

In the future of Computer Aided Architectural Design (CAAD), architects clearly need more than just computer aided design and drafting systems (CAD). Unquestionably CAD systems continue to become increasingly powerful, but there is more to designing a good building than its three-dimensional existence, especially in the eyes of all the non-architects of the world: users, owners, contractors, regulators, environmentalists.... The ultimate measure of a building's quality has something to do with how well it behaves over time. Predictions about its performance have many different dimensions: how much it costs to build, to operate, and to demolish; how comfortable it is; how effectively people can perform their functions in it; how much energy it uses or wastes. Every year dozens of building performance simulation programs are being written that can predict performance over time along any of these dimensions. That is why the need for both CAD systems and performance predictors can be taken for granted, and why instead it may be more interesting to speculate about the need for 'design tools'. A design tool can be defined as a piece of software that is easy and natural for architects to use, that easily accommodates three-dimensional representations of the building, and that predicts something useful about the building's performance. There are at least five different components of design tools that will be needed for the design environment of the future.

Introduction

It is easy to say that computers will profoundly change the way architects design buildings. The tough question is to figure out exactly what that future design environment will be like. These environments will be powered, at least in part, by a never-ending series of CAD system updates and new releases, each one more wonderful than the last at describing buildings in three-space. All of these computerized drafting systems are the direct linear descendants of the tradition of representing buildings that began in ancient times with papyrus, then in modern times with linen and India ink. Computers make it possible, however, to do something that is profoundly different from what architects could do before. Like having a Magic Lamp with a Genie inside, we have the ability to bring a design to life, run it forward in time, and watch how it behaves - not just what it looks like, but how warm or cold it feels, how bright or dark it looks, what it sounds like, and how well it accommodates human activities. This opens a new world of opportunities for future design environments that we are just beginning to explore.

However, there is a cost to be paid for letting that Genie escape from the lamp. Architects in the future will undoubtedly face a perpetual series of increasingly stringent performance criteria, in large part because computers make it possible to predict performance with increasing precision. A useful analogy might be the continually increasing sophistication of medical diagnosis and treatment.
Once the resource is available, patients insist on nothing less, as will our clients for design services in the future. The only difference is that the building sciences lag far behind most other areas of technological development. Another example might be the ever-tightening standards imposed on the automobile manufacturers for fuel efficiency and emissions reduction. Here legislation drives technology. The same is already true in architecture, where new building codes, especially in the areas of life safety, energy, and the environment, have "made a market" for all sorts of technological innovations that otherwise would languish because they were judged non-economic. Certainly in the future this trend will only continue, given the ever-expanding aspirations for public safety and concern for the environment, global warming, and the consumption of energy. The fact that software is already available which can predict how our buildings will perform makes it inevitable that those criteria will be imposed sooner or later. In short, it will happen because we can do it. Many architects will see all this as a threat to their design freedom, but others will see it as an opportunity to increase our profession's relevance and value to society, and to expand our own intellectual horizons.

CAD systems, like AutoCad for example, are becoming increasingly sophisticated at manipulating shapes, forms, colors, and textures in three-space. While they can represent reflected ceiling plans and wall details, they have no direct way to manipulate the luminous, thermal, or auditory environment, or even structural performance or life safety today. We need completely separate simulation models to do that. Unfortunately we have no obvious way to link all of them together or to the CAD system. The interface of each piece of software is so idiosyncratic that no one (not even an architect) can master them all.

It can be argued that the process of designing a building involves the simultaneous manipulation of two distinctly different models: the geometric representation of the building in three-space, and a time-line representation of each of the different measures of its performance, such as its energy consumption, visual comfort, operating cost, etc. In the olden days before computers, the architect's task was to externalize the former by hand, and to create fantasies about the latter.

Design Tools

What exactly is a design tool? It can be defined as a piece of computer software that has an easy-to-use interface, that allows the manipulation of the building's three-dimensional representation, and that shows the architect something useful about the performance of the building.

[...]

2. The Napkin: Quick Sketch as a Form of Non-verbal Inputs

I wonder how many buildings have begun with a sketch on a paper napkin or the back of an envelope. The architect's earliest synthesis of relational and spatial issues needs to get instantly recorded and considered whole. Later versions become more literal and comprehensible, but the first sketch is just some squiggles and scratches that are incomprehensible to anyone who was not present at the conception. At this point the "building" is contradictory, ambiguous, and incomplete, certainly not something you would yet want to have to input into a solids modeler. Imagine two or three architects around a drafting table, trying to solve a problem.
The only thing an outsider would see is a series of incomprehensible sketches accompanied by grunting and head nodding; then suddenly one of them triumphantly draws another equally incomprehensible squiggle, to which the others respond with relief, louder grunts, affirmative head nods, and now even spoken words will be heard, like "yes...right...that's it." Here is classic right-brain activity, where information is being generated and shared without requiring intervening verbalization. Architects are especially skilled at this kind of non-verbal communication, especially as it relates to building form. It is one of our unique skills and one of the things that makes our profession so much fun. The kind of resonance and synchrony architects can feel with their co-designers must, in the future, be part of their interactions with computers.

Something quite wonderful happens when the architect becomes completely immersed in the right-brain non-verbal world; there is magic in the speed and efficiency with which spatial concepts are generated and manipulated. Being forced to return to the verbal left-brain world is a jarring and disconcerting experience. There is information in the act of drawing that computers could grasp: the weight of a line, the speed with which it is drawn, the sequence in which things are assembled. Imagine a computer program that knew if an X through a line meant to cut it in half here, or delete the whole line, or rotate everything about this point. Instead of a static drawing on a piece of paper, imagine a dynamic electronic image that could be stretched, rotated, flipped over, selectively erased, or de-constructed and re-formed in new ways. The real test of such a system would be if two architects who did not speak the same language could work together on the same design. In fact, at UCLA we developed the prototype for such a system that can understand the architect's gestures and modify the drawing accordingly (and it is now being marketed in Japan).[4]

Another problem is how to assign non-spatial attributes without stopping to use words and numbers. For every piece of the building the computer needs to know its R-value, reflectivity, thermal mass, weight, cost, presence of a vapor barrier, order of assembly, etc. Imagine a set of paint-pots that the architect could dip into and flood different surfaces with something like off-white stucco, standing seam copper roof, or terra-cotta Mexican floor tiles.[5] Imagine a box full of windows and doors that could be spotted onto the facade with a touch of the pen. The computer could of course render all of this in a spectacularly photo-realistic way, but more important is that now secretly attached to every surface is all the rest of the data needed to describe the building.

Most of the current CAD systems, although quite powerful, still do not have these capabilities. People who want to use them require a great deal of training (which is a totally non-productive activity). Almost every action requires at least one translation between spatial and verbal thinking, which imposes an unnecessary cognitive load with its attendant inefficiencies and risks of error. Today's CAD systems virtually ignore non-spatial data, and in the end still only produce a frozen snapshot, having no way to bring the building to life and watch it run forward in time.
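To make the paint-pot idea concrete, here is a minimal sketch, in a modern scripting language, of how non-spatial attributes might be "secretly attached" to every surface. All names, fields, and values below are hypothetical illustrations, not anything taken from the systems described in this paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PaintPot:
    """A bundle of non-spatial attributes; fields and values are illustrative only."""
    name: str
    r_value: float             # thermal resistance (assumed units)
    reflectivity: float        # fraction of incident light reflected, 0.0-1.0
    thermal_mass: float        # heat capacity per unit area (assumed units)
    cost_per_unit_area: float
    has_vapor_barrier: bool = False

@dataclass
class Surface:
    """A piece of the building: its three-space geometry plus the attached data."""
    vertices: List[Tuple[float, float, float]]
    finish: Optional[PaintPot] = None

def flood(surface: Surface, pot: PaintPot) -> None:
    """'Dip into' a paint-pot and flood the surface with it."""
    surface.finish = pot

# Usage: paint a south wall with a hypothetical off-white stucco pot.
stucco = PaintPot("off-white stucco", r_value=0.4, reflectivity=0.7,
                  thermal_mass=60.0, cost_per_unit_area=35.0)
south_wall = Surface(vertices=[(0, 0, 0), (6, 0, 0), (6, 0, 3), (0, 0, 3)])
flood(south_wall, stucco)
print(south_wall.finish.name, south_wall.finish.r_value)
```

Once every surface carries data like this, a performance model could read it directly instead of asking the architect to stop and re-enter it in words and numbers.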
Thus the Design Environment of the Future needs some type of intuitively obvious way for an architect to communicate with a computer about how to describe three-dimensional building forms and see how they perform.

3. Running the Clock Ahead: Building Performance Models

Predicting the way a building might perform in the future is a fine art. Clearly there are many different ways to evaluate the success of a design: cost, energy, fame, number of design awards, etc. Obviously the degree of confidence of each of these different measures also varies widely. Separate disciplines have sprung up around each of these building performance modeling areas. The oldest group is interested in simulating the structural behavior of buildings. But the fastest growing group is energy modelers; today there are dozens of different theoretical approaches and hundreds of different programs, all of which predict some aspect of how energy is used to heat and cool the building.

Three distinctions are worth pointing out: phase, focus, and speed. In each phase of the design process some kinds of energy models are appropriate while others are not. For instance, at the very beginning of the design process a model like BLAST or DOE-2 is useless; it requires the detailed input of every single building element, it takes a long time to execute, and it produces stacks of tabular data. Instead what is needed is a model that works with a minimal "fuzzy" description of the building, executes in less time than it takes the architect's mind to wander (probably about 15 seconds), and presents results in a way that can be instantly grasped by right-brain thinkers.[6] As the architect's design becomes more focused, the model must also incorporate the more detailed representation and produce increasingly precise results. At every phase in this process, the design tool must show the architect if the latest decisions are moving the project in the right direction. Large detailed energy models come into their own toward the end of the design development phase, when the building description is essentially complete, and when there is sufficient time to run a series of parametric studies to 'fine-tune' details of the design.

Daylight modeling of buildings has recently become a topic of considerable interest, but here too the issues of phase, focus, and speed apply. At the phase of the process when the design of the facade is being articulated, a daylighting design tool can help answer questions about how such things as the size and placement of windows, and the dimensions of fins, overhangs, sills, and interior lightshelves, affect light distribution throughout interior spaces. When the details of the electric lighting are being integrated, more highly focused parametric performance studies can be run.[7] Computational techniques are known which will make it possible in the future to evaluate glare and visual comfort. The final task for these models is code compliance.[8] But because building laws and energy codes test against minimum standards, it is reasonable to expect that a building designed with good design tools should easily exceed these minimum standards.

4. Data-land: Envisioning Building Performance

Sitting face to face at every terminal are two powerful information processors, a computer and an architect.
Yet everything exchanged between them must pass through slow, low-resolution devices that choke off precise complex communication.[9] The computer may 'know' how the building will perform, but the most challenging task is to devise effective ways to share the computer's knowledge with the human.

At UCLA we have been working on ways to display building performance over time. We have developed a 3-D plot that shows the state of the variable for every hour of the day for every month of the year.[10] The resulting shapes are easy to grasp at the highest level, and at the same time are rich with information at the most detailed level. The beauty and power of this approach is that it communicates in a direct right-brain mode, allowing the architect to pick up incredibly subtle distinctions that would otherwise be lost in a page full of numbers. Tufte defines information as the recognition of differences that make a difference. We have used this technique to display all kinds of different things about the building's performance: heat flow, illumination levels, cost of energy, ventilation rates, etc.

There are of course other ways to represent the performance of a building over time. For example, one could compute a kind of movie or video, and watch the way natural light moves through the room.[11] But architects might also like to know whether there is adequate light to perform a specific task, whether there are brightness ratios that would cause direct or reflected glare for any user, and what the probability is that users will find the space visually comfortable.[12] Imagine that the movie now shows a false-color representation of where and when any of these problems occur in the occupant's visual field: a checkerboard pattern shows where brightness ratios are too high, a splotch of purple shows the source of reflected glare.

5. Transformers: Data Conversion

We all know myths about how wonderful the world becomes when we simply transform our problems into another domain. A popular Christmas gift recently was a stuffed frog. When you reached inside his mouth and pulled him inside-out he became (you guessed it) THE HANDSOME PRINCE. Here a simple and elegant translation from one data structure to another is a solution to the problem - unless you happened to be a lady frog.

In the real world of CAD, however, it is hard to find very many successful examples. We have a few examples of going from frogs to toads (DXF to TIF for instance), but getting all the way to a prince is another matter. Imagine how delighted energy consultants would be if buildings could be instantly translated from AutoCad to DOE-2 (DXF to BDL).[13] Then again maybe they wouldn't be delighted, because the possession of this little bit of secret knowledge is one of the reasons architects need energy consultants, but the lack of it is also one of the reasons why so few architects ever do an energy analysis in the first place. Would buildings be better if architects could immediately flip from one domain into the other, instantly seeing an energy analysis, a cost comparison, an illumination level plot?

Thus another thing that a design tool must be able to do is not only to let the architect talk to the CAD system about the building, but also to let the performance model query the CAD system about the building. The CAD system knows a lot of the information that the performance model needs, but unfortunately not in the format that it needs.
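As a sketch of the kind of "transformer" this implies (not the actual DXF-to-BDL translation, whose formats are far richer), the toy converter below reads a hypothetical wall list exported from a CAD model and rewrites it in the flat keyword style a performance model might expect. Every format and field name here is invented for illustration.

```python
import csv
import io

def cad_walls_to_model_input(cad_csv: str) -> str:
    """Translate a hypothetical CAD export (a CSV of walls) into a
    hypothetical keyword-style input deck for an energy model."""
    deck = []
    for row in csv.DictReader(io.StringIO(cad_csv)):
        # The CAD side stores dimensions; the model side wants derived areas.
        area = float(row["width_m"]) * float(row["height_m"])
        deck.append(
            'WALL "{name}" AZIMUTH={az} AREA={area:.1f} CONSTRUCTION="{cons}"'.format(
                name=row["name"], az=row["azimuth_deg"],
                area=area, cons=row["construction"])
        )
    return "\n".join(deck)

# Usage with a made-up two-wall export:
cad_export = """name,width_m,height_m,azimuth_deg,construction
south-wall,6.0,3.0,180,stucco-frame
west-wall,4.0,3.0,270,stucco-frame"""

print(cad_walls_to_model_input(cad_export))
```

A real translator would of course also have to carry along the non-spatial attributes attached to each surface, not just the geometry.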
It is safe to assume that every CAD system and every performance model has its own different way to [...]

Notes

5. At the University of Oregon, Charlie Brown suggested the paint-pot metaphor as part of a Macintosh-based energy modeling system he was developing called Energy Scheming.

6. SOLAR-5 is being developed at UCLA to address these exact specifications of phase, focus, and speed.

7. A design tool called DAYLIT was developed at UCLA to help architects develop building facades by showing them the distribution of light through their space from any combination of windows, skylights, or clerestories. It has the added power that it can show how the building performs for all 24 hours of the day and all 12 months of the year. In addition, automatic or manually controlled lights can be added in three different zones of the room, and the resulting savings in building electric energy consumption can be shown graphically in the same way. This work was supported by a grant from Southern California Edison.

8. The California Energy Commission performs a valuable service for architects in the state by making available, at almost no cost, microcomputer programs for evaluating whether a building complies with the energy code: MICRO-CHECK, CAL-RES, SCM, etc. The first versions of these programs required a huge amount of input data but gave back only one piece of output data: "complies" or "fails". More recent versions, however, are more like design tools, identifying parts of the building that are most problematic, allowing multiple alternatives to be evaluated, simplifying the input process, and producing graphic output.

9. Edward Tufte has just published an exquisite book entitled Envisioning Information (Graphics Press), which he designed as a "celebration of the escape from flatland". He presents wonderful examples of techniques for representing complex data in elegant and easy-to-grasp ways, all of which will particularly appeal to right-brain thinkers. He describes a multitude of techniques that he loosely collects under headings like Micro/Macro Readings, Layering and Separation, Small Multiples (in which he discusses the attributes of three-dimensional plots like those produced by SOLAR-5 and DAYLIT), Color and Information, and Narratives of Space and Time. He is correct when he suggests that his book could serve as a catalog for the collection in the (hypothetical) Museum of Cognitive Art; it is an intellectual and aesthetic delight.

10. The three-dimensional plots produced by SOLAR-5 have proven to be exceptionally powerful in communicating complex data. A simple metaphor is helpful: the plot of a good passive solar building is saddle-shaped (maximum heat gain at mid-day in winter), while a bad passive building is a heat mountain (peak gain at mid-day in summer). When students see their first SOLAR-5 plot, you sometimes see them visibly stiffen and recoil, but within minutes they have grasped the concept and have become intellectually intrigued. Then you see them leaning closer to the terminal, gesturing knowingly, and speculating whether design changes like rotating the building a few degrees will reduce a bump here and fill in a valley there. Finally, when you see the CRT screen covered with fingerprints, you can be sure that they have fully internalized the concept.

11. The first design tool we developed at UCLA was SOLAR-2.
It shows a kind of time-lapse plot of the sequence of sunlight penetration patterns as they move through the room for any month of the year. The designer can then change the window shape, fins, overhang, etc., and immediately see the new set of annual patterns. At the same time, in the background, it is calculating things like radiation through the glazing, percentage of window exposed to full sun, etc. Another interesting graphic that this program can generate is an axonometric view of the room, which, when taken from the point of view of the sun, is particularly useful to the designer because, for example, if full shading is needed it shows exactly which piece of glass is exposed, or if maximum solar gain is required it shows if any part of a fin or overhang is obstructing the window. The current micro-computer version of SOLAR-2 was written by Rong Sheu, and is described in his thesis of 1986.

12. Currently there is a good deal of interest in programs that generate photo-realistic images, calculating and displaying the brightness of every surface in the room. The more complex programs, like RADIANCE developed by Lawrence Berkeley Laboratory, require something like a weekend on a computer to generate an image. The SUPERLITE program, also by L.B.L., uses a more pragmatic approach of dividing all surfaces up into a relatively large grid, then calculating the radiant exchanges only at the centroid of each grid cell. At UCLA we are currently in the midst of a joint research project with USC to develop a "user friendly" front end for SUPERLITE.

13. In fact at UCLA we have a number of systems that can make some impressive transformations between very different data structures. In 1986 Den Won Lin built an AutoLisp extension on AutoCad that would automatically generate a scheme description for SOLAR-5. If the designer follows certain conventions, like using reserved layers, points defined counter-clockwise, and figures that must be enclosed, then by simply hitting "energy analysis" on the pull-down menu a complete SOLAR-5 run would be executed and the results instantly displayed. Another thesis just completed by Joy Chen makes the conversion from AutoCad to SUPERLITE. It requires only that the perimeter of the desired space be identified with a special command; then all the three-space data would be written in the current format for SUPERLITE's .IN file. Upadi Yuliatmo is currently working under the joint USC/UCLA project to develop a translator between a generalized 3-D CAD system and the SUPERLITE .IN file.

Acknowledgements

All of the graduate students mentioned in this paper worked as Research Assistants with financial support from the UCLA Academic Senate, The University Energy Research Group, and the U.S. Department of Energy. In many cases they also wrote their Master's theses on this work. SOLAR-2 was developed in cooperation with Robin Liggett at UCLA, and DAYLIT and the SUPERLITE input interface were developed in cooperation with Marc Schiler of USC. The author's appreciation to all is gratefully acknowledged.