Building Analysis: Work Smart, Not Hard

The most efficient approach to any form of work is to minimise your own effort whilst maximising the benefit and impact that flow from it. This should be just as true of simulation and analysis work. This article discusses ways you can approach a project to achieve this, presenting a simple example to illustrate the idea in practice.

The Problem

Building simulation and analysis always takes time, be it in the construction of the model or the laborious assignment of properties and parameters. To have to repeat the process several times on different models as the design progresses in both sophistication and size can become a serious burden. However, if you can resolve the analysis or simulation problem into something that can be used as a simple test in and of itself, then your hard work up front can make for a much easier life later on - not only for you but for other members of the design team.

A Simple Example

Take as an example a complex right-to-light issue. Imagine a site with hundreds of windows in the facades of surrounding buildings, each of which is likely to have its daylight availability impacted by any new development. Devising a method to check that a particular building shape complies with the allowed levels of change is work enough, let alone having to repeat this many times as the designers continually refine and develop the proposed building model.

Figure 1 - Example where any new development on blue site will affect daylight availability on a large number of windows on adjacent buildings.

The aim here could be to reduce this analysis to something simple that the designers can use themselves - rather than you constantly receiving the latest model, converting it, testing it, tracking down all the problems and sending back an updated report. Instead, let the designers run a simple test themselves as often as they want until they finalise the design. Obviously this suggestion sounds good, but how exactly do you do it?

The key is in a detailed analysis of both the nature of the problem and the parameters/criteria that influence it. In this case it is the design team's desire to know the geometric limits within which they can work and the local right-to-light regulations that place constraints on the development envelope.

The Solution

Much of the right-to-light legislation in the UK is based on guidelines produced by the Building Research Establishment [1]. In these guidelines, compliance is approached through a series of steps, each determining if it is necessary to progress to the next. In the end, however, any new development must show that the daylight available to each window in adjacent buildings is either above a prescribed threshold or has not been reduced below 80% of its original value. As a straightforward statement of performance (and therefore compliance), this allows a test to be set up which can be applied to any design proposal and either passed or failed.
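This two-part pass/fail test can be sketched in a few lines of Python. The 27% figure used as the absolute VSC threshold here is an assumption based on commonly quoted BRE guidance, not a value taken from this article:

```python
def window_complies(current_vsc, reference_vsc, min_vsc=27.0, min_ratio=0.8):
    """A window passes if its vertical sky component (VSC) either remains
    above an absolute threshold or retains at least 80% of its value
    before the development. The 27% default is an assumption."""
    return current_vsc >= min_vsc or current_vsc >= min_ratio * reference_vsc
```

Because the test is a simple boolean, it can be applied unchanged to every window around the site on every iteration.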

Thus, if you look at the problem from a wider perspective, there must be some sort of shape based on the site volume that is right on the very edge of compliance – any higher at any point and at least one of the windows will fail. If you can accurately determine this maximally compliant envelope, you can simply provide it to the designers as a small geometric model which they can overlay on their revised design proposal.

This way, the complex task of validating right-to-light issues becomes a simple visual check, done as often as required by any member of the design team. Any part of the building that projects beyond the envelope is immediately visible, and the envelope is something tangible that can be quickly explained to anyone else, including the client and building control officers.

An Explanation

Calculating this optimum shape is not something you can do easily yourself. However, if you think of it as a large series of very simple tests that a computer can do - many thousands of times over and over if necessary - then it becomes more manageable and, with some serious computer time, even solvable. Thus, if we frame the problem appropriately, we should be able to turn it into a relatively simple iterative calculation.

ECOTECT can already calculate the shading mask for any surface in a model. If we use the shading mask to determine the Vertical Sky Component (VSC) for each window, then we can use this as the metric to determine the relative amount of change per window over each iteration.

Figure 2 - An example of a shading mask showing the calculated vertical sky component (VSC) in the bottom right corner.

The important problem then becomes devising a way the computer can generate and modify the geometric volume of the site at each iteration, based on the results of the previous analysis. There are many possible ways of doing this; a relatively straightforward one is to divide the site into a series of smaller sections - in this case a grid of individual squares that can be extruded at each iteration into columns.

Figure 3 - Development area of the site divided into individual grid squares.

In order to generate a compliant development envelope, the buildable area of the site is first established. In this particular case, only a part of the site is available for the new building. This area is then mapped out over the site and divided into a series of small grid sections, as shown in Figure 3.
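A minimal sketch of this gridding step, assuming a rectangular buildable area and a simple dictionary per section (the field names and 5m cell size are illustrative, not from the article):

```python
def grid_sections(x0, y0, width, depth, cell=5.0):
    """Divide a rectangular buildable area into square plan sections,
    each carrying its own extrusion height and height increment
    (as in Figure 3). All dimensions are in metres."""
    return [{'centre': (x0 + (i + 0.5) * cell, y0 + (j + 0.5) * cell),
             'height': 0.0,
             'increment': 1.0}
            for i in range(int(width // cell))
            for j in range(int(depth // cell))]
```

A real site boundary would of course be irregular, so in practice only the cells falling inside the buildable area would be kept.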

The height of each of these sections can then be independently controlled by an analysis script. At the start of the iteration process, each grid section is assigned a starting height and a positive increment value. On each iteration, the VSC for each window is calculated and compared with its reference value. If the calculated value falls below 80% of that reference, the closest grid section is found based on its geometric distance from the centre point of the window.
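Finding that closest section is a straightforward nearest-point search. A sketch, using the same illustrative data layout as above:

```python
import math

def closest_section(window_centre, sections):
    """Return the grid section nearest to a failing window, measured
    as plan distance from the window's centre point. Field names
    are illustrative, not ECOTECT API calls."""
    return min(sections,
               key=lambda s: math.dist(window_centre, s['centre']))
```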

The height increment for this section is then divided by negative two (-2.0) - this halves the increment of the segment and reverses its direction. This is important as the window has actually fallen below the 80% threshold so the section height must be reduced.

If the increment value of the closest grid section is already negative, then the next closest section with a non-negative increment is used instead. Conversely, if the calculated value for a window rises back above 80% on the next iteration, having been below it on the previous one, the closest negative-increment section is itself halved and reversed.

The process is judged to have been resolved when the increment values of all grid sections fall below a given threshold - in this particular case 100mm. The resulting compliant development envelope is shown in Figure 4.
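Putting the pieces together, the whole iteration can be sketched as a short loop. The `vsc_for` callback stands in for ECOTECT's shading-mask VSC calculation, and all field names are illustrative assumptions; `tol` is the 100mm stopping threshold expressed in metres:

```python
import math

def refine_envelope(sections, windows, vsc_for, ratio=0.8, tol=0.1):
    """Iterate until every section's height increment falls below
    `tol`. `vsc_for(window, sections)` is a stand-in for the real
    shading-mask VSC calculation described in the article."""
    while any(abs(s['increment']) >= tol for s in sections):
        for s in sections:
            s['height'] += s['increment']       # extrude or retract
        for w in windows:
            if vsc_for(w, sections) < ratio * w['reference_vsc']:
                # prefer the nearest section still moving upward
                cands = ([s for s in sections if s['increment'] >= 0.0]
                         or sections)
                hit = min(cands,
                          key=lambda s: math.dist(w['centre'], s['centre']))
                hit['increment'] /= -2.0        # halve and reverse
    return sections
```

Because each failing check halves an increment, the increments shrink geometrically and the loop converges, provided the growing sections do eventually cause windows to fail.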

Figure 4 - Individual grid squares extruded to their maximum compliant heights.

In the initial development of this system it was not uncommon for individual sections to be reversed and then 'forgotten' once the window that caused the reversal climbed back above its 80% threshold. This was because a window could remain below 80% for several iterations, reversing a different grid section each time. Rather than attempt to store every reversed section for each window, a limit of five consecutive iterations with a negative increment was imposed, after which the section reverted to a positive increment. Whilst this increased the total number of iterations required to resolve the envelope (by approximately 9% in this example), it greatly simplified the scripting task.
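That five-iteration limit can be expressed as a small per-section rule, called once per iteration. The counter field is an assumed addition to each section record:

```python
def limit_negative_run(section, limit=5):
    """Revert a section to a positive increment after `limit`
    consecutive iterations spent negative, so that reversed sections
    are not left 'forgotten' below their compliant height."""
    if section['increment'] < 0.0:
        section['neg_run'] = section.get('neg_run', 0) + 1
        if section['neg_run'] >= limit:
            section['increment'] = abs(section['increment'])
            section['neg_run'] = 0
    else:
        section['neg_run'] = 0
```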

Such a system is flexible enough to accommodate any number of grid sections over any site layout with no limitation on the number of potentially affected windows and the complexity of the surrounding site.

Weaknesses

The major weakness of this solution is that it really isn't a simple process. It requires insight into both the nature of each analysis problem and the diverse criteria involved to solve it. It also requires the development of bespoke scripts to manipulate model geometry, which are often very specific to each site and/or situation.

Our research aim here is to identify any areas of commonality between different problem types so that we can provide more generic scripts and, in the future, maybe even built-in wizards to guide you through such problems. However, this is quite a way off yet.

Conclusion

This article is a summary/extract of two published research papers by the author that looked at generative solutions to tightly defined design problems [3] and building regulations compliance testing [2]. The aim of this research work has been to involve performance simulation and optimisation techniques much earlier in the design process to guide development of the final built form. This means devising mechanisms by which useful simulation results can be derived from relatively incomplete design models and then used to generate or modify the building geometry to improve its performance.

If you are faced with a complex analysis problem, you should always try to step back and look at it from a wider perspective. Your real aim is to resolve such problems into much simpler tests that can be applied quickly and easily, preferably by any member of the design team, not just you. Whilst this at first may sound laborious, simple repetitive or iterative tests are exactly what computers are good at.

Of most significance here is that designers can work equally well with both objective (quantifiable) and subjective (unquantifiable) constraints. In fact, at the earliest stages of design it is only really possible to work with subjective issues as there is usually insufficient hard information to accurately calculate anything objectively. Computer systems tend to be of little use in tasks that involve subjective or unquantifiable parameters, but excel at objective tasks with clearly defined and quantifiable parameters.

Thus, the purpose of this article is to propose the best compromise. Computational analysis and simulation can make a significant contribution at the very earliest stages of design by generating optimal solutions to very focused and tightly defined problems. It is no problem if the designer doesn't like a generated solution, because that's exactly what designers are good at - assimilating a wide range of conflicting information and making value judgements based on many thousands of criteria at once. Each computer analysis simply fills in some knowledge gaps, adding to the designer's stockpile of quantitative information.

References

[1] Littlefair, P., 1991. Site layout planning for daylight and sunlight: a guide to good practice, Building Research Establishment Report 209.

[2] Marsh, A.J., 2005. A Computational Approach to Regulatory Compliance, Building Simulation 2005, Ninth International IBPSA Conference, Montreal, Canada.

[3] Marsh, A.J., Haghparast, F., 2004. The Application of Computer-Optimised Solutions to Tightly Defined Design Problems, Passive and Low Energy Architecture Conference (PLEA 2004), Eindhoven, Netherlands.
