How to use AI in architectural practice: Case study with Wardle

Melbourne-based practice Wardle began using AI tools in its design and visualization processes before generative AI (GAI) became widely known. These tools encompass parametric techniques for space arrangement and facade generation, where the boundary between parametric techniques and AI is somewhat blurred because machine learning is involved at a basic level. Wardle has used a wide range of AI tools to support visualization processes, including image upscaling, AI post-processing, content-aware relighting and texture-map generation.

Roland Snooks and Gwyllim Jahn: How is GAI different to other forms of concept-generation practices?

James Loder: We see GAI as a tool to be used in support of conceptual development, not as a comparative method. We find it is most effective when we have a clear direction and can provide specific prompting to produce images that complement our conceptual narrative or a particular design decision. Using AI image generation to produce focused, curated images in lieu of searching for relevant precedent imagery is a much faster and more targeted way of supporting an idea. The value of these images lies in their capacity to be evocative and compelling in unexpected ways, but their purpose and intent are almost always predetermined by the traditional practice methodology for conceptual development.

Wardle has found that AI-generated images can be valuable in communicating an idea to a client during the early phase of conceptual development.

Image: Wardle/Midjourney

RS and GJ: What are the unexpected qualities or characteristics that have arisen in this process?

JL: Utilizing sketches, diagrams, renders or other graphic content as visual prompts often produces counterintuitive outcomes that, through their realism, become convincing alternative approaches to a particular design task and present new avenues for exploration and experimentation. This process goes beyond the passive engagement of inputting keywords into a black-box AI model and assessing the outcomes; instead, it fosters an almost collaborative relationship in which the back-and-forth exchange of drawings and images produces outcomes that often defy initial expectations. A balance between the designer’s intuition and the AI’s ability to process and reinterpret visual cues results in a blend of human creativity and machine intelligence.

RS and GJ: Who are these images intended for?

JL: Initially, our intention was to utilize these AI-generated images strictly for in-house purposes – mainly to explore and refine conceptual ideas. But, as we experimented with various prompts and observed the diverse outputs, we discovered that certain images held significant value in communicating an idea to a client in the early phase of conceptual development. This approach has allowed us to move beyond the conventional reliance on precedent imagery, which was often confined to examples of our own past work or limited to abstract imagery and diagrams.
