Generating conceptual landscape design via text-to-image generative AI model
Xinyue Ye,
Tianchen Huang,
Yang Song,
Xin Li,
Galen Newman,
Dayong Jason Wu and
Yijun Zeng
Environment and Planning B, 2025, vol. 52, issue 8, 1903-1919
Abstract:
This study explores the integration of text-to-image generative AI, particularly Stable Diffusion used in conjunction with ControlNet and LoRA models, into conceptual landscape design. Traditional landscape design methods are often time-consuming, limited by the designer’s individual creativity, and inefficient at exploring diverse design solutions. By leveraging AI tools, we demonstrate a workflow that efficiently generates detailed and visually coherent landscape designs, including natural parks, city plazas, and courtyard gardens. Through both qualitative and quantitative evaluations, our results indicate that fine-tuned models produce superior designs compared to non-fine-tuned models, maintaining spatial consistency, control over scale, and relevant landscape elements. This research advances the efficiency of conceptual design processes and underscores the potential of AI to enhance creativity and innovation in landscape architecture.
Keywords: Text-to-image; generative AI; conceptual design; generative design; stable diffusion; landscape design
Date: 2025
Downloads: https://journals.sagepub.com/doi/10.1177/23998083251316064 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:envirb:v:52:y:2025:i:8:p:1903-1919
DOI: 10.1177/23998083251316064