Abstract
3D generation from natural language offers significant potential to reduce expert manual modeling efforts and enhance accessibility to 3D assets. However, existing methods often yield unstructured meshes and exhibit poor interactivity, making them impractical for artistic workflows. To address these limitations, we represent 3D assets as shape programs and introduce ShapeCraft, a novel multi-agent framework for text-to-3D generation.
At its core, we propose a Graph-based Procedural Shape (GPS) representation that decomposes complex natural language into a structured graph of sub-tasks, thereby facilitating accurate LLM comprehension and interpretation of spatial relationships and semantic shape details. Specifically, LLM agents hierarchically parse user input to initialize GPS, then iteratively refine procedural modeling and painting to produce structured, textured, and interactive 3D assets.
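To make the idea concrete, here is a minimal sketch of what a graph of sub-tasks like GPS might look like in code. All names (`GPSNode`, `add`, `flatten`, the chair example) are hypothetical illustrations, not the paper's actual representation or API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Graph-based Procedural Shape (GPS) node:
# each node is one sub-task (a shape component) carrying a procedural
# snippet and a spatial relation to its parent. Names are illustrative.

@dataclass
class GPSNode:
    name: str                               # component label, e.g. "chair_leg"
    program: str = ""                       # procedural modeling snippet for this part
    relation: str = ""                      # spatial relation to parent, e.g. "under seat"
    children: list["GPSNode"] = field(default_factory=list)

    def add(self, child: "GPSNode", relation: str) -> "GPSNode":
        child.relation = relation
        self.children.append(child)
        return child

    def flatten(self) -> list[str]:
        # Depth-first traversal: the order in which sub-tasks could be
        # handed to an LLM agent for procedural modeling.
        out = [self.name]
        for c in self.children:
            out.extend(c.flatten())
        return out

# Example: "a wooden chair with four legs" decomposed into components.
chair = GPSNode("chair")
chair.add(GPSNode("seat"), relation="root component")
for i in range(4):
    chair.add(GPSNode(f"leg_{i}"), relation="under seat")
print(chair.flatten())  # ['chair', 'seat', 'leg_0', 'leg_1', 'leg_2', 'leg_3']
```

The point of such a decomposition is that each node is small enough for an LLM to model accurately, while the edges preserve the spatial relationships described in the prompt.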
Qualitative and quantitative experiments demonstrate ShapeCraft's superior performance in generating geometrically accurate and semantically rich 3D assets compared to existing LLM-based agents. We further show the versatility of ShapeCraft through examples of animated and user-customized editing, highlighting its potential for broader interactive applications.
Qualitative Results
Qualitative comparison of raw meshes with LLM-based methods.
Qualitative comparison with optimization-based methods.
Post-modeling Editing & Animation
ShapeCraft produces editable shape programs that are friendly to both LLM-driven and user-driven editing.
Benefiting from the clear component structure, LLMs can easily be prompted to generate animations from existing shape programs.
BibTeX
@misc{zhang2025shapecraftllmagentsstructured,
title={ShapeCraft: LLM Agents for Structured, Textured and Interactive 3D Modeling},
author={Shuyuan Zhang and Chenhan Jiang and Zuoou Li and Jiankang Deng},
year={2025},
eprint={2510.17603},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.17603},
}