Scene-guided image generation for precise composition and storytelling
Make‑A‑Scene (Meta Research) is a scene-guided text-to-image research demo that combines text prompts with user-drawn sketches and per-object labels to control composition. Users sketch rough object shapes, assign brief labels, and supply a prompt so the model composes the elements in the requested arrangement, a differentiator in the image-generation category. It targets designers, illustrators, and AI researchers who need explicit layout control rather than black-box generation. The public web demo is free to use for creative experiments, Meta offers no commercial subscription as of launch, and the paper and code references are available for researchers who want to reproduce results.
Released by Meta Research in 2022, the project explores combining human sketch input and textual prompts to generate images with explicit scene composition. Meta positioned the work as an exploration of scene-level priors and human-guided layout control for text-to-image models, rather than as a consumer SaaS product. The core value proposition is letting creators specify spatial relationships and object presence via sketches and labels, producing images that reflect the intended composition instead of relying solely on ambiguous text prompts.
The demo emphasizes several concrete capabilities; an illustrative input sketch follows the list.

- **Text + sketch conditioning.** Users type a prompt and draw rough object shapes (scribbles or boxes) to indicate where items should appear; the model uses both signals to render a coherent scene.
- **Per-object attribute labels.** Each sketched region can be annotated (for example, "red jacket" or "wooden table") so the generator applies the attribute to that area.
- **Multi-object layout control.** Make‑A‑Scene supports placing multiple objects and enforcing their spatial relationships (foreground/background and relative placement) to reduce unwanted overlaps.
- **Style and fidelity choices.** The demo produces variants ranging from photographic to illustrative renderings via changes to prompt wording and style descriptors, demonstrating flexibility across visual styles.
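To make the input format concrete, here is a minimal Python sketch of how a scene layout with per-object labels might be assembled before generation. Meta has not published a public API for Make‑A‑Scene, so every name below (`Region`, `rect_mask`, `scene_inputs`) is a hypothetical illustration of the text-plus-layout conditioning idea, not the project's actual interface.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical data structures: Make-A-Scene has no public API,
# so these names are illustrative only.

@dataclass
class Region:
    label: str        # per-object attribute label, e.g. "red jacket"
    mask: np.ndarray  # binary mask marking where the object should appear

def rect_mask(h: int, w: int, top: int, left: int, bottom: int, right: int) -> np.ndarray:
    """A rough box 'scribble': a rectangular binary mask on an h-by-w canvas."""
    m = np.zeros((h, w), dtype=bool)
    m[top:bottom, left:right] = True
    return m

H, W = 512, 512
layout = [
    Region("red jacket", rect_mask(H, W, 100, 60, 320, 220)),    # foreground, left
    Region("wooden table", rect_mask(H, W, 300, 40, 480, 480)),  # background, bottom
]
prompt = "a red jacket draped over a wooden table, soft window light"

# Conceptually, the model conditions on both signals at once: the prompt
# describes the scene while the layout pins each labeled object to a region.
# A real system would encode the layout as a segmentation/label map and feed
# it alongside the text embedding; here we only assemble the inputs.
scene_inputs = {"prompt": prompt, "layout": layout}
```

The point of the structure is that attributes travel with regions: "red jacket" binds to the left foreground mask rather than floating loose in the prompt, which is what distinguishes this style of conditioning from plain text-to-image generation.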
Pricing and access are straightforward because Make‑A‑Scene is distributed as a research demo. The web demo hosted on the project site is free to try for creative experiments and preview generations; Meta has not announced paid tiers or a hosted commercial plan tied to Make‑A‑Scene. For researchers and developers who want to reproduce results, Meta's project page links to the paper, examples, and references; anyone running the models locally needs their own compute, as Meta provides no official commercial support or pricing. In short: the demo is free for experimentation, while self-hosted research replication requires your own infrastructure and carries your own compute costs.
Who uses Make‑A‑Scene in practical workflows? AI researchers use it to test scene-conditioned generation and compare layout priors in experiments; product designers and concept artists use it to iterate on composition before detailed renders. Two concrete examples: a concept artist using it to generate 10 composition variants per hour for storyboard reference, and an academic researcher comparing layout-conditioned outputs versus unconstrained text-to-image baselines. For teams wanting production-level integrations or SLA-backed APIs, commercial alternatives such as Stable Diffusion + ControlNet or DALL·E may be more suitable.
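For the baseline comparison mentioned above, one simple way a researcher might quantify layout adherence is intersection-over-union between the region the user sketched for an object and the region where a segmenter finds that object in the generated image. The snippet below is an illustrative metric, not an official Make‑A‑Scene evaluation; it assumes binary masks are already available, for example from an off-the-shelf segmentation model.

```python
import numpy as np

def layout_iou(sketch_mask: np.ndarray, detected_mask: np.ndarray) -> float:
    """IoU between the user's sketched region for an object and the region
    where a segmenter detects that object in the generated image."""
    intersection = np.logical_and(sketch_mask, detected_mask).sum()
    union = np.logical_or(sketch_mask, detected_mask).sum()
    return float(intersection / union) if union else 0.0

# Toy check: an object requested in the upper-left quadrant is generated
# roughly there but shifted halfway to the right.
requested = np.zeros((256, 256), dtype=bool)
requested[:128, :128] = True
detected = np.zeros((256, 256), dtype=bool)
detected[:128, 64:192] = True
print(f"layout IoU: {layout_iou(requested, detected):.2f}")  # 0.33
```

Averaged over objects and random seeds, such a score lets a layout-conditioned model be compared directly against an unconstrained text-to-image baseline.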
Access options and what you get with each. Make‑A‑Scene has no vendor pricing page; it is a free research demo, so the table below summarizes access routes rather than commercial tiers.
| Access route | Cost | What you get | Best for |
|---|---|---|---|
| Web demo | Free | Demo access for experiments; limited queue and no commercial SLA | Hobbyists and researchers testing composition control |
| Self-host / research replication | Free (your own compute costs) | Paper and examples to reproduce locally; costs depend on your hardware | Labs and developers reproducing experiments |
Choose Make‑A‑Scene (Meta Research) over Stable Diffusion if you need explicit per-object, sketch-driven layout control and research transparency; choose Stable Diffusion with ControlNet if you need a production-ready, self-hostable pipeline with comparable scribble conditioning, as sketched below.
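For teams taking the Stable Diffusion route, scribble-conditioned ControlNet is the closest widely deployed analogue to Make‑A‑Scene's sketch control. The snippet below uses the Hugging Face diffusers API; the checkpoint IDs and the scribble file path are assumptions to swap for your own.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Scribble-conditioned ControlNet: light strokes on a dark canvas mark where
# objects should appear, playing the role of Make-A-Scene's sketch input.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint should work
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

scribble = Image.open("scene_scribble.png")  # assumed path to your own sketch
image = pipe(
    "a red jacket draped over a wooden table, soft window light",
    image=scribble,
    num_inference_steps=30,
).images[0]
image.save("composed_scene.png")
```

Note the gap this leaves: ControlNet's scribble conditioning carries no per-region labels, so attribute binding ("red jacket" in that spot) still rides entirely on the prompt, which is exactly what Make‑A‑Scene's labeled regions address.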
Head-to-head comparisons between Make‑A‑Scene (Meta Research) and top alternatives: