🎨

Make‑A‑Scene (Meta Research)

Scene-guided image generation for precise composition and storytelling

Free ⭐⭐⭐⭐☆ 4.3/5 🎨 Image Generation
Visit Make‑A‑Scene (Meta Research) ↗ Official website
Quick Verdict

Make‑A‑Scene (Meta Research) is a scene-guided text-to-image research demo that combines text prompts with user sketches and per-object labels to control composition. It’s ideal for designers, researchers, and illustrators who need explicit layout control rather than black-box generation. The web demo is free to try for creative experiments; there is no commercial subscription from Meta as of launch.

Make‑A‑Scene is a research demo from Meta that generates images by combining text prompts with user-drawn scene layouts. The tool lets users sketch objects, assign brief labels, and supply a prompt so the model composes elements in the requested arrangement, a differentiator in the image generation category. It targets creative professionals and AI researchers who need explicit control over scene composition and concept exploration. The publicly available demo is free to use for experiments, while code and paper references are available for researchers to reproduce results.

About Make‑A‑Scene (Meta Research)

Make‑A‑Scene (Meta Research) is an image-generation research project released by Meta Research in 2022 that focuses on combining human sketch input and textual prompts to generate images with explicit scene composition. Meta positioned the work as an exploration of scene-level priors and human-guided layout control for text-to-image models, rather than a consumer SaaS product. The core value proposition is to let creators specify spatial relationships and object presence via sketches and labels, producing images that reflect a user’s intended composition instead of relying solely on ambiguous text prompts.

The demo emphasizes several concrete capabilities. First, text + sketch conditioning: users can type a prompt and draw rough object shapes (scribbles or boxes) to indicate where items should appear; the model uses both signals to render a coherent scene. Second, per-object attribute labels: each sketched region can be annotated (for example, "red jacket" or "wooden table") so the generator applies the attribute to that area. Third, multi-object layout control: Make‑A‑Scene supports placing multiple objects and enforcing their spatial relationships (foreground/background and relative placement) to reduce unwanted overlaps. Fourth, style and fidelity choices: the research demo demonstrates variants from photographic to illustrative renderings by changing prompt wording and style descriptors, illustrating the model’s flexibility across visual styles.
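For a concrete picture of what this kind of conditioning consumes, the sketch below shows one way a scene specification combining a global prompt, sketched regions, and per-region labels could be represented in code. The SceneSpec and Region names and the normalized bounding-box convention are illustrative assumptions, not Meta's actual interface.

```python
# Hypothetical scene specification for layout-conditioned generation.
# SceneSpec/Region and the 0-1 normalized box convention are illustrative
# assumptions, not Make-A-Scene's actual data format.
from dataclasses import dataclass, field

@dataclass
class Region:
    label: str                               # object class, e.g. "bicycle"
    attributes: list[str]                    # per-region attributes, e.g. ["red"]
    box: tuple[float, float, float, float]   # (x0, y0, x1, y1), normalized to 0-1

@dataclass
class SceneSpec:
    prompt: str                              # global text prompt (content + style)
    regions: list[Region] = field(default_factory=list)

scene = SceneSpec(
    prompt="sunlit street market, cinematic",
    regions=[
        Region("bicycle", ["red"], (0.05, 0.55, 0.35, 0.95)),
        Region("table", ["wooden"], (0.45, 0.60, 0.90, 0.95)),
    ],
)
print(scene.prompt, len(scene.regions))   # "sunlit street market, cinematic" 2
```

A structure like this keeps the prompt (style, mood) separate from the spatial layout, which mirrors how the demo lets you edit the sketch and the text independently between runs.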

Pricing and access are straightforward because Make‑A‑Scene is distributed as a research demo. The web demo hosted at the project site is free to try for creative experiments and preview generations; Meta did not announce paid tiers or a hosted commercial plan tied to Make‑A‑Scene. For researchers and developers who want to reproduce results, Meta’s project page links to the paper, examples, and references; those interested in running models locally will need their own compute (no official commercial support or pricing from Meta is provided). In short: free demo for experimentation, self-hosted research replication requires your own infrastructure and costs.

Who uses Make‑A‑Scene in practical workflows? AI researchers use it to test scene-conditioned generation and compare layout priors in experiments; product designers and concept artists use it to iterate on composition before detailed renders. Two concrete examples: a concept artist using it to generate 10 composition variants per hour for storyboard reference, and an academic researcher comparing layout-conditioned outputs versus unconstrained text-to-image baselines. For teams wanting production-level integrations or SLA-backed APIs, commercial alternatives such as Stable Diffusion + ControlNet or DALL·E may be more suitable.

What makes Make‑A‑Scene (Meta Research) different

Three capabilities that set Make‑A‑Scene (Meta Research) apart from its nearest competitors.

  • Accepts freehand sketches plus textual labels to prescribe exact object placement within a scene.
  • Exposes per-region attribute conditioning so attributes (color, material) are attached to specific areas, as sketched in the example after this list.
  • Released as a research demo with accompanying paper and reproduction guidance rather than a hosted API.
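To illustrate the per-region idea in the second bullet, layout-guided generation research commonly rasterizes labeled regions into a per-pixel label map that the model consumes alongside the text. The sketch below shows that conversion for a couple of hypothetical boxes; it is a generic pattern from the literature, not a description of Meta's exact pipeline.

```python
# Rasterize labeled boxes into a per-pixel class-index map (0 = background),
# a common conditioning format in layout-guided generation research.
# This is a generic illustration, not Meta's actual pipeline.
import numpy as np

# Each region is (label, (x0, y0, x1, y1)) with coordinates normalized to 0-1.
regions = [
    ("bicycle", (0.05, 0.55, 0.35, 0.95)),
    ("table",   (0.45, 0.60, 0.90, 0.95)),
]

def rasterize_label_map(regions, height=256, width=256):
    label_map = np.zeros((height, width), dtype=np.int64)
    class_ids = {label: i + 1 for i, (label, _) in enumerate(regions)}
    for label, (x0, y0, x1, y1) in regions:
        r0, r1 = int(y0 * height), int(y1 * height)
        c0, c1 = int(x0 * width), int(x1 * width)
        label_map[r0:r1, c0:c1] = class_ids[label]   # later regions overwrite earlier ones
    return label_map, class_ids

label_map, class_ids = rasterize_label_map(regions)
print(class_ids)          # {'bicycle': 1, 'table': 2}
print(label_map.shape)    # (256, 256)
```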

Is Make‑A‑Scene (Meta Research) right for you?

✅ Best for
  • Concept artists who need rapid composition iterations for storyboards
  • UX/visual designers who require explicit layout control for mockups
  • AI researchers studying scene-conditioned generation and layout priors
  • Educators demonstrating human-in-the-loop image synthesis techniques
❌ Skip it if
  • You require an SLA-backed commercial API for production image pipelines.
  • You need built-in integrations or a paid workspace for teams.

✅ Pros

  • Explicit scene-level control via sketches and labels reduces unintended object placement.
  • Open research orientation: paper and examples allow reproducibility and academic scrutiny.
  • Good for iterating on composition and storyboarding before final renders.

❌ Cons

  • Not a commercial product: no official API, support, or paid tiers for production use.
  • Demo limits and lack of SLA mean scaling or integration requires self-hosting and compute.

Make‑A‑Scene (Meta Research) Pricing Plans

Access options and what you get with each. Make‑A‑Scene has no vendor pricing page; the tiers below reflect its status as a free research demo.

Plan | Price | What you get | Best for
Demo | Free | Web demo access for experiments; limited queue and no commercial SLA | Hobbyists and researchers testing composition control
Self-host / Research | Free / Custom compute | Download paper/examples and run locally; compute costs depend on user hardware | Labs and developers reproducing experiments

Best Use Cases

  • Concept Artist using it to generate 10 composition variants per hour
  • AI Researcher using it to compare layout-conditioned vs text-only outputs quantitatively
  • Product Designer using it to create 3 composition mockups per design brief

Integrations

  • GitHub (project page and references)
  • PyTorch (research code references / common framework)
  • Google Colab (community reproduction notebooks)

How to Use Make‑A‑Scene (Meta Research)

  1. Open the project demo page
    Visit the Make‑A‑Scene demo URL and locate the demo interface on the project page; the visible canvas and prompt field are the primary controls. Success looks like seeing the sketch canvas, text prompt box, and a Generate or Run button in the demo UI.
  2. Enter a clear text prompt
    Type a concise prompt describing scene content and style (for example, "sunlit street market, cinematic"), because the demo uses prompt wording to determine style and objects. Success means the prompt appears in the prompt field and updates alongside sketch input.
  3. Draw regions and label objects
    Use the on-canvas drawing tool to scribble object shapes or boxes and add short labels (e.g., "red bicycle") to each region; this tells the model where and what to render. Success looks like labeled regions visible on the canvas before generation.
  4. Click Generate and review outputs
    Press the demo's Generate or Run button to produce image variants; inspect results, tweak prompt/labels, and re-run to iterate. Success is receiving generated images reflecting your layout and attributes for further download or note-taking.
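If you are reproducing the research locally rather than clicking through the web demo, the same prompt-sketch-generate loop can be scripted. The generate function below is a placeholder stub for whatever entry point your local setup exposes; there is no official Make‑A‑Scene Python API, so treat everything here as an illustrative assumption.

```python
# Scripted version of the demo's prompt-sketch-generate loop for a local
# reproduction. `generate` is a placeholder stub, NOT an official
# Make-A-Scene API: swap in the entry point of whatever model you run locally.
def generate(prompt, regions, seed):
    raise NotImplementedError("replace with your local model's generation call")

# Regions as (label, (x0, y0, x1, y1)) boxes, normalized to 0-1.
regions = [("bicycle", (0.05, 0.55, 0.35, 0.95)),
           ("table",   (0.45, 0.60, 0.90, 0.95))]

prompt_variants = [
    "sunlit street market, cinematic",
    "sunlit street market, watercolor illustration",
]

for prompt in prompt_variants:
    for seed in range(3):                       # a few composition variants per prompt
        image = generate(prompt, regions, seed=seed)
        # review each variant, then adjust the regions or prompt and re-run
```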

Make‑A‑Scene (Meta Research) vs Alternatives

Bottom line

Choose Make‑A‑Scene (Meta Research) over Stable Diffusion if you need explicit per-object sketch-driven layout control and research transparency.

Head-to-head comparisons between Make‑A‑Scene (Meta Research) and top alternatives:

Compare
Make‑A‑Scene (Meta Research) vs Stable Diffusion
Read comparison →

Frequently Asked Questions

How much does Make‑A‑Scene (Meta Research) cost?
Free web demo available; no paid Meta plan. The Make‑A‑Scene project is distributed as a research demo: the public web demo is free to try for experiments. Meta has not launched a commercial subscription or paid hosted API for Make‑A‑Scene; researchers who want to reproduce results must self-host the model and supply their own compute resources.
Is there a free version of Make‑A‑Scene (Meta Research)?
Yes — the public web demo is free to use. The project page provides a demo that lets you combine text prompts with sketches and labels to generate images for experimentation. There’s no official paid tier; reproduction of the research requires local hosting or community notebooks, and you’ll pay only your own compute costs if you run models locally.
How does Make‑A‑Scene (Meta Research) compare to Stable Diffusion?
Research demo focused on sketch-driven layout control. Make‑A‑Scene emphasizes per-object sketch and label conditioning to prescribe layout relationships, whereas Stable Diffusion is a general text-to-image model that gains layout control via plugins like ControlNet. For composition-first experiments and academic transparency, Make‑A‑Scene is a stronger research reference.
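For contrast, here is a minimal sketch of how sketch-driven layout control is typically added to Stable Diffusion with a scribble-conditioned ControlNet via the Hugging Face diffusers library. The checkpoint IDs are commonly used community models; verify them and the pipeline options against the diffusers documentation before relying on this.

```python
# Minimal sketch: scribble-conditioned layout control for Stable Diffusion via
# ControlNet and the Hugging Face diffusers library. Checkpoint IDs are widely
# used community models; verify them against the diffusers docs before use.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = Image.open("layout_scribble.png")   # your sketched layout (white strokes on black)
result = pipe(
    "sunlit street market, cinematic, red bicycle in the foreground",
    image=scribble,
    num_inference_steps=30,
).images[0]
result.save("controlled_output.png")
```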
What is Make‑A‑Scene (Meta Research) best used for?
Best for composition and scene-layout experimentation. The demo is ideal for concept artists and researchers who need to iterate on spatial arrangements: sketch where objects should be, label attributes, and generate variants to explore composition before committing to final artwork or downstream rendering.
How do I get started with Make‑A‑Scene (Meta Research)?
Open the demo page and combine a prompt with sketches. Start by entering a short descriptive prompt, draw rough object regions on the canvas, add brief labels for each region, then click Generate; iterate by adjusting the sketch and prompt until the generated outputs match your intended composition.

More Image Generation Tools

Browse all Image Generation tools →
🎨
Midjourney
Fast, high-fidelity image generation for professionals
Updated Mar 25, 2026
🎨
stable-diffusion-webui (AUTOMATIC1111)
Local-first image generation web UI for Stable Diffusion
Updated Apr 21, 2026
🎨
Hugging Face
Model hub with open image-generation models and hosted inference
Updated Apr 22, 2026