AI video and image tools for creative production
Runway is an AI-first design and creativity platform that pairs text-to-video (Gen-2) and image generation with a multitrack editor, aimed at video editors and social creators who want to prototype and export AI-enhanced clips quickly. Pricing starts with a free tier, with paid Creator and Pro plans for heavier exports and commercial licensing (tiers vary by seat and usage).
The browser-based app combines text-to-video, text-to-image, and AI-driven video editing in one place. It centers on the Gen-2 text-to-video model plus a suite of video-centric features (inpainting, background removal, and a multitrack timeline) so creators can generate, edit, and export clips without stitching multiple tools together. The key differentiator is in-app AI models that operate directly on video frames, including automatic alpha/green-screen extraction, aimed at editors, social media teams, and indie filmmakers. Runway offers a usable free tier plus paid Creator and Pro subscriptions for higher-resolution exports and commercial usage (prices vary by plan).
Runway launched as a creative AI company to put generative models and video tools into the hands of creators. Following the lab-to-product path common in machine-learning research, it positioned itself as a Design & Creativity platform that blends research-grade generative models with a non-linear video editor. Its core value proposition is letting users go from a prompt or a clip to an edited, exportable result without hopping between model UIs and non-linear editors (NLEs).
The company foregrounds video-first AI — offering model-driven workflows that operate across frames rather than only on still images — which appeals to teams that produce short-form video at scale. Runway’s feature set mixes model access with editing tools. Its Gen-2 text-to-video model generates short videos from prompts and image references; a text-to-image engine (Stable Diffusion variants and custom checkpoints available) produces assets for compositing; video inpainting and object removal work across multiple frames with propagated masks; and a background removal/green-screen tool yields per-frame alpha channels for compositing.
The in-app multitrack timeline supports clip trimming, layered compositions, and frame-accurate edits, while export formats typically include MP4 and higher-bitrate codecs on paid plans. Collaboration features and project versioning help teams iterate without duplicating files. Pricing mixes a free tier with paid subscriptions and enterprise licensing.
The free plan provides limited model credits, watermarked or lower-resolution exports, and basic editing access. Paid Creator and Pro plans (monthly subscriptions) unlock higher-resolution exports, more model compute credits, commercial licensing, and faster queue priority; enterprise customers get seat-based billing, SSO, and a custom SLA. GPU-heavy tasks such as long text-to-video renders or large-batch inpainting can consume credits quickly, so teams often choose Pro or Enterprise for production workloads.
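Because credit burn often drives the choice between tiers, it can help to budget renders before committing to a plan. Below is a hypothetical estimator; the per-second rates and monthly allowance are placeholders, not Runway's actual figures, so substitute numbers from your own plan before relying on the output.

```python
# Hypothetical credit-burn estimator. The per-second rates below are
# placeholders, NOT Runway's published rates; replace them with the
# figures from your own plan.
CREDITS_PER_SECOND = {
    "gen2_text_to_video": 5.0,   # placeholder rate
    "inpainting": 2.0,           # placeholder rate
    "background_removal": 1.0,   # placeholder rate
}

def estimate_credits(task: str, seconds: float) -> float:
    """Rough credit cost for a single render of the given length."""
    return CREDITS_PER_SECOND[task] * seconds

# e.g. ten 15-second Gen-2 renders against a placeholder monthly allowance
monthly_allowance = 625
burn = 10 * estimate_credits("gen2_text_to_video", 15)
print(f"{burn} credits; fits allowance: {burn <= monthly_allowance}")
```

Even with made-up rates, this kind of back-of-envelope check makes it obvious why long text-to-video renders push teams toward Pro or Enterprise.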
Exact prices and seat discounts change periodically; evaluate current rates on Runway’s pricing page. Runway is used by videographers, social media managers, motion designers, and small studios to speed production. A video editor at an indie agency might use Runway to produce 30–60 second social ads using Gen-2 backgrounds and AI object removal, while a social media manager creates multiple 15-second vertical cuts for platforms from a single source clip.
Runway competes with Adobe (Firefly + Premiere) and Descript for different parts of the workflow, but its combination of integrated text-to-video models and frame-aware editing is the platform’s primary distinction versus image-first competitors.
Three capabilities set Runway apart from its nearest competitors: text-to-video generation (Gen-2) integrated directly into the editor, frame-aware AI tools such as propagated-mask inpainting and automatic alpha extraction, and a browser-based multitrack timeline with collaboration and project versioning.
Current tiers and what you get at each price point. Figures below are approximate; confirm against Runway's pricing page before purchasing.
| Plan | Price | What you get | Best for |
|---|---|---|---|
| Free | Free | Limited model credits, low-res or watermarked exports, basic editor access | Hobbyists testing AI video features |
| Creator | $12/mo (approx) | Higher model credits, 1080p exports, commercial license, faster queue | Independent creators and freelancers |
| Pro | $35/mo (approx) | More GPU credits, 4K exports, team-style collaboration features | Small studios and heavy creators |
| Enterprise | Custom | Seat-based billing, SSO, SLA, custom compute allowances | Agencies and large production teams |
Copy these into Runway as-is. Each targets a different high-value workflow.
You are Runway Gen-2 acting as a short-form social ad director. Constraints: produce a single 15-second vertical (9:16) video, 24fps, subject is a wristwatch product centered in frame, no logos or on-screen text, warm cinematic lighting, clear macro detail on watch face and strap, natural reflections, neutral studio background. Output format: return one copy-ready Gen-2 text prompt and three brief variation prompts (each with a single variable changed: lighting, color grade, or camera distance). Example prompt start: '15s cinematic close-up of a luxury wristwatch rotating on a pedestal, shallow depth of field, warm key light...' Provide only the prompts.
You are a Runway assistant performing background removal and export. Constraints: input is a 00:02:30 MP4 interview, 1920x1080, subject is one person seated center; preserve original audio; prioritize hair-detail and soft edges; output must be a transparent-background video and a PNG sequence with alpha. Output format: provide a numbered 4–6 step checklist of exact Runway actions (tool names, toggles, parameter values) and precise export settings (container, codec, resolution, filename convention). Example step: '1) Upload file → Effects → Remove Background (Person, Hair Detail: High) → Refine Edge: 0.15.' Provide only steps and export settings.
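That checklist ends with a PNG sequence with alpha. If you would rather derive the sequence outside Runway, ffmpeg can split any alpha-capable export (for example a ProRes 4444 .mov) into RGBA PNGs. A minimal sketch follows; the filenames are illustrative, and it only builds the command string rather than invoking ffmpeg.

```python
# Sketch: build an ffmpeg command that dumps every frame of an
# alpha-capable video as an RGBA PNG sequence. Filenames are
# illustrative; ffmpeg itself must be installed to run the result.
import shlex

def png_sequence_cmd(src: str, pattern: str = "frame_%04d.png") -> str:
    """Return a shell-safe ffmpeg command for an RGBA PNG sequence."""
    args = ["ffmpeg", "-i", src, "-pix_fmt", "rgba", pattern]
    return shlex.join(args)

print(png_sequence_cmd("interview_alpha.mov"))
# ffmpeg -i interview_alpha.mov -pix_fmt rgba frame_%04d.png
```

Note the source must actually carry an alpha channel (e.g. ProRes 4444); extracting PNGs from a standard H.264 MP4 yields opaque frames.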
You are a social-video repackager converting a single 2:00 horizontal master into four vertical clips optimized for Reels/TikTok/Shorts. Constraints: produce four 10–15s clips; give exact start/end timestamps, recommended crop presets (9:16), and focal point coordinates for each clip; include a short hook caption (three words or fewer, ≤30 characters) and a caption timing array (start/end seconds). Output format: JSON array of four objects: {start,end,crop,focal_timestamp,focal_coords,hook_caption,caption_timestamps,export_preset}. Example object: {start:'00:00:18',end:'00:00:32',crop:'9:16 top-left 560x1000',focal_timestamp:'00:00:24',focal_coords:[960,540],hook_caption:'Big Reveal',caption_timestamps:[0.5,3.2],export_preset:'1080x1920 @ 30fps, 8Mbps H.264'}. Provide only the JSON array.
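Before handing the model's reply to an editor, it is worth sanity-checking the JSON shape. A minimal Python validator, sketched under the assumption that the reply follows the output format requested above; the key set and caption limit mirror that prompt:

```python
# Sketch: validate the four-clip JSON array the repackager prompt
# requests. Raises AssertionError on a malformed reply.
import json

REQUIRED_KEYS = {"start", "end", "crop", "focal_timestamp", "focal_coords",
                 "hook_caption", "caption_timestamps", "export_preset"}

def validate_clip_plan(raw: str) -> list:
    """Parse the model's JSON reply and check each clip object."""
    clips = json.loads(raw)
    assert len(clips) == 4, "expected exactly four clips"
    for clip in clips:
        missing = REQUIRED_KEYS - clip.keys()
        assert not missing, f"clip missing keys: {missing}"
        assert len(clip["hook_caption"]) <= 30, "hook caption over 30 chars"
    return clips
```

Note that models sometimes emit JSON with unquoted keys (as in the example object above); strict `json.loads` requires double-quoted keys, so request standard JSON or normalize the reply first.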
You are a motion-design assistant creating a technical plan to retime a clip and remove a moving tripod crossing frame. Constraints: target clip segment 00:00:10–00:00:25 at 24fps; apply a smooth retime ramp from 0.8x to 1.2x over the segment; remove the tripod between frames 250–375 using inpainting while preserving motion blur and shadow continuity; provide mask keyframes every 5 frames. Output format: return a detailed timeline JSON containing {retime_curve:[{time,rate}],mask_keyframes:[{frame,x,y,w,h}],inpaint_layers:[{name,tool,parameters}],runway_actions:[{tool,parameter,value}]}. Example keyframe: {frame:260,x:842,y:420,w:120,h:300}. Provide only the JSON plan.
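The numeric parts of that plan, the retime ramp and the per-5-frame mask boxes, are easy to generate programmatically rather than by hand. A sketch assuming simple linear interpolation; the start and end mask boxes are placeholders you would replace with tracked values:

```python
# Sketch: generate the retime curve and mask keyframes the plan asks
# for, using linear interpolation. Box values are placeholders.
def retime_curve(t0: float, t1: float, r0: float, r1: float, steps: int = 5):
    """Sample a linear rate ramp as [{time, rate}] entries."""
    return [
        {"time": round(t0 + (t1 - t0) * i / steps, 3),
         "rate": round(r0 + (r1 - r0) * i / steps, 3)}
        for i in range(steps + 1)
    ]

def mask_keyframes(f0: int, f1: int, box0: dict, box1: dict, every: int = 5):
    """Linearly interpolate the mask box every `every` frames."""
    out = []
    for f in range(f0, f1 + 1, every):
        a = (f - f0) / (f1 - f0)
        out.append({"frame": f, **{
            k: round(box0[k] + (box1[k] - box0[k]) * a)
            for k in ("x", "y", "w", "h")}})
    return out
```

A smoother ease-in/ease-out ramp would swap the linear blend for a cubic one, but a linear sketch is enough to check frame coverage and box drift before committing to mask work.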
You are a senior editor creating a 45-second hero spot by combining Gen-2 backgrounds with live-action product plates. Multi-step deliverable: 1) produce three copy-ready Gen-2 background prompts (modern minimal office, dusk rooftop, warm wood studio) that match brand palette #FFDDAA and #333333; 2) provide exact compositing instructions: camera focal length recommendations (mm), scale and anchor transforms, edge feather values, key shadow placement and distance, and LUT/color-match values (numeric); 3) deliver audio mix guide (levels in dB) and final render/export settings. Output format: numbered steps with the three Gen-2 prompts and precise Runway effect parameter values. Include one short example Gen-2 prompt. Provide only actionable steps and prompts.
You are a cinematic director and VFX supervisor building a 60-second scene with Runway Gen-2 plus timeline editing. Deliverables: 1) shot list of six shots (24fps, 2–12s each) with copy-ready Gen-2 prompts specifying camera lens, lighting, and mood; 2) rotoscope/inpaint checklist for inserting actor plates into generated backgrounds, including suggested roto feather (px), tracking points, and inpaint parameters; 3) color-grade recommendations: three LUT names with exposure and gamma values and a three-step grade order; 4) final export settings for DCP 4K 24fps. Output format: single JSON object {shots:[{id,prompt,duration,camera}],vfx_steps:[...],color_grade:{...},export:{...}} and include two brief few-shot prompt examples. Provide only the JSON.
Choose Runway over Adobe Premiere/Firefly if you want in-browser text-to-video models integrated with editing, not just image generation.