Filmmakers Using AI in Pre-Production
AI video generation is transforming pre-production — from storyboards to pre-vis to pitch decks — giving filmmakers new ways to plan before cameras roll.
Pre-production is where films are won or lost. The decisions made before cameras roll — shot composition, visual tone, location design, narrative pacing — determine whether the production runs smoothly or stumbles. AI video generation has become a practical pre-production tool because it lets filmmakers visualize these decisions at a fraction of the time and cost of traditional methods.
Pre-visualization: seeing the film before you shoot it
Traditional pre-visualization ranges from rough storyboard sketches to expensive 3D animated sequences. AI video generation slots between these extremes — more visual fidelity than sketches, faster and cheaper than 3D pre-vis.
Shot-by-shot visualization. A director can describe a shot — "medium close-up, character turns from window toward camera, warm afternoon light from camera right, shallow depth of field" — and see an approximation of the result in under a minute. This is not about replacing the cinematographer's craft. It is about giving the director a visual reference to discuss with the DP, the production designer, and the editor before anyone arrives on set.
Camera movement planning. Veo 3.1's camera control lets directors experiment with specific camera movements. "Slow dolly push toward the desk, 35mm lens, maintain eye-level framing." Try a dolly push, then an orbital move, then a crane shot — all from the same prompt with different camera directions. Decide which movement tells the story best before committing to physical rigging.
Sequence pacing. Kling 3.0's multi-shot capability generates sequences with cuts between different angles. A director can pre-visualize not just individual shots but the rhythm of a sequence — wide establishing shot, medium two-shot, close-up reaction. The timing and flow become tangible before shooting begins.
Lighting and color exploration. Different lighting setups produce dramatically different moods. AI generation lets directors and DPs explore options — warm practical lighting versus cool overhead fluorescent, golden hour versus overcast, high-key versus low-key — and establish a visual language for the film through concrete examples rather than verbal descriptions.
Storyboarding at the speed of thought
Traditional storyboarding requires either drawing skill or a storyboard artist. AI generation removes this bottleneck.
A director can describe scenes in natural language and generate visual storyboard frames in minutes. The output is not rough sketches — it is near-photorealistic imagery that communicates composition, lighting, and mood precisely. For video storyboards, generating short clips shows movement and timing that static frames cannot convey.
The speed matters. During creative development, ideas evolve rapidly. A director might reimagine a scene five times in a single meeting. With AI generation, each reimagining can be visualized immediately rather than noted for later illustration. The visual reference keeps pace with the creative thinking.
On PonPon, Nano Banana Pro generates storyboard-quality images almost instantly. For motion storyboards, Seedance 2.0 produces video clips in under 60 seconds.
Virtual location scouting
Finding the right location is one of pre-production's most time-consuming tasks. AI generation assists in three ways.
Concept location creation. Before searching for real locations, generate the ideal location. "An abandoned Art Deco theater with peeling paint, shafts of light through broken skylights, rows of empty red velvet seats." This gives the locations team a concrete visual target rather than a verbal description that each scout interprets differently.
Location modification. Have a location that is almost right? Generate variations that show how it could look with different set dressing, lighting conditions, or time of day. A location visited during the day can be visualized at night. A furnished space can be shown empty. A modern building can be shown with period modifications.
Environment visualization. For projects involving environments that do not exist — science fiction settings, historical periods, fantasy worlds — AI generation creates visual references that the production design team uses as starting points for physical or digital set construction.
Pitch decks that close deals
Getting a film funded requires persuading people who respond to visuals, not just scripts. AI-generated content transforms pitch presentations.
Concept trailers. Generate a series of clips that represent the tone, pacing, and visual style of the proposed film. This is not a finished trailer — it is a visual mood piece that tells investors and producers what the film will look and feel like. Sora 2's photorealism makes these concept pieces genuinely compelling.
Character visualization. Before casting, generate images or clips that show how the characters should appear — their style, their environment, their movement. This communicates character design more effectively than written descriptions in a script.
World building. For genre projects, the world itself is a selling point. Generate environments, architecture, technology, and atmospheric conditions that establish the project's visual identity. Investors can see the world rather than imagining it from a treatment.
The practical workflow
A filmmaker incorporating AI generation into pre-production might follow this workflow.
Script breakdown. Go through the script scene by scene. For each scene, identify the key visual elements: setting, lighting, composition, character positioning, camera movement.
Reference generation. For each scene, generate 3-5 visual references on PonPon. Use different models for different strengths — Sora 2 for photorealistic environments, Veo 3.1 for camera movement tests, Kling 3.0 for character scenes, Seedance 2.0 for rapid iteration.
Creative review. Share generated references with department heads — DP, production designer, costume designer, editor. Use the visuals as a starting point for creative conversation, not as a prescription.
Refinement. Based on creative discussion, regenerate with adjusted prompts. The iteration is fast enough to happen within a single meeting.
Documentation. Compile the approved visual references into a look book or visual bible for the production. This becomes the shared visual language that every department works from.
What AI pre-vis is not
It is not a replacement for the creative professionals who make films. A generated clip cannot replace a cinematographer's eye, a production designer's craft, or an editor's sense of rhythm.
It is a communication tool. It makes the director's vision visible and discussable before the expensive process of physical production begins. It reduces the gap between imagination and shared understanding. It does not generate the film — it helps plan the film better.
The most effective filmmakers using AI in pre-production treat generated content as a starting point for conversation, not an end product. "This is the feeling I want, but with warmer light and a wider lens" is a more productive conversation when "this" is a visible reference rather than a concept existing only in the director's mind.
Getting started
Start with one scene from your current project. Describe the key shot in a prompt. Generate it on PonPon. Show it to your DP or collaborator. The first conversation you have around a generated reference will demonstrate the value more clearly than any explanation.
The technology is not science fiction. It is a practical tool available today, and filmmakers who incorporate it into their pre-production workflow are making better-informed decisions before cameras roll.