Welcome back to Staying Ahead 👋

If you’ve ever tried to make a product lookbook, you know the pain of having to shoot photos in one place, edit them somewhere else, generate AI mockups in a third tool, write video prompts in a fourth, and pray that the final output looks coherent.

Flora AI kills that workflow. It’s an infinite canvas where you upload assets, generate AI images, write video prompts, and produce the final output, all in one connected space.

In this guide, we’ll walk through exactly how to use Flora to create model mockups and a visual branding video for a Rajasthani jacket, with me as the model. By the end, you’ll have a repeatable workflow for any product shoot.

Time required: ~5 minutes.


Quick Primer: What is Flora AI?

Flora (flora.ai) is an AI-powered creative workflow system. Think of it as a cross between Figma’s design canvas and Miro’s whiteboard, but with AI generation models baked directly into the interface.

Three things make it different from juggling separate tools:

  • Node-based workflow. Every asset and output is a visual node on the canvas. Lines connect inputs to outputs, so you can see exactly which reference images fed into which generated shot.

  • Multi-model hub. Flora connects to image models (Flux Pro 1.1, etc.), video models (Kling, Higgsfield), and text models (Claude, GPT), all accessible from the same canvas.

  • Everything stays connected. Your text prompt, reference images, generated shots, and video output all live on one canvas. Nothing gets lost in a downloads folder.

Step 1: Upload Your Assets

Start by creating a new Flora project and uploading your raw materials. For this campaign, we need four things: a photo of me (the model), the two jacket variants we want to showcase, and a reference image of a Rajasthani turban for styling context.

Drag and drop them onto the canvas. Flora automatically creates an Assets panel, a vertical stack of labelled image nodes. Name each one clearly: “Vaibhav,” “Jacket,” “Jacket 2,” and so on. These labels help you stay oriented when the canvas fills up.

Pro tip: Name your assets descriptively from the start. Once you have 10+ nodes on the canvas, generic names like “Image” become useless.

Step 2: Generate Model Mockup Shots

Now we turn these raw assets into polished product shots. Click the + button on any asset node to create a new generation node connected to it.

Here’s the key concept: you can connect multiple asset nodes to a single generation node. Flora’s AI will use all of them as reference inputs. So we connect my photo + the first jacket + the turban image to one node, and write a prompt like:

"show this model in this traditional rajasthani jacket, standing in the blinding sun of a rajasthani fort in the desert."

Flora takes the model’s face, the jacket’s pattern, and the turban’s style, and composites them into a new image set in the specified scene. The toolbar at the top lets you pick which AI model to use (set to Auto by default) and access additional tools.

The generated “Embroidered Jacket Design” shot. I’m wearing the first jacket, composited into a desert fort scene.

Repeat for the second jacket. This time, try a different scene and different styling cues:

show this model in this traditional rajasthani jacket and yellow turban, standing next to a camel posing with a camelskin water bag. he looks happy.

The “Rajasthani Outfit Scene”: me in the second jacket, posed with a camel in a desert landscape.

Pro tip: The small thumbnail overlays on each generated image show which reference assets were used. This visual lineage is one of Flora’s best features: you always know what went into any output.

Step 3: Review the Node Workflow

Zoom out on the canvas and you’ll see the full picture. Your Assets panel on the left connects via curved lines to the Shots panel on the right. Each line represents which inputs fed into which output.

This is where Flora’s node-based approach really shines. Instead of a flat folder of images with no context, you have a visual map of your entire creative pipeline. You can see that the “Embroidered Jacket” shot drew from “Vaibhav” + “Jacket,” while the “Rajasthani Outfit Scene” drew from “Vaibhav” + “Jacket 2” + the turban image.

The full node graph: Assets on the left, generated Shots on the right, with connection lines showing the creative lineage.

This view is also useful for team collaboration. If a colleague opens the project, they don’t have to guess what reference material was used for each shot — the connections make it self-documenting.
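If you like thinking in code, the lineage idea maps cleanly onto a simple directed graph. Flora doesn’t expose a public API, so this is purely a conceptual sketch: each node records its upstream inputs, and walking those links recovers exactly the “what went into this output” answer the canvas shows visually. All names here mirror the assets from this guide.

```python
# Conceptual sketch only -- Flora has no public API; this just models
# its node-based lineage as a plain directed graph in Python.

class Node:
    def __init__(self, name, inputs=None):
        self.name = name
        self.inputs = inputs or []  # upstream nodes this output drew from

    def lineage(self):
        """Return every upstream asset name that fed into this node."""
        names = []
        for node in self.inputs:
            names.extend(node.lineage())
            names.append(node.name)
        return names

# Assets (leaf nodes with no inputs)
vaibhav = Node("Vaibhav")
jacket = Node("Jacket")
jacket2 = Node("Jacket 2")
turban = Node("Turban")

# Generated shots, each connected to its reference assets
shot1 = Node("Embroidered Jacket Design", inputs=[vaibhav, jacket])
shot2 = Node("Rajasthani Outfit Scene", inputs=[vaibhav, jacket2, turban])

print(shot2.lineage())  # ['Vaibhav', 'Jacket 2', 'Turban']
```

That one-line `lineage()` walk is all the “self-documenting” property really is: the connections are the documentation.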

Step 4: Generate a Video Prompt

Time to go from still images to video. Pick your best generated shot (we’ll use the camel scene) and create a new text generation node from it.

This time, instead of generating an image, you’re asking Flora’s text model to write a video prompt based on the photo. Something like:

give me a prompt to make a video of the photo of a man posing in a rajasthani jacket and turban next to his camel. this is for an indian brand that celebrates its unique culture and likes incorporating regional quirks into its visual branding.

Flora’s text model returns a rich, cinematic video prompt describing camera movements, lighting, and atmosphere — golden hour desert light, a slow 360-degree pan, sand particles drifting in the breeze, the camel swaying its head naturally.

The generated video prompt: a detailed cinematic description ready to be fed into a video model.

Pro tip: You can edit the generated text prompt directly on the canvas before passing it to the video model. Tweak the camera movement, adjust the mood, add or remove details.

Step 5: Generate the Branding Video

Connect the text prompt node and the reference image to a new video generation node. Flora sends both to the video model (Kling, Higgsfield, or whichever is set), and you get back a short branding clip.

The result, titled “Rajasthani Joy,” is a cinematic clip with the aspect ratio, camera movement, and mood defined by the prompt. The toolbar lets you adjust the aspect ratio (we used 16:9), access additional tools, and download the final video.

The final video output, “Rajasthani Joy”: a branding video ready for social media or a product page.

From here you can iterate: try different prompts, different aspect ratios, different jacket combinations. Everything stays on the same canvas, so your creative history is always visible and traceable.

My Take

Flora isn’t the simplest tool to learn. The node-based system takes a bit of getting used to, and there are no pre-built templates to lean on. But once you internalise the workflow, it’s genuinely faster than bouncing between four separate tools.

The visual lineage feature (seeing exactly which inputs created which outputs) is something no other tool in this category does well. For teams working on campaigns with multiple product variants, that alone justifies the learning curve.

Try it with your own product this week. Upload three assets, generate two mockups, and make one video.

TLDR: it can still make humans look a little artificial, but once that’s fixed, this will be dope.

Reply and tell me what you think.

Weekend Watch


I sat down with Josh Woodward, the man behind Google Gemini AND Google Labs, and got him to demo 7 Google AI tools LIVE, including one the world hasn’t seen yet. If you have 30 mins, it’s a must-watch!


That's it for this week.

If you try one of these prompts, reply and let me know what you got. The most interesting results I hear about will make it into a future issue.

Vaibhav🤝

If you’ve read this far, you might find this interesting:

#AD 1

How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads

For its first CTV campaign, Jennifer Aniston’s DTC haircare brand LolaVie had a few non-negotiables. The campaign had to be simple. It had to demonstrate measurable impact. And it had to be full-funnel.

LolaVie used Roku Ads Manager to test and optimize creatives — reaching millions of potential customers at all stages of their purchase journeys. Roku Ads Manager helped the brand convey LolaVie’s playful voice while helping drive omnichannel sales across both ecommerce and retail touchpoints.

The campaign included an Action Ad overlay that let viewers shop directly from their TVs by clicking OK on their Roku remote. This guided them to the website to buy LolaVie products.

Discover how Roku Ads Manager helped LolaVie drive big sales and customer growth with self-serve TV ads.

The DTC beauty category is crowded. To break through, Jennifer Aniston’s brand LolaVie worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign helped drive a big lift in sales and customer growth, helping LolaVie stand out in a packed field.

#AD 2

Build a LinkedIn Growth Routine That Actually Compounds

Taplio helps you grow followers with consistent posting, boost visibility with smart engagement, and iterate on what’s working with advanced analytics.

All in one place.

Try free for 7 days + $1 for your first month with code BEEHIIV1X1.
