# How It Works
Test Morph turns your application demo videos into ready-to-use BDD test cases through a fully automated, AI-powered pipeline. Here is what happens from the moment you send a video to the moment you receive your feature files.
## The End-to-End Flow
```mermaid
flowchart TD
    A(["You send a video"]) --> B["Test Morph receives the request"]
    B --> C["Video is forwarded to Gemini AI"]
    C --> D["Gemini analyses all screens, flows, and interactions"]
    D --> E["BDD test cases are generated per business domain"]
    E --> F["Feature files stream back to you one by one"]
    F --> G["A summary report is delivered"]
    G --> H(["Done — your .feature files are ready"])
```
## Step 1 — You Send a Video
You start by sending a video of your application to Test Morph. The video can be:
- A full screen recording of a product demo
- A walkthrough of specific user flows
- Any recording that shows your application in action
You can also include an optional text message alongside the video to give context, such as:
“Focus on the login and checkout flows”
Test Morph uses this context to prioritize the areas you care about most while still covering all other visible flows.
**Supported video formats:** MP4, WebM, and QuickTime (.mov) are all accepted. Videos up to 55 minutes long are supported.
## Step 2 — Video Analysis
Test Morph forwards your video to Google Gemini — a multimodal AI model capable of understanding both visual and textual content simultaneously.
Gemini carefully watches the video and identifies:
- All application screens and pages
- User workflows and navigation paths (login, checkout, registration, etc.)
- Form fields, buttons, dropdowns, and other interactive elements
- Success states and error states (validation messages, alerts)
- Data displays such as tables, lists, and cards
- Authentication and authorization flows
- Any edge cases visible in the recording
## Step 3 — BDD Generation
Using an expert QA engineering system prompt, Gemini produces structured Gherkin test cases organized by business domain (e.g., authentication, checkout, user management) — not by page or screen name.
For each domain identified, a separate .feature file is created containing:
| Section | Description |
|---|---|
| Feature header | Name, user story (As a / I want / So that) |
| Background | Shared preconditions for all scenarios in the file |
| Happy path scenarios | Primary success flows |
| Negative scenarios | Empty fields, wrong credentials, unauthorized access |
| Boundary scenarios | Maximum lengths, special characters, rapid repeated inputs |
| Scenario Outlines | Data-driven tests with Examples tables |
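The layout above can be illustrated with a short feature file. This is a hypothetical example: the domain, step wording, and data are invented for demonstration, not taken from actual Test Morph output.

```gherkin
Feature: Authentication
  As a registered user
  I want to log in with my credentials
  So that I can access my account

  Background:
    Given the login page is displayed

  @P1 @smoke @positive
  Scenario: Successful login with valid credentials
    When the user enters a valid email and password
    And the user submits the login form
    Then the dashboard is displayed

  @P2 @regression @negative
  Scenario: Login rejected with an incorrect password
    When the user enters a valid email and an incorrect password
    And the user submits the login form
    Then an "Invalid credentials" error message is displayed
```

Note how each scenario stands alone: the shared precondition lives in the Background, and each scenario carries its own tags.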
## Step 4 — Streaming Delivery
Test Morph does not wait until everything is ready before responding. Feature files are streamed back as they are generated, so you start receiving results almost immediately.
You receive the following sequence of real-time events:
| Event | What it means |
|---|---|
| Task received | Your request has been accepted |
| Analyzing video | Gemini is processing the video |
| Generating feature files | Results are being assembled |
| Artifact per feature file | Each .feature file delivered one at a time |
| summary.json artifact | Final metadata report |
| Completed | All results have been delivered |
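A minimal sketch of consuming that event sequence on the client side. The wire format is not specified in this document, so the event names, the newline-delimited JSON framing, and the field names below are all assumptions made for illustration:

```python
import json

# Hypothetical stream: assumes newline-delimited JSON events with a "type"
# field, plus "name"/"content" fields on artifact events. Event names are
# guessed snake_case versions of the table above.
raw_stream = [
    '{"type": "task_received"}',
    '{"type": "analyzing_video"}',
    '{"type": "generating_feature_files"}',
    '{"type": "artifact", "name": "authentication.feature", "content": "Feature: Authentication"}',
    '{"type": "artifact", "name": "summary.json", "content": "{}"}',
    '{"type": "completed"}',
]

def collect_artifacts(lines):
    """Save each artifact as it arrives instead of waiting for completion."""
    artifacts = {}
    for line in lines:
        event = json.loads(line)
        if event["type"] == "artifact":
            artifacts[event["name"]] = event["content"]
        elif event["type"] == "completed":
            break
    return artifacts

files = collect_artifacts(raw_stream)
print(sorted(files))  # → ['authentication.feature', 'summary.json']
```

Handling each artifact event as it arrives is what lets you open the first .feature file while later ones are still being generated.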
## Step 5 — Output
You receive one .feature file per business domain, each containing fully tagged, production-ready Gherkin scenarios.
You also receive a summary report that includes:
- Total number of feature files generated
- Total number of scenarios across all files
- The application name as detected from the video
- A list of all user flows that were identified
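A summary report covering those four items might look like the fragment below. The exact schema of summary.json is not documented here, so every field name and value is a hypothetical placeholder:

```json
{
  "feature_file_count": 3,
  "total_scenarios": 24,
  "application_name": "Acme Shop",
  "identified_flows": ["login", "checkout", "user management"]
}
```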
## What the AI Uses to Guide Generation
Test Morph's AI is guided by an expert system prompt written from the perspective of a senior QA engineer with 15+ years of experience. The prompt instructs the AI to:
- Write steps in business language — no technical selectors or IDs
- Keep each step describing one behavior only
- Ensure each scenario is fully independent
- Tag every scenario consistently with priority, type, and nature
- Keep scenarios focused: 3–8 steps per scenario
- Use Scenario Outline over duplicated scenarios wherever possible
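For instance, rather than writing a near-identical scenario for every invalid input, the prompt steers the AI toward a single data-driven outline. A hypothetical example, with invented fields and error messages:

```gherkin
@P2 @regression @negative @data-driven
Scenario Outline: Login rejected for invalid input
  When the user enters "<email>" and "<password>"
  And the user submits the login form
  Then the error message "<error>" is displayed

  Examples:
    | email            | password  | error                |
    |                  | secret123 | Email is required    |
    | user@example.com |           | Password is required |
    | not-an-email     | secret123 | Invalid email format |
```

One outline with an Examples table replaces three copy-pasted scenarios while keeping each row independently reportable.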
## Tagging Convention
Every generated scenario is automatically tagged. Tags communicate at a glance what the scenario covers and how important it is.
| Tag | Meaning |
|---|---|
| @P1 | Critical — must pass for a release |
| @P2 | Important — regression suite |
| @P3 | Nice to have |
| @smoke | Core happy-path checks |
| @regression | Full regression suite |
| @positive | Testing the expected success path |
| @negative | Testing invalid inputs or error states |
| @boundary | Testing limits and edge cases |
| @data-driven | Parameterized test with Examples table |
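Tags are combined on a single scenario: one priority tag plus suite and nature tags. An invented example (the 500-character limit is purely illustrative):

```gherkin
@P3 @regression @boundary
Scenario: Comment accepts the maximum allowed length
  When the user enters a comment of exactly 500 characters
  And the user saves the comment
  Then the comment is stored successfully
```

Consistent tagging lets you run targeted subsets, such as only @P1 @smoke scenarios before a release.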