
The Problem

You need to generate the same type of video repeatedly with different data — weekly reports, personalized onboarding videos, per-customer dashboards, changelog announcements. Doing this manually doesn’t scale.

How It Works

Data event → Template composition + injected data → Hyperframes render → Distribute
Hyperframes compositions are just HTML files. You can template them, inject data, and render programmatically — no browser, no human, no AI agent in the loop.

Build It

Step 1: Create a template composition

Build one great composition with your AI agent, then extract the variable parts:
<!-- Template: weekly-report/index.html -->
<div id="root" data-composition-id="main" data-start="0"
     data-duration="15" data-width="1920" data-height="1080">

  <div id="metric-1" class="clip" data-start="1" data-duration="4"
       data-track-index="2">
    <span class="value">{{ACTIVE_USERS}}</span>
    <span class="label">active users this week</span>
  </div>

  <div id="metric-2" class="clip" data-start="5" data-duration="4"
       data-track-index="2">
    <span class="value">{{REVENUE}}</span>
    <span class="label">revenue</span>
  </div>

  <!-- ... more metrics ... -->
</div>
Step 2: Build the generation script

import subprocess
import shutil
from pathlib import Path

def generate_report_video(data: dict, output_path: str):
    """Generate a weekly report video from data."""

    # Copy template
    work_dir = Path(f"/tmp/report-{data['week']}")
    shutil.copytree("templates/weekly-report", work_dir, dirs_exist_ok=True)

    # Inject data into template
    html = (work_dir / "index.html").read_text()
    html = html.replace("{{ACTIVE_USERS}}", f"{data['active_users']:,}")
    html = html.replace("{{REVENUE}}", f"${data['revenue']:,.0f}")
    html = html.replace("{{GROWTH}}", f"{data['growth_pct']:.1f}%")
    (work_dir / "index.html").write_text(html)

    # Render
    subprocess.run([
        "npx", "hyperframes", "render",
        "--output", output_path,
        "--quality", "standard",
        "--fps", "30",
    ], cwd=str(work_dir), check=True)

    # Cleanup
    shutil.rmtree(work_dir)
    return output_path
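For example, a data dict shaped like the `replace` calls above (the values here are illustrative) produces these formatted strings:

```python
data = {
    "week": "2024-W23",          # used for the working-directory name
    "active_users": 12840,
    "revenue": 58300.0,
    "growth_pct": 4.2,
}

# The same format specs the script applies to each placeholder:
formatted = {
    "{{ACTIVE_USERS}}": f"{data['active_users']:,}",   # "12,840"
    "{{REVENUE}}": f"${data['revenue']:,.0f}",         # "$58,300"
    "{{GROWTH}}": f"{data['growth_pct']:.1f}%",        # "4.2%"
}

# generate_report_video(data, "renders/report-2024-W23.mp4")
```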
Step 3: Trigger from your pipeline

GitHub Actions:
# .github/workflows/weekly-report.yml
name: Weekly Report Video
on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9am

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
      - run: sudo apt-get install -y ffmpeg
      - run: python scripts/generate_report.py
      - uses: actions/upload-artifact@v4
        with:
          name: weekly-report
          path: renders/*.mp4
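The workflow's `scripts/generate_report.py` entry point isn't shown above; a minimal sketch, with the metrics query stubbed out and assuming the `generate_report_video` function defined above is importable:

```python
# scripts/generate_report.py (sketch: fetch_weekly_metrics is a stub —
# replace it with your real database query)
from datetime import date

def fetch_weekly_metrics() -> dict:
    """Placeholder query: return the fields the template expects."""
    return {
        "week": date.today().strftime("%G-W%V"),  # ISO year-week, e.g. "2024-W23"
        "active_users": 12840,
        "revenue": 58300.0,
        "growth_pct": 4.2,
    }

def main():
    data = fetch_weekly_metrics()
    # generate_report_video is the function defined above; import it
    # from wherever you keep it
    generate_report_video(data, f"renders/report-{data['week']}.mp4")

if __name__ == "__main__":
    main()
```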
Webhook-triggered:
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook/new-signup", methods=["POST"])
def on_new_signup():
    user = request.json
    generate_welcome_video(
        name=user["name"],
        company=user["company"],
        output=f"renders/welcome-{user['id']}.mp4"
    )
    # Upload to CDN, send via email, etc.
    return {"status": "ok"}

Pipeline Patterns

| Trigger | Data source | Output | Example |
| --- | --- | --- | --- |
| Cron schedule | Database query | Weekly/monthly report video | Monday metrics recap |
| Webhook | Event payload | Per-user personalized video | Welcome onboarding |
| Git push | Changelog / commit log | Release announcement | "What's new in v2.4" |
| API call | Request parameters | On-demand custom video | Customer dashboard export |
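The Git-push pattern follows the same shape as the report script: collect commit subjects since the last release tag and inject them into a composition. A sketch, assuming a `{{CHANGELOG}}` placeholder (not part of the template shown earlier):

```python
import subprocess

def commit_subjects(since_tag: str) -> list[str]:
    """Commit subject lines since the given release tag, newest first."""
    out = subprocess.run(
        ["git", "log", f"{since_tag}..HEAD", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line.strip()]

def changelog_html(subjects: list[str], limit: int = 5) -> str:
    """Render the top subjects as list items for a {{CHANGELOG}} placeholder."""
    return "\n".join(f"<li>{s}</li>" for s in subjects[:limit])
```

From there the flow is identical to the weekly report: `html.replace("{{CHANGELOG}}", changelog_html(commit_subjects("v2.3")))`, then render.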

Combine with Video Agent

For the best of both worlds — motion graphics + avatar narration:
import os

import requests

HEYGEN_API_KEY = os.environ["HEYGEN_API_KEY"]

def generate_narrated_report(data):
    # Step 1: Render the motion graphics with Hyperframes
    graphics_path = generate_report_video(data, "renders/graphics.mp4")

    # Step 2: Generate avatar narration with Video Agent
    narration = requests.post(
        "https://api.heygen.com/v3/video-agents",
        headers={"X-Api-Key": HEYGEN_API_KEY},
        json={
            "prompt": f"""Narrate this weekly report: {data['active_users']:,} active
            users (up {data['growth_pct']:.0f}%), ${data['revenue']:,.0f} revenue.
            Keep it under 15 seconds, upbeat and concise.""",
        },
    ).json()

    # Step 3: Composite in Hyperframes (avatar PiP over graphics)
    # ... or use ffmpeg to overlay
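For the ffmpeg route, a sketch of the overlay step (the 480-pixel picture-in-picture width and 40-pixel margins are arbitrary choices, not values from this guide):

```python
import subprocess

def build_overlay_cmd(graphics: str, avatar: str, output: str) -> list[str]:
    """ffmpeg command: scale the avatar clip and pin it bottom-right over the graphics."""
    return [
        "ffmpeg", "-y",
        "-i", graphics,   # base layer: the Hyperframes render
        "-i", avatar,     # overlay: the Video Agent narration clip
        "-filter_complex",
        "[1:v]scale=480:-1[pip];[0:v][pip]overlay=W-w-40:H-h-40[vout]",
        "-map", "[vout]",
        "-map", "1:a?",   # keep the narration audio if the clip has any
        "-c:a", "copy",
        output,
    ]

# subprocess.run(build_overlay_cmd("renders/graphics.mp4",
#                                  "renders/avatar.mp4",
#                                  "renders/final.mp4"), check=True)
```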
Start simple — get one template working end-to-end, then add automation. A working pipeline that generates one video type reliably is more valuable than a complex system that handles everything.

Next Steps

Data Visualization

Build the animated visualizations that feed into your pipeline.

Docs to Video

Similar automation pattern using Video Agent for avatar-based content.