You need to generate the same type of video repeatedly with different data — weekly reports, personalized onboarding videos, per-customer dashboards, changelog announcements. Doing this manually doesn’t scale.
Data event → Template composition + injected data → Hyperframes render → Distribute
Hyperframes compositions are just HTML files. You can template them, inject data, and render programmatically — no browser, no human, no AI agent in the loop.
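Since a composition is plain HTML, data injection can be as simple as string templating. A minimal sketch using Python's stdlib `string.Template` — the placeholder names and markup here are illustrative, not part of the Hyperframes format:

```python
from string import Template

# Hypothetical composition snippet with $-style placeholders.
COMPOSITION = Template("""
<div class="report">
  <h1>Weekly Report</h1>
  <p class="metric">$active_users active users</p>
  <p class="metric">$growth_pct% growth</p>
</div>
""")

def build_composition(data: dict) -> str:
    """Fill the HTML template with this week's numbers."""
    return COMPOSITION.substitute(
        active_users=f"{data['active_users']:,}",
        growth_pct=f"{data['growth_pct']:.0f}",
    )

html = build_composition({"active_users": 12500, "growth_pct": 8.0})
```

The resulting HTML string is what you would hand off to the render step; a real pipeline would load the template from a file rather than inline it.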
For the best of both worlds — motion graphics + avatar narration:
```python
import requests

def generate_narrated_report(data):
    # Step 1: Render the motion graphics with Hyperframes
    graphics_path = generate_report_video(data, "renders/graphics.mp4")

    # Step 2: Generate avatar narration with Video Agent
    narration = requests.post(
        "https://api.heygen.com/v3/video-agents",
        headers={"X-Api-Key": HEYGEN_API_KEY},
        json={
            "prompt": f"""Narrate this weekly report: \
{data['active_users']:,} active users (up {data['growth_pct']:.0f}%), \
${data['revenue']:,.0f} revenue. Keep it under 15 seconds, upbeat and concise.""",
        },
    ).json()

    # Step 3: Composite in Hyperframes (avatar PiP over graphics)
    # ... or use ffmpeg to overlay
```
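The ffmpeg fallback mentioned in step 3 is a standard picture-in-picture overlay. A sketch that builds the command with the `scale` and `overlay` filters — the paths, the 320px avatar width, and the 20px corner margin are assumptions, and ffmpeg must be installed to actually run it:

```python
import subprocess

def overlay_avatar(graphics_path: str, avatar_path: str, out_path: str) -> list:
    """Build an ffmpeg command that scales the avatar down and pins it
    to the bottom-right corner of the graphics track, keeping the
    avatar's narration audio."""
    return [
        "ffmpeg", "-y",
        "-i", graphics_path,   # base layer: Hyperframes render
        "-i", avatar_path,     # overlay: avatar narration video
        "-filter_complex",
        "[1:v]scale=320:-1[pip];[0:v][pip]overlay=W-w-20:H-h-20[v]",
        "-map", "[v]",         # composited video
        "-map", "1:a?",        # narration audio, if present
        "-c:a", "aac",
        out_path,
    ]

# To execute (requires ffmpeg on PATH):
# subprocess.run(overlay_avatar("renders/graphics.mp4",
#                               "renders/avatar.mp4",
#                               "renders/final.mp4"), check=True)
```

Compositing inside Hyperframes keeps everything in one render pass; the ffmpeg route is useful when the two videos are produced by separate services, as here.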
Start simple — get one template working end-to-end, then add automation. A working pipeline that generates one video type reliably is more valuable than a complex system that handles everything.