
Remotion Video Generator

A web application that enables users to generate videos through natural language descriptions. Users enter text prompts, and the system generates Remotion code via the OpenAI API to create custom videos.

Features

  • 🤖 AI-Powered Video Generation: Advanced AI generates professional Remotion code from text descriptions
  • 🎬 Real-time Rendering: Watch your video come to life with real-time progress updates
  • 💬 Chat History: Track and revisit your previous video generation requests
  • 🎨 No Coding Required: Create complex animations without writing a single line of code
  • 📱 Responsive Design: Beautiful UI that works on desktop and tablet devices
  • 📡 Server-Sent Events: Real-time progress updates during video generation

Tech Stack

  • Frontend: Next.js 14+ (App Router)
  • Language: TypeScript
  • Styling: Tailwind CSS
  • UI Components: shadcn/ui
  • Video Framework: Remotion
  • AI Integration: OpenAI API
  • Deployment: Vercel (recommended)

Prerequisites

  • Node.js 18+
  • NPM or Yarn package manager
  • OpenAI API key
  • Remotion license (for commercial use)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd vibecodeexample
  2. Install dependencies

    npm install
  3. Set up environment variables

     Create a .env.local file in the root directory:

    # OpenAI API Configuration
    OPENAI_API_KEY=your_openai_api_key_here
    
    # Application Configuration
    NEXT_PUBLIC_APP_URL=http://localhost:3000
    
    # Remotion Configuration (for commercial use)
    REMOTION_LICENSE_KEY=your_remotion_license_key_here
  4. Start the development server

    npm run dev
  5. Open your browser and navigate to http://localhost:3000

Usage

1. Landing Page

  • Visit the homepage to see the video generation interface
  • Enter a detailed description of the video you want to create
  • Click "Generate Video" to start the process

2. Video Generation

  • The system will generate Remotion code using OpenAI
  • Watch real-time progress updates during code generation and rendering
  • Once complete, your video will be displayed in the video player

3. Chat History

  • View all your previous video generation requests in the sidebar
  • Click on any history item to view the associated video
  • Use the "New Video" button to create another video

Project Structure

src/
├── app/                    # Next.js App Router pages
│   ├── api/               # API routes
│   │   ├── generate-video/ # Video generation endpoint
│   │   ├── videos/        # Video retrieval endpoint
│   │   └── history/       # Chat history endpoint
│   ├── generate/[id]/     # Video generation page
│   └── page.tsx           # Landing page
├── components/            # React components
│   ├── ui/               # shadcn/ui components
│   ├── chat-input.tsx    # Chat input component
│   ├── chat-history.tsx  # Chat history sidebar
│   ├── progress-indicator.tsx # Progress indicator
│   └── video-player.tsx  # Video player component
├── lib/                  # Utility functions
├── remotion/             # Remotion video components
│   ├── MyVideo.tsx       # Main video component
│   └── Root.tsx          # Remotion root composition
└── types/                # TypeScript type definitions

API Endpoints

POST /api/generate-video

Generates a video from a text prompt using OpenAI and Remotion.

Request Body:

{
  "prompt": "string - User's video description",
  "sessionId": "string - Optional session identifier"
}

Response: Server-Sent Events (SSE) stream with real-time updates
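Because EventSource only supports GET requests, a client must read the POST response stream manually. The following sketch is a hypothetical client for this endpoint; the event shape ({ stage, progress }) and the parseSseChunk helper are assumptions for illustration, not taken from the repository.

```typescript
// Parse the `data:` lines of an SSE chunk into JSON payloads.
// (Assumes each event payload is a single JSON object per `data:` line.)
export function parseSseChunk(chunk: string): unknown[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice("data:".length).trim()));
}

// Hypothetical usage: POST the prompt, then read progress events
// from the response body as they arrive.
export async function generateVideo(prompt: string): Promise<void> {
  const res = await fetch("/api/generate-video", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const event of parseSseChunk(decoder.decode(value))) {
      console.log("progress event:", event);
    }
  }
}
```

In practice a chunk boundary can split an event in two, so a production client should buffer partial lines between reads; this sketch omits that for brevity.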

GET /api/videos/[id]

Retrieves a video file by ID.

GET /api/history

Returns a list of previous generation requests.

POST /api/history

Creates or updates a history item.
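The README does not document the history payload, so the following type is a guess at what a history item might look like, based only on the endpoints above; the field names are assumptions, not the repository's actual types in src/types/.

```typescript
// Hypothetical shape of a history item exchanged with /api/history.
// All field names here are assumptions for illustration.
export interface HistoryItem {
  id: string;
  prompt: string;
  videoUrl?: string; // set once rendering completes
  status: "pending" | "rendering" | "done" | "failed";
  createdAt: string; // ISO 8601 timestamp
}

// Example body for POST /api/history:
export const example: HistoryItem = {
  id: "abc123",
  prompt: "A bouncing ball intro",
  status: "pending",
  createdAt: new Date().toISOString(),
};
```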

Development

Running in Development Mode

npm run dev

Building for Production

npm run build

Starting Production Server

npm start

Running Tests

npm test

Deployment

Vercel (Recommended)

  1. Push your code to GitHub
  2. Connect your repository to Vercel
  3. Set environment variables in Vercel dashboard
  4. Deploy automatically on push

Environment Variables for Production

  • OPENAI_API_KEY: Your OpenAI API key
  • NEXT_PUBLIC_APP_URL: Your production URL
  • REMOTION_LICENSE_KEY: Your Remotion license (for commercial use)

Customization

Adding New Video Components

  1. Create new components in src/remotion/
  2. Update the OpenAI system prompt in src/app/api/generate-video/route.ts
  3. Add new compositions to src/remotion/Root.tsx
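Step 3 above can be sketched as follows. This is a minimal registration fragment, assuming the standard Remotion Root pattern; the dimensions, fps, and duration shown are placeholders, not values from the actual src/remotion/Root.tsx.

```typescript
// src/remotion/Root.tsx (sketch) — register a composition so Remotion
// can render it. Values below are placeholder assumptions.
import React from "react";
import { Composition } from "remotion";
import { MyVideo } from "./MyVideo";

export const RemotionRoot: React.FC = () => (
  <>
    <Composition
      id="MyVideo"              // referenced by the render pipeline
      component={MyVideo}
      durationInFrames={150}    // 5 seconds at 30 fps
      fps={30}
      width={1920}
      height={1080}
    />
  </>
);
```

Each new component added in step 1 gets its own Composition entry here, with a unique id the generation endpoint can target.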

Styling

  • Modify Tailwind classes in components
  • Update theme in tailwind.config.js
  • Customize shadcn/ui components in src/components/ui/

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For support, email support@example.com or create an issue in the repository.
