If you are already running AI models on another platform or on your own infrastructure, fal is designed to make migration straightforward. You can bring your existing Docker containers, HTTP servers, or platform-specific code and deploy them on fal's GPU infrastructure with minimal changes. The guides in this section provide step-by-step walkthroughs for common migration paths.

If you are starting a new project rather than migrating, you can skip this section entirely and go to Defining Your Environment to set up your app from scratch.

The fastest path is Migrate a Docker Server, which lets you deploy any existing HTTP server with `@fal.function` and `exposed_port` with no changes to your application code. If you are coming from a specific platform, the Replicate, Modal, and RunPod guides provide side-by-side code comparisons and cover how to map your existing configuration (GPU types, scaling, secrets, storage) to fal equivalents.
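To make the Docker-server path concrete, the pattern looks roughly like the sketch below: a fal function that starts your unmodified HTTP server, with `exposed_port` forwarding traffic to it. This is a hedged illustration, not the exact API: the `machine_type` argument name, the `"GPU"` value, and the `my_app.server` module are all assumptions standing in for your own configuration; consult the Migrate a Docker Server guide for the current decorator signature.

```python
# Hypothetical sketch of wrapping an existing HTTP server with fal.
# `machine_type` and `my_app.server` are placeholder assumptions, not
# verified fal API details; `@fal.function` and `exposed_port` come
# from the fal docs.
import fal


@fal.function(
    machine_type="GPU",  # assumed name for the hardware-selection argument
    exposed_port=8080,   # route incoming requests to the server on this port
)
def run_server():
    import subprocess

    # Launch the existing server exactly as it runs today -- no changes
    # to the application code, only this thin deployment wrapper.
    subprocess.run(["python", "-m", "my_app.server", "--port", "8080"])
```

The key point is that your application code stays untouched: migration consists of wrapping the server's start command in a decorated function and telling fal which port to expose.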