Every model on fal has a Playground where you can try it with real inputs, see outputs instantly, and copy working code in Python, JavaScript, or cURL. When you deploy your own app, it gets a Playground too, so your teammates and users can test it the same way.
[Image: Playground for Nano Banana 2 on fal.ai]
The Playground is the fastest way to validate a model before writing any integration code. Once you have a result you like, copy the generated code into your project and you are ready to go. If you want to compare multiple models side by side, use the Sandbox instead. For programmatic access, see Client Setup.

Try It

The best way to understand the Playground is to open one. Pick a model and start generating.

What the Playground Shows

Each model page on fal.ai (for example, Nano Banana 2) is organized into tabs. The Playground tab lets you fill in inputs and run the model directly in your browser. The API tab shows the full input and output schemas with type information, so you know exactly what fields are available and what the response looks like. The page also displays pricing, average latency, and ready-to-copy code examples.

Testing a Model

1. Find a model: Browse the model gallery or search for a specific model. Click on it to open its page.

2. Fill in the inputs: The Playground form is auto-generated from the model’s input schema. Required fields are marked, and optional fields have sensible defaults. For models that accept images, video, or audio, you can upload files directly.

3. Run the model: Click Run to submit your request. The result appears below the form, typically within a few seconds.

4. Iterate: Adjust your inputs and run again. Each result stays visible so you can compare outputs across runs.
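The iterate step can also be scripted once you leave the browser. A minimal sketch: `build_runs` is a hypothetical helper (not part of the fal client), and the model id and field names mirror this page’s Nano Banana 2 example.

```python
def build_runs(prompt, aspect_ratios):
    """Produce one argument dict per variation, mirroring the Playground form fields."""
    return [{"prompt": prompt, "aspect_ratio": ar} for ar in aspect_ratios]

runs = build_runs("a futuristic cityscape at sunset", ["16:9", "1:1"])

# Each dict can then be submitted exactly as the copied Playground code does:
#   import fal_client
#   for args in runs:
#       result = fal_client.subscribe("fal-ai/nano-banana-2", arguments=args)
```

This keeps the comparison loop from the Playground, just driven from a script instead of the form.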

Copying Code

Every Playground result includes generated code that reproduces the exact request you just ran. Click the code tab to see examples in Python, JavaScript, and cURL, then copy them directly into your project.
import fal_client

result = fal_client.subscribe("fal-ai/nano-banana-2", arguments={
    "prompt": "a futuristic cityscape at sunset",
    "aspect_ratio": "16:9"
})
print(result["images"][0]["url"])
The generated code includes all the parameters you configured in the form, so there is no gap between what you tested and what you ship.
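The `result["images"][0]["url"]` access in the example implies a response shaped as a dict with an `images` list of `{"url": ...}` entries. A small helper, assuming only that shape (check the API tab’s output schema for your model, since shapes vary):

```python
def image_urls(result):
    """Collect every image URL from a result with an "images" list of {"url": ...} entries."""
    return [img["url"] for img in result.get("images", [])]

# Hypothetical sample matching the shape used in the example above:
sample = {"images": [{"url": "https://fal.media/example.png"}]}
assert image_urls(sample) == ["https://fal.media/example.png"]
```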

Your Own Apps

When you run fal deploy, the output includes a Playground URL for your app. This means anyone with access can test your endpoints through the same interface that powers the model gallery.
fal deploy my_app.py::MyApp
# Output includes:
#   Playground: https://fal.ai/models/your-username/my-app
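If you want to surface that link from your own tooling, a tiny hypothetical helper that just mirrors the URL pattern shown in the deploy output:

```python
def playground_url(username, app_name):
    # Mirrors the "Playground: https://fal.ai/models/<username>/<app>" line
    # printed by fal deploy; the pattern is taken from the output above.
    return f"https://fal.ai/models/{username}/{app_name}"
```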
To control how your app’s inputs render in the Playground (image uploaders, hidden fields, field ordering), see Handle Inputs and Outputs. For example, naming a field with an image_url suffix renders it as an image upload widget, and wrapping a field in Hidden() keeps it accessible via API but hides it from the Playground form.

Playground vs Sandbox

The Playground and the Sandbox serve different purposes. The Playground is for testing a single model with specific inputs and copying code. The Sandbox is for comparing multiple models at once, with features like model sets, cost estimates, search across past generations, and shareable links.
|                 | Playground                 | Sandbox                       |
|-----------------|----------------------------|-------------------------------|
| Purpose         | Test one model, copy code  | Compare multiple models       |
| Where           | Each model’s page          | fal.ai/sandbox                |
| Your own apps   | Yes, after fal deploy      | Yes, can be added manually    |
| Code generation | Python, JS, cURL           | Not available                 |
| Sharing         | Not available              | Shareable links with previews |

More Models to Try