Inference Providers documentation
Image to Image
Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain.
Example applications:
- Transferring the style of an image to another image
- Colorizing a black and white image
- Increasing the resolution of an image
For more details about the image-to-image task, check out its dedicated page! You will find examples and related materials.
Recommended models
- black-forest-labs/FLUX.1-Kontext-dev: Powerful image editing model.
- kontext-community/relighting-kontext-dev-lora-v3: Image re-lighting model.
Explore all available models and find the one that suits you best here.
Using the API
```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key=os.environ["HF_TOKEN"],
)

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.2-dev",
)
```

API specification
Request
| Headers | | |
|---|---|---|
| authorization | string | Authentication header in the form 'Bearer: hf_****' where hf_**** is a personal user access token with "Inference Providers" permission. You can generate one from your settings page. |

| Payload | | |
|---|---|---|
| inputs* | string | The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload. |
| parameters | object | |
| prompt | string | The text prompt to guide the image generation. |
| guidance_scale | number | For diffusion models. A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. |
| negative_prompt | string | One prompt to guide what NOT to include in image generation. |
| num_inference_steps | integer | For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference. |
| target_size | object | The size in pixels of the output image. This parameter is only supported by some providers and for specific models. It will be ignored when unsupported. |
| width* | integer | |
| height* | integer | |
Response
| Body | ||
|---|---|---|
| image | unknown | The output image returned as raw bytes in the payload. |
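If you call a provider's HTTP endpoint directly rather than through the client, the request body follows the Payload table above: the image goes in `inputs` as a base64-encoded string, and optional settings go under `parameters`. The sketch below only assembles that JSON body; the helper name and the placeholder image bytes are illustrative, and the endpoint URL, routing, and response handling (raw image bytes, per the Response table) depend on the provider.

```python
import base64
import json


def build_image_to_image_payload(image_bytes, prompt,
                                 guidance_scale=None, num_inference_steps=None):
    """Assemble the JSON request body described in the Payload table."""
    # "inputs" carries the source image as a base64-encoded string.
    parameters = {"prompt": prompt}
    if guidance_scale is not None:
        parameters["guidance_scale"] = guidance_scale
    if num_inference_steps is not None:
        parameters["num_inference_steps"] = num_inference_steps
    return json.dumps({
        "inputs": base64.b64encode(image_bytes).decode("utf-8"),
        "parameters": parameters,
    })


# Placeholder bytes stand in for the contents of a real image file.
body = build_image_to_image_payload(b"\x89PNG...", prompt="Turn the cat into a tiger.")
```

The resulting string can be sent as the POST body with the `authorization` header from the Headers table; a successful response contains the generated image as raw bytes, which can be written straight to a file.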