withoutBG Python SDK#
AI-powered background removal with local and cloud options
Open Source (Local)
Run entirely on your machine. Free and private.
- No API key required
- Unlimited processing
- Runs offline
- Uses "focus" models
Cloud API (Pro)
Best quality for hair, fur, and complex edges.
- Managed infrastructure
- Highest precision
- Hardware accelerated
- Pay-per-image
Installation#
```bash
# Using uv (recommended)
uv add withoutbg

# Or with pip
pip install withoutbg
```

Quick Start#
Choose Your Model:
- See Open Source Results → (local processing)
- See Pro API Results → (cloud API)
- Compare Open Source vs Pro →
Local Processing (Open Source, Free)#
```python
from withoutbg import WithoutBG

# Initialize model once
model = WithoutBG.opensource()

# Process image
result = model.remove_background("input.jpg")
result.save("output.png")
```

Cloud Processing (withoutBG Pro)#
```python
from withoutbg import WithoutBG

# Initialize API client
model = WithoutBG.api(api_key="sk_your_key")

# Process image
result = model.remove_background("input.jpg")
result.save("output.png")
```

CLI#
```bash
# Local processing
withoutbg image.jpg

# Cloud processing
withoutbg image.jpg --api-key sk_your_api_key
```

API Reference#
When using the cloud API, the SDK calls these endpoints:
- Background Removal: Binary and Base64 variants
- Alpha Matte Extraction: Binary and Base64 variants for advanced compositing
- Credits Management: GET /available-credit for usage tracking
For detailed endpoint specifications, error codes, rate limits, and advanced options, see the API Documentation.
Python API#
Single Image Processing#
```python
from withoutbg import WithoutBG

# Initialize model once
model = WithoutBG.opensource()

# Process image
result = model.remove_background("photo.jpg")
result.save("photo-withoutbg.png")

# Process with progress callback
def progress(value):
    print(f"Progress: {value * 100:.1f}%")

result = model.remove_background("photo.jpg", progress_callback=progress)
```

Returns: `PIL.Image.Image` in RGBA (transparent background)
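The callback receives a float from 0.0 to 1.0, so for longer jobs you might render it as a text bar instead of printing a new line per update. A minimal sketch (pure Python, independent of the SDK; `format_progress` is a hypothetical helper, not part of the library):

```python
def format_progress(value, width=20):
    """Format a 0.0-1.0 progress value as a fixed-width text bar."""
    filled = int(value * width)
    bar = "#" * filled + "-" * (width - filled)
    return f"[{bar}] {value * 100:5.1f}%"

def progress(value):
    # Carriage return redraws the bar on a single terminal line.
    print("\r" + format_progress(value), end="", flush=True)

# Passed exactly like the plain callback above:
# result = model.remove_background("photo.jpg", progress_callback=progress)
```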
Batch Processing#
```python
from withoutbg import WithoutBG

# Initialize model once (efficient!)
model = WithoutBG.opensource()

# Process multiple images - model is reused for all images
images = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
results = model.remove_background_batch(images, output_dir="results/")

# Or process without saving
results = model.remove_background_batch(images)
for i, result in enumerate(results):
    result.save(f"output_{i}.png")
```

Using withoutBG Pro#
```python
from withoutbg import WithoutBG

# Initialize API client
model = WithoutBG.api(api_key="sk_your_key")

# Process images
result = model.remove_background("input.jpg")

# Batch processing with withoutBG Pro
results = model.remove_background_batch(
    ["img1.jpg", "img2.jpg", "img3.jpg"],
    output_dir="api_results/"
)
```

Advanced: Direct Model Access#
```python
from withoutbg import OpenSourceModel, ProAPI

# For advanced users who need direct control
opensource_model = OpenSourceModel()
result = opensource_model.remove_background("input.jpg")

# Or with custom model paths
# Models can be downloaded from: https://huggingface.co/withoutbg/focus
model = OpenSourceModel(
    depth_model_path="/path/to/depth.onnx",
    isnet_model_path="/path/to/isnet.onnx",
    matting_model_path="/path/to/matting.onnx",
    refiner_model_path="/path/to/refiner.onnx",
)

# Direct withoutBG Pro API access
api = ProAPI(api_key="sk_your_key")
result = api.remove_background("input.jpg")
usage = api.get_usage()
```

Configuration#
API Key (Cloud)#
```bash
export WITHOUTBG_API_KEY="sk_your_api_key"
```

Or pass the key per call: `WithoutBG.api(api_key="sk_your_api_key")`.
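If you wrap the SDK in your own tooling, you may want the same resolution order in your code. A sketch (the env-var name comes from the docs above; the exact precedence the SDK applies is an assumption, and `resolve_api_key` is a hypothetical helper):

```python
import os

def resolve_api_key(explicit_key=None):
    """Prefer an explicitly passed key; fall back to the environment variable."""
    key = explicit_key or os.environ.get("WITHOUTBG_API_KEY")
    if key is None:
        raise RuntimeError("No API key: pass api_key=... or set WITHOUTBG_API_KEY")
    return key
```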
Model Path Environment Variables#
By default, models are downloaded from HuggingFace Hub. You can override this by setting environment variables to use local model files:
```bash
export WITHOUTBG_DEPTH_MODEL_PATH=/path/to/depth_anything_v2_vits_slim.onnx
export WITHOUTBG_ISNET_MODEL_PATH=/path/to/isnet.onnx
export WITHOUTBG_MATTING_MODEL_PATH=/path/to/focus_matting_1.0.0.onnx
export WITHOUTBG_REFINER_MODEL_PATH=/path/to/focus_refiner_1.0.0.onnx
```

Model Files (total ~320MB):
- ISNet segmentation: 177 MB
- Depth Anything V2: 99 MB
- Matting model: 27 MB
- Refiner model: 15 MB
This is useful for:
- Offline environments
- CI/CD pipelines
- Custom model versions
- Faster startup times (no download needed)
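In offline or CI environments it can help to verify the overrides before a run starts. A small stdlib-only check (variable names from the docs above; `check_model_overrides` is a hypothetical helper, not part of the SDK):

```python
import os

MODEL_ENV_VARS = [
    "WITHOUTBG_DEPTH_MODEL_PATH",
    "WITHOUTBG_ISNET_MODEL_PATH",
    "WITHOUTBG_MATTING_MODEL_PATH",
    "WITHOUTBG_REFINER_MODEL_PATH",
]

def check_model_overrides():
    """Report, for each model env var, whether it is set and the file exists."""
    report = {}
    for var in MODEL_ENV_VARS:
        path = os.environ.get(var)
        if path is None:
            report[var] = "unset (will download from HuggingFace Hub)"
        elif os.path.isfile(path):
            report[var] = f"ok: {path}"
        else:
            report[var] = f"MISSING FILE: {path}"
    return report
```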
Input/Output#
- Inputs: JPG, PNG, or WebP (typical).
- Outputs: Prefer PNG or WebP to retain transparency.
- Compositing: Use the alpha channel as a mask.
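Using the alpha channel as a mask performs standard alpha-over blending. As a per-pixel sketch of that math (pure Python, no PIL; `alpha_over` is illustrative, not an SDK function):

```python
def alpha_over(fg_pixel, bg_pixel):
    """Composite one RGBA foreground pixel over an opaque RGB background pixel.

    Channels are 0-255 ints; the foreground alpha weights the blend.
    """
    r, g, b, a = fg_pixel
    alpha = a / 255.0
    return tuple(
        round(fc * alpha + bc * (1.0 - alpha))
        for fc, bc in zip((r, g, b), bg_pixel)
    )
```

A fully opaque foreground pixel (alpha 255) replaces the background; alpha 0 leaves the background untouched.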
```python
from PIL import Image
from withoutbg import WithoutBG

model = WithoutBG.api(api_key="sk_your_api_key")  # or WithoutBG.opensource()

fg = model.remove_background("subject.jpg")
bg = Image.open("background.jpg")
bg.paste(fg, (0, 0), fg)  # alpha used as mask
bg.save("composite.png")
```

When to Use Which Mode?#
| Mode | Quality | Cost | Runs | Best for |
|---|---|---|---|---|
| Local (Open Source) | Excellent | Free | Your machine | Scripts, privacy, offline usage |
| Cloud API (Pro) | Highest | Pay-per-use | Managed cloud | Hair/fur, thin edges, consistency |
CLI Reference#
```bash
# Single image
withoutbg photo.jpg
withoutbg photo.jpg --output result.png
withoutbg photo.jpg --format webp --quality 90

# Cloud API
export WITHOUTBG_API_KEY="sk_your_api_key"
withoutbg photo.jpg --api-key sk_your_api_key

# Batch
withoutbg photos/ --batch --output-dir results/
```

Performance#
Local Model:
- First run: ~5-10 seconds (~320MB download from HuggingFace)
- CPU: ~2-5 seconds per image
- Memory: ~2GB RAM
withoutBG Pro:
- ~1-3 seconds per image (network dependent)
- No local resources needed
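The per-image figures translate directly into batch wall-clock estimates. A quick back-of-the-envelope using the local CPU numbers above (real timings vary by hardware; `batch_minutes` is just illustrative arithmetic):

```python
def batch_minutes(n_images, seconds_per_image):
    """Rough wall-clock estimate for a sequential batch, in minutes."""
    return n_images * seconds_per_image / 60

# 1,000 images locally at 2-5 s/image:
low, high = batch_minutes(1000, 2), batch_minutes(1000, 5)
print(f"~{low:.0f}-{high:.0f} minutes")
```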
Usage Analytics (Cloud)#
```python
from withoutbg import ProAPI

api = ProAPI(api_key="sk_your_api_key")
usage = api.get_usage()  # Calls GET /available-credit
print(f"Credits: {usage['credit']}, Expires: {usage['expiration_date']}")
```

For detailed credit management, see: Credits API Documentation
Advanced: Direct API Access#
The SDK uses the cloud API under the hood. You may want direct access for advanced use cases such as:
- Custom request/response handling
- Different programming languages
- Serverless environments
- Direct endpoint integration
See the API Documentation for:
- Background Removal (Binary) - Direct PNG output for server workflows
- Background Removal (Base64) - JSON responses for browser/serverless apps
- Alpha Matte Binary - Separate alpha masks for advanced compositing
- Alpha Matte Base64 - Alpha matte in base64 encoding
The SDK is a convenience wrapper; all functionality is available through the REST API.
Models (Hugging Face)#
The open-source model is available on HuggingFace: withoutbg/focus
The model uses a 4-stage pipeline (Depth → ISNet → Matting → Refiner) delivering high-quality edge and detail fidelity for local processing.
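The SDK wires these stages together internally; purely as an illustration of the staged chaining (the stage functions below are placeholders, not the real ONNX models):

```python
def run_pipeline(image, stages):
    """Feed the output of each stage into the next (Depth → ISNet → Matting → Refiner)."""
    for stage in stages:
        image = stage(image)
    return image

# Placeholder stages standing in for the real models:
stages = [
    lambda img: img + ["depth"],      # Depth Anything V2
    lambda img: img + ["segmented"],  # ISNet
    lambda img: img + ["matted"],     # Matting model
    lambda img: img + ["refined"],    # Refiner
]
```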
Documentation#
- Python SDK Documentation - Complete online documentation
- Open Source Model Results - See example outputs
- Pro API Results - See example outputs
- Compare Models - Open Source vs Pro comparison
FAQ#
Is the local model really free?
Yes. The open-source model runs entirely on your hardware. There are no API calls, no costs, and no data leaves your machine.
Why use the Cloud API instead?
The Cloud API uses our Pro model, which is significantly larger and more accurate than the local model, especially for difficult edges like hair or transparent objects. It also offloads processing from your CPU/GPU.
Does it support batch processing?
Yes. The SDK includes optimized batch processing helpers that load the model once and process multiple images efficiently.
Support#
- Issues: GitHub Issues