# Ikomia Documentation

> This guide will help you discover Ikomia SCALE. You will learn how to deploy your first computer vision workflow to the cloud.

- [Get started with SCALE](https://docs.ikomia.ai/index.md)

## scale

- [Credits usage](https://docs.ikomia.ai/scale/billing/consumption.md): Depending on your computer vision workflows, the amount of computing power you need will vary.
- [Plans and pricing](https://docs.ikomia.ai/scale/billing/plans.md): Users and organizations can subscribe to paid plans to access all available compute infrastructure with increased usage and quotas.
- [Quotas](https://docs.ikomia.ai/scale/billing/quotas.md): All users and organizations on the platform are subject to quotas depending on their plan.
- [CLI](https://docs.ikomia.ai/scale/cli.md): We provide multiple ways to interact with the SCALE platform: via the web interface, Ikomia STUDIO, or from your terminal using our command-line interface (CLI).
- [Roles and permissions](https://docs.ikomia.ai/scale/collaboration/roles-and-permissions.md): The roles system in the platform is designed to give you fine-grained control over what your organization's team members can do.
- [Workspaces and organizations](https://docs.ikomia.ai/scale/collaboration/workspaces-and-organizations.md): Projects and algorithms are stored in workspaces. All users have a personal workspace. They can also create and join organizations to share projects and algorithms with others.
- [Core concepts](https://docs.ikomia.ai/scale/concepts.md): We use specific terms to describe the different components of Ikomia SCALE. Here are the main concepts it may be helpful to know.
- [Available infrastructures](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md): This page lists the available compute infrastructures for your deployments.
- [Manage deployments](https://docs.ikomia.ai/scale/deployment/managing-deployments.md): Ikomia SCALE allows you to deploy your workflows on different cloud providers and regions.
- [Monitoring deployments](https://docs.ikomia.ai/scale/deployment/monitoring.md): Sometimes, you may need to inspect the logs of a running deployment to understand what's happening.
- [Test your deployment](https://docs.ikomia.ai/scale/deployment/test-interface.md): The SCALE platform provides a convenient way to test deployments through the Test Interface.
- [How to deploy FLUX.1](https://docs.ikomia.ai/scale/example.md): In this guide, we will create a simple FLUX image generation workflow using Ikomia API.
- [Advanced Usage](https://docs.ikomia.ai/scale/integration/javascript/advanced-usage.md): Running a specific task.
- [Fullstack integration](https://docs.ikomia.ai/scale/integration/javascript/fullstack-integration.md): If you are developing an interactive application that relies on results from a SCALE deployment, you will probably need to write a proxy API over your deployment.
- [Getting Started with JS/TS](https://docs.ikomia.ai/scale/integration/javascript/getting-started.md): Learn how to integrate Ikomia SCALE deployments in your JavaScript application.
- [Working with storage](https://docs.ikomia.ai/scale/integration/javascript/storage.md): Every Ikomia SCALE project gets its own storage space that can be used to store inputs and outputs of your deployments.
- [Advanced Usage](https://docs.ikomia.ai/scale/integration/python/advanced-usage.md): Running a specific task.
- [Getting Started with Python](https://docs.ikomia.ai/scale/integration/python/getting-started.md): Learn how to integrate Ikomia SCALE deployments in your Python application.
- [Working with storage](https://docs.ikomia.ai/scale/integration/python/storage.md): Every Ikomia SCALE project gets its own storage space that can be used to store inputs and outputs of your deployments.
- [REST API](https://docs.ikomia.ai/scale/integration/rest.md): Learn how to integrate Ikomia SCALE deployments with the REST API.
- [Project](https://docs.ikomia.ai/scale/project.md): Projects are the main container for your work on Ikomia SCALE. They contain workflows and can be shared with others.
- [Find algorithms on Ikomia HUB](https://docs.ikomia.ai/scale/workflow/algorithms/finding-algorithms.md): Ikomia HUB provides a large collection of algorithms for different computer vision tasks.
- [Store and share algorithms](https://docs.ikomia.ai/scale/workflow/algorithms/storing-algorithms.md): If you created your own algorithm, you can push it to your private HUB to share it with your team.
- [Workflows with Ikomia API](https://docs.ikomia.ai/scale/workflow/ikomia-api.md): You can create a workflow using our Python API.
- [Workflows with Ikomia STUDIO](https://docs.ikomia.ai/scale/workflow/ikomia-studio.md): You can create a workflow using Ikomia STUDIO, our no-code editor.

---

# Full Documentation Content

# Credits usage

Depending on your computer vision workflows, the amount of computing power you need will vary. As a general rule, the more computing power your deployments require, the more credits they consume. To help you choose the right plan, the credits usage for each compute infrastructure is shown below.

## Serverless deployments

For serverless deployments (CPU only), you are charged based on the workflow execution time in seconds. You are only charged when you call the deployment endpoint. See [infrastructure specifications](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md#serverless-deployments).

| Size   | vCPU | RAM  | Credits/s |
| ------ | ---- | ---- | --------- |
| **S**  | 3    | 4GB  | 0.829     |
| **M**  | 4    | 6GB  | 0.836     |
| **L**  | 5    | 8GB  | 0.842     |
| **XL** | 6    | 10GB | 0.849     |

**Optimizing serverless costs**

Due to their pricing model, choosing the cheapest serverless deployment (lowest compute power) is not always the best financial choice.
As an example of credits usage for a concrete workflow, we propose a simulation with a classical OCR workflow:

* basic pre-processing algorithms (noise reduction and luminosity correction)
* text detection with the MMLAB framework
* text recognition with the MMLAB framework

This table shows the number of images that can be processed with this workflow on the **monthly version** of the plans:

| Size   | Execution time (s) | Credits/image | Starter plan (images/month) | Basic plan (images/month) | Pro plan (images/month) |
| ------ | ------------------ | ------------- | --------------------------- | ------------------------- | ----------------------- |
| **S**  | 16                 | 13.26         | 151                         | 754                       | 6031                    |
| **M**  | 12                 | 10.03         | 199                         | 997                       | 7974                    |
| **L**  | 9.6                | 8.08          | 247                         | 1237                      | 9897                    |
| **XL** | 8                  | 6.79          | 294                         | 1472                      | 11779                   |

As you can see, in this case, **choosing a more powerful deployment is sufficiently beneficial in terms of execution time to offset the higher costs associated with it**. To optimize costs, it is crucial to find the right trade-off between compute power and execution time; this trade-off depends on the specific implementation of your workflow algorithms.
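The figures in this table follow directly from the rates above: credits per image = Credits/s × execution time, and plan capacity = monthly credits ÷ credits per image, using the monthly allowances from the [plans and pricing](https://docs.ikomia.ai/scale/billing/plans.md) page (2000 for Starter, 10000 for Basic, 80000 for Pro). Here is a minimal sketch to reproduce them, or to plug in your own measured execution times:

```
# Reproduce the serverless simulation table above.
# Rates are the Credits/s values from the serverless table;
# monthly allowances come from the "Plans and pricing" page.
RATES = {"S": 0.829, "M": 0.836, "L": 0.842, "XL": 0.849}  # credits/s
PLANS = {"Starter": 2000, "Basic": 10000, "Pro": 80000}    # credits/month

# Measured execution times (seconds per image) for the OCR workflow
EXEC_TIME = {"S": 16, "M": 12, "L": 9.6, "XL": 8}

for size, rate in RATES.items():
    credits_per_image = rate * EXEC_TIME[size]
    capacity = {plan: round(credits / credits_per_image)
                for plan, credits in PLANS.items()}
    print(f"{size}: {credits_per_image:.2f} credits/image -> {capacity}")
```

The same arithmetic applies to the instance tables below: dividing a plan's monthly credits by an instance's Credits/s rate gives the maximum lifetime of one deployment in seconds (e.g. 2000 / 0.00477 ≈ 419,000 s ≈ 116 h for an XS AWS CPU instance on the Starter plan).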
## CPU instances

For CPU instance deployments, you are charged based on the time the instance is running (in seconds). See [infrastructure specifications](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md#instance-cpu-deployments).

This table shows the lifetime of **one deployment** for the **monthly version** of the plans:

| Size   | Provider | vCPU | RAM   | Credits/s | Starter plan     | Basic plan         | Pro plan            |
| ------ | -------- | ---- | ----- | --------- | ---------------- | ------------------ | ------------------- |
| **XS** | AWS      | 2    | 8GB   | 0.00477   | ~116h (4.9 days) | ~582h (24.3 days)  | ~4659h (194.1 days) |
| **XS** | GCP      | 2    | 8GB   | 0.00497   | ~112h (4.7 days) | ~559h (23.3 days)  | ~4471h (186.3 days) |
| **XS** | Scaleway | 2    | 8GB   | 0.00259   | ~215h (8.9 days) | ~1073h (44.7 days) | ~8580h (357.5 days) |
| **S**  | AWS      | 4    | 8GB   | 0.00802   | ~69h (2.9 days)  | ~346h (14.4 days)  | ~2771h (115.5 days) |
| **S**  | GCP      | 4    | 16GB  | 0.00904   | ~61h (2.6 days)  | ~307h (12.8 days)  | ~2458h (102.4 days) |
| **S**  | Scaleway | 4    | 16GB  | 0.00489   | ~114h (4.7 days) | ~568h (23.7 days)  | ~4544h (189.4 days) |
| **M**  | AWS      | 8    | 16GB  | 0.01532   | ~36h (1.5 days)  | ~181h (7.6 days)   | ~1451h (60.4 days)  |
| **M**  | GCP      | 8    | 32GB  | 0.01718   | ~32h (1.3 days)  | ~162h (6.7 days)   | ~1293h (53.9 days)  |
| **M**  | Scaleway | 8    | 32GB  | 0.00943   | ~59h (2.5 days)  | ~295h (12.3 days)  | ~2357h (98.2 days)  |
| **L**  | AWS      | 16   | 32GB  | 0.02991   | ~19h (0.8 days)  | ~93h (3.9 days)    | ~743h (31 days)     |
| **L**  | GCP      | 16   | 64GB  | 0.03345   | ~17h (0.7 days)  | ~83h (3.5 days)    | ~664h (27.7 days)   |
| **L**  | Scaleway | 16   | 64GB  | 0.01855   | ~30h (1.2 days)  | ~150h (6.2 days)   | ~1198h (49.9 days)  |
| **XL** | AWS      | 32   | 64GB  | 0.05908   | ~9h (0.4 days)   | ~47h (2 days)      | ~376h (15.7 days)   |
| **XL** | GCP      | 32   | 128GB | 0.06599   | ~8h (0.4 days)   | ~42h (1.8 days)    | ~337h (14 days)     |
| **XL** | Scaleway | 32   | 128GB | 0.03684   | ~15h (0.6 days)  | ~75h (3.1 days)    | ~603h (25.1 days)   |

## GPU instances

For GPU instance deployments, you are charged based on the time the instance is running (in seconds). See [infrastructure specifications](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md#instance-gpu-deployments).

This table shows the lifetime of **one deployment** for the **monthly version** of the plans:

| Size   | Provider | vCPU | RAM   | GPU                 | Credits/s | Starter plan    | Basic plan       | Pro plan           |
| ------ | -------- | ---- | ----- | ------------------- | --------- | --------------- | ---------------- | ------------------ |
| **XS** | AWS      | 4    | 16GB  | NVIDIA T4 16GB      | 0.02132   | ~26h (1.1 days) | ~130h (5.4 days) | ~1042h (43.4 days) |
| **XS** | GCP      | 4    | 16GB  | NVIDIA L4 24GB      | 0.02687   | ~21h (0.9 days) | ~103h (4.3 days) | ~827h (34.5 days)  |
| **XS** | Scaleway | 8    | 16GB  | NVIDIA RTX 3070 8GB | 0.03634   | ~15h (0.6 days) | ~76h (3.2 days)  | ~612h (25.5 days)  |
| **S**  | AWS      | 8    | 32GB  | NVIDIA T4 16GB      | 0.03263   | ~17h (0.7 days) | ~85h (3.5 days)  | ~681h (28.4 days)  |
| **S**  | GCP      | 8    | 32GB  | NVIDIA L4 24GB      | 0.03495   | ~16h (0.7 days) | ~79h (3.3 days)  | ~636h (26.5 days)  |
| **S**  | Scaleway | 8    | 48GB  | NVIDIA L4 24GB      | 0.03235   | ~17h (0.7 days) | ~86h (3.6 days)  | ~687h (28.6 days)  |
| **M**  | AWS      | 4    | 16GB  | NVIDIA A10 24GB     | 0.03826   | ~15h (0.6 days) | ~73h (3 days)    | ~581h (24.2 days)  |
| **M**  | GCP      | 16   | 64GB  | NVIDIA L4 24GB      | 0.04662   | ~12h (0.5 days) | ~60h (2.5 days)  | ~477h (19.9 days)  |
| **M**  | Scaleway | 16   | 96GB  | 2x NVIDIA L4 24GB   | 0.0636    | ~9h (0.4 days)  | ~44h (1.8 days)  | ~349h (14.6 days)  |
| **L**  | AWS      | 8    | 32GB  | NVIDIA A10 24GB     | 0.04975   | ~11h (0.5 days) | ~56h (2.3 days)  | ~447h (18.6 days)  |
| **L**  | GCP      | 12   | 85GB  | NVIDIA A100 40GB    | 0.12586   | ~4h (0.2 days)  | ~22h (0.9 days)  | ~177h (7.4 days)   |
| **L**  | Scaleway | 8    | 96GB  | NVIDIA L40S 48GB    | 0.05151   | ~11h (0.4 days) | ~54h (2.2 days)  | ~431h (18 days)    |
| **XL** | AWS      | 16   | 64GB  | NVIDIA A10 24GB     | 0.06636   | ~8h (0.3 days)  | ~42h (1.7 days)  | ~335h (14 days)    |
| **XL** | GCP      | 12   | 170GB | NVIDIA A100 80GB    | 0.17426   | ~3h (0.1 days)  | ~16h (0.7 days)  | ~128h (5.3 days)   |
| **XL** | Scaleway | 24   | 240GB | NVIDIA H100 80GB    | 0.11485   | ~5h (0.2 days)  | ~24h (1 day)     | ~193h (8.1 days)   |

## What happens when credits run out

When your credits run out, you can no longer deploy new workflows. You have to wait for your next renewal date, which depends on whether you are on a monthly or yearly plan. You can also upgrade your plan to get more credits: you will receive your new credits immediately after subscription validation (see [plans and pricing](https://docs.ikomia.ai/scale/billing/plans.md)).

For active deployments, the behaviour depends on the compute infrastructure:

* **Serverless**: Deployments are preserved, but you cannot send requests to them.
* **CPU/GPU instances**: Deployments (and the infrastructure behind them) are deleted and the endpoint URL becomes invalid. Workflows are preserved so that you can deploy them again once your account is credited.

Important

We send notification emails to individual users or organization owners when credits are low. You should then make the right decision for your active deployments.

---

# Plans and pricing

Users and organizations can subscribe to paid plans to access all available compute infrastructure with increased usage and quotas.
* Serverless CPU (AWS)
* CPU instances (AWS, GCP)
* GPU instances (AWS, GCP)

## Free plan

After signing up on Ikomia SCALE, you are automatically subscribed to the **Free plan**. This plan gives you **500 credits to spend within the month**. While we try to maintain a consistent set of features across all plans, available compute infrastructures and providers may change frequently on the free tier.

info

The free plan is only available for your personal workspace. For deployments in an organization, subscription to a paid plan is mandatory.

### Legacy free plan

If you signed up for Ikomia SCALE before November 29, 2024, you were automatically enrolled in our legacy free plan. This plan provided a one-time allocation of 2000 free hits for Serverless CPU deployments, with no expiration date. If you're still on this legacy plan, you can continue using your remaining free hits until they're exhausted, or you can migrate to the current **Free plan** at any time.

## Subscribe to a paid plan

![Billing page](/assets/images/billing_page-eb190d1c837aa1533a762dde627d056b.png)

Choose prepaid plans

All paid plans open access to all compute infrastructures and providers. You can subscribe for your personal account or for an organization (or both), but organizations allow you to take advantage of the [collaboration features](https://docs.ikomia.ai/scale/collaboration/workspaces-and-organizations.md).

Subscription steps:

1. Click the **Upgrade plan** button or go to user or organization ***Settings***
2. Select the ***Billing*** tab
3. Choose your preferred plan and click the **Subscribe** button
4. Follow the instructions through to payment

Subscribing to a plan gives you or your organization **credits**, the resource consumption unit in Ikomia SCALE. Every active deployment consumes credits; the amount and billing modality depend on the compute infrastructure behind your deployment (see the [credits usage chapter](https://docs.ikomia.ai/scale/billing/consumption.md) for more information).

## Pricing

We offer prepaid subscriptions with different credit levels to suit your needs and control your budget.

* Starter

| | Monthly | Yearly |
| --------------------------- | -------- | ------------------ |
| **Price** | 19.9€ | 199.9€ (save ~16%) |
| **Credits** | 2000 | 24000 |
| **Credits validity period** | 6 months | 24 months |

* Basic

| | Monthly | Yearly |
| --------------------------- | -------- | ------------------ |
| **Price** | 99.9€ | 999.9€ (save ~17%) |
| **Credits** | 10000 | 120000 |
| **Credits validity period** | 6 months | 24 months |

* Pro

| | Monthly | Yearly |
| --------------------------- | -------- | ------------------- |
| **Price** | 799.9€ | 7999.9€ (save ~17%) |
| **Credits** | 80000 | 960000 |
| **Credits validity period** | 6 months | 24 months |

info

The credits validity period starts from the date the credits are granted. For example, if you subscribe to the Starter monthly plan on the 1st of January, you will receive your first 2000 credits. These credits are valid for 6 months, so they will be removed from your account on the 1st of July. On the first renewal date, the 1st of February, you will receive a further 2000 credits, which are added to your unused valid credits.
These will be valid until the 1st of August, and so on.

## Change your plan

You can upgrade or downgrade your plan at any time by clicking the **Upgrade** button or the **Manage subscription** button in your personal or organization **settings**. As these are prepaid plans, there is no refund policy. After changing your plan, new credits are added to your account and your previous credits remain valid for the rest of their validity period. The previous plan is cancelled and replaced by the new one, so you will be charged for the new plan on your next billing date.

## Cancel subscription

There is no commitment for Ikomia SCALE subscriptions. You can cancel your subscription at any time by clicking the **Manage subscription** button in your personal or organization **settings**. Even if you cancel your subscription, prepaid credits remain valid and you can use them to deploy workflows.

info

A cancelled subscription can be reactivated by clicking the **Manage subscription** button in personal or organization settings. If you want to subscribe to another plan, you must reactivate your subscription first and then change your plan.

## Custom plans

**Contact us** if you have any questions about prepaid plans or if they do not suit your needs. We will be happy to discuss your project requirements.

---

# Quotas

All users and organizations on the platform are subject to quotas depending on their plan. These quotas limit:

* the **number of projects** that can be created
* the **number of algorithms** that can be uploaded
* the **number of workflows** that can be pushed
* the **number of deployments** that can be created
* the **available storage** for each project

Other features are also limited on a per-user or per-organization basis, such as:

* the **ability to [publish algorithms on Ikomia HUB](https://docs.ikomia.ai/scale/workflow/algorithms/storing-algorithms.md#publish-to-ikomia-hub)**
* the list of **[available infrastructures](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md)** you have access to for deploying workflows

## Check your quotas

Users can check the quotas of their personal workspace in their [billing settings](https://app.ikomia.ai/settings/billing/). Organization owners can also check their quotas in the ***Billing*** tab of their organization settings.

![Screenshot of organization's quotas](/assets/images/organization_quotas-dba9deee13720e9a7f5a366a4941735d.png)

Checking an organization's quotas

## Upgrade your quotas

When using SCALE, you may encounter limitations due to your quotas or lack of access to some features. In this case, you may need to reach out to us to upgrade your quotas. **Contact us** to discuss your needs and we will try to upgrade your plan to fit your requirements.

---

# CLI

We provide multiple ways to interact with the SCALE platform: via the web interface, Ikomia STUDIO, or from your terminal using our command-line interface (CLI). The CLI is a powerful tool that allows you to manage your projects, algorithms and deployments. It is also the easiest way to push your workflows to the SCALE platform.
## Installation

```
pip install "ikomia-cli[full]"
```

## Sign in to your Ikomia account

To access your projects and deployments, you need to sign in to your Ikomia account by running the following command:

```
ikcli login
```

You will be prompted to enter your username and password. Once you are logged in, the CLI will ask you to export your access token to your environment variables:

```
export IKOMIA_TOKEN=
```

warning

Currently, **the CLI only supports username and password authentication**. If you are using a third-party authentication provider, you can directly generate a token from the web interface and export it to your environment variables using the `IKOMIA_TOKEN` variable.

tip

By default, your access token is valid for 1 hour. However, **you can generate a new token with a longer expiration time** using the `--token-ttl` option:

```
ikcli login --token-ttl 86400  # duration in seconds, here 24 hours
```

## Manage your projects

* List your projects

```
ikcli project ls
```

* Create a new project

```
ikcli project add
```

* Delete a project

```
ikcli project delete
```

* Push a workflow to a project

```
ikcli project push
```

Workflow files are in JSON format and can be exported from the Python API or Ikomia STUDIO.

## Manage your workflows

* List all workflows of a project

```
ikcli project workflow ls
```

* Deploy a workflow
  * provider: AWS or GCP
  * region:
    * AWS: FRANCE, GERMANY or IRELAND
    * GCP: NETHERLANDS or US_CENTRAL
  * type: SERVERLESS, CLUSTER (CPU instances) or GPU (GPU instances)

```
ikcli project workflow deploy
```

warning

We recommend deploying workflows via the web interface to benefit from the latest features and improvements.

* Delete a workflow

```
ikcli project workflow delete
```

## Manage your deployments

* List all deployments of a workflow

```
ikcli project deployment ls
```

* Get deployment logs

```
ikcli project deployment logs
```

* Get deployment usage
  * period: 'day' (current day), 'yesterday', 'week' (current week), 'last_week', 'month' (current calendar month) or 'last_month'

```
ikcli project deployment usage --period
```

* Delete a deployment

```
ikcli project deployment delete
```

## Manage your algorithms (private HUB)

* List algorithms in your private HUB

```
ikcli algo ls
```

* Push an algorithm to your private HUB

Private algorithms belong to a workspace. It can be your personal workspace (same name as your user name) or an organization workspace (same name as your organization name). You can use the CLI to retrieve your available workspaces:

```
ikcli namespace ls
```

To be pushed, your algorithm must be valid, i.e. it loads successfully with the Python API or STUDIO. You can use the CLI to retrieve the list of valid algorithms:

```
ikcli algo local ls
```

You're then ready to push your algorithm:

```
ikcli algo add
```

* Update an algorithm

```
ikcli algo update
```

* Delete an algorithm

```
ikcli algo delete
```
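To see where the workflow JSON consumed by `ikcli project push` comes from, here is a minimal sketch using the Python API (the workflow name, algorithm and file name are placeholders; see the FLUX.1 guide below for a full walkthrough):

```
from ikomia.dataprocess.workflow import Workflow

# Build a minimal one-task workflow (any algorithm from Ikomia HUB)
workflow = Workflow("My Workflow")
workflow.add_task(name="infer_yolo_v8")

# Export it as the JSON file expected by `ikcli project push`
workflow.save("my_workflow.json")
```

You can then create a project and push the file, e.g. `ikcli project add YOUR_USERNAME MyProject` followed by `ikcli project push MyProject my_workflow.json`.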
---

# Roles and permissions

The roles system in the platform is designed to give you fine-grained control over what your organization's team members can do.

## Permissions

Each role has a set of permissions that define what it can do:

| Permission                             | Partner | Member | Manager | Owner |
| -------------------------------------- | ------- | ------ | ------- | ----- |
| Viewing projects and algorithms        | ❌      | ✅     | ✅      | ✅    |
| Using deployments and project storage  | ❌      | ✅     | ✅      | ✅    |
| Adding/Deleting/Modifying projects     | ❌      | ❌     | ✅      | ✅    |
| Pushing/Deleting workflows             | ❌      | ❌     | ✅      | ✅    |
| Creating/Managing deployments          | ❌      | ❌     | ✅      | ✅    |
| Managing members                       | ❌      | ❌     | ❌      | ✅    |

## Partner users

Partner users have no permissions in the organization. This role allows you to add external users to your organization without granting them any permissions on all of its projects and algorithms. You can then upgrade their role on a specific project or algorithm, limiting their access to only that project or algorithm.

## Cease organization ownership

An organization requires at least one owner. If you want to give up your ownership, you first need to assign ownership to another member and downgrade your own role to member or manager. The new owner can then remove you from the organization.

---

# Workspaces and organizations

Projects and algorithms are stored in **workspaces**. All users have a personal workspace. They can also create and join organizations to share projects and algorithms with others.

## Organizations

An organization is a group of users sharing a common workspace. Organizations have a profile page, where you can add a description and a website URL, and add members. They also have their own plan and quotas.

![A screenshot of an organization.](/assets/images/organization-caec3f5f9f9486cb2d77fb85bb3f73e2.png)

An Ikomia SCALE organization

## Create an organization

To create an organization, click on your profile picture in the top right corner and select `Create new organization`. Once created, you will be redirected to the organization profile page. To access this page later, click on your profile picture and select your organization from the list.

## Add members

To add members to your organization, click on the `Add members` button on the organization profile page. You can then search for users by their username. By default, members have the **Partner** role, which grants them no rights in your organization, but you can [set specific rights for any member](https://docs.ikomia.ai/scale/collaboration/roles-and-permissions.md) on all projects and algorithms of the organization.

note

You can only add users that are already registered on Ikomia. First ask them to [create an account](https://app.ikomia.ai/signup/) if they don't have one yet.

---

# Core concepts

We use specific terms to describe the different components of Ikomia SCALE. Here are the main concepts it may be helpful to know:

* **Workflow**: a workflow is a graph structure composed of algorithms to build a computer vision solution. You create them using our **Python API** or our no-code software **Ikomia STUDIO**.
* **Algorithms**: algorithms are the building blocks of a workflow. They can be chained to achieve complex processing pipelines.
  Ready-to-use algorithms are available in [Ikomia HUB](https://app.ikomia.ai/hub/), and you can also [implement your own](https://ikomia-dev.github.io/python-api-documentation/integration/index.html).
* **Workspaces**: workspaces are designed to provide isolated environments for your work. There are 2 types of workspace: your personal workspace and organization workspaces. The latter are the way to go if you want to share projects across a team.
* **Projects**: on Ikomia SCALE, a project is a container for your workflows. It helps you organize your work. When you create an account, we provide you with a `Getting_started` project which contains some example workflows.
* **Deployment**: a deployment is a running instance of a workflow. It is hosted on the cloud provider of your choice. SCALE gives you access to different architectures (serverless, CPU or GPU instances) depending on your compute needs. Each deployment comes with a REST API for easy integration with your applications.

---

# Available infrastructures

This page lists the available compute infrastructures for your deployments.

Deployment environment

All deployments run on a **Linux-based environment**, with the versions of Python and Ikomia API defined by the workflow.

## Serverless deployments

These deployments run on serverless functions (CPU only). Serverless deployments come with auto-scaling capability, making them a good choice for applications that need to adapt to traffic loads. They are better suited for lightweight workflows, as they are generally limited in terms of compute power.

When you run these deployments, you are charged based on the workflow execution time. Idle time is free of charge. Such deployments may have cold starts, which means high latency for the first request after a period of inactivity.

| Size | vCPU | RAM  | Ephemeral storage | Available regions        |
| ---- | ---- | ---- | ----------------- | ------------------------ |
| AWS  |      |      |                   |                          |
| S    | 3    | 4GB  | 512MB             | France, Germany, Ireland |
| M    | 4    | 6GB  | 512MB             | France, Germany, Ireland |
| L    | 5    | 8GB  | 512MB             | France, Germany, Ireland |
| XL   | 6    | 10GB | 512MB             | France, Germany, Ireland |

Serverless deployments come with ephemeral storage, which is used to store images/videos during the workflow execution.

## Instance CPU deployments

These deployments run on dedicated CPU instances. Once they are running, they respond instantly and are generally more powerful than serverless deployments. When you run these deployments, you are charged based on the time the instance is running, regardless of the number of requests.
| Size     | vCPU | RAM   | Disk  | Available regions                    |
| -------- | ---- | ----- | ----- | ------------------------------------ |
| AWS      |      |       |       |                                      |
| XS       | 2    | 8GB   | 30GB  | France, Germany, Ireland             |
| S        | 4    | 8GB   | 30GB  | France, Germany, Ireland             |
| M        | 8    | 16GB  | 30GB  | France, Germany, Ireland             |
| L        | 16   | 32GB  | 30GB  | France, Germany, Ireland             |
| XL       | 32   | 64GB  | 30GB  | France, Germany, Ireland             |
| GCP      |      |       |       |                                      |
| XS       | 2    | 8GB   | 100GB | Netherlands, United States (Central) |
| S        | 4    | 16GB  | 100GB | Netherlands, United States (Central) |
| M        | 8    | 32GB  | 100GB | Netherlands, United States (Central) |
| L        | 16   | 64GB  | 100GB | Netherlands, United States (Central) |
| XL       | 32   | 128GB | 100GB | Netherlands, United States (Central) |
| Scaleway |      |       |       |                                      |
| XS       | 2    | 8GB   | 128GB | France                               |
| S        | 4    | 16GB  | 128GB | France                               |
| M        | 8    | 32GB  | 128GB | France                               |
| L        | 16   | 64GB  | 128GB | France                               |
| XL       | 32   | 128GB | 128GB | France                               |

## Instance GPU deployments

These deployments run on dedicated instances with GPU acceleration. They are generally the most powerful deployments and a good choice for workflows running deep learning models or other GPU-intensive tasks. When you run these deployments, you are charged based on the time the instance is running, regardless of the number of requests.

| Size     | vCPU | RAM   | Disk                    | GPU                 | Available regions                    |
| -------- | ---- | ----- | ----------------------- | ------------------- | ------------------------------------ |
| AWS      |      |       |                         |                     |                                      |
| XS       | 4    | 16GB  | 100GB                   | NVIDIA T4 16GB      | France, Germany, Ireland             |
| S        | 8    | 32GB  | 100GB                   | NVIDIA T4 16GB      | France, Germany, Ireland             |
| M        | 4    | 16GB  | 100GB                   | NVIDIA A10 24GB     | Germany, Ireland                     |
| L        | 8    | 32GB  | 100GB                   | NVIDIA A10 24GB     | Germany, Ireland                     |
| XL       | 16   | 64GB  | 100GB                   | NVIDIA A10 24GB     | Germany, Ireland                     |
| GCP      |      |       |                         |                     |                                      |
| XS       | 4    | 16GB  | 100GB                   | NVIDIA L4 24GB      | Netherlands, United States (Central) |
| S        | 8    | 32GB  | 100GB                   | NVIDIA L4 24GB      | Netherlands, United States (Central) |
| M        | 16   | 64GB  | 100GB                   | NVIDIA L4 24GB      | Netherlands, United States (Central) |
| L        | 12   | 85GB  | 100GB                   | NVIDIA A100 40GB    | Netherlands, United States (Central) |
| XL       | 12   | 170GB | 100GB                   | NVIDIA A100 80GB    | Netherlands, United States (Central) |
| Scaleway |      |       |                         |                     |                                      |
| XS       | 8    | 16GB  | 128GB                   | NVIDIA RTX 3070 8GB | France                               |
| S        | 8    | 48GB  | 128GB                   | NVIDIA L4 24GB      | France                               |
| M        | 16   | 96GB  | 128GB                   | 2x NVIDIA L4 24GB   | France                               |
| L        | 8    | 96GB  | 128GB + 1.6TB ephemeral | NVIDIA L40S 48GB    | France                               |
| XL       | 24   | 240GB | 128GB + 3TB ephemeral   | NVIDIA H100 80GB    | France                               |

High-end Scaleway GPU instances also come with large partitions of ephemeral storage, which is used to store images/videos during the workflow execution.

---

# Manage deployments

Ikomia SCALE allows you to deploy your workflows on different cloud providers and regions. We provide 3 main compute infrastructures:

* **Serverless**: CPU only, you are only charged for the execution time of your workflow.
* **CPU instances**: CPU-only dedicated instances, charged for the time the instance is in use (per second).
* **GPU instances**: dedicated instances with GPU acceleration, charged for the time the instance is in use (per second).

## Create a deployment

1. Open the workflow you want to deploy.
2. Select the [provider, deployment type, region and size](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md).
3. Click on **Deploy workflow**.

Your new deployment will then appear in the deployment list on the left side of the page.

![A screenshot of an interface allowing to pick the provider, deployment type, region and size.](/assets/images/deployment_ui-a89ed918588d5905b8b46d9621a3ac23.png)

The deployment interface

Before being ready to use, the deployment goes through a **Building** step, where we set up an environment for running your workflow, and a **Scaling** step, where we provision the infrastructure of your deployment. Depending on your workflow and the infrastructure you chose, each of these steps may take a few minutes, so please be patient.

![A screenshot of a running deployment.](/assets/images/running_deployment-508efa84f1354a45c700bcf5ab59e756.png)

A running deployment

## Update the deployment's workflow

Pushing a new version of a workflow will not automatically update its running deployments. Your deployment will continue to run the previous version of the workflow until you decide to update it. In this case, a message is displayed on the deployment's page to notify you that the running workflow is outdated. If you want to update it, just click on the dedicated button. **Your deployment will remain available while updating.**

## Upgrade/downgrade a deployment

You can change the compute infrastructure size of a deployment at any time on the deployment settings page. You can find a description of the available configurations [here](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md). As with updates, your deployment will remain available during the process.

note

While you can change the infrastructure size, you cannot change the infrastructure type, region or provider. For that, you'll have to create another deployment.

---

# Monitoring deployments

Sometimes, you may need to inspect the logs of a running deployment to understand what's happening. For example, if you're developing your own algorithms, you may want to see their logs for debugging purposes.

## Build logs

Build logs are generated during the deployment's building step. They contain information about the environment setup, dependency installation, and any errors that may occur during the build process.

warning

Build logs are stored for a limited duration and may disappear a few hours after the end of the build process. Make sure to save them if you need to keep them for future reference.

## Execution logs

These logs are added in real time when the deployment is being called. They indicate the execution time of each query and whether or not the query was successful. If you are experiencing issues with your deployment, these logs generally contain the exception stack trace for debugging.

We also provide a graph with aggregated usage statistics for the deployment over time.

![A screenshot of the usage section of a deployment](/assets/images/usage-38b31432e5ef1082a493a0b950831904.png)

Execution logs and usage of a deployment

---

# Test your deployment

The SCALE platform provides a convenient way to test deployments through the Test Interface.
![A screenshot of the Test Interface](/assets/images/test_interface-4a400bc8c25cff36e8b12becce0fb87c.png)

The Test Interface

## Open the Test Interface

From your workflow page, click on the **Test me** button on your deployment, or simply navigate to its endpoint's URL.

## Run your workflow

If your workflow requires input images, start by adding them in the **Inputs** panel using the upload button, through drag-and-drop, or by selecting one of our sample images. Once all inputs are set, the Test Interface will, by default, automatically run your workflow and display the outputs. Otherwise, you can manually run the workflow by clicking on the **Run workflow** button or by pressing `Shift + Enter`.

info

You can toggle the automatic run feature by clicking on the settings icon in the top right corner of the Test Interface.

## Modify algorithm parameters

You can tweak the parameters of each algorithm in your workflow and see how the outputs change:

* Open the **Parameters** panel
* Select the algorithm you want to modify by clicking on it in the workflow diagram or through the dropdown menu
* Edit the parameters
* Run the workflow again to see the new outputs

You can always reset the parameters to their default values by clicking on the reset icon next to modified fields.

## Change the output algorithm

By default, the Test Interface runs the entire workflow and displays the outputs of the first ending algorithm. However, you can also run the workflow partially and see the outputs of any intermediate algorithm. To do so, click on the dropdown menu next to the **Run workflow** button and select the algorithm you want to run. You can also run a specific algorithm from the workflow diagram in the **Parameters** panel.
![The menu allowing to select until which algorithm to run the workflow](/assets/images/partial_run-9e991cd9fd4d34ca49a906b6f38e0dd0.png)

Running the workflow partially

## Inspect outputs

If your workflow produces any of the supported standard outputs, the Test Interface will render an interactive viewer for you to explore the results. Supported outputs include:

* Images
* Semantic/Panoptic/Instance segmentation
* Optical character recognition outputs (text detection, text recognition, KIE)
* Point clouds
* Pose estimation
* Object detection

Other outputs can be accessed through the **JSON response** tab, which contains the full response provided by the API.

![A screenshot of the JSON response of an instance segmentation workflow](/assets/images/json_response-78ea1713a5cca2bbefd4f6e83c2954d6.png)

Inspecting the JSON response

---

# How to deploy FLUX.1

In this guide, we will create a simple **FLUX image generation workflow** using **Ikomia API** and deploy it to the cloud with **Ikomia SCALE** to integrate it into your application.

![An adventurer in the jungle with a tshirt that says 'Deploy FLUX.1'.](/assets/images/generated-b550ad149a7bbb6cc4f38861317bb1c0.jpg)

FLUX generated image

💫 **This tutorial is also available as a [Jupyter notebook](https://github.com/Ikomia-dev/notebooks/blob/main/examples/HOWTO_deploy_Ikomia_SCALE_FLUX1.ipynb)**.
## 1. Installation

First, ensure that you have installed the Ikomia Python API and CLI:

```
pip install ikomia ikomia-cli
```
## 2. Create a workflow with Ikomia API

To get started, let's create a workflow using the **[infer_flux_1](https://app.ikomia.ai/hub/algorithms/infer_flux_1/)** algorithm from Ikomia HUB:

```
from ikomia.dataprocess.workflow import Workflow

workflow = Workflow("FLUX Image Generation")

flux = workflow.add_task(name="infer_flux_1")
flux.set_parameters({
    "model_name": "flux1-schnell",
})

# Save workflow as a JSON file
workflow.save("flux_workflow.json")
```

**Running your workflow locally**

Ikomia API is **open-source** and also highly suitable for **self-hosted solutions** if you prefer to run workflows locally. Here's how you can modify the previous script to execute the workflow locally:

```
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

workflow = Workflow("FLUX Image Generation")

flux = workflow.add_task(name="infer_flux_1")

# Configure the algorithm
flux.set_parameters({
    "model_name": "flux1-schnell",
    "prompt": "An adventurer in the jungle with a tshirt that says 'Deploy FLUX.1'.",
})

# Run the workflow
workflow.run()

# Display the output image
display(flux.get_output(0).get_image())
```

warning

Please note that FLUX is a large model that requires at least 12GB of VRAM.

To create more advanced workflows, **check out the [Ikomia API documentation](https://ikomia-dev.github.io/python-api-documentation/index.html)**.
## 3. Deploy your workflow to Ikomia SCALE

If you haven't already, **[create an Ikomia account](https://app.ikomia.ai/signup)**.

### Create an API token

To authorize the CLI and your code to access your Ikomia SCALE account, **[create an API token](https://app.ikomia.ai/settings/tokens)** and set it as an environment variable:

```
export IKOMIA_TOKEN=PASTE_YOUR_TOKEN_HERE
```

### Push your workflow

Create a project and push your workflow to Ikomia SCALE:

```
ikcli project add YOUR_USERNAME FluxImageGeneration
ikcli project push FluxImageGeneration flux_workflow.json
```

You can now view and manage your project and workflow on the **[Ikomia SCALE dashboard](https://app.ikomia.ai/)**.

### Deploy

On the workflow page, select a **[deployment option](https://docs.ikomia.ai/scale/deployment/available-infrastructures.md)** and click on the **Deploy workflow** button.

![The deployment creation interface, with various deployment options](/assets/images/deployment_options-a15feaa61e69d801e9b3f90bcf2f1892.png)

Deploying our FLUX image generation workflow on a cloud GPU instance

Once your deployment is ready, you can **test it via our [online interface](https://docs.ikomia.ai/scale/deployment/test-interface.md)**.
## 4. Integrate your deployment in your application

We provide **[Python](https://docs.ikomia.ai/scale/integration/python/getting-started.md)** and **[JavaScript](https://docs.ikomia.ai/scale/integration/javascript/getting-started.md)** client libraries to help you integrate your deployment into your application.

info

We also provide a **[REST API](https://docs.ikomia.ai/scale/integration/rest.md)** for integration in any language/platform.

* Python

```
pip install ikomia-client
```

```
from ikclient.core.client import Client
from ikclient.core.io import ImageIO

# Initialize the client with your deployment URL
with Client(
    url="https://your.flux.deployment.url",
    token="your-api-token"  # Or set IKOMIA_TOKEN environment variable
) as flux_deployment:
    # Generate an image with FLUX
    results = flux_deployment.run(
        parameters={
            "prompt": "An adventurer in the jungle with a tshirt that says 'Deploy FLUX.1'.",
        }
    )

    # Get the generated image
    output_image = results.get_output(0, assert_type=ImageIO)

    # Convert to PIL image and save
    pil_image = output_image.to_pil()
    pil_image.save("generated_image.png")
```

* JavaScript/TypeScript

```
npm install @ikomia/ikclient
```

```
import {Buffer} from 'buffer';
import fs from 'fs';
import {Client, ImageIO} from '@ikomia/ikclient';

// Initialize the client with your deployment URL
const fluxDeployment = new Client({
  url: 'https://your.flux.deployment.url',
  token: 'your-api-token', // Or set IKOMIA_TOKEN environment variable
});

// Generate an image with FLUX
const results = await fluxDeployment.run({
  parameters: {
    prompt: "An adventurer in the jungle with a tshirt that says 'Deploy FLUX.1'.",
  },
});

// Get the generated image
const outputImage = results.getOutput(0, ImageIO);

// Convert to buffer and save
const imageBuffer = outputImage.toArrayBuffer();
fs.writeFileSync('generated_image.png', Buffer.from(imageBuffer));
```

Replace `https://your.flux.deployment.url` with the actual URL of your FLUX deployment.

---

# Advanced Usage

## Running a specific task

If you want to run the workflow to retrieve the output of another task, you can use `Client.runTask()`:

```
const results = await client.runTask('infer_yolo_v8', {
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
  ],
});
```

## Advanced configuration and output selection

If you need to select outputs from various tasks or set parameters for intermediate tasks, you can use a `Context` object for maximum flexibility:

```
const context = await client.buildContext();
```

A context holds specific configuration for your deployment, which you can reuse across multiple runs.
Using `context.addOutput()`, you can select specific outputs from any task in your workflow:

```
// Add ALL outputs from the "ocv_blur" task
context.addOutput('ocv_blur');

// Add output 1 from "infer_yolo_v8"
context.addOutput('infer_yolo_v8', {index: 1});

// Add ALL outputs from the "infer_yolo_v8_seg" task
// Save them to project's storage (will be returned as StorageObjectIO)
context.addOutput('infer_yolo_v8_seg', {saveTemporary: true});
```

You can also use `context.setParameters()` to edit the configuration of any task:

```
context.setParameters('infer_yolo_v8', {conf_thres: 0.5});
```

To run with a context, pass it to the `Client.runOn()` method:

```
const results = await client.runOn(context, {
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
  ],
});
```

## Chaining deployments

You can chain multiple deployments together by passing the outputs of one deployment as inputs to another:

```
const client1 = new Client({url: 'https://your.scale.endpoint.url'});
const client2 = new Client({url: 'https://your.other.scale.endpoint.url'});

const results1 = await client1.run({
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
  ],
});

const results2 = await client2.run({inputs: [results1]});
```

You can also forward selected outputs using `results.getOutput()`:

```
const results1 = await client1.run({
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
  ],
});

const results2 = await client2.run({
  inputs: [results1.getOutput(1), results1.getOutput(2)],
});
```

---

# Fullstack integration

If you are developing an interactive application that relies on results from a SCALE deployment, you will probably need to write a proxy API over your deployment to handle requests from your frontend without exposing your Ikomia API key. You may also want to implement some business logic and guardrails on your server, like authentication and rate limiting. While there is no universal solution for this, as it depends on your application's requirements, we provide simple utilities to help you achieve this in your full-stack JavaScript application.

## Example: Generating an image

Let's say you want to create a simple web application that allows users to generate images from text prompts:

* You implement an API endpoint that calls your SCALE deployment using the client library.
* Your front-end can use whatever technology you want to query that endpoint, such as standard HTTP fetch, tRPC, WebSockets, etc.
* You stream the deployment run status to the front-end using the `StreamingRun` utility so that you can provide a nice progress indicator to your users.
* When the deployment is complete, `StreamingRun` sends the results to the front-end.
* Your front-end can use the client library types and utilities to process the results.

## Implementing the server endpoint

Create a new API endpoint that runs your deployment wrapped in a `StreamingRun` instance.
```
import {StreamingRun} from '@ikomia/ikclient/streaming';

const streamingRun = new StreamingRun(onProgress =>
  client.run({parameters: {prompt}, onProgress})
);

const response = streamingRun.getResponse({
  keepAliveDelay: 30,
  headers: {'X-Custom-Header': 'value'},
});
```

The `StreamingRun` can produce a stream in the following formats, depending on your frameworks and requirements:

| Method                | Description                                                                                                 | `raise`           | `includeInputs`    | `keepAliveDelay`        | `headers`       |
| --------------------- | ----------------------------------------------------------------------------------------------------------- | ----------------- | ------------------ | ----------------------- | --------------- |
| `getAsyncGenerator()` | Returns an async generator that yields session states. Useful for tRPC or custom streaming implementations. | ✅ Default: `true` | ✅ Default: `false` | ❌                       | ❌               |
| `getReadableStream()` | Returns a ReadableStream that emits Server-Sent Events.                                                      | ✅ Default: `true` | ✅ Default: `false` | ✅ Default: `10` seconds | ❌               |
| `getResponse()`       | Returns a complete Fetch Response object with SSE stream and proper headers.                                 | ✅ Default: `true` | ✅ Default: `false` | ✅ Default: `10` seconds | ✅ Default: `{}` |

You can customize the behavior of the generated stream by passing an object to the method with the following properties:

* `raise`: Whether to raise an error if the run fails
* `includeInputs`: Whether the streamed result object should include inputs
* `keepAliveDelay`: The delay in seconds between keep-alive messages (SSE only)
* `headers`: Additional headers to include in the response (Response only)

- Express

You can use `StreamingRun.getReadableStream()` to stream the status and results as [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events):

```
import express from 'express';
import {Readable} from 'stream';
import {Client} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';

const app = express();
const client = new Client({
  url: 'https://your.scale.endpoint.url',
});

app.get('/api/generate-image', async (req, res) => {
  // Add your authentication/rate limiting logic here.
  const prompt = req.query.prompt;

  const streamingRun = new StreamingRun(onProgress =>
    client.run({parameters: {prompt}, onProgress})
  );

  // Set status and headers for server-sent events
  res.status(200);
  res.setHeader('Content-Type', 'text/event-stream');

  Readable.fromWeb(streamingRun.getReadableStream()).pipe(res);
});

app.listen(3000);
```

- Next.js

On Next.js, you'll need to implement a Route Handler to stream the status and results as [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events).

warning

If you are using the Next.js Pages Router for your project, **you'll also need to use the App Router for this endpoint**, as the Pages Router does not support streaming responses. Since Next.js 13+, you can use both routers in the same project.

app/api/generate-image/route.ts

```
import {Client} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';

const client = new Client({
  url: 'https://your.scale.endpoint.url',
});

export async function GET(request: Request) {
  // Add your authentication/rate limiting logic here.
  const url = new URL(request.url);
  const prompt = url.searchParams.get('prompt')!; // Get the prompt from the query string

  const streamingRun = new StreamingRun(onProgress =>
    client.run({parameters: {prompt}, onProgress})
  );

  return streamingRun.getResponse();
}
```

- Nuxt

On Nuxt, you'll need to implement an event handler to stream the status and results as [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events).

server/api/generate-image.ts

```
import {Client} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';

const client = new Client({
  url: 'https://your.scale.endpoint.url',
});

export default defineEventHandler(async event => {
  // Add your authentication/rate limiting logic here.
  const {prompt} = getQuery(event);
  if (!prompt) {
    throw createError({statusCode: 400});
  }

  const streamingRun = new StreamingRun(onProgress =>
    client.run({parameters: {prompt}, onProgress})
  );

  return streamingRun.getResponse();
});
```

- tRPC

tRPC allows you to stream async generators directly. Under the hood, it can use server-sent events or WebSockets depending on your configuration.

```
import {z} from 'zod';
import {publicProcedure, router} from './trpc.js';
import {Client} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';
import {createHTTPServer} from '@trpc/server/adapters/standalone';

const client = new Client({
  url: 'https://your.scale.endpoint.url',
});

const appRouter = router({
  generateImage: publicProcedure.input(z.string()).query(async function* ({
    input,
  }) {
    // Add your authentication/rate limiting logic here.
    const streamingRun = new StreamingRun(onProgress =>
      client.run({parameters: {prompt: input}, onProgress})
    );

    yield* streamingRun.getAsyncGenerator();
  }),
});

export type AppRouter = typeof appRouter;

const server = createHTTPServer({
  router: appRouter,
});

server.listen(3000);
```

## Reading the stream on the front-end

Your front-end application can then call your server endpoint and use the `StreamingRun` utility methods to deserialize the streamed response.
- Server-Sent Events

If you used server-sent events for streaming your results (as shown in the Express, Next.js and Nuxt examples), you can read the stream on the front-end like this:

```
import {ImageIO} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';

const results = await fetch(
  `/api/generate-image?prompt=${encodeURIComponent('A puppy that flies like a superhero')}`
).then(response =>
  StreamingRun.fromResponse(response, {
    // Track progress just like in Client.run
    onProgress: state => console.log(state),
  })
);

// You can then process the results like you would have done on the server side
const image = results.getOutput(0, ImageIO);
```

- tRPC

```
import type {AppRouter} from '../server/index.js';
import {ImageIO} from '@ikomia/ikclient';
import {StreamingRun} from '@ikomia/ikclient/streaming';
import {
  createTRPCClient,
  splitLink,
  unstable_httpBatchStreamLink,
  unstable_httpSubscriptionLink,
} from '@trpc/client';

// Initialize the tRPC client
const trpc = createTRPCClient<AppRouter>({
  links: [
    splitLink({
      condition: op => op.type === 'subscription',
      true: unstable_httpSubscriptionLink({
        url: 'http://localhost:3000',
      }),
      false: unstable_httpBatchStreamLink({
        url: 'http://localhost:3000',
      }),
    }),
  ],
});

const results = await trpc.generateImage
  .query('A puppy that flies like a superhero')
  .then(stream =>
    StreamingRun.fromAsyncGenerator(stream, {
      // Track progress just like in Client.run
      onProgress: state => console.log(state),
    })
  );

// You can then process the results like you would have done on the server side
const image = results.getOutput(0, ImageIO);
```

---

# Getting Started with JS/TS

## Installation

Use your preferred package manager to install the client library:

* npm

```
npm i @ikomia/ikclient
```

* pnpm

```
pnpm add @ikomia/ikclient
```

* Yarn

```
yarn add @ikomia/ikclient
```

* Bun

```
bun add @ikomia/ikclient
```

## Instantiating the client

```
import {Client} from '@ikomia/ikclient';

const fluxDeployment = new Client({
  url: 'https://your.scale.endpoint.url',
  token: 'your-api-token', // defaults to process.env.IKOMIA_TOKEN
});

const results = await fluxDeployment.run({
  parameters: {prompt: 'A cat sitting on a mat'},
});
```

warning

We strongly recommend against exposing your personal token in your code. You can set `IKOMIA_TOKEN` as an environment variable to avoid passing it explicitly.

## Running a deployment

For deployments of straightforward workflows, you can simply call `client.run()`, optionally passing inputs and parameters:

```
const results = await client.run({
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_starry_night.jpg',
  ],
  parameters: {model_name: 'yolov8m'},
});
```

The `Results` object contains the output of your deployment. The number of outputs, their types and values depend on the deployment you are running and the inputs and parameters you provided. By default, the results contain all outputs returned by the *first leaf task* in your workflow. In the common case where your workflow is a simple chain of algorithms (or just one), this corresponds to the outputs of the last algorithm in the chain.
## Tracking progress

You can track the progress of your deployment by adding an `onProgress` callback:

```
const results = await client.run({
  inputs: [
    'https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg',
  ],
  onProgress: progress => {
    // {"run_id": "v4d4mg96bu", "name": "Object Detection Workflow", "uuid": "c0315bef-3642-44ba-9e94-5749881fc297", "state": "PENDING", "eta": [1000, 2000]}
    // ...
    // {"run_id": "v4d4mg96bu", "name": "Object Detection Workflow", "uuid": "c0315bef-3642-44ba-9e94-5749881fc297", "state": "SUCCESS", "eta": [0, 0], "results": Results(...)}
    console.log(progress);
  },
});
```

This callback will be called repeatedly during the polling process of the deployment. It can be useful for logging or updating UIs while the deployment runs.

## Accessing the results

Once the deployment is complete, you can retrieve output data through the `results.getOutput()` method:

```
import {ImageIO} from '@ikomia/ikclient';

const firstOutput = results.getOutput(); // shortcut for results.getOutput(0)
const outputImage = results.getOutput(1, ImageIO);
```

You can pass an optional type argument to `results.getOutput()` to assert what type of output you expect to retrieve. We recommend doing so when you know the expected type of the output: during development, it will provide better type hints in your editor, and at runtime, it will throw an error if the output is not of the expected type.

We provide the following standard types:

* `ImageIO`: for image outputs
* `StorageObjectIO`: for outputs saved to your Project's storage

Other output types (e.g. object detection, segmentation, OCR, non-standard outputs, etc.) will be returned as the generic `TaskIO`.

### Image outputs

We provide some utility methods for working with image outputs:

```
const outputImage = results.getOutput(1, ImageIO);

outputImage.toDataURL(); // data:image/png;base64,...
outputImage.toBlob(); // Blob
outputImage.toArrayBuffer(); // ArrayBuffer
```

---

# Working with storage

Every Ikomia SCALE project gets its own storage space that can be used to store inputs and outputs of your deployments. This is particularly useful for handling large files like videos.

The storage client is accessible through `client.storage` or as a "standalone" object:

```
import {StorageClient} from '@ikomia/ikclient/storage';

const storage = new StorageClient({
  url: 'https://your.scale.endpoint.url',
  token: 'your-api-token', // defaults to process.env.IKOMIA_TOKEN
});
```

## Uploading files

```
import fs from 'fs/promises';

const data = await fs.readFile('path/to/video.mp4');
const obj = await client.storage.put(data, {path: "path/in/storage.mp4", contentType: "video/mp4"});

// Or upload as a temporary (path-less) object
const tempObj = await client.storage.put(data, {contentType: "video/mp4"});
```

You can upload files as temporary objects or as regular objects with a specific path. Objects with a path will be stored permanently, while temporary objects will be deleted after a certain period (~1 hour). The latter are well-suited for uploading inputs that you want to process immediately, without having to manage their lifecycle.

info

`StorageClient.put()` accepts `string`, `ArrayBuffer`, `Buffer` and `Blob`. If we can't infer the mime-type from these inputs, we default to `application/octet-stream` or `text/plain`. You can also specify the mime-type explicitly using the `contentType` parameter, which we **recommend you do**.
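Since `put()` accepts a `Blob`, in the browser you can pass a user-selected `File` straight through. A minimal browser-side sketch (the file input selector and upload path are illustrative):

```
// Hypothetical <input type="file"> element on the page
const input = document.querySelector('input[type=file]') as HTMLInputElement;
const file = input.files![0]; // File extends Blob

const obj = await client.storage.put(file, {
  path: `uploads/${file.name}`, // omit `path` to upload as a temporary object
  contentType: file.type || 'application/octet-stream',
});
```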
## Running deployment on stored file

We provide `Client.createStorageInput()` and `Client.createStorageInputFromUid()` methods to easily work with stored files.

```
// Upload temporary video
const data = await fs.readFile('path/to/video.mp4');
const obj = await client.storage.put(data, {contentType: 'video/mp4'});

// Create inputs and run
const videoInput = await client.createStorageInputFromUid(obj.uid);
const videoInput2 = await client.createStorageInput('path/in/storage.mp4'); // This video was already uploaded to project's storage

const results = await client.run({inputs: [videoInput, videoInput2]});
```

## Reading files

`StorageClient.get()` can be used to retrieve file metadata, while `StorageClient.read()` can be used to read the contents of a file:

```
// Get file metadata
const obj = await client.storage.get('path/in/storage.mp4');

// Retrieve the file contents and save them to a local file
const response = await client.storage.read(obj);
await fs.writeFile(
  'path/to/local/file.mp4',
  Buffer.from(await response.arrayBuffer())
);
```

`StorageClient.read()` returns a Promise that resolves into a standard Fetch `Response` object for the file content on our CDN.

## Handling StorageObjectIO outputs

Your deployment may produce outputs that are instances of `StorageObjectIO`. These outputs represent files stored in your project's storage.

When deployments return `StorageObjectIO` outputs, they provide object metadata (like the one you would get from `storage.get()`) in the `data.metadata` field. You can then use `storage.read()` to read the contents of these outputs, just like you would with any other file in storage:

```
import {StorageObjectIO} from '@ikomia/ikclient';

const results = await client.run({inputs: [videoInput]});
const videoOutput = results.getOutput(0, StorageObjectIO);

const response = await client.storage.read(videoOutput.data.metadata);
await fs.writeFile(
  'path/to/local/file.mp4',
  Buffer.from(await response.arrayBuffer())
);
```

## Managing objects

We provide some methods to manage objects in your project's storage:

```
// List all objects in a directory
const objects = await client.storage.list('directory');

// List by sha256
const objectsBySha = await client.storage.listBySha256('your_file_checksum');

// Delete all objects in a directory
await client.storage.delete('directory');

// Delete a specific object
await client.storage.delete('directory/file.mp4', { exact: true });

// Get object metadata from its uid
const obj = await client.storage.getByUid('your_file_uid');

// Copy an object
const copyObj = await client.storage.copy(obj, {path: 'copy_of_file.mp4'});

// Create a presigned download URL
const presignedUrl = await client.storage.getPresignedDownloadUrl(obj, { expiresIn: 3600 }); // URL valid for 1 hour
```

---

# Advanced Usage

## Running a specific task

If you want to run the workflow to retrieve the output of another task, you can use `Client.run_task()`:

```
with Client(url="https://your.scale.endpoint.url") as client:
    results = client.run_task("infer_yolo_v8", "path/to/image1.jpg")
```

## Advanced configuration and output selection

If you need to select outputs from various tasks or set parameters for intermediate tasks, you can use a `Context` object for maximum flexibility:

```
context = client.build_context()
```

A context holds specific configuration for your deployment, which you can reuse across multiple runs.

Using `Context.add_output`, you can select specific outputs from any task in your workflow:

```
# Add ALL outputs from the "ocv_blur" task
context.add_output("ocv_blur")

# Add output 1 from "infer_yolo_v8"
context.add_output("infer_yolo_v8", 1)

# Add ALL outputs from the "infer_yolo_v8_seg" task
# Save them to project's storage (will be returned as StorageObjectIO)
context.add_output("infer_yolo_v8_seg", save_temporary=True)
```

You can also use `Context.set_parameters` to edit the configuration of any task:

```
context.set_parameters("infer_yolo_v8", {"conf_thres": 0.5})
```

To run using a context, pass it to the `Client.run_on()` method:

```
with Client(url="https://your.scale.endpoint.url") as client:
    results = client.run_on(context, "path/to/image1.jpg")
```

## Chaining deployments

You can chain multiple deployments together by passing the outputs of one deployment as inputs to another:

```
with Client(url="https://your.scale.endpoint.url") as client1, \
     Client(url="https://your.other.scale.endpoint.url") as client2:
    results1 = client1.run("path/to/image1.jpg")
    results2 = client2.run(results1)
```

You can also forward selected outputs using `Results.get_output()`:

```
results1 = client1.run("path/to/image1.jpg")
results2 = client2.run(results1.get_output(1), results1.get_output(2))
```

---

# Getting Started with Python

info

This doc refers to the Python client library for integrating Ikomia SCALE deployments. It is not to be confused with the [Ikomia Python API](https://ikomia-dev.github.io/python-api-documentation/index.html), which allows you to define workflows and run them on your machine.

## Installation

```
pip install ikomia-client
```

## Instantiating the client

```
from ikclient.core.client import Client

with Client(
    url="https://your.scale.endpoint.url",
    token="your-api-token",  # Or use the environment variable IKOMIA_TOKEN
) as flux_deployment:
    results = flux_deployment.run(parameters={"prompt": "A cat sitting on a mat"})
```

warning

We strongly recommend avoiding exposing your personal token in your code. One option is to set the environment variable `IKOMIA_TOKEN` to avoid passing it explicitly.

### Async support

We also provide an async version of the client with the same features:

```
import asyncio

from ikclient.core.client import AsyncClient

async def main():
    async with AsyncClient(url="https://your.scale.endpoint.url") as flux_deployment:
        results = await flux_deployment.run(parameters={"prompt": "A cat sitting on a mat"})

asyncio.run(main())
```

## Running deployment

For deployments of straightforward workflows, you can simply call `Client.run()`, optionally passing inputs and parameters.
```
with Client(url="https://your.scale.endpoint.url") as client:
    results = client.run("path/to/image.jpg", parameters={"model_name": "yolov8m"})
```

We use [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/) for parsing paths, so you can pass local file paths, URLs, S3 URIs, and more.

If your deployment takes multiple inputs, you can just pass multiple paths directly:

```
results = client.run(
    "path/to/image1.jpg",
    "path/to/image2.jpg",
)
```

Because we rely on the path file extension to determine the input type, you may encounter `CannotInferPathDataTypeException` if the path has no extension or an unrecognized one. You can use `Client.create_input()` to explicitly define the input type:

```
input1 = client.create_input("https://this.is.an.image.url/without-extension", data_type="image")
input2 = client.create_input("s3://bucket/video-object", data_type="video")

results = client.run(input1, input2)
```

The `Results` object contains the output of your deployment. The number of outputs, their types and values depend on the deployment you are running and the inputs and parameters you provided.

By default, the results will contain all outputs returned by the *first leaf task* in your workflow. In the common use case where your workflow is a simple chain of algorithms (or just one), this corresponds to the outputs of the last algorithm of the chain.

## Tracking progress

You can track the progress of your deployment using the `on_progress` callback:

```
results = client.run(
    "https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg",
    parameters={"model_name": "yolov8m"},
    on_progress=lambda **kwargs: print(f"Progress: {kwargs}")
)
# Progress: {"run_id": "v4d4mg96bu", "name": "Object Detection Workflow", "uuid": "c0315bef-3642-44ba-9e94-5749881fc297", "state": "PENDING", "eta": [1000, 2000]}
# ...
# Progress: {"run_id": "v4d4mg96bu", "name": "Object Detection Workflow", "uuid": "c0315bef-3642-44ba-9e94-5749881fc297", "state": "SUCCESS", "eta": [0, 0], "results": Results(...)}
```

This callback will be called repeatedly during the polling process of the deployment. It can be useful for logging or updating UIs while the deployment runs.

## Accessing the results

Once the deployment is complete, you can retrieve output data through the `Results.get_output()` method:

```
from ikclient.core.io import ImageIO

first_output = results.get_output()  # shortcut for results.get_output(0)
output_image = results.get_output(1, assert_type=ImageIO)
```

The `assert_type` parameter is optional, but we recommend using it when you know the expected type of the output: during development, it will provide better type hints in your editor, and at runtime, it will raise an exception if the output is not of the expected type.

We provide the following standard types:

* `ImageIO`: for image outputs
* `StorageObjectIO`: for outputs saved to your Project's storage

Other output types (e.g. object detection, segmentation, OCR, non-standard outputs, etc.) will be returned as the generic `TaskIO`.
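As a sketch of how `assert_type` behaves at runtime (assuming the generic `TaskIO` is importable from `ikclient.core.io` alongside the other types; the output index is illustrative):

```
from ikclient.core.io import ImageIO, TaskIO

# A non-standard output (e.g. object detection) comes back as the generic TaskIO
detection = results.get_output(0, assert_type=TaskIO)

# Asserting the wrong type raises instead of silently returning bad data
try:
    results.get_output(0, assert_type=ImageIO)
except Exception as exc:  # the concrete exception class depends on the client library
    print(f"Output 0 is not an image: {exc}")
```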
### Image outputs

We provide some utility methods for working with image outputs:

```
output_image = results.get_output(1, assert_type=ImageIO)

image_pil = output_image.to_pil()      # convert to PIL image
image_array = output_image.to_numpy()  # convert to numpy array
image_bytes = output_image.to_bytes()  # get raw bytes
```

---

# Working with storage

Every Ikomia SCALE project gets its own storage space that can be used to store inputs and outputs of your deployments. This is particularly useful for handling large files like videos.

The storage client is accessible through `Client.storage` or as a "standalone" object:

```
from ikclient.storage.client import StorageClient
# from ikclient.storage.client import AsyncStorageClient (if you want async)
```

## Uploading files

```
with open("path/to/video.mp4", "rb") as f:
    data = f.read()

obj = client.storage.put(data, "path/in/storage.mp4")

# Or upload as a temporary (path-less) object
obj = client.storage.put(data, content_type="video/mp4")
```

You can upload files as temporary objects or as regular objects with a specific path. Objects with a path will be stored permanently, while temporary objects will be deleted after a certain period (~1 hour). The latter are well-suited for uploading inputs that you want to process immediately, without having to manage their lifecycle.

info

We use the path file extension to infer the mime-type of the uploaded file. If we cannot infer the mime-type, we default to `application/octet-stream` or `text/plain`. You can also specify the mime-type explicitly using the `content_type` parameter, which we **recommend for temporary objects**.

## Running deployment on stored file

We provide `Client.create_storage_input()` and `Client.create_storage_input_from_uid()` methods to easily work with stored files.

Video inputs

The following example is for demonstration purposes. Videos are always transmitted using storage inputs. As such, if you pass a video file path directly to `Client.run()` or `Client.create_input()`, it will be automatically uploaded to storage as a temporary object.

```
with Client(url="https://your.scale.endpoint.url") as client:
    # Upload temporary video
    with open("path/to/video.mp4", "rb") as f:
        obj = client.storage.put(f.read(), content_type="video/mp4")

    # Create inputs and run
    video_input = client.create_storage_input_from_uid(obj["uid"])
    video_input2 = client.create_storage_input("path/in/storage.mp4")  # This video was already uploaded to project's storage

    results = client.run(video_input, video_input2)
```

## Reading files

`StorageClient.get()` can be used to retrieve file metadata, while `StorageClient.read()` can be used to read the contents of a file:

```
with Client(url="https://your.scale.endpoint.url") as client:
    # Get file metadata
    obj = client.storage.get("path/in/storage.mp4")

    # Retrieve file contents and save to a local file
    with client.storage.read(obj, streaming=True) as response, \
         open("path/to/local/file.mp4", "wb") as f:
        for chunk in response.iter_bytes():
            f.write(chunk)
```

`StorageClient.read()` returns a context manager that provides the [HTTPX response object](https://www.python-httpx.org/quickstart/#response-content) for the file content on our CDN. You can use `streaming=True` to get a streaming response.
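For small files, you can also skip streaming and load the content in one go. A minimal sketch, assuming the default non-streaming response exposes the usual HTTPX `content` attribute:

```
with Client(url="https://your.scale.endpoint.url") as client:
    obj = client.storage.get("path/in/storage.mp4")

    # Non-streaming read: the whole file is loaded into memory
    with client.storage.read(obj) as response:
        data = response.content  # full body as bytes (standard HTTPX attribute)

with open("path/to/local/file.mp4", "wb") as f:
    f.write(data)
```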
## Handling StorageObjectIO outputs

Your deployment may produce outputs that are instances of `StorageObjectIO`. These outputs represent files stored in your project's storage.

When deployments return `StorageObjectIO` outputs, they provide object metadata (like the one you would get from `StorageClient.get()`) in the `metadata` field. You can then use `StorageClient.read()` to read the contents of these outputs, just like you would with any other file in storage:

```
from ikclient.core.io import StorageObjectIO

results = client.run(video_input)
video_output = results.get_output(assert_type=StorageObjectIO)

with client.storage.read(video_output["metadata"], streaming=True) as response, \
     open("path/to/local/file.mp4", "wb") as f:
    for chunk in response.iter_bytes():
        f.write(chunk)
```

## Managing objects

We provide some methods to manage objects in your project's storage:

```
# List all objects in a directory
objects = client.storage.list('directory')

# List by sha256
objects = client.storage.list_by_sha256('your_file_checksum')

# Delete all objects in a directory
client.storage.delete('directory')

# Delete a specific object
client.storage.delete('directory/file.mp4', exact=True)

# Get object metadata from its uid
obj = client.storage.get_by_uid('your_file_uid')

# Copy an object
copy_obj = client.storage.copy(obj, 'copy_of_file.mp4')

# Create a presigned download URL
presigned_url = client.storage.get_presigned_download_url(obj, expires_in=3600)  # URL valid for 1 hour
```

---

# REST API

If you can't use our client library, you can still integrate your Ikomia SCALE deployment through the REST API. The general workflow is as follows:

* You retrieve a project-wide token for calling your deployment from Ikomia SCALE.
* You queue a new run with your input data and parameters.
* You regularly poll for the status of the run until it completes.
* When the run is complete, the deployment returns the serialized run results.

## Authentication

To authenticate with the Ikomia SCALE REST API, you need to provide your API token in the `Authorization` header of your requests. The token should be prefixed with `Token `.

```
curl --request GET \
  --url 'https://scale.ikomia.ai/v1/projects/jwt/?endpoint=your.scale.endpoint.url' \
  --header 'Authorization: Token YOUR_API_TOKEN'
```

Replace `your.scale.endpoint.url` with the URL of your Ikomia SCALE deployment and `YOUR_API_TOKEN` with your actual API token. This will return a JSON object of the following format:

```
{
  "access_token": "...",
  "expires_in": 36000,
  "token_type": "Bearer",
  "scope": "openid consume",
  "refresh_token": "...",
  "id_token": "..."
}
```

The `id_token` is the JWT you will need to use to authenticate with your deployment.

warning

The `id_token` is a sensitive piece of information. Keep it secure and do not expose it in client-side code.
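If you are scripting this exchange, a minimal sketch using the `httpx` package (any HTTP client works; the package choice is ours, not mandated by the API):

```
import httpx

response = httpx.get(
    "https://scale.ikomia.ai/v1/projects/jwt/",
    params={"endpoint": "your.scale.endpoint.url"},
    headers={"Authorization": "Token YOUR_API_TOKEN"},
)
response.raise_for_status()
jwt = response.json()["id_token"]  # pass as: Authorization: Bearer <jwt>
```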
info

This token can also be used to interact with your project storage space. **To enable full read/write access to storage with your JWT, add the `storage=true` query param to the token request URL:**

```
curl --request GET \
  --url 'https://scale.ikomia.ai/v1/projects/jwt/?storage=true&endpoint=your.scale.endpoint.url' \
  --header 'Authorization: Token YOUR_API_TOKEN'
```

You can then pass the JWT (`id_token`) as a `Bearer` token in the `Authorization` header of [your requests to the storage API](https://scalefs.ikomia.ai/docs).

## Running a deployment

To run a deployment, send a `PUT` request to the `/api/run` endpoint. The body of the request should be a JSON object with the following properties:

* `inputs`: an array of objects representing the input data of the deployment
* `outputs`: the list of workflow tasks for which you want to retrieve outputs
* `parameters`: for each of the workflow tasks, you can override the default parameters of the workflow if needed

```
curl --request PUT \
  --url 'https://your.scale.endpoint.url/api/run?advice=true' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer YOUR_JWT' \
  --data-raw '{
    "inputs": [
      {
        "image": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQ..."
      }
    ],
    "outputs": [
      {
        "task_name": "infer_yolo_v8"
      }
    ],
    "parameters": [
      {
        "task_name": "infer_yolo_v8",
        "parameters": {
          "conf_thres": "0.5"
        }
      }
    ]
  }'
```

note

**This documentation covers only the response formats when the `advice=true` query parameter is set in the request.** We recommend always using this option to get the most detailed responses from the API.

### Inputs

Currently, deployments only accept images and videos as inputs. Images can be sent as Base64-encoded strings or uploaded to your project's storage, while videos must be uploaded to your project's storage.

#### Images as Base64

Format your input object as follows:

```
{
  "image": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQ..."
}
```
#### Images/Videos as file upload

First, upload your image or video file to your project's storage space using the [SCALE Storage API](https://scalefs.ikomia.ai/docs):

```
curl --request 'POST' \
  --url 'https://scalefs.ikomia.ai/v1/objects/' \
  --header 'Content-Type: multipart/form-data' \
  --header 'Authorization: Bearer YOUR_JWT' \
  --form 'file=@path/to/your/image.png;type=image/png'
```

This will return a JSON object with the `uid` of the uploaded file:

```
{
  "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "path": null,
  "download_url": "https://delivery.scalefs.ikomia.ai/a5136ea9-a120-4173-9bca-5cf92ac98c11/3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "is_directory_archive": false,
  "content_type": "image/png",
  "sha256": "ca8b9ba2b8ac022e17cd18c9bde2cc0291ffbfcc5549cdd76b65d12990ed3a93",
  "size": 123456,
  "created_at": "2025-07-03T08:58:20.403Z"
}
```

You can then format your input object in the following way:

```
{
  "storage_object": {
    "url": "https://scalefs.ikomia.ai/v1/objects/?uid=3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "data_type": "image"
  }
}
```

Where `url` is the URL of the uploaded file's metadata (`https://scalefs.ikomia.ai/v1/objects/` with the `uid` of the object as a query parameter) and `data_type` is the type of the uploaded file (`image` or `video`).

### Outputs

By default, the REST API doesn't return any outputs. You need to specify which outputs you want to retrieve by providing an object for each output in the following format:

```
{
  "task_name": "infer_yolo_v8",
  "task_index": 0,
  "output_index": 1,
  "save_temporary": true
}
```

Where:

* `task_name` is the name of the workflow task (algorithm) you want to retrieve outputs for.
* `task_index` is the index of the desired task if the same algorithm appears multiple times in the workflow (defaults to 0 if not specified, corresponding to the first occurrence of the algorithm in the workflow; 1 for the second, etc.).
* `output_index` is the index of the output you want to keep in the result if the algorithm returns multiple outputs (if not specified, all outputs of the given task are returned).
* `save_temporary`, if set to true, saves the output as a temporary object in your project's storage (defaults to false if not specified, except for video outputs, which are always saved to project storage).

### Parameters

You can override the default parameters of a workflow task by providing an object in the following format:

```
{
  "task_name": "infer_yolo_v8",
  "task_index": 0,
  "parameters": {
    "conf_thres": "0.5"
  }
}
```

Where:

* `task_name` is the name of the workflow task (algorithm) you want to set parameters for.
* `task_index` is the index of the desired task if the same algorithm appears multiple times in the workflow (defaults to 0 if not specified, corresponding to the first occurrence of the algorithm in the workflow; 1 for the second, etc.).
* `parameters` is an object containing the parameters you want to set. **All parameter values must be specified as strings.**

### Response

The query will return a JSON object containing the current status of the deployment run:

```
{
  "uuid": "7cc8b1a6-a77a-4001-957d-6775ea9323a2",
  "state": "PENDING",
  "eta": [1000, 2000],
  "next_poll_in": 500,
  "next_poll_long": true,
  "results": null
}
```

The status object contains the following fields:

* `uuid`: The id of the deployment run.
* `state`: The current state; one of `SENDING`, `FAILURE`, `PENDING`, `SUCCESS`.
* `eta`: An estimation of the time remaining for the deployment to complete, in milliseconds. Can be `[null, null]`.
* `next_poll_in`: The recommended time to wait before polling for the next status update, in milliseconds. Can be `null`.
* `next_poll_long`: A recommendation on whether or not to enable long polling for the next status update.
* `results`: The results of the deployment, if available.

## Polling for results

Your deployment is unlikely to respond with results immediately, so you will need to poll for the results of the run. You can do this by sending a `GET` request to the `/api/results/{uuid}` endpoint, where `{uuid}` is the id of the deployment run you received in the previous response.

```
curl --request GET \
  --url 'https://your.scale.endpoint.url/api/results/{uuid}?advice=true' \
  --header 'Authorization: Bearer YOUR_JWT'
```

This will return a new object in the same format as the initial request, with the updated state of the deployment run. If the run is still ongoing, the deployment will respond with a status `202 Accepted`.

info

Deployments optionally return some hints (`eta`, `next_poll_in`, and `next_poll_long`) to help you show an ETA to your users and optimize your polling strategy. These hints are based on statistics from recent previous runs and **assume similar patterns will hold in the current run**. This may not always be the case depending on your application, for example if you are processing videos with a high variance in duration. We recommend testing with your specific workload to find the optimal polling strategy.

### Long polling

CPU and GPU instance deployments may emit a `"next_poll_long": true` advice, recommending that you enable long polling for the next status update. Long polling means that the deployment will try to wait for the end of the run before responding to your request. This usually happens when the deployment expects processing to finish shortly after the `next_poll_in` duration.

To enable long polling, set the `long=true` query parameter in your status update request.

```
curl --request GET \
  --url 'https://your.scale.endpoint.url/api/results/{uuid}?advice=true&long=true' \
  --header 'Authorization: Bearer YOUR_JWT'
```

warning

Not all queries with `long=true` are guaranteed to return a final status and results: to avoid timeouts, they may resolve early with a partial status.

### Results

The deployment results will be returned as a JSON array in the same format as in the JSON response of the [test interface](https://docs.ikomia.ai/scale/deployment/test-interface.md), and in the same order as the outputs you requested.

```
{
  "uuid": "7cc8b1a6-a77a-4001-957d-6775ea9323a2",
  "state": "SUCCESS",
  "results": [
    {
      "OBJECT_DETECTION": {
        "detections": [
          {
            "box": { "height": 804, "width": 291, "x": 9, "y": 250 },
            "color": { "a": 54, "b": 253, "g": 11, "r": 70 },
            "confidence": 0.9631829261779785,
            "id": 0,
            "label": "person"
          }
        ],
        "referenceImageIndex": 0
      }
    }
  ],
  "eta": [0, 0],
  "next_poll_in": null,
  "next_poll_long": false
}
```
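Putting the whole flow together, here is a minimal polling loop. It is only a sketch: it assumes the `httpx` package, a hypothetical `queue_run()` helper wrapping the initial `PUT /api/run?advice=true` request shown above, and a valid JWT.

```
import time

import httpx

BASE_URL = "https://your.scale.endpoint.url"
HEADERS = {"Authorization": "Bearer YOUR_JWT"}

# Hypothetical helper: sends PUT /api/run?advice=true and returns the status JSON
status = queue_run()

while status["state"] not in ("SUCCESS", "FAILURE"):
    # Respect the server's polling hints (milliseconds); fall back to 500 ms
    time.sleep((status.get("next_poll_in") or 500) / 1000)

    params = {"advice": "true"}
    if status.get("next_poll_long"):
        params["long"] = "true"  # enable long polling when the deployment advises it

    status = httpx.get(
        f"{BASE_URL}/api/results/{status['uuid']}",
        params=params,
        headers=HEADERS,
        timeout=60,
    ).json()

results = status["results"]
```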
---

# Project

Projects are the main container for your work on Ikomia SCALE. They contain workflows and can be shared with others.

## Create a project

Click on the **New project** button in the dashboard. You will be invited to fill in the form with:

* a [workspace](https://docs.ikomia.ai/scale/collaboration/workspaces-and-organizations.md) to store your project. If you don't intend to share your project with others, you can select your personal workspace.
* a name for your project.
* a markdown description (optional).

![A screenshot of a modal allowing to create a new project](/assets/images/new_project_modal-2af25226f60d862c86838412333fc80c.png)

Creating a new project

Your project is ready to [receive your workflows](https://docs.ikomia.ai/scale/workflow/pushing-workflow)!

## Share a project

The project is accessible to all members of the workspace to which it belongs. To share a project with others, you need to use an [organization](https://docs.ikomia.ai/scale/collaboration/workspaces-and-organizations.md) and add members to it.

By default, members inherit permissions from the project's workspace. But you can also grant [additional permissions](https://docs.ikomia.ai/scale/collaboration/roles-and-permissions.md) for the project specifically: go to the project **Members** tab and click on the wheel icon next to the member you want to manage.

warning

Currently, you can't move a project from one workspace to another.
**If you need to share a project with others, make sure to create it in an organization workspace.**

## Delete a project

To delete a project, go to the project settings and click on the **Delete** button. You must first delete all the workflows it contains, which also requires manually deleting all their running deployments.

![A screenshot of the delete project section](/assets/images/delete_project-498cec164c706a1597aaca9f9c44f776.png)

Deleting a project

---

# Find algorithms on Ikomia HUB

[Ikomia HUB](https://app.ikomia.ai/hub/) provides a large collection of algorithms for different Computer Vision tasks (image generation, OCR, classification, object detection, segmentation, pose estimation, etc.). All algorithms from Ikomia HUB are ready to use in our Python API and in Ikomia STUDIO.

![Screenshot of Ikomia HUB](/assets/images/hub-57ea6cb015c4d09005a15c81907d532f.png)

Ikomia HUB

We offer search and filter functionalities to help you find the algorithms you need. You can search by name, task and type.

note

You don't need to create an account to use Ikomia HUB! But if you do, you will be able to access [your private HUB](https://docs.ikomia.ai/scale/workflow/algorithms/storing-algorithms.md).

## Deploying HUB algorithms directly

We auto-generated basic workflows for many algorithms available on Ikomia HUB. This allows you to quickly deploy algorithms from the HUB without having to install Ikomia API or STUDIO. It is particularly useful if you have a basic use case where you just want to deploy a specific model.

On Ikomia HUB, you can find compatible algorithms by selecting the **One-click deployment** filter. Then, click on the **Deploy** button of the algorithm you want to use and select the project where you want to add the auto-generated workflow. You can then [deploy it on the infrastructure of your choice](https://docs.ikomia.ai/scale/deployment/managing-deployments.md).

![Screenshot of the deployment modal from Ikomia HUB](/assets/images/deploy_algo-df47da8e05d1d327be7ab5dfb7dc4f5a.png)

Deploying algorithms from Ikomia HUB

## Use HUB algorithm in Python

Ikomia API automatically handles the installation of HUB algorithms, so you can call an algorithm in a workflow directly. On the first call, Ikomia API will download and install the required dependencies automatically.

```
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils import ik
from ikomia.utils.displayIO import display

wf = Workflow()

face_detector = wf.add_task(ik.infer_face_detection_kornia(), auto_connect=True)
blur = wf.add_task(ik.ocv_stack_blur(), auto_connect=True)

wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_people.jpg")

display(blur.get_output(0).get_image())
```

Note

We recommend using the `ik` namespace, as it gives auto-completion capabilities in your IDE. You can consult the Python API [documentation](https://ikomia-dev.github.io/python-api-documentation/advanced_guide/ik_namespace.html) for more information.
## Use HUB algorithm in Ikomia STUDIO

In Ikomia STUDIO, open the HUB window by clicking on the HUB icon on the top right of the window. Then, search for the algorithm you need and click on the **Install** button.

Installation may take a while, as STUDIO will download the algorithm package and automatically install its dependencies. Once done, the algorithm is loaded and available in your process library (in the "Plugins" folder). You are now ready to add it to your workflow.
![Screenshot of Ikomia STUDIO](/assets/images/studio_hub-31546abcfc3016624a961a41afd9f131.png)

Ikomia STUDIO's HUB window

---

# Store and share algorithms

If you [created your own algorithm](https://ikomia-dev.github.io/python-api-documentation/integration/index.html), you can push it to your private HUB to share it with your team. While Ikomia HUB allows you to [find publicly available, open-source algorithms](https://docs.ikomia.ai/scale/workflow/algorithms/finding-algorithms.md), you can also store and share your own algorithms on your private HUB.

## Push an algorithm to your private HUB

### Using Ikomia CLI

If you use our Python API, the easiest way to push your algorithm is to use Ikomia CLI:

```
ikcli algo add
```

To update an algorithm already pushed, use:

```
ikcli algo update
```

### Using Ikomia STUDIO

With Ikomia STUDIO, you can push your algorithm by following these steps:

1. Make sure that you are logged in to your Ikomia account.
2. Open the HUB window by clicking on the HUB icon in the top right corner of the main window.
3. Click on *"Installed algorithms"* in the left panel.
4. In the list, click on the *"Publish"* button of the algorithm you want to push.
5. Select the workspace where you want to push your algorithm and click *"OK"*.

## Share an algorithm

Just like [projects](https://docs.ikomia.ai/scale/project.md#share-a-project), algorithms can be added to an organization workspace to be accessible by all members of an organization. Other members will then be able to find them on their private HUB.

## Publish to Ikomia HUB

Note

Access to this feature is currently restricted to a select group of users who are subject to a review process. If you are interested in publishing your algorithm, please reach out to us at .

If you want to share your algorithm with the community, you can publish it to Ikomia HUB. This will make it available to all Ikomia users.

From the private HUB interface, open the algorithm details and click on the *"Publish"* button. You will be asked to set a license and a version number. It will then be available to all Ikomia users!

![Screenshot of the algorithm publication modal](/assets/images/publish_algorithm_form-247e22929c2061aa07fa83fb95d6beab.png)

Publishing an algorithm to Ikomia HUB

---

# Workflows with Ikomia API

You can create a workflow using our [Python API](https://ikomia-dev.github.io/python-api-documentation/index.html).

[YouTube video player](https://www.youtube.com/embed/mKPXdh_AQPw?si=xwv6iu3O4z_ZRtG4)

## Installation

Recommended

As Ikomia API automatically installs algorithm dependencies, we strongly recommend using a virtual environment to avoid conflicts with your system packages.

Windows:

```
python -m venv .venv
.venv\Scripts\Activate.bat
```

bash/zsh:

```
python -m venv .venv
source .venv/bin/activate
```

Then install the Ikomia package and the CLI using pip:

```
pip install ikomia ikomia-cli
```

## Make your own workflow

Here's a minimal code example demonstrating how to create a workflow that blurs faces in an image using the Ikomia API:

```
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
from ikomia.utils import ik

wf = Workflow()

face = wf.add_task(ik.infer_face_detection_kornia(), auto_connect=True)
blur = wf.add_task(ik.ocv_blur(kSizeWidth="61", kSizeHeight="61"), auto_connect=True)

wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_people.jpg")

display(blur.get_output(0).get_image())
```

For a step-by-step guide and more detailed examples, **please refer to our [Python API documentation](https://ikomia-dev.github.io/python-api-documentation/advanced_guide/index.html)**.
## Push a workflow to SCALE

To push your workflow, first export it to a JSON file using the `Workflow.save` method:

```
wf.save("path/to/your/workflow.json")
```

Once you have your workflow file, log in to your Ikomia account:

```
ikcli login
# Export the generated access token to your environment variables:
export IKOMIA_TOKEN=
```

note

If you have not defined a password on your account, you can generate an access token from [your settings](https://app.ikomia.ai/settings/tokens/) and export it to your environment variables directly.

Then, use the Ikomia CLI `push` command:

```
ikcli project push
```

Under the hood, the CLI will package your JSON workflow file alongside the algorithms it depends on and upload them to SCALE. You will then be able to see your workflow in the project you specified from the SCALE dashboard. From there, you'll be able to [deploy it on the infrastructure of your choice](https://docs.ikomia.ai/scale/deployment/managing-deployments.md).

---

# Workflows with Ikomia STUDIO

You can create a workflow using [Ikomia STUDIO](https://ikomia.com/studio), our no-code editor built on the same engine as the Ikomia API.

[YouTube video player](https://www.youtube.com/embed/wtFWBldtHZ4?si=iARs3Vb_ZPomwyJV)

## Installation

Ikomia STUDIO is available for Windows and Linux.

### Download the installer

| Windows | Linux |
| ------- | ----- |
| [Download installer (amd64)](https://s3.eu-west-3.amazonaws.com/installers.ikomia.com/IkomiaSetup.exe) | [Download installer (amd64)](https://s3.eu-west-3.amazonaws.com/installers.ikomia.com/IkomiaSetup) |

On **Linux systems**, you can also use the following command to install Ikomia STUDIO directly:

```
wget https://s3.eu-west-3.amazonaws.com/installers.ikomia.com/IkomiaSetup && chmod a+x IkomiaSetup && ./IkomiaSetup
```

## Make your own workflow

1. Log in to your Ikomia account with STUDIO.
2. Create a workflow using the pre-installed algorithms or by installing ones from Ikomia HUB.
3. Try your workflow locally.

![Screenshot of Ikomia STUDIO](/assets/images/studio_workflow_panel-50e1a24756eb06d240f73c870b9de4d5.jpg)

Ikomia STUDIO workflow panel

## Push a workflow to SCALE
1. In the workflow panel, click on the share icon on the left.
2. Edit the workflow name and description (plain text or markdown) if needed.
3. Select an existing project or create a new one.
4. Click "OK".

![Screenshot of Ikomia STUDIO](/assets/images/studio_push-695e5456955fe751e2eb50b6884f7d0a.png)

Pushing workflows from Ikomia STUDIO

You will then be able to see your workflow in the project you specified in the SCALE dashboard. From there, you'll be able to [deploy it on the infrastructure of your choice](https://docs.ikomia.ai/scale/deployment/managing-deployments.md).

---