Get Started with mkinf grid
Learn how to deploy and manage GPU-enabled VMs through the mkinf grid API for scalable AI inference
API v0.1 documentation
Welcome to mkinf! This guide will help you get started with our GPU on-demand platform via API. We'll walk you through authentication, listing available GPUs, creating a VM, and setting up firewall rules.
Overview
mkinf provides scalable GPU resources on demand, ideal for serving AI model inference to a variable user base. Through our API, you can easily deploy and manage GPU VMs, ensuring you have the computational power you need when you need it.
Pricing
Resource | Price |
---|---|
A40 | $1.10 / h |
L40S | $1.25 / h |
A100 40GB PCIe | $1.60 / h |
A100 80GB PCIe | $1.81 / h |
A100 80GB SXM | $2.15 / h |
Attachable persistent disks | $0.00013 / GB / h |
VM specifications
Customers can create GPU-enabled VMs with the following specs:
Type | vCPU | GPU | Memory | Disk | VPC Network |
---|---|---|---|---|---|
a40.1x | 6-core Intel Xeon (Ice Lake) | 1x NVIDIA A40 48GB PCIe | 60GB | 1x 960GB NVMe | 25 Gbps |
a40.2x | 12-core Intel Xeon (Ice Lake) | 2x NVIDIA A40 48GB PCIe | 120GB | 2x 960GB NVMe | 50 Gbps |
a40.4x | 24-core Intel Xeon (Ice Lake) | 4x NVIDIA A40 48GB PCIe | 240GB | 4x 960GB NVMe | 100 Gbps |
a40.8x | 48-core Intel Xeon (Ice Lake) | 8x NVIDIA A40 48GB PCIe | 480GB | 8x 960GB NVMe | 200 Gbps |
a100.1x | 12-core Intel Xeon (Ice Lake) | 1x NVIDIA A100 40GB PCIe | 120GB | 1x 960GB NVMe | 25 Gbps |
a100.2x | 24-core Intel Xeon (Ice Lake) | 2x NVIDIA A100 40GB PCIe | 240GB | 2x 960GB NVMe | 50 Gbps |
a100.4x | 48-core Intel Xeon (Ice Lake) | 4x NVIDIA A100 40GB PCIe | 480GB | 4x 960GB NVMe | 100 Gbps |
a100.8x | 96-core Intel Xeon (Ice Lake) | 8x NVIDIA A100 40GB PCIe | 960GB | 8x 960GB NVMe | 200 Gbps |
a100-80gb.1x | 12-core Intel Xeon (Ice Lake) | 1x NVIDIA A100 80GB PCIe | 120GB | 1x 960GB NVMe | 25 Gbps |
a100-80gb.2x | 24-core Intel Xeon (Ice Lake) | 2x NVIDIA A100 80GB PCIe | 240GB | 2x 960GB NVMe | 50 Gbps |
a100-80gb.4x | 48-core Intel Xeon (Ice Lake) | 4x NVIDIA A100 80GB PCIe | 480GB | 4x 960GB NVMe | 100 Gbps |
a100-80gb.8x | 96-core Intel Xeon (Ice Lake) | 8x NVIDIA A100 80GB PCIe | 960GB | 8x 960GB NVMe | 200 Gbps |
l40s-48gb.1x | 8-core AMD EPYC (Genoa) | 1x NVIDIA L40S 48GB PCIe | 147GB | N/A | 20 Gbps |
l40s-48gb.2x | 16-core AMD EPYC (Genoa) | 2x NVIDIA L40S 48GB PCIe | 294GB | N/A | 40 Gbps |
l40s-48gb.4x | 32-core AMD EPYC (Genoa) | 4x NVIDIA L40S 48GB PCIe | 588GB | N/A | 80 Gbps |
l40s-48gb.8x | 64-core AMD EPYC (Genoa) | 8x NVIDIA L40S 48GB PCIe | 1176GB | N/A | 160 Gbps |
l40s-48gb.10x | 80-core AMD EPYC (Genoa) | 10x NVIDIA L40S 48GB PCIe | 1470GB | N/A | 200 Gbps |
Authentication
To access the mkinf API, you need to authenticate with your Access Key or email and password.
Access Key
To authenticate your requests with your project Access Key, always include it in the request headers.
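A minimal sketch, assuming the key is passed as a bearer token in an Authorization header; the header name, base URL (`MKINF_API`), and path are placeholders, so check the API reference for the exact values:

```bash
# MKINF_API is your API base URL; the header name and path are assumptions.
curl -s "$MKINF_API/organizations" \
  -H "Authorization: Bearer YOUR_ACCESS_KEY"
```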
Email and password
If you want to authenticate your requests using your email and password, use the Sign in endpoint.
Response: You will receive the Access and Refresh tokens.
If you want to refresh your Access Token, use the same endpoint as follows.
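A hedged sketch of both calls; the path and field names are assumptions, not the confirmed API shape:

```bash
# Sign in with email and password (placeholder path and field names).
curl -s -X POST "$MKINF_API/auth/sign-in" \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "YOUR_PASSWORD"}'

# Refresh your Access Token via the same endpoint (refresh_token field is an assumption).
curl -s -X POST "$MKINF_API/auth/sign-in" \
  -H "Content-Type: application/json" \
  -d '{"refresh_token": "YOUR_REFRESH_TOKEN"}'
```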
List organizations
Retrieve all your organizations using the Lists Organizations endpoint
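For example (placeholder path; MKINF_API is your API base URL, and you authenticate as described above):

```bash
# List your organizations (placeholder path).
curl -s "$MKINF_API/organizations" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```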
Response
List projects
Retrieve all your organization's projects using the Lists Projects endpoint.
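For example (placeholder path; ORG_ID comes from the previous call):

```bash
# List the projects of one organization (placeholder path).
curl -s "$MKINF_API/organizations/ORG_ID/projects" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```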
Response
Set Up Billing Profile
Before you start using the mkinf APIs, you need to set up your billing profile. You can access your billing dashboard using Billing.
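For example (placeholder path; the real route is in the API reference):

```bash
# Request a temporary link to the billing dashboard (placeholder path).
curl -s "$MKINF_API/billing" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```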
Response: You will receive a temporary URL to your billing dashboard.
Make sure to add a payment method.
Check available GPUs
Check for available GPUs using the List Availabilities and List Specs endpoints.
List Availabilities
Retrieves the available GPUs per region.
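For example (placeholder path):

```bash
# List GPU availability per region (placeholder path).
curl -s "$MKINF_API/availabilities" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```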
List Specs
Retrieves the available node specs.
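For example (placeholder path):

```bash
# List the available node specs (placeholder path).
curl -s "$MKINF_API/specs" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```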
Launch a VM
You can create a new VM using the `type` field to specify the GPU model and count. You also need to pass the `location` where you want to launch the VM; you can find all the available locations using the List Availabilities or List Locations endpoints. Finally, you have to pass your `ssh_public_key`.
1. Generate SSH Key
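One way to do this is with OpenSSH's ssh-keygen; the key type, file name, and comment below are only suggestions:

```bash
# Generate an ed25519 key pair; -f sets the output path, -C adds a comment.
ssh-keygen -t ed25519 -f ~/.ssh/mkinf_key -C "you@example.com"
```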
This will create a public-private key pair in the `.ssh` directory.
2. List Images
List and choose your preferred OS image.
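For example (placeholder path):

```bash
# List the available OS images (placeholder path).
curl -s "$MKINF_API/images" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```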
Response:
3. Create a VM
Specify the `name`, `type`, `location`, `image`, and your `ssh_public_key`. You can also specify a `startup_script` as well as a `shutdown_script`.
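A sketch of the request using the fields above; the path and exact payload shape are assumptions, and the values are examples only:

```bash
# Create a VM (placeholder path; field names from this guide, values are examples).
curl -s -X POST "$MKINF_API/vms" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-inference-node",
    "type": "a100-80gb.1x",
    "location": "YOUR_LOCATION",
    "image": "YOUR_IMAGE_ID",
    "ssh_public_key": "ssh-ed25519 AAAA... you@example.com",
    "startup_script": "#!/bin/bash\necho started",
    "shutdown_script": "#!/bin/bash\necho stopping"
  }'
```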
Response: A successful response from this resource will contain the async operation.
4. Check VM Operation Status
Use the `operation_id` to check the VM operation status.
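For example (placeholder path):

```bash
# Poll the async operation (placeholder path; OPERATION_ID comes from the create response).
curl -s "$MKINF_API/operations/OPERATION_ID" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```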
Response: In a few seconds, the VM should be ready.
5. Get VM
Use `metadata.id` to retrieve the instance you just created.
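For example (placeholder path):

```bash
# Retrieve the VM (placeholder path; VM_ID is the metadata.id from the create response).
curl -s "$MKINF_API/vms/VM_ID" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```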
Response:
6. SSH Connection
Use the IP address to connect to your instance via SSH.
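For example (the login user depends on the image you chose; ubuntu is only an assumption):

```bash
# Connect to the instance; adjust the user for your image.
ssh -i YOUR_PRIVATE_KEY ubuntu@INSTANCE_IP_ADDRESS
```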
Replace `YOUR_PRIVATE_KEY` with the path to your private SSH key and `INSTANCE_IP_ADDRESS` with the IP address of your instance (`public_ipv4.address`).
7. Deallocate instances
When you no longer need an instance, you can terminate it.
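A sketch of what this could look like; the method and path are assumptions:

```bash
# Terminate a VM (placeholder method and path).
curl -s -X DELETE "$MKINF_API/vms/VM_ID" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```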
Firewall Rules
Set up your VPC firewall rules to control access to your VMs.
Use List Networks to retrieve your `vpc_network_id`, and List VMs to retrieve your VM `id` and set it as the `resource_id`.
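A hedged sketch of creating a rule; apart from `vpc_network_id` and `resource_id`, every field, the method, and the path are assumptions, so check the firewall reference:

```bash
# Create a firewall rule (placeholder path; only vpc_network_id and resource_id come from this guide).
curl -s -X POST "$MKINF_API/firewall-rules" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "vpc_network_id": "YOUR_VPC_NETWORK_ID",
    "resource_id": "YOUR_VM_ID",
    "protocol": "tcp",
    "port": 22,
    "source": "0.0.0.0/0"
  }'
```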
Usage
You can use the Usage endpoint to retrieve your product usage. Usage is updated daily, shortly after midnight UTC.
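For example (placeholder path):

```bash
# Retrieve product usage (placeholder path).
curl -s "$MKINF_API/usage" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```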
Additional Tips
- API Documentation: Refer to the detailed API documentation for more advanced features and configurations.
- Support: Contact mkinf support for any issues or questions regarding API usage at info@mkinf.io.