Overview
You can use this document to learn how to:
- Create a virtual machine-based workload
- Create a container-based workload
At a high level:
- A container packages application code, dependencies, and configurations into a single object that can be deployed in any environment.
- A virtual machine is a full software emulation of a physical computer that runs its own operating system and applications.
- A workload consists of a container or virtual machine image that is deployed to one or many locations.
For additional introductory information on Edge Computing components, see Learn about Edge Computing and Serverless Scripting.
Create a container-based workload
- In the StackPath Control Portal, in the left-side navigation menu, click Edge Compute.
- In the left-side navigation menu, click Workloads.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select Container.
- In OS Image, enter the name of the container image to deploy when the workload is created. The image will be pulled from Docker Hub. For example, if you enter nginx:latest, the latest Nginx image will be pulled. You can also specify a URL to a container image hosted elsewhere, and the workload will pull it. The URL should use the same format that works with docker pull, i.e. URL/IMAGE:TAG.
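As a rough sketch of the docker pull reference format, the snippet below splits an image reference into its name and tag parts (the registry host and image names are illustrative placeholders, not StackPath defaults):

```shell
#!/bin/sh
# Split an IMAGE:TAG reference (docker pull format) into its parts.
image_name() { echo "${1%:*}"; }   # everything before the last colon
image_tag()  { echo "${1##*:}"; }  # everything after the last colon

image_name "nginx:latest"                            # -> nginx
image_tag  "registry.example.com/myteam/app:1.2.0"   # -> 1.2.0
```

Note that this simple split does not handle registry hosts that include a port number; it is only meant to show the URL/IMAGE:TAG shape.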
- Click Continue to Settings.
- In Environment Variables, enter one or more key/value pairs to supply as environment variables. These variables will be available to instances within the same workload.
- To learn more, see Learn about Environment Variables.
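Inside each instance, these key/value pairs surface as ordinary environment variables. A minimal sketch of how a container entrypoint might consume them (DB_HOST and DB_PORT are hypothetical names, not variables StackPath sets for you):

```shell
#!/bin/sh
# Read workload-supplied environment variables, with fallbacks
# for local testing. DB_HOST and DB_PORT are example names.
DB_HOST="${DB_HOST:-localhost}"
DB_PORT="${DB_PORT:-5432}"
echo "connecting to ${DB_HOST}:${DB_PORT}"
```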
- (Optional) If you want your instances to be assigned internal and external IP addresses that allow direct access to each instance, then mark Add Anycast IP Address.
- Global anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about anycast IP addresses, see Learn About Global Anycast IP Addresses.
- In Public Ports, specify which ports to open via TCP or UDP to allow traffic. By default, all traffic to the workload is denied.
- In Commands, specify one or more commands to execute when each instance starts. Arguments can be passed as if they were additional commands.
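In other words, each command and each argument is a separate entry, much like the argv list passed to exec. A sketch of the idea (the nginx invocation is just an example):

```shell
#!/bin/sh
# Entering "nginx", "-g", "daemon off;" as three separate Commands
# entries behaves like an argv list: each entry is one argument.
set -- nginx -g "daemon off;"
echo "would exec: $1 $2 '$3'"
```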
- Click Continue to Spec.
- Under Spec, select the instance type to deploy to the workload. All instances will be the same type. Every instance has a root SSD sized according to the selected instance type. These disks are ephemeral and will be deleted when the instance is deleted (including if the instance dies unexpectedly). Available types and pricing are listed on the StackPath website.
- (Optional) You can use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- You can add up to 1000 GB of storage. If you need more storage space, StackPath recommends that you use object storage. To learn more, see Create and Manage Object Storage Buckets.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, mark a location to deploy the workload.
- You can mark multiple locations.
- Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
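The total instance count is simply the number of selected PoPs multiplied by Instances Per PoP:

```shell
#!/bin/sh
# Total instances = number of selected PoPs x Instances Per PoP.
POPS=2        # e.g. Dallas and London
PER_POP=2
TOTAL=$((POPS * PER_POP))
echo "total instances: $TOTAL"   # prints: total instances: 4
```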
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter the utilization threshold that triggers auto scaling.
- When a particular PoP location reaches this threshold, auto scaling takes place for that PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, an instance's resources will not increase from 4 vCPUs and 8 GB of RAM to 12 vCPUs and 64 GB of RAM; instead, additional instances of the same type are created.
- The auto scaling feature will deploy another instance within the same PoP, based on the same image. Configuration applied after startup will not carry over, so you should still use automation tools, such as Puppet or Ansible, to make sure each new instance is set up the same as the existing instances.
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
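The practical effect of the min/max settings is a clamp on the per-PoP instance count. A sketch of that bound, with illustrative numbers:

```shell
#!/bin/sh
# Clamp a desired instance count to [MIN_PER_POP, MAX_PER_POP].
# The values below are examples, not defaults.
MIN_PER_POP=2
MAX_PER_POP=6
clamp() {
  n="$1"
  if [ "$n" -lt "$MIN_PER_POP" ]; then n="$MIN_PER_POP"; fi
  if [ "$n" -gt "$MAX_PER_POP" ]; then n="$MAX_PER_POP"; fi
  echo "$n"
}
clamp 1   # -> 2 (raised to the minimum)
clamp 9   # -> 6 (capped at the maximum)
clamp 4   # -> 4 (within bounds, unchanged)
```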
- Review the pricing information, and then click Create Workload.
Create a virtual machine-based workload
- In the StackPath Control Portal, in the left-side navigation menu, click Edge Compute.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select VM.
- In OS Image, select the OS image to deploy.
- The image is pulled from a predefined list of options. For example, if you select CentOS 7, then the latest version of CentOS will be pulled.
- Click Continue to Settings.
- (Optional) If you want your virtual machines to be assigned internal and external IP addresses that allow direct access to each virtual machine, then mark Add Anycast IP Address.
- Global anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about anycast IP addresses, see Learn About Global Anycast IP Addresses.
- In Public Ports, specify the ports to allow traffic through TCP or UDP.
- By default, all traffic to the workload is denied.
- Click Continue to Spec.
- In Spec, select the virtual machine type to deploy to the workload.
- Every instance has a root SSD sized according to the selected type. These disks are ephemeral and will be deleted when the instance is deleted.
- Available types and pricing are listed on the StackPath website.
- (Optional) You can use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- You can add up to 1000 GB of storage. If you need more storage space, StackPath recommends that you use object storage. To learn more, see Create and Manage Object Storage Buckets.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, select a location to deploy the workload.
- You can mark multiple locations.
- Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter the utilization threshold that triggers auto scaling.
- When a particular PoP location reaches this threshold, auto scaling takes place for that PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, a virtual machine's resources will not increase from 4 vCPUs and 8 GB of RAM to 12 vCPUs and 64 GB of RAM; instead, additional virtual machines of the same type are created.
- The auto scaling feature will deploy another virtual machine, within the same PoP and OS template. Your apps will not be installed. As a result, you should still use auto-deploy tools, such as Puppet or Ansible, to make sure the new virtual machine is set up the same as the existing virtual machine.
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
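Because auto-scaled virtual machines come up from the bare OS template, configuration has to be reapplied on each new instance. Tools like Puppet or Ansible handle this; the snippet below is only a minimal first-boot sketch of the same idea (the config path and setting are placeholders):

```shell
#!/bin/sh
# Minimal, idempotent first-boot setup sketch. In practice a
# configuration-management tool (Puppet, Ansible) would do this.
# CONF_DIR and listen_port are illustrative placeholders.
CONF_DIR="${CONF_DIR:-./etc/myapp}"
mkdir -p "$CONF_DIR"
if [ ! -f "$CONF_DIR/app.conf" ]; then
  printf 'listen_port=8080\n' > "$CONF_DIR/app.conf"
fi
echo "configured: $(cat "$CONF_DIR/app.conf")"
```

Running the script twice leaves the config unchanged, which is the property you want when instances can be created at any time.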
- Review the pricing information, and then click Create Workload.
View an existing workload
- In the StackPath Control Portal, in the left-side navigation menu, click Edge Compute.
- In the left-side navigation menu, click Workloads.
- Click a specific workload to view additional details.
- Monitoring of basic metrics (CPU and memory) is included for no additional cost for every workload. These metrics are collected for each instance from outside the instance itself. No monitoring processes run inside instances themselves, which are entirely under your control through the image provided. Monitoring data is kept for 24 hours. More advanced monitoring will be available in the future.
- For a newly created workload, under Instances, the workload will show a Starting status. After a few seconds, the workload will be fully deployed and all instances in the list will show a Running status.