Overview
You can use this document to learn how to:
- Create a virtual machine-based workload
- Create a container-based workload
At a high level:
- A container packages application code, dependencies, and configurations into a single object that can be deployed in any environment.
- A virtual machine is a full software emulation of a physical computer that runs its own operating system and can perform the same tasks as a physical machine.
- A workload consists of a container or virtual machine image that is deployed to one or many locations.
For additional introductory information on Edge Computing components, see Learn about Edge Computing and Serverless Scripting.
Create a container-based workload
If you want to create a container-based workload with a private or custom image registry, see Add a private or custom image registry to a new container.
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- In the left-side navigation, click Workloads.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select Container.
- In OS Image, enter the name of the container image to deploy.
- The image will be pulled from Docker Hub. For example, if you enter nginx:latest, then the latest NGINX image will be pulled. (To learn more about NGINX, visit Docker's website.)
- You can also specify a URL to a container image hosted elsewhere, and the workload will pull it. The URL should be compatible with docker pull, such as URL/IMAGE:TAG. (To learn more about docker pull, visit Docker's website.)
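As a sketch of the URL/IMAGE:TAG format, the reference below splits into its registry, image, and tag components (registry.example.com and acme/app are hypothetical names used only for illustration):

```shell
# Parse a docker-pull-compatible reference of the form URL/IMAGE:TAG
# (registry.example.com and acme/app are hypothetical placeholders)
ref="registry.example.com/acme/app:1.2.3"

registry="${ref%%/*}"   # registry.example.com
rest="${ref#*/}"        # acme/app:1.2.3
image="${rest%%:*}"     # acme/app
tag="${rest##*:}"       # 1.2.3

echo "registry=$registry image=$image tag=$tag"
```

Entering only IMAGE:TAG (such as nginx:latest) pulls from Docker Hub instead.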
- Click Continue to Settings.
- Use Environment Variables to configure an application’s operation.
- Enter one or more key/value pairs to use as environment variables.
- These variables will be available to instances within the same workload.
- Use Secret Environment Variables to protect sensitive application information.
- Use Downward API Environment Variables to expose application information to the workload and other instances in the same workload.
- For Downward API Environment Variables, you can mark Optional to indicate that the variable is not critical for the application to run. For example, if you mark Optional and a jsonpath results in no value, then the application will still run. The Optional button will appear for every variable you add.
- Review the following examples to learn how to obtain specific data:
| Description | Key | Value |
| --- | --- | --- |
| Number of CPUs | CPU_LIMITS | .resources.limits.cpu |
| Container ready status | IS_READY | .containerStatuses[*].ready |
| City location | MY_CITY_CODE | .location.cityCode |
| Public IP | MY_EXTERNAL_IP | .networkInterfaces[?(@.network=="default")].ipAddressAliases |
| Private IP | MY_IP_ADDRESS | .ipAddress |
| Name of the container slug | MY_WORKLOAD_NAME | .metadata.labels['workload\.platform\.stackpath\.net/workload-slug'] |
- To learn more, see Learn about Environment Variables.
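Inside a running instance, a variable marked Optional that resolves to no value is simply absent from the environment. A minimal sketch of guarding against that in a shell entrypoint (MY_CITY_CODE is one of the example keys above):

```shell
# If the Optional Downward API variable resolved to no value, it is unset
# in the container's environment; a shell default keeps the app running.
echo "City: ${MY_CITY_CODE:-unknown}"
```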
- (Optional) If you want your instances to be assigned internal and external IP addresses that allow direct access to each instance, then mark Add Anycast IP Address.
- Global anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about anycast IP addresses, see Learn About Global Anycast IP Addresses.
- In Public Ports, specify which ports to open via TCP or UDP to allow traffic. By default, all traffic to the workload is denied.
- In Commands, specify a command that will execute when the instance starts.
- Arguments can be passed as additional commands.
- You can specify multiple commands.
- If you entered a variable, then you can use the Commands section to verify functionality.
- For the example in Step 10, you can enter the following command to print the variables to the logs every 10 seconds after the instance starts:
| Line Number | Command |
| --- | --- |
| Line 1 | /bin/sh |
| Line 2 | -c |
| Line 3 | while true; do echo -en '\n'; printenv MY_IP_ADDRESS MY_CITY_CODE MY_WORKLOAD_NAME MY_EXTERNAL_IP IS_READY CPU_LIMITS; sleep 10; done; |

- To view the output of this command in the portal, see View container logs.
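Assembled into one shell line, the command above behaves like the following sketch; the exported values are hypothetical stand-ins for what the Downward API would inject, and the loop is reduced to a single iteration:

```shell
# Hypothetical values; in a real workload the Downward API injects these.
export MY_IP_ADDRESS="10.128.0.12"
export MY_CITY_CODE="DFW"
export MY_WORKLOAD_NAME="my-workload"

# One iteration of the while loop from the Commands table above.
/bin/sh -c 'echo; printenv MY_IP_ADDRESS MY_CITY_CODE MY_WORKLOAD_NAME'
```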
- Click Continue to Spec.
- Under Spec, select the instance type to deploy to the workload. All instances will be the same type. Every instance has a root SSD whose size depends on the selected instance type. These disks are ephemeral and will be deleted when the instance is deleted (including if the instance fails unexpectedly). Available types and pricing are listed on the StackPath website.
- (Optional) You can use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- You can add up to 1000 GB of storage. If you need more storage space, StackPath recommends that you use object storage. To learn more, see Create and Manage Object Storage Buckets.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, mark a location to deploy the workload.
- You can mark multiple locations.
- Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
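The total instance count is simply the number of selected PoPs multiplied by Instances Per PoP, as in the Dallas/London example:

```shell
pops=2       # Dallas and London
per_pop=2    # value entered in Instances Per PoP
echo "Total instances: $((pops * per_pop))"   # prints: Total instances: 4
```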
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter a threshold limit that will enable auto scaling.
- When a particular PoP location reaches this threshold limit, auto scaling will take place for that particular PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, an instance will not be upgraded from a 4 CPU x 8 GB configuration to a 12 CPU x 64 GB configuration; additional instances of the same type are deployed instead.
- The auto scaling feature will deploy another instance within the same PoP, using the same image. Any configuration applied after startup will not be carried over. As a result, you should still use automation tools, such as Puppet or Ansible, to make sure each new instance is set up the same as the existing instances.
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
- Review the pricing information, and then click Create Workload.
Create a virtual machine-based workload
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- In the left-side navigation, click Workloads.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select VM.
- In OS Image, select the OS image to deploy.
- The image is pulled from a predefined list of options. For example, if you select CentOS 7, then the latest version of CentOS will be pulled.
- Click Continue to Settings.
- (Optional) If you want your virtual machines to be assigned internal and external IP addresses that allow direct access to each virtual machine, then mark Add Anycast IP Address.
- Global anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about anycast IP addresses, see Learn About Global Anycast IP Addresses.
- In Public Ports, specify the ports to allow traffic through TCP or UDP.
- By default, all traffic to the workload is denied.
- Click Continue to Spec.
- In Spec, select the virtual machine type to deploy to the workload.
- Every instance has a root SSD.
- Available types and pricing are listed on the StackPath website.
- (Optional) You can use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- You can add up to 1000 GB of storage. If you need more storage space, StackPath recommends that you use object storage. To learn more, see Create and Manage Object Storage Buckets.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, select a location to deploy the workload.
- You can mark multiple locations.
- Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter a threshold limit that will enable auto scaling.
- When a particular PoP location reaches this threshold limit, auto scaling will take place for that particular PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, a virtual machine will not be upgraded from a 4 CPU x 8 GB configuration to a 12 CPU x 64 GB configuration; additional virtual machines of the same type are deployed instead.
- The auto scaling feature will deploy another virtual machine within the same PoP, using the same OS template. Your apps will not be installed on it automatically. As a result, you should still use deployment automation tools, such as Puppet or Ansible, to make sure each new virtual machine is set up the same as the existing virtual machines.
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
- Review the pricing information, and then click Create Workload.
View details for an existing workload
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- In the left-side navigation, click Workloads.
- Click a specific workload.
- For a newly created workload, under Instances, the workload will display a Starting status. After a few seconds, the workload will be fully deployed, and the full instance list will display a Running status.
- Every workload is monitored for basic metrics, such as CPU and memory usage.
View container logs
- In the left-side navigation, click Edge Compute.
- Click Workloads.
- Locate and select the desired workload.
- Scroll down to the Instances section, and then select the desired instance.
- Scroll down to the Container Logs section, and then expand the menu.