Overview
You can use this document to learn how to:
- Create a virtual machine-based workload
- Create a container-based workload
At a high level:
- A container packages application code, dependencies, and configurations into a single object that can be deployed in any environment.
- A virtual machine is a full software emulation of a physical computer that runs its own operating system and can perform the same tasks as a physical machine.
- A workload consists of a container or virtual machine image that is deployed to one or many locations.
For additional introductory information on Edge Computing components, see Learn about Edge Computing and Serverless Scripting.
Create a container-based workload
To create a workload, there are 4 high-level steps:
- Step 1: Access the workload-creation screen
- Step 2: Add variables
- Step 3: Add VPC functionality
- Step 4: Configure storage and targets
If you want to create a container-based workload with a private or custom image registry, see Add a private or custom image registry to a new container.
Step 1: Access the workload-creation screen
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- In the left-side navigation, click Workloads.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select Container.
- In OS Image, enter the name of the container image to deploy.
- The image will be pulled from Docker Hub. For example, if you enter nginx:latest, then the latest Nginx image will be pulled. (To learn more about Nginx, visit Docker's website.)
- You can also enter the URL of a container image hosted elsewhere. The URL should be compatible with docker pull, in the form URL/IMAGE:TAG. (To learn more about docker pull, visit Docker's website. See also the example at the end of this step.)
- (Optional) To add a private or custom registry to this container, mark Add Image Pull Credentials, and then complete the missing fields.
- To learn more, see Add a private or custom image registry to a new container.
- Click Continue to Settings.
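For reference, the two image formats described above correspond to standard docker pull references. If you were pulling the same images locally, the commands would look like the following (the second registry, image name, and tag are placeholders, not real images):

    # Official image from Docker Hub
    docker pull nginx:latest
    # Image hosted in another registry, in URL/IMAGE:TAG form
    docker pull registry.example.com/myteam/myapp:1.2.3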
Step 2: Add variables
This process in the portal is optional. To skip, go to Step 3: Add VPC functionality.
- Use Environment Variables to configure an application’s operation.
- Enter one or more key/value pairs to use as environment variables.
- These variables will be available to instances within the same workload.
- Use Secret Environment Variables to protect sensitive application information.
- Use Downward API Environment Variables to expose application information to the workload and other instances in the same workload.
- For Downward API Environment Variables, you can mark Optional to indicate that the variable is not critical for the application to run. For example, if you mark Optional and a jsonpath results in no value, then the application will still run. The Optional button will appear for every variable you add.
- Review the following examples to learn how to obtain specific data:
Description | Key | Value
Number of CPUs | CPU_LIMITS | .resources.limits.cpu
Container ready status | IS_READY | .containerStatuses[*].ready
City location | MY_CITY_CODE | .location.cityCode
Public IP | MY_EXTERNAL_IP | .networkInterfaces[?(@.network=="default")].ipAddressAliases
Private IP | MY_IP_ADDRESS | .ipAddress
Name of the container slug | MY_WORKLOAD_NAME | .metadata.labels['workload\.platform\.stackpath\.net/workload-slug']
- To learn more, see Learn about Environment Variables. (A short sketch of how an application might consume these variables appears at the end of this step.)
- To assign your instances internal and external IP addresses that allow direct access to each instance, mark Add Anycast IP Address.
- Global Anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about Anycast IP addresses, see Learn About Global Anycast IP Addresses.
- In Public Ports, specify which ports to open via TCP or UDP to allow traffic.
- By default, all traffic to the workload is denied.
- In Commands, specify a command to execute when the instance starts.
- Arguments can be passed as additional commands.
- You can specify multiple commands.
- If you entered a variable, then you can use the Commands section to verify functionality.
- For the Downward API environment variable examples above, you can enter the following command to print the variables to the logs every 10 seconds after the instance starts:
Line 1: /bin/sh
Line 2: -c
Line 3: while true; do echo -en '\n'; printenv MY_IP_ADDRESS MY_CITY_CODE MY_WORKLOAD_NAME MY_EXTERNAL_IP IS_READY CPU_LIMITS; sleep 10; done;
- To view the output of this command in the portal, see View container logs.
Each StackPath PoP is assigned a 64-bit address range, and any StackPath customer can use those 64 bits for host addressing.
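As a further illustration of how an application might consume these variables at startup, the following minimal shell sketch (hypothetical file path, not part of the portal workflow) writes the instance's city code and public IP to a file that a web server could expose:

    #!/bin/sh
    # MY_CITY_CODE and MY_EXTERNAL_IP are the Downward API variables defined above.
    {
      echo "city: ${MY_CITY_CODE:-unknown}"
      echo "public_ip: ${MY_EXTERNAL_IP:-unknown}"
    } > /usr/share/nginx/html/instance-info.txt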
Step 3: Add VPC functionality
This process in the portal is optional. To skip, in the portal, click Continue to Spec, and then skip to Step 4: Configure storage and targets.
You can use this section to create and sync interfaces with existing VPCs and subnets to create a communication route between the instances in your workload.
By default, each workload has 1 interface that is assigned to the default network. While you cannot update or manage the default network, you can replace the default network with your own VPC.
Additionally, you can add up to 5 interfaces with 5 separate networks. While you can use the same VPC with multiple interfaces, you cannot use the same VPC for multiple interfaces when the VPC is public-facing.
To use these instructions, you must already have an existing VPC. To learn how to create a VPC, see Create and Manage a VPC.
- Under Interfaces, for the corresponding interface, use the drop-down menu to select an existing network.
- By default, you are allowed to convert one network to be public-facing. You can add multiple public-facing interfaces for an additional cost. To configure additional public-facing interfaces, please reach out to your Sales representative or email us at sales@stackpath.com.
- Workloads with multiple public IP addresses are subject to review in an effort to maintain the best performance for all of our customers.
- To enable Dual-Stack (IPv4/IPv6) on your workload, use the drop-down menu in the IP Families column and select IPv4/IPv6. Dual-Stack allows your workload to serve users over both IPv4 and IPv6 in parallel. If you do not select Dual-Stack (IPv4/IPv6), then IPv4 is used by default.
- If you need to change between IPv4 and Dual-Stack (IPv4/IPv6), you will have to recreate your Workload.
- To add and configure additional interfaces, click the plus ( + ) icon, and then select another existing network.
- For a workload with multiple interfaces, none of the assigned networks or subnets can have overlapping IPv4 CIDR blocks.
- For instance, if you have 3 interfaces, then you can assign 3 different subnets from the same parent VPC. However, you cannot assign 2 subnets plus their parent VPC, because the VPC's CIDR block (for example, 10.0.0.0/16) overlaps with those of its subnets (for example, 10.0.1.0/24 and 10.0.2.0/24).
- Click Continue to Spec.
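If you want to confirm after deployment that instances in the workload can reach each other across the VPC, one simple check (assuming a Linux image with ping available) is to ping another instance's private IP from inside an instance; the address below is a placeholder:

    # Replace with another instance's private IP (the MY_IP_ADDRESS value from Step 2)
    ping -c 3 10.128.0.5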
Step 4: Configure storage and targets
- (Optional) Under Spec, select the instance type to deploy to the workload. All instances will be the same type. Every instance has a root SSD whose size depends on the selected instance type. These disks are ephemeral and will be deleted when the instance is deleted (including if the instance dies unexpectedly). Available types and pricing are listed on the StackPath website.
- (Optional) Use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, mark a location to deploy the workload.
- You can mark multiple locations.
- (Optional) Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter a threshold limit that will enable auto scaling.
- When a particular PoP location reaches this threshold limit, auto scaling will take place for that particular PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, a 4 CPU x 8 GB instance will not be resized to a 12 CPU x 64 GB instance; instead, additional instances are deployed.
- The auto scaling feature deploys another instance in the same PoP with the same image. Your apps will not be installed automatically. As a result, you should still use auto-deploy tools, such as Puppet or Ansible, to make sure each new instance is set up the same as the existing instances.
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
- Review the pricing information, and then click Create Workload.
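If you added persistent storage, a quick way to confirm after deployment that the volume is mounted at the path you configured is to run the following from inside an instance (the mount path below is a placeholder):

    # Replace /data with the mount path you entered under Persistent Storage
    df -h /data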
Create a virtual machine-based workload
To create a workload, there are 4 high-level steps:
- Step 1: Access the workload-creation screen
- Step 2: Add variables
- Step 3: Add VPC functionality
- Step 4: Configure storage and targets
Step 1: Access the workload-creation screen
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- Click Create Workload.
- In Name, enter a descriptive name for the workload.
- In Workload Type, select VM.
- In OS Image, select the OS image to deploy.
- The image is pulled from a predefined list of options. For example, if you select CentOS 7, then the latest version of CentOS will be pulled.
- Click Continue to Settings.
Step 2: Add variables
- (Optional) To assign your instances internal and external IP addresses that allow direct access to each instance, mark Add Anycast IP Address.
- Global Anycast IP addresses allow traffic to be routed to the nearest StackPath edge location, which can reduce latency and response times for application users.
- To learn more about Anycast IP addresses, see Learn About Global Anycast IP Addresses.
- (Optional) In Public Ports, specify the ports to allow traffic through TCP or UDP.
- By default, all traffic to the workload is denied.
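Opening a public port only allows traffic through to the workload; the service inside the virtual machine must also be listening on that port. Once the instance is running, a quick check from inside it (assuming a Linux image with the iproute2 tools installed) is:

    # Lists listening TCP/UDP sockets; look for the port you opened, such as :80
    ss -tuln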
Step 3: Add VPC functionality
This process in the portal is optional. To skip, in the portal, click Continue to Spec, and then skip to Step 4: Configure storage and targets.
You can use this section to create and sync interfaces with existing VPCs and subnets to create a communication route between the instances in your workload.
By default, each workload has 1 interface that is assigned to the default network. While you cannot update or manage the default network, you can replace the default network with your own VPC.
Additionally, you can add up to 5 interfaces with 5 separate networks. While you can use the same VPC with multiple interfaces, you cannot use the same VPC network for multiple interfaces when the VPC is public-facing.
To use these instructions, you must already have an existing VPC. To learn how to create a VPC, see Create and Manage a VPC.
- Under Interfaces, for the corresponding interface, use the drop-down menu to select an existing network.
- By default, you are allowed to convert one network to be public-facing. You can add multiple public-facing interfaces for an additional cost. To configure additional public-facing interfaces, please reach out to your Sales representative or email us at sales@stackpath.com.
- Workloads with multiple public IP addresses are subject to review in an effort to maintain the best performance for all of our customers.
- To enable Dual-Stack (IPv4/IPv6) on your Workload, select the drop-down menu under the IP Families column, then select IPv4/IPv6. If you do not select the Dual-Stack (IPv4/IPv6) option, then IPv4 will be used by default.
- If you need to change between IPv4 and Dual-Stack (IPv4/IPv6), you will have to recreate your Workload.
- To add and configure additional interfaces, click the plus ( + ) icon, and then select another existing network.
- For a workload with multiple interfaces, none of the assigned networks or subnets can have overlapping IPv4 CIDR blocks.
- For instance, if you have 3 interfaces, then you can assign 3 different subnets from the same parent VPC. However, you cannot assign 2 subnets plus their parent VPC, because the VPC's CIDR block overlaps with those of its subnets.
- Click Continue to Spec.
Multiple Public IP addresses can be added when creating a new workload or when editing an existing workload. If you do edit an existing workload, the old workload will be destroyed and recreated using the new settings.
For more information, please see How to Configure Virtual Machines with Multiple Public IPs.
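If you enabled Dual-Stack (IPv4/IPv6) on an interface in this step, one way to confirm after deployment that the instance received an IPv6 address (assuming a Linux image with the iproute2 tools installed) is to run the following from inside the virtual machine:

    # Lists the IPv6 addresses assigned to the instance's interfaces
    ip -6 addr show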
Step 4: Configure storage and targets
- (Optional) In Spec, select the virtual machine type to deploy to the workload.
- Every instance has a root SSD.
- Available types and pricing are listed on the StackPath website.
- (Optional) You can use persistent storage to expand the overall storage space for newly created workloads. You can use this storage to store logs or any other miscellaneous content.
- Under Persistent Storage, enter the mount path and storage size.
- To learn more about persistent storage, see Locate and Manage Existing Persistent Storage.
- Under Deployment Target, under Name, enter a descriptive name for the deployment target.
- If you plan to add multiple PoP locations, then you can create a name that is descriptive of the PoP locations, such as North America.
- Under PoPs, select a location to deploy the workload.
- You can mark multiple locations.
- Under Instances Per PoP, enter the number of instances to deploy per PoP.
- For example, if you selected Dallas and London, and you enter 2, then a total of 4 instances will be deployed, 2 in Dallas and 2 in London.
- Failed instances will automatically restart to ensure this number is always maintained.
- (Optional) If you want to enable auto scaling, mark Enable Auto Scaling, and then in the menu that appears, complete the missing fields.
- For CPU Utilization, enter a threshold limit that will enable auto scaling.
- When a particular PoP location reaches this threshold limit, auto scaling will take place for that particular PoP.
- For Min Instances Per PoP, enter the minimum number of instances that can be created per PoP location when auto scaling is activated.
- For Max Instances Per PoP, enter the maximum number of instances that can be created per PoP location when auto scaling is activated.
- Auto scaling is horizontal, not vertical. For example, a 4 CPU x 8 GB virtual machine will not be resized to a 12 CPU x 64 GB virtual machine; instead, additional virtual machines are deployed.
- The auto scaling feature deploys another virtual machine in the same PoP with the same OS template. Your apps will not be installed automatically. As a result, you should still use auto-deploy tools, such as Puppet or Ansible, to make sure each new virtual machine is set up the same as the existing virtual machines (see the sketch after this list).
- To learn more about auto scaling, see Learn and Add Auto Scaling to a Workload.
- Review the pricing information, and then click Create Workload.
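As a minimal sketch of the auto-deploy idea mentioned above, if you are not using a full configuration-management tool such as Puppet or Ansible, you could re-run the same bootstrap script on a newly scaled virtual machine over SSH (the address and script name below are placeholders):

    # Runs the local bootstrap.sh on the new instance via its public IP
    ssh root@203.0.113.25 'sh -s' < bootstrap.sh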
View details for an existing workload
- In the StackPath Control Portal, in the left-side navigation, click Edge Compute.
- In the left-side navigation, click Workloads.
- Click a specific workload.
- For a newly created workload, under Instances, the workload will display a Starting status. After a few seconds, the workload will be fully deployed, and the full instance list will display a Running status.
- Every workload is monitored for basic metrics, such as CPU and memory usage.
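Once instances show a Running status, you can optionally confirm from any machine that a public port you opened is reachable; the address below is a placeholder for the workload's public or Anycast IP:

    # Sends a HEAD request to the workload on port 80
    curl -I http://203.0.113.10/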
View container logs
- In the left-side navigation, click Edge Compute.
- Click Workloads.
- Locate and select the desired workload.
- Scroll down to the Instances section, and then select the desired instance.
- Scroll down to the Container Logs section, and then expand the menu.