StackPath Edge Computing allows you to deploy containers or virtual machines (VM) on a platform with more diverse PoP locations than centralized cloud providers, getting your workload closer to your end users and clients.
Before walking through the setup process, let's first review a few options that will be available when configuring Edge Computing Workloads.
Virtual Machine (VM) or Container?
What's a Container?
Containers allow you to package application code, dependencies, and configurations into a single object that can be deployed in any environment. Containers are created from container images. There is an Open Container Initiative which aims to set industry standards regarding container formats.
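As a minimal sketch of that packaging step (the file contents and image name below are illustrative, not part of the StackPath product), a container image is typically described by a Dockerfile and built with a standard tool such as Docker:

```shell
# Write a minimal Dockerfile describing the image (contents are illustrative)
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/
EOF

# Building and publishing the image requires a running Docker daemon,
# so those commands are shown here as comments:
#   docker build -t myorg/mysite:1.0 .
#   docker push myorg/mysite:1.0
echo "Dockerfile written"
```

The resulting image can then be referenced by name and tag when creating a workload.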
What's a Virtual Machine?
A virtual machine is a full software emulation of a physical computer and can perform all tasks a typical operating system normally would. Each virtual machine runs a full operating system and is allocated virtualized system hardware.
VMs versus Containers
Which kind of software virtualization is the best fit for your workload? A few factors play into the decision.
- Application diversity
- Virtual machines are optimal when running many different applications on a single machine - a jack of all trades. Containers are optimal for efficiently running many copies of the same application.
- Resource consumption
- Unlike virtual machines, containers do not virtualize hardware. Instead, they share the OS kernel with the host. This keeps containers lightweight in terms of size and the CPU and memory resources used, while a VM requires more resources and a longer startup time.
- Deploy directly from testing to production
- Maintaining your applications in a container or VM means that you can test, deploy and scale on any machine.
Anycast IP Address
All instances will be assigned internal and external IPs to allow direct access to each instance individually. These IPs are ephemeral and may change if the instance state changes. Ticking this option will allocate an anycast IP for the whole workload which is static for the lifetime of that workload.
Traffic directed to the anycast IP will enter the StackPath network at the closest edge location and will then be routed to the nearest location where the workload has instances running. Traffic will be spread across all instances running in that location, but there is no guarantee that the distribution will be even. The anycast IP therefore acts as a pseudo-load balancer: it accounts for the latency between the traffic source and the closest edge, and spreads traffic across all running instances in that location. At present this routing logic is non-configurable and involves no advanced balancing logic; more advanced load balancing functionality is planned for the future.
Example 1: If a Workload is deployed with instances running in Dallas and London, and a visitor makes a request to the anycast IP from London, their traffic will enter the StackPath network in London and hit the instances running in London.
Example 2: If a Workload is deployed with instances running in Dallas only, and a visitor makes a request to the anycast IP from London, their traffic will enter the StackPath Network in London and will then be routed over the StackPath private backbone to hit the instances running in Dallas.
Create a New Workload
StackPath Edge Computing uses the concept of Workloads to organize different applications. A Workload can consist of one container or VM image that is deployed to one or many locations. Traffic is routed to the instances running in the workload either via direct, static IPs assigned to each instance or using an Anycast IP to route traffic to the closest StackPath location.
- Select Workloads from the left menu.
- Select Create Workload.
- Specify the configuration for the new Workload (see below).
- Select Create Workload.
- The Workload will be deployed within a few seconds.
Container Configuration
- Name: Each Workload has its own name which allows unique identification when there are many workloads running.
- Image: The container image that will be deployed when the workload is created. By default, images are pulled from Docker Hub. For example, specifying nginx:latest will pull the latest Nginx image. A URL to a container image hosted elsewhere can also be specified and the workload will pull it down. The URL should be in the same format that would work with docker pull, i.e. URL/IMAGE:TAG.
- Environment Variables: One or many key/value pairs can be supplied as environment variables. These will be available to all instances running in the workload.
- Add Anycast IP Address: All instances will be assigned internal and external IPs to allow direct access to each instance individually. Ticking this option will additionally allocate an anycast IP for the whole workload.
- VPC: Currently, workloads are assigned to a default VPC. A future release will open up more functionality for managing networks and multiple VPCs.
- Public Ports: By default, all traffic to the workload is denied. Specify which ports to open via TCP or UDP to allow traffic.
- Commands: Specify one or many commands that will be executed when each instance starts. Arguments can be passed as if they were additional commands.
- Spec: Choose the instance type that will be deployed to the Workload. All instances will be the same type. Every instance has a root SSD whose size depends on the instance type selected. These disks are ephemeral and will be deleted when the instance is deleted (including if the instance dies unexpectedly). Available types and pricing are listed on the StackPath website.
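The URL/IMAGE:TAG format used by the Image field can be illustrated with a short shell sketch; the registry and image names below are hypothetical:

```shell
# A hypothetical image reference in the URL/IMAGE:TAG format
ref="registry.example.com/myapp:1.2"

# Split the reference into its image and tag parts
# (simple case: no port number in the registry host)
image="${ref%:*}"     # registry.example.com/myapp
tag="${ref##*:}"      # 1.2

echo "image=$image tag=$tag"

# The workload pulls the image the same way docker would:
#   docker pull registry.example.com/myapp:1.2
```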
VM Configuration
- Name: Each Workload has its own name which allows unique identification when there are many workloads running.
- Image: The OS image that will be deployed when the workload is created. The image will be pulled from a predefined list of options. For example, specifying CentOS 7 will pull the latest version of CentOS.
- VPC: Currently, workloads are assigned to a default VPC. A future release will open up more functionality for managing networks and multiple VPCs.
- Add Anycast IP Address: All instances will be assigned internal and external IPs to allow direct access to each instance individually. Ticking this option will allocate an anycast IP for the whole workload.
- Public Ports: By default, all traffic to the workload is denied. Specify which ports to open via TCP or UDP to allow traffic.
- SSH Key: One or more public SSH keys are required when creating a VM. Passwords are not assigned and password logins are disabled, so SSH keys are your only means of access. If you don't yet have an SSH key, refer to our Generating an SSH Key guide.
- Spec: Choose the instance type that will be deployed to the Workload. All instances will be the same type. Every instance has a root SSD whose size depends on the instance type selected. Available types and pricing are listed on the StackPath website.
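If you need a key pair for the SSH Key field, one can be generated locally with OpenSSH; the file path and comment below are just examples:

```shell
# Generate an ed25519 key pair with no passphrase (-N "") at an example path
ssh-keygen -t ed25519 -N "" -f ./stackpath_vm_key -C "stackpath-vm"

# The *public* key is what you paste into the workload configuration
cat ./stackpath_vm_key.pub
```

Keep the private key (`./stackpath_vm_key`) safe; it is what you will use with `ssh` to log in to the VM instances.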
Deployment Targets
Workloads can be deployed to one or many locations, with specific instance numbers for each set of target locations.
- Name: A display name for the location specification.
- PoPs: One or many locations where the workload will be deployed.
- Enable Auto Scaling: Autoscaling adjusts the number of instances per deployment target. Set a minimum and maximum number of instances and a target CPU utilization percentage. Autoscaling is horizontal, not vertical: it adds or removes instances rather than increasing the performance of an existing instance (from a 4x8 spec to a 12x64, for example). Note that when a VM workload scales, the new VM is deployed within the same PoP from the OS template chosen for the workload (e.g. CentOS); it will not install your applications. Use auto-deploy tools such as Puppet or Ansible to ensure each new VM is set up the same as the existing VMs.
- Instances Per PoP: The number of instances to be deployed as part of this location specification. This number of instances will be deployed to each location, e.g. if the config specifies 2 instances per PoP and selects London and Frankfurt, a total of 4 instances will be deployed. Failed instances will be automatically restarted to ensure this number is always maintained.
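The instance-count arithmetic from the example above (2 instances per PoP across two PoPs) can be checked with a one-liner:

```shell
instances_per_pop=2
pops=2                                   # London and Frankfurt
echo $(( instances_per_pop * pops ))     # total instances deployed: 4
```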
View the Workload
Having created a new workload, the Workloads section of the StackPath Control Portal will display the list of all workloads currently configured.
Selecting a specific Workload will provide more details about what is running.
Monitoring of basic metrics (CPU and memory) is included at no additional cost for every workload. These metrics are collected for each instance from outside the instance itself; no monitoring processes run inside the instances, which remain entirely under your control through the image provided. Monitoring data is kept for 24 hours. More advanced monitoring will be available in the future.
Newly Created Workloads
The instance list will be populated within a few seconds of the Workload being created. Initially, the instances will show as Starting. There may be a slight delay before all instances show up in the list.
After a few seconds, the workload will be fully deployed and the full instance list will be shown as Running.
If an anycast IP address was selected for the workload, this will be shown when viewing the workload details.