Overview
This document provides introductory information about:
- Serverless Scripting
- Edge Computing
- Virtual machines
- Containers
- Workloads
Serverless scripting
With Serverless scripting, you can execute isolated scripts in response to user requests.
This capability allows low-latency execution of complex logic as geographically close to the user as possible. Being able to intercept a request, perform some processing, personalize the response, and return it within milliseconds means improved response times, lower costs, and better user experiences.
Serverless scripts are written in JavaScript. They are designed to execute within milliseconds and typically perform all of their logic within the script itself.
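As a rough illustration, the sketch below shows the typical shape of such a script, assuming a Service Worker-style fetch handler; the exact event API and the X-Country header used here are assumptions, not the documented StackPath interface:

```javascript
// Minimal sketch of an edge script using a Service Worker-style fetch handler.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Personalize the response entirely in-script, based on a (hypothetical)
  // X-Country request header.
  const country = request.headers.get('X-Country') || 'unknown';
  const body = JSON.stringify({ greeting: 'Hello from the edge', country });
  return new Response(body, {
    headers: { 'Content-Type': 'application/json' }
  });
}
```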
Serverless is a great way to implement certain types of functionality, but what if the script also needs to call back to a central API, perform database lookups, or process image data through a custom library? These tasks still involve calls back to separate infrastructure, perhaps running in a central public cloud region, which means more latency, higher bandwidth costs, and slower response times.
This is where Edge Computing comes in.
Edge Computing (containers / virtual machines)
Edge Computing is a fully managed environment that allows container and virtual machine workloads to run in any StackPath global location.
There are no clusters to manage; you simply specify the workload requirements (CPU/RAM sizing) along with:
- Where the instances should be deployed
- How many instances should run globally
- What disk resources are needed
The container/VM image is then uploaded and StackPath does the rest.
StackPath ensures that the specified number of instances per location is always running, manages the reliability and redundancy of the deployment, and handles all security patches. You need only focus on what's running inside the containers or VMs.
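To make the shape of a workload definition concrete, here is a hypothetical specification expressed as a JavaScript object. The field names are illustrative only and do not reflect the actual StackPath API or portal schema; they simply map to the parameters described above (sizing, locations, instance count, and disk):

```javascript
// Hypothetical workload specification; field names are illustrative,
// not the actual StackPath API schema.
const workloadSpec = {
  name: 'image-resizer',
  kind: 'container',                  // or 'virtual-machine'
  image: 'registry.example.com/acme/image-resizer:1.4.2',
  resources: {
    cpu: 2,                           // vCPUs per instance
    memoryGiB: 4,                     // RAM per instance
    diskGiB: 20                       // disk resources per instance
  },
  deployment: {
    locations: ['Dallas', 'London', 'Singapore'],
    instancesPerLocation: 3           // StackPath keeps this count running
  },
  anycastIP: true                     // optional static IP for the whole workload
};
```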
Combining Edge Engine and Edge Computing
Workloads are deployed on the same platform that all StackPath products run on. With free, low-latency connectivity between instances and other StackPath services within the same location, applications can take advantage of sophisticated request flows.
For example, content can be hosted on our CDN, requests can be intercepted with Serverless scripts, and heavy processing can be handed off to a manipulation library running in containers, which returns a customized response to the script and finally to the user, all within the same edge location and within milliseconds. This eliminates unnecessary network hops back to centralized infrastructure, saving bandwidth costs while improving response times.
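The sketch below illustrates that flow from the script's side, again assuming a Service Worker-style fetch handler; the internal hostname and resize parameter are hypothetical stand-ins for a container workload running an image-manipulation service in the same location:

```javascript
// Illustrative only: intercept image requests at the edge and hand heavy
// processing to a container service running in the same edge location.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);

  // Pass non-image requests straight through to the CDN/origin.
  if (!url.pathname.startsWith('/images/')) {
    return fetch(request);
  }

  // Hand off to the image-manipulation container (hypothetical internal hostname).
  const resized = await fetch(`https://resizer.internal.example${url.pathname}?width=640`);

  // Return the customized response to the user.
  return new Response(resized.body, {
    status: resized.status,
    headers: {
      'Content-Type': resized.headers.get('Content-Type') || 'image/jpeg',
      'X-Processed-At-Edge': 'true'
    }
  });
}
```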
Virtual machines and containers
StackPath Edge Computing allows you to deploy containers or virtual machines (VMs) on a platform with more diverse PoP locations than centralized cloud providers, getting your workload closer to your end users and clients.
Before walking through the setup process, let's first review a few options that will be available for use in Edge Computing Workloads.
Containers
Containers allow you to package application code, dependencies, and configurations into a single object that can be deployed in any environment. Containers are created from container images. The Open Container Initiative (OCI) aims to set industry standards for container formats.
Virtual machines
A virtual machine is a full software emulation of a physical computer and can perform all tasks a typical operating system normally would. Each virtual machine runs a full operating system and is allocated virtualized system hardware.
Virtual machines versus containers
Which kind of software virtualization is the best fit for your workload? A few factors play into the decision.
- Application diversity
- Virtual machines are optimal when running many different applications on a single machine - a jack of all trades. Containers are optimized for efficiency and typically run many copies of the same application.
- Resource consumption
- Unlike virtual machines, containers do not virtualize hardware. Instead, they share the OS kernel with the host. This allows containers to be lightweight in terms of size, CPU, and memory usage, while a VM requires more resources and a longer startup time.
- Deploy directly from testing to production
- Maintaining your applications on a container or VM means that you can test, deploy and scale on any machine.
Additionally, review the pricing information for virtual machines and containers, which can be found in the Pricing section of the respective product pages.
Anycast IP address
All instances are assigned internal and external IPs that allow direct access to each instance individually. These IPs are ephemeral and may change if the instance state changes. Selecting the Anycast IP option allocates an anycast IP for the whole workload, which remains static for the lifetime of that workload.
Traffic directed to the anycast IP enters the StackPath network at the closest edge location and is then routed to the nearest location where the workload has running instances. Traffic is spread across all instances running in that location, but there is no guarantee the spread will be even. The anycast IP therefore acts as a pseudo-load balancer: it accounts for the regional latency between the traffic source and the closest edge while spreading traffic across all running instances. At present this routing logic is not configurable and involves no advanced balancing; more advanced load balancing functionality is planned.
Example 1: If a Workload is deployed with instances running in Dallas and London, and a visitor makes a request to the anycast IP from London, their traffic will enter the StackPath network in London and hit the instances running in London.
Example 2: If a Workload is deployed with instances running in Dallas only, and a visitor makes a request to the anycast IP from London, their traffic will enter the StackPath Network in London and will then be routed over the StackPath private backbone to hit the instances running in Dallas.
Workloads
StackPath Edge Computing uses the concept of workloads to organize different applications. A workload can consist of one container or virtual machine image that is deployed to one or many locations. Traffic is routed to the instances running in the workload either via the IPs assigned directly to each instance, or via an anycast IP that routes traffic to the closest StackPath location.
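As a simple illustration of the two routing options, the sketch below probes a workload first through a per-instance IP and then through its anycast IP. Both addresses are documentation-range placeholders and the /healthz path is a hypothetical endpoint exposed by the workload:

```javascript
// Illustrative only: the IPs below are documentation-range placeholders and
// /healthz is a hypothetical endpoint exposed by the workload's instances.
async function probeWorkload() {
  // 1. Direct, per-instance IP: reaches one specific instance.
  const instance = await fetch('http://203.0.113.10/healthz');

  // 2. Anycast IP for the whole workload: StackPath routes the request to the
  //    nearest location with running instances.
  const anycast = await fetch('http://198.51.100.1/healthz');

  return { instanceStatus: instance.status, anycastStatus: anycast.status };
}
```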