Overview
Use this document to learn about the predefined EdgeRules that configure how StackPath responds to requests to your website. You can use these rules with both CDN and WAF integrations.
Before you begin:
These predefined rules only work with domains that resolve to StackPath.
If you use a custom delivery domain or full-site integration, then to use these rules, you must change your DNS records. To learn more, see How-To Configure DNS for CDN/WAF with Your Provider.
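For example, with full-site integration you typically point your delivery domain at your StackPath edge address with a CNAME record. The hostnames and TTL below are placeholders for illustration only; the exact record depends on your DNS provider and your site's edge address.
; placeholder values - replace with your own domain and edge address
www.yourdomain.com.    3600    IN    CNAME    h8j6g5r2.stackpathcdn.com.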
Note:
To create and use custom EdgeRules, see Create, Manage, and Validate Custom CDN EdgeRules.
Review predefined EdgeRules
StackPath offers the following predefined EdgeRules:
| Predefined EdgeRule | Description |
| --- | --- |
| Force WWW Connections | The Force WWW Connections rule redirects every request to a www subdomain. This rule applies to all requests directed toward the StackPath Edge Address and any configured delivery domains. As a result, StackPath recommends that you use this rule only to redirect the apex domain to your www subdomain (assuming both domains already resolve to the CDN through DNS), and only with full-site integration, not with static asset integration. For example, this rule will redirect any asset at h8j6g5r2.stackpathcdn.com to www.h8j6g5r2.stackpathcdn.com, which will prevent anything from loading. Similarly, if you use a custom subdomain as a delivery domain (such as cdn.yourdomain.com), then the rule will redirect to www.cdn.yourdomain.com and become unusable. |
| Custom robots.txt file | You can use the Custom robots.txt file rule to configure which pages or files search engine crawlers can or cannot index on your site. Any change you make with this rule overrides the contents of the robots.txt file on your origin server. When you first enable the rule, the robots.txt file is populated by default with a rule that disallows all indexing of CDN content. This default is recommended so that search engine crawlers do not flag duplicated content. Even if this rule allows a bot to crawl a page, the WAF (if you use one) can still block the bot when the bot does not provide all of the information the WAF needs to verify it. In that case, you must add a WAF rule that allows the bot to crawl. |
| Pseudo Streaming | You can use the Pseudo Streaming rule to seek to random locations within MP4 or FLV files without downloading the entire video. Flash players such as Flowplayer and JWPlayer can be configured to send a query string parameter that indicates the user's selected playback time to the server. Typically, these players use a query string parameter named start, such as https://cdn.url/myvideo.flv?start=1. |
| Referrer Protection | You can use the Referrer Protection rule to allow only requests whose Referer header matches a URL that you specify. |
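For example, you can check how Referrer Protection behaves by sending test requests with and without a Referer header. The edge address, asset path, and allowed referrer below are placeholders for illustration only.
# Request with a Referer header that matches an allowed domain (placeholder values)
curl -I -H "Referer: https://www.yourdomain.com/" https://h8j6g5r2.stackpathcdn.com/images/logo.png
# The same request without a matching Referer header should be rejected
curl -I https://h8j6g5r2.stackpathcdn.com/images/logo.png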
Enable a predefined EdgeRule
- In the StackPath Control Portal, in the left-side navigation, click Sites.
- Locate and select the desired site.
- This action will refresh the portal.
- In the left-side navigation menu, click EdgeRules.
- Under Predefined EdgeRules, enable the desired rule.
For Referrer Protection, enter the referrer domains that requests are allowed to come from, and then click Add.
For Custom robots.txt file, update the file, and then click Save.
- To allow a bot to crawl your page, update the robots.txt file with the name of the bot, enter Allow, and then a path. Review the following example:
User-agent: Semrushbot-SA
Allow: /
- To disallow a bot from crawling your page, update the robots.txt file with the name of the bot, enter Disallow, and then a path. Review the following example:
User-agent: Semrushbot-SA
Disallow: /
- To further limit where a bot can or cannot crawl, replace the slash ( / ) with the page location, as shown in the example below.
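For example, a robots.txt entry that blocks a bot from one section of your site while allowing everything else might look like the following. The bot name and path are placeholders; adjust them to match your own site.
# Block this bot from the /checkout/ path only (placeholder values)
User-agent: Semrushbot-SA
Disallow: /checkout/
Allow: /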
Test Redirects
EdgeRules are active as soon as you enable them. As a result, you can test and adjust a rule immediately.
If you test in a browser, a 301 response code indicates a permanent redirect. For example, an initial request may first redirect to a secure domain (shown by the lock icon in the address bar) and then redirect again to an unusable www subdomain.
If you test in a terminal, a cURL command with the -IL options follows any 301 redirects and shows the response headers for each step of the process. Review the following example.
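For example, the following command follows each redirect from a hypothetical edge address and prints the response headers for every hop; replace the URL with your own edge address or delivery domain.
# -I requests headers only; -L follows any 301/302 redirects to the final destination
curl -IL https://h8j6g5r2.stackpathcdn.com/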