vcl 4.0;  # Use the VCL 4.0 syntax.

import std;        # standard VMOD; used below for std.querysort and std.healthy
import directors;  # load the backend-director VMOD so vcl_init can build a pool
# "import" loads Varnish modules (VMODs)

backend server1 {                # Define one backend; "server1" is a custom backend name
    .host = "10.129.14.4";       # Gateway on dl-gw-01
    .port = "8090";              # Port Apache or whatever is listening
    .max_connections = 30000;    # That's it

    .probe = {
        #.url = "/";  # short easy way (GET /)
        # We prefer to only do a HEAD /
        .request =
            "HEAD / HTTP/1.1"
            "Host: localhost"
            "Connection: close"
            "User-Agent: Varnish Health Probe";

        .interval = 5s;    # check the health of each backend every 5 seconds
        .timeout = 1s;     # time out after 1 second
        .window = 5;       # if 3 out of the last 5 polls succeeded, the backend is
        .threshold = 3;    # considered healthy; otherwise it is marked as sick
    }

    .first_byte_timeout = 300s;     # How long to wait before we receive a first byte from our backend?
    .connect_timeout = 5s;          # How long to wait for a backend connection?
    .between_bytes_timeout = 2s;    # How long to wait between bytes received from our backend?
}
backend server2 {                # Define one backend
    .host = "10.129.14.5";       # Gateway on dl-gw-01
    .port = "8090";              # Port Apache or whatever is listening
    .max_connections = 30000;    # That's it

    .probe = {
        #.url = "/";  # short easy way (GET /)
        # We prefer to only do a HEAD /
        .request =
            "HEAD / HTTP/1.1"
            "Host: localhost"
            "Connection: close"
            "User-Agent: Varnish Health Probe";

        .interval = 5s;    # check the health of each backend every 5 seconds
        .timeout = 1s;     # time out after 1 second
        .window = 5;       # if 3 out of the last 5 polls succeeded, the backend is
        .threshold = 3;    # considered healthy; otherwise it is marked as sick
    }

    .first_byte_timeout = 300s;     # How long to wait before we receive a first byte from our backend?
    .connect_timeout = 5s;          # How long to wait for a backend connection?
    .between_bytes_timeout = 2s;    # How long to wait between bytes received from our backend?
}
acl purge {
    # IPs allowed to purge the cache
    # ACL we'll use later to allow purges
    "localhost";
    "127.0.0.1";
    "::1";
}
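# With this ACL in place, a purge can be issued from one of the whitelisted
# hosts, e.g. (hypothetical path, assuming Varnish listens on port 80):
#
#   curl -X PURGE http://localhost/aquapaas/rest/usertags/some-id
#
# Requests arriving from any other address are rejected in vcl_recv.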
sub vcl_init {
    # Set up the backend pool.
    # Called when VCL is loaded, before any requests pass through it.
    # Typically used to initialize VMODs.
    # The directors VMOD offers four selection modes: random, round-robin, fallback, hash.
    #   random      - pick a backend at random
    #   round-robin - rotate through the backends
    #   fallback    - use the first healthy backend
    #   hash        - pin requests to one backend, keyed on e.g. the URL (req.url),
    #                 a cookie (req.http.cookie), or a session token (needs extra setup)
    new vdir = directors.round_robin();
    vdir.add_backend(server1);
    vdir.add_backend(server2);
    # vdir.add_backend(server...);
    # vdir.add_backend(servern);
}
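# For completeness, a sketch of the "hash" mode mentioned above (not enabled
# here; the director name "hdir" is made up). Keyed on the URL, it pins each
# URL to one backend:
#
#   new hdir = directors.hash();
#   hdir.add_backend(server1, 1.0);
#   hdir.add_backend(server2, 1.0);
#   # ...and in vcl_recv: set req.backend_hint = hdir.backend(req.url);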
sub vcl_recv {
    # Called at the beginning of a request, after the complete request has been
    # received and parsed. Its purpose is to decide whether or not to serve the
    # request, how to do it, and, if applicable, which backend to use.
    # It is also used to modify the request.
    # This is the request entry point: routing, whether to read from the cache,
    # and which backend handles the request are all decided here.
    set req.backend_hint = vdir.backend();  # send all traffic to the vdir director
    # Normalize the header, remove the port (in case you're testing this on various TCP ports)
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
    # Remove the proxy header (see https://httpoxy.org/)
    unset req.http.proxy;
    # Normalize the query arguments
    set req.url = std.querysort(req.url);
    # Allow purging
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {  # "purge" is the ACL defined at the beginning
            # Not from an allowed IP? Then die with an error.
            return (synth(405, "This IP is not allowed to send PURGE requests."));
        }
        # If you got to this stage (and didn't error out above), purge the cached result
        return (purge);
    }
    # A PURGE from a non-whitelisted IP gets an error; otherwise the purge runs.

    # Only deal with "normal" types
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "PATCH" &&
        req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT, which is weird. */
        /* Why send the packet upstream while the visitor is using an invalid HTTP method? */
        return (synth(405, "Non-valid HTTP method!"));
    }
    # Any method other than those listed above is rejected.

    # Only cache GET or HEAD requests. This makes sure POST requests are always passed.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    if (req.url ~ "/aquapaas/rest/usertags/") {
        return (hash);
    }
    # Requests whose URL contains /aquapaas/rest/usertags/ are looked up in the
    # cache; everything else is passed straight to the backend.
    return (pass);
}
# The data on which the hashing will take place
sub vcl_hash {
    # Called after vcl_recv to create a hash value for the request.
    # This is used as a key to look up the object in Varnish.

    hash_data(req.url);
    return (lookup);
}
# Handle the HTTP response coming from our backend
sub vcl_backend_response {
    # Called after the response headers have been successfully retrieved from the backend.
    # Sometimes, a 301 or 302 redirect formed via Apache's mod_rewrite can mess with the
    # HTTP port that is being passed along. This often happens with simple rewrite rules
    # in a scenario where Varnish runs on :80 and Apache on :8080 on the same box.
    # A redirect can then often send the end-user to a URL on :8080, where it should be :80.
    # This may need fine-tuning on your setup.
    #
    # To prevent accidental replacements, we only filter the 301/302 redirects for now.
    if (beresp.status == 301 || beresp.status == 302) {
        set beresp.http.Location = regsub(beresp.http.Location, ":[0-9]+", "");
    }
    # Don't cache 50x responses
    if (beresp.status == 500 || beresp.status == 502 || beresp.status == 503 || beresp.status == 504) {
        return (abandon);  # abandon discards the backend response and generates an error
    }
    # Set a 2-minute cache if unset for static files
    # if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
    #     set beresp.ttl = 120s;  # Important: you shouldn't rely on this, SET YOUR HEADERS in the backend
    #     set beresp.uncacheable = true;
    #     return (deliver);
    # }
    # Allow stale content, in case the backend goes down:
    # make Varnish keep all objects for 24000 hours beyond their TTL
    # set beresp.ttl = 5m;
    # set beresp.grace = 24000h;
    if (beresp.status == 200) {
        # Only 200 OK responses are considered for caching.
        if (bereq.url ~ "/aquapaas/rest/usertags/") {
            # AAA data is cached for 30 seconds, with no grace period after expiry.
            set beresp.ttl = 30s;
            set beresp.grace = 0s;
        }
    } else {
        # Everything else is not cached.
        set beresp.ttl = 0s;
    }
    return (deliver);
}
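# The TTL decisions above can be inspected at runtime with varnishlog, e.g. by
# watching the TTL records of incoming requests:
#
#   varnishlog -g request -i TTL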
# The routine run when we deliver the HTTP response to the user.
# Last chance to modify headers that are sent to the client.
sub vcl_deliver {
    # Called before a cached object is delivered to the client.
    if (obj.hits > 0) {
        # Add a debug header to see whether it's a HIT/MISS and the number of hits; disable when not needed
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    # Please note that obj.hits behaviour changed in 4.0: it now counts per objecthead,
    # not per object, and obj.hits may not be reset in some cases where bans are in use.
    # See bug 1492 for details. So take hits with a grain of salt.
    set resp.http.X-Cache-Hits = obj.hits;

    # Remove some headers: PHP version
    unset resp.http.X-Powered-By;

    # Remove some headers: Apache version & OS
    unset resp.http.Server;
    unset resp.http.X-Drupal-Cache;
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.Link;
    unset resp.http.X-Generator;

    return (deliver);
}
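# The X-Cache header set above gives a quick way to verify caching from the
# command line (hypothetical path):
#
#   curl -sI http://localhost/aquapaas/rest/usertags/some-id | grep X-Cache
#
# The first request should report MISS; a repeat within the 30s TTL should report HIT.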
sub vcl_purge {
    # Only handle actual PURGE HTTP methods, everything else is discarded
    if (req.method == "PURGE") {
        # restart the request
        set req.http.X-Purge = "Yes";
        return (restart);
    }
}
sub vcl_synth {
    if (resp.status == 720) {
        # We use the special status 720 to force redirects with a 301 (permanent) redirect.
        # To use this, call the following from anywhere in vcl_recv:
        #   return (synth(720, "http://host/new.html"));
        set resp.http.Location = resp.reason;
        set resp.status = 301;
        return (deliver);
    } elseif (resp.status == 721) {
        # And we use status 721 to force redirects with a 302 (temporary) redirect.
        # To use this, call the following from anywhere in vcl_recv:
        #   return (synth(721, "http://host/new.html"));
        set resp.http.Location = resp.reason;
        set resp.status = 302;
        return (deliver);
    }

    return (deliver);
}
sub vcl_hit {
    # Called when a cache lookup is successful.

    if (obj.ttl >= 0s) {
        # A pure unadulterated hit, deliver it
        return (deliver);
    }
    # https://www.varnish-cache.org/docs/trunk/users-guide/vcl-grace.html
    # When several clients request the same page, Varnish sends one request to the
    # backend and places the others on hold while fetching one copy. In some products
    # this is called request coalescing, and Varnish does it automatically.
    # If you are serving thousands of hits per second, the queue of waiting requests
    # can get huge. There are two potential problems: one is a thundering-herd problem,
    # where suddenly releasing a thousand threads to serve content might send the load
    # sky high. Secondly, nobody likes to wait. To deal with this we can instruct
    # Varnish to keep objects in cache beyond their TTL and to serve the waiting
    # requests somewhat stale content.

    # if (!std.healthy(req.backend_hint) && (obj.ttl + obj.grace > 0s)) {
    #     return (deliver);
    # } else {
    #     return (miss);
    # }
    # We have no fresh fish. Let's look at the stale ones.
    # (Note: return (fetch) from vcl_hit is valid in VCL 4.0 but was replaced by
    # return (miss) in Varnish 4.1.)
    if (std.healthy(req.backend_hint)) {
        # Backend is healthy. Limit the accepted staleness to 10s.
        if (obj.ttl + 10s > 0s) {
            #set req.http.grace = "normal(limited)";
            return (deliver);
        } else {
            # No candidate for grace. Fetch a fresh object.
            return (fetch);
        }
    } else {
        # Backend is sick - use full grace.
        if (obj.ttl + obj.grace > 0s) {
            #set req.http.grace = "full";
            return (deliver);
        } else {
            # No graced object.
            return (fetch);
        }
    }

    # Fetch & deliver once we get the result
    return (fetch);  # Dead code, kept as a safeguard
}
sub vcl_miss {
    # Called after a cache lookup if the requested document was not found in the
    # cache. Its purpose is to decide whether or not to attempt to retrieve the
    # document from the backend, and which backend to use.

    return (fetch);
}
sub vcl_fini {
    # Called when the VCL is discarded, only after all requests have exited it.
    # Typically used to clean up VMODs.

    return (ok);
}