Server-Side Request Forgery Attacks

A server-side request forgery (SSRF) attack occurs when an attacker crafts a malicious HTTP request that triggers a further request from your server to a domain of their choosing. SSRF vulnerabilities can be used to probe your internal network, or to disguise denial-of-service attacks against third parties.


Prevalence      Common
Exploitability  Easy
Impact          Harmful


There are many reasons your web-server might make outgoing HTTP requests, including:

  • Calling a third-party API in response to a user action.
  • Communicating with a Single Sign-On (SSO) provider.
  • Implementing image upload functions that accept URLs instead of files.
  • Checking validation URLs - for example, hosted schema files referenced in XML documents.
  • Accessing open-graph meta-data used in generating link previews.

In some of these scenarios, the domain of the URL will be taken from the HTTP request. This allows an attacker to trigger HTTP requests to arbitrary domains. Malicious users will try to use this in denial-of-service attacks against other targets (for which you will get blamed), and to probe IP addresses on your internal network that are not intended to be public.
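
To make the risk concrete, here is a minimal sketch in Python (standard library only; the handler name is hypothetical) of the kind of code that creates an SSRF vulnerability:

```python
from urllib.parse import parse_qs
from urllib.request import urlopen

def handle_preview(query_string):
    """VULNERABLE: fetches whatever URL the client supplies.

    An attacker can point this at internal addresses (for example,
    http://169.254.169.254/, the cloud metadata endpoint) or even at
    file:// URLs, turning your server into a proxy."""
    url = parse_qs(query_string)["url"][0]
    # Server-side request to an attacker-chosen URL.
    return urlopen(url).read()
```

Because urlopen accepts any scheme, a request like ?url=file:///etc/passwd would even leak local files. The rest of this chapter looks at how to close off this class of input.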

Protecting Your Site

Construct The Domains Of URLs On The Server

The easiest way to mitigate SSRF vulnerabilities is to never make HTTP requests to domain names drawn from the HTTP request. If you call the Google Maps API from your web-server, for instance, the domain of the API should be defined in server-side code, rather than pulled from the client. An easy way to do this is to use the Google Maps SDK, which looks like this in Java:

GeoApiContext context = new GeoApiContext.Builder()
        .apiKey(System.getenv("GOOGLE_MAPS_API_KEY"))
        .build();

// The API domain is fixed by the SDK; only the query parameters
// (here, origin and destination) come from the client.
DirectionsResult result =
        DirectionsApi.getDirections(context, origin, destination).await();

Disable External Validation URLs

XML documents often reference schema files hosted at remote URLs. Generally speaking, however, you should know how to validate an uploaded XML file ahead of time. If you perform validation of XML documents on your server, make sure it is against a schema file stored locally, rather than one referenced by an uploaded XML document that could be controlled by an attacker.

Here’s how to disable external schema validation with the javax.xml.validation package if you use Java, for example:

SchemaFactory factory   = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema        schema    = factory.newSchema(localSchemaFile);
Validator     validator = schema.newValidator();

// Disallow resolution of external DTDs and schema files over the network.
validator.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD,    "");
validator.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

Only Make Outgoing HTTP Calls On Behalf of Real Users

Some websites do need to make requests to arbitrary third-party URLs. Social media sites, for example, allow sharing of web links, and will often pull down the open graph meta-data from those URLs to generate link previews. In these cases, you need to protect yourself against SSRF attacks. This means you should:

  • Only make outgoing HTTP requests from your server in response to actions by authenticated users.
  • Limit the number of links a user can share in a given time-frame, to avoid abuse.
  • Consider making the user pass a CAPTCHA test with each link they share.
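
The second of these points can be sketched with a simple in-memory sliding window (the limits and the storage are illustrative; a real deployment would persist counts in something like Redis, or use a library such as flask-limiter):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative limits: 5 links per user per minute
MAX_LINKS      = 5

_recent_shares = defaultdict(deque)

def allow_share(user_id, now=None):
    """Return True if this user may share another link right now."""
    now = time.time() if now is None else now
    shares = _recent_shares[user_id]
    # Discard timestamps that have fallen outside the window.
    while shares and shares[0] <= now - WINDOW_SECONDS:
        shares.popleft()
    if len(shares) >= MAX_LINKS:
        return False
    shares.append(now)
    return True
```

Keying the window by authenticated user identity, rather than by IP address alone, also satisfies the first point: anonymous requests never reach the code that makes outgoing calls.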
Validate The URLs You Do Access

To prevent an attacker probing your network, you should make sure server-side requests are only sent to publicly accessible URLs. To enforce this, you should:

  • Talk to your networking team about limiting which internal servers are reachable from your web-servers.
  • Validate that supplied URLs contain web domains rather than IP addresses.
  • Disallow URLs with non-standard ports.
  • Make sure all URLs are accessed over HTTPS, with valid certificates.

Note that a competent attacker will be able to set up DNS records pointing to private IPs, so simply validating that a URL contains a domain isn’t sufficient on its own.
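
One way to enforce this is to resolve the hostname yourself and confirm that every address it maps to is publicly routable, as in this Python sketch (the function name is illustrative):

```python
import ipaddress
import socket

def resolves_to_public_ip(hostname):
    """Return True only if every address the hostname resolves to is
    globally routable (i.e. not private, loopback, or link-local)."""
    try:
        address_info = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False   # reject unresolvable names outright
    for info in address_info:
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:
            return False
    return True
```

Even then, beware of DNS rebinding: an attacker can change the record between your check and your request, so where possible make the outgoing connection to the IP address you validated rather than re-resolving the name.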

Keep A Blocklist

You should maintain a “blocklist” of domains you will never access in server-side requests, either in configuration files or in a database. This will help you to interrupt mischievous requests triggered by attackers, and stop any attempted denial-of-service attacks in their tracks.
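
A minimal Python sketch of such a check (the entries shown are purely illustrative; in practice, load them from configuration or a database so they can be updated without a deploy):

```python
# Illustrative blocklist entries only.
BLOCKLIST = {
    "localhost",
    "metadata.google.internal",   # cloud metadata endpoints are common SSRF targets
}

def is_blocklisted(hostname):
    """Normalise the hostname (case, trailing dot) before comparison."""
    return hostname.lower().rstrip(".") in BLOCKLIST
```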

Code Samples

The code samples below illustrate some of the techniques discussed above, in Python (Flask), Java, JavaScript (Express), and C#.

import re
import validators

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
from IPy import IP
from opengraph import OpenGraph
from urllib.parse import urlparse

app = Flask(__name__)

limiter = Limiter(
  app,
  key_func       = get_remote_address,
  default_limits = [ "200 per day", "50 per hour" ])

# Domains we will never access in server-side requests.
BLOCKLIST = set()

@app.route("/share/<path:link>")
def share(link):
  """Return the meta-data for a web-link shared by a user, throttling
  access by the remote IP address, and validating the link before
  accessing it."""

  # Add a protocol if not supplied.
  link = link.lower()
  link = link if re.match("^[a-z]+://.*", link) else f"https://{link}"

  # Reject invalid URLs or those containing private IP addresses.
  if not validators.url(link, public=True):
    raise Exception("Invalid or private URL")

  components = urlparse(link)

  # Reject URLs with non-standard protocols.
  if components.scheme not in ("http", "https"):
    raise Exception("Invalid protocol")

  # Reject URLs with non-standard ports.
  if ':' in components.netloc:
    raise Exception("Please do not specify a port")

  # Reject URLs containing IP addresses rather than domains.
  try:
    IP(components.netloc)
    raise Exception("Please specify domains rather than IP addresses")
  except ValueError:
    pass

  # Reject URLs where the domain is in our blocklist.
  if components.netloc in BLOCKLIST:
    raise Exception("Please do not share links to this domain")

  # Everything looks good, go grab the meta-data.
  return OpenGraph(url=link).to_json()

public class LinkMetaDataFetcher {
    public static Map<String, String> getMetaData(String link) throws IOException {
        // Make sure the URL has a protocol.
        if (!link.startsWith("http")) {
            link = "https://" + link;
        }

        URL url = new URL(link);

        // Confirm this is a domain not an IP address.
        if (!org.apache.commons.validator.routines.DomainValidator.getInstance().isValid(url.getHost())) {
            throw new IllegalArgumentException("Invalid domain");
        }

        // Be suspicious of anything that supplies a port.
        if (url.getPort() != -1) {
            throw new IllegalArgumentException("Invalid port");
        }

        // Check the block list of forbidden sites.
        if (BLOCKLIST.contains(url.getHost())) {
            throw new IllegalArgumentException("Invalid link");
        }

        // Download the meta-data and collect the open-graph tags.
        org.jsoup.nodes.Document doc = org.jsoup.Jsoup.connect(url.toString()).get();

        Map<String, String> meta = new HashMap<>();

        for (org.jsoup.nodes.Element tag : doc.select("meta[property^=og:]")) {
            meta.put(tag.attr("property"), tag.attr("content"));
        }

        return meta;
    }
}

const urlMetadata = require('url-metadata')
const express     = require('express')
const app         = express()

function authenticated(request, response, next) {
  if (!request.session || !request.session.user) {
    return response.redirect(`/login`)
  }
  next()
}

const throttle = require("express-rate-limit")

// Only allow 10 links to be shared from a given IP address every minute.
app.use("/share/", throttle({
  windowMs: 60 * 1000,
  max: 10
}))

app.get('/share', authenticated, (request, response) => {
  let link = request.query.url

  // Make sure the URL has a protocol.
  if (!link.startsWith('http')) {
    link = `https://${link}`
  }

  const url = new URL(link)

  // Confirm this is a domain not an IP address by checking the hostname
  // ends with a two-letter or three-letter domain.
  if (!url.hostname.match(/[a-zA-Z]{2,3}$/)) {
    return response.status(400).end()
  }

  // Be suspicious of anything that supplies a port.
  if (url.port) {
    return response.status(400).end()
  }

  // Check the block list of forbidden sites.
  if (BLOCKLIST.includes(url.hostname)) {
    return response.status(400).end()
  }

  // Download the metadata for this URL.
  urlMetadata(link).then(
    (metadata) => {
      response.json(metadata)
    },
    (error) => {
      console.error('Error generating link preview: ' + error)
      response.status(500).end()
    })
})

public IActionResult Preview(string url)
{
    var uri = new Uri(url);

    if (uri.IsFile || !uri.IsAbsoluteUri || !uri.IsDefaultPort || uri.Scheme != "https")
    {
        return BadRequest("Please supply a valid HTTPS URL.");
    }

    IPAddress address;
    if (IPAddress.TryParse(uri.Host, out address))
    {
        return BadRequest("URLs must reference a web domain rather than an IP address.");
    }

    if (Blocklist.Contains(uri.Host))
    {
        return BadRequest("This domain is block-listed.");
    }

    var graph = OpenGraph.ParseUrl(url);

    return Json(graph.Metadata);
}

Further Reading