July 1, 2016

Generating Certificates for Local Development

Mixmax is a rich communications client integrated directly into Gmail using iframes. As such, we need to load all our resources over https - both for the security of our users, and to comply with Gmail's strict security policies. This creates issues during our normal development workflow, where we configure the app to refer to local resources.

Overview

Architecture

Our platform is hosted across a set of microservices, each with a unique role: one serves contact autocomplete, another renders the compose modal, another displays the dashboard, and so on. Every service sits on its own fully-qualified domain name (FQDN) - contacts.mixmax.com, compose.mixmax.com, and app.mixmax.com, respectively, for the services above. For development, however, we need to host our microservices locally. To maintain consistency and functional parity with our production environment, we put a proxy server in front of our microservices; it accepts requests to specific domains, such as app.mixmax.com, and forwards them to the appropriate local server. This parity means we don't need to hard-code a microservice's port to make a request to it. Up until this point, though, unlike our production environment, our proxy only handled http connections.
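
For example, client code can request contact autocomplete results from the contacts service by FQDN and let the proxy (or, in production, DNS) worry about where that service actually runs. A sketch - the /search endpoint here is hypothetical:

// The protocol-relative URL follows the page's protocol, and the FQDN means the calling code
// never needs to know which local port the contacts service happens to be listening on.
fetch('//contacts.mixmax.com/search?q=alice', {credentials: 'include'})
  .then((res) => res.json())
  .then((contacts) => {
    // Render the autocomplete suggestions.
  });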

Problem

A content security policy determines how the browser handles requests to origins other than the page's origin. A strict content security policy like Gmail's is intended to keep the user safe; in particular, it disallows insecure requests on a secure page, so if we inject content that uses insecure resources, the requests for those resources are rejected. When that happens, our app fails to load, which makes it impossible to test during development. (Read more about content security policy on HTML5 Rocks.) From the Mozilla Developer Network:

When a user visits a page served over HTTPS, their connection with the web server is encrypted with TLS and is therefore safeguarded from sniffers and man-in-the-middle attacks. If the HTTPS page includes content retrieved through regular, cleartext HTTP, then the connection is only partially encrypted; the unencrypted content is accessible to sniffers and can be modified by man-in-the-middle attackers, so the connection is not safeguarded. When a web page exhibits this behavior, it is called a mixed content page.

As a result, we've had to run Chrome in an insecure mode when testing our changes. This introduces developer friction, and makes results from testing locally inconsistent with results post-deploy, so we'd like to mitigate this problem.

In this post we'll share our findings from reducing that friction, specifically the changes we made to our proxy server, along with the changes needed to keep our existing livereload mechanism working. The straightforward solution is to have our local proxy server serve content over https, so that the content security policy allows our app to function.

Generating the Certificates

Any https connection requires a set of certificates, which serve to prove the identity of the organization or entity on the other end of the connection, and to encrypt the data between the two parties.

Our real certificates should not be used during this process - they are not necessary, and distributing them opens them up to unnecessary risk. Were any developer's computer compromised, the attacker could use the certificates to intercept legitimate traffic to the real servers, and cause real damage to both users and the company. Since we don't want to use real certificates, we'll need to generate our own. In our case, we have a number of different domains to handle, including domains on two separate second-level domains: mixmax.com and mixmaxusercontent.com.

To support arbitrary servers and minimize the number of changes we'll need to make if we add a new subdomain, we'll use wildcard certificates for *.mixmax.com and *.mixmaxusercontent.com. We'll also create a self-signed certificate authority with which to sign the certificates. Finally, we'll add the certificate as trusted to the root certificate store, which instructs Chrome to trust the certificate. Here's the script:

#!/bin/bash
KEY_SIZE=2048
SUBJECT="/C=US/ST=California/L=San Francisco/O=Mixmax/OU=Engineering"
DOMAINS="mixmax.com mixmaxusercontent.com"

# generate the certificate authority private key
openssl genrsa -out ca.key "$KEY_SIZE"

# self-sign the certificate authority
CA_SUBJECT="$SUBJECT/CN=Mixmax Engineering" # add a common name
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1024 -out ca.pem -subj "$CA_SUBJECT"

# create a .key and a wildcard .pem for each domain
for domain in $DOMAINS; do
  # add the wildcard common name to the subject
  LOCAL_SUBJECT="$SUBJECT/CN=*.$domain"

  # generate the local certificate
  openssl genrsa -out "$domain.key" "$KEY_SIZE"

  # generate the certificate signing request for the local certificate
  openssl req -new -key "$domain.key" -out tmp-local.csr -subj "$LOCAL_SUBJECT"

  # sign the local certificate with the certificate authority
  openssl x509 -req -in tmp-local.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out "$domain.pem" -days 512 -sha256
done

# cleanup
rm -f tmp-local.csr

We run this script once for each developer, rather than create one set of self-signed wildcard certificates and check them in to git. This mitigates the admittedly small risk of the pre-shared certificates leaking, and keeps our developers marginally safer. To make installing the certificates easier, the above script also prompts the developer to add them as trusted certificates to the keychain:

sudo security add-trusted-cert -d -r trustRoot -k "/Library/Keychains/System.keychain" ca.pem

It seems that Chrome doesn't acknowledge the certificate unless it's in the admin cert store, hence the -d flag.

Configuring the Proxy

Our existing proxy server uses http-proxy to actually proxy the individual requests to the microservices, and includes a short mapping of fully-qualified domain names (FQDNs) to local ports. This mapping mirrors our production setup with different FQDNs for each app server, and means we don't need to hardcode the ports for each microservice into our codebase.

The original proxy server:

const http = require('http'),
  httpProxy = require('http-proxy'),
  proxy = httpProxy.createServer({ws: true});

const domainsToPorts = {
  'app.mixmax.com': 3000,
  'compose.mixmax.com': 3001,
  // ...
};

// Handle requests to the proxy.
function proxyRequest(req, res) {
  const destinationPort = domainsToPorts[req.headers.host];

  // Only proxy if we have a destination.
  if (destinationPort) {
    proxy.web(req, res, {
      target: `http://localhost:${destinationPort}`
    });
  } else {
    res.writeHead(503); // Service unavailable - no server is configured for this host.
    res.end();
  }
}

// Handle request upgrades (websockets) on the proxy.
function proxyWebsocket(req, socket, head) {
  const destinationPort = domainsToPorts[req.headers.host];

  // Only proxy if we have a destination.
  if (destinationPort) {
    proxy.ws(req, socket, head, {
      target: `http://localhost:${destinationPort}`
    });
  } else {
    socket.destroy();
  }
}

http.createServer(proxyRequest)
  .on('upgrade', proxyWebsocket)
  .listen(80);

It may seem as simple as replacing http with https, but it's not. Node maintains its own set of trusted certificates rather than using the system keychain, so it won't trust our self-signed certificate authority. As such, we can't simply make all server-to-server requests run over https without also making changes everywhere one service makes a request to another. Moreover, we want to give our developers a chance to migrate their code - the parts that don't directly integrate with Gmail - so we'll keep the http proxy code.
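
To see why, consider what a single server-to-server request over https would now require - a minimal sketch, where the /healthcheck path and the location of ca.pem are hypothetical:

const fs = require('fs'),
  https = require('https');

// Explicitly trust our self-signed certificate authority for this request. Without the `ca`
// option, Node would reject the proxy's certificate as untrusted.
https.get({
  host: 'contacts.mixmax.com',
  path: '/healthcheck',
  ca: fs.readFileSync('ca.pem')
}, (res) => {
  console.log('status:', res.statusCode);
});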

The https proxy code will, like the http proxy code, need to map individual requests to the corresponding FQDN. Given the requirement that we support both mixmax.com and mixmaxusercontent.com, and given that we don't want to specify a port with each request, we'll need to use a TLS extension called Server Name Indication (SNI). With SNI, before the server responds with its certificate, the client signals the name of the server it wants to communicate with, and the server uses that name to select the appropriate certificate. Node supports SNI out of the box in its tls and https modules; it just takes a little configuration:

const tls = require('tls'),
  https = require('https');

const domainsToCertificates = {
  'mixmax.com': {/* cert and key */},
  'mixmaxusercontent.com': {/* cert and key */}
};

// Use the server name from the client to select the appropriate certificate.
function SNICallback(servername, callback) {
  // Grab the top-level and second-level domain names.
  const domainMatch = /(?:^|\.)((?:[^.])+\.(?:[^.])+)$/.exec(servername),
    topAndSecondLevelDomains = domainMatch[1];

  // Create the secure context for this request.
  const secureContext = tls.createSecureContext(domainsToCertificates[topAndSecondLevelDomains]);

  // And hand off the secure context to tls to complete the handshake.
  callback(null, secureContext);
}

https.createServer({SNICallback}, proxyRequest)
  .on('upgrade', proxyWebsocket)
  .listen(443);
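
The domainsToCertificates map can be populated from the files generated by the script above - a sketch, assuming the proxy is started from the directory containing those files:

const fs = require('fs');

const domainsToCertificates = {
  'mixmax.com': {
    key: fs.readFileSync('mixmax.com.key'),
    cert: fs.readFileSync('mixmax.com.pem')
  },
  'mixmaxusercontent.com': {
    key: fs.readFileSync('mixmaxusercontent.com.key'),
    cert: fs.readFileSync('mixmaxusercontent.com.pem')
  }
};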

Fixing Livereload

Our app now loads fine, but livereload doesn't work anymore. Livereload is super useful during development, so let's fix it. Chrome is rejecting the websocket connections: they run afoul of the same mixed-content restrictions under Gmail's content security policy. We're using gulp-livereload and connect-livereload, and while the gulp plugin can provide its own development certificates, we haven't instructed Chrome to trust those certificates, so we'll feed the livereload connections through our proxy as well.

Each of our servers uses a different port for livereload. To keep avoiding hardcoded ports, we'll use <server>-livereload.mixmax.com domains to connect the proxy server to each upstream livereload server.

At the Proxy

On the proxy side, we already have a mechanism to handle different domains - namely, the domainsToPorts map - so it makes sense to handle the livereload subdomains there:

const domainsToPorts = {
  // ...
  'app-livereload.mixmax.com': 35729,
  'compose-livereload.mixmax.com': 35730,
  // ...
};

That's all the proxy needs, because the existing code already supports both serving the livereload script and upgrading requests into websockets for livereload.

On the Client

On the client side, we could override the port by specifying the port option to connect-livereload, but we'd like to change the domain as well, which means we'll need to use its src option instead.

There's a catch: by default, livereload will use the domain and protocol of the page, and will take the port from its script URL, falling back to port 35729. We'd like both the script and the subsequent websocket connections to be served over https on port 443. However, when reading the src attribute of a script tag, Chrome omits the port if it is the default for the protocol, so we can't just specify the port in the URL. The undocumented port query parameter solves the problem by overriding the default port: https://app-livereload.mixmax.com/livereload.js?snipver=1&port=443. Now the livereload script makes its requests to the correct protocol/hostname/port tuple.

if (Environment.is(Environment.LOCAL)) {
  app.use(require('connect-livereload')({
    // This is necessary to prevent the mixed-content issues with the secure proxy. The port query
    // parameter is parsed by the livereload script to ensure that it connects to 443 instead of the
    // default livereload port (35729).
    src: 'https://app-livereload.mixmax.com/livereload.js?snipver=1&port=443'
  }));
}
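
On the server side, each microservice just keeps running its livereload server on the port the proxy maps to its -livereload subdomain. A sketch of what that might look like in the app service's gulpfile - the watch glob is illustrative:

const gulp = require('gulp'),
  livereload = require('gulp-livereload');

gulp.task('watch', () => {
  // 35729 matches the port the proxy maps app-livereload.mixmax.com to.
  livereload.listen({port: 35729});

  gulp.watch('public/**/*.js', (event) => {
    livereload.changed(event.path);
  });
});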

Conclusions

Now our developers can launch Chrome as normal, without needing to include flags to disable web security. We hope you've gained some insight into setting up your own secure proxy for development.

Interested in a low-friction development workflow? Come join us at Mixmax.
