TLS in 2026: How to Configure HTTPS for Speed and Security
Modern TLS best practices: TLS 1.3, HSTS, OCSP stapling, secure ciphers, and HTTPS performance tuning.
HTTPS is no longer just about encrypting traffic. It directly affects your security posture, page load performance, and whether browsers and search engines trust your site at all. Browsers flag HTTP pages as insecure. Certificate Transparency logs are public. And a misconfigured TLS stack can silently degrade both safety and speed.
This post covers the TLS baseline you should be running in 2026, the performance implications of your choices, and the operational habits that keep things from drifting. The goal is a configuration that is secure by default, fast for end users, and straightforward to maintain.
Recommended TLS Baseline
A solid TLS configuration in 2026 comes down to a handful of decisions. Each one eliminates a class of real-world attacks or operational failures.
Prefer TLS 1.3, keep TLS 1.2 for compatibility
TLS 1.3 (RFC 8446) is the only protocol version you should actively prefer. It removes all legacy cipher negotiation, mandates forward secrecy, and completes the handshake in a single round trip. There is no reason to support TLS 1.0 or 1.1 — both were formally deprecated by RFC 8996 in 2021.
Keep TLS 1.2 enabled as a fallback. Some older clients — certain Java HTTP libraries, embedded devices, and corporate proxies — still do not negotiate 1.3. Dropping 1.2 entirely risks breaking those connections without meaningful security gain, since TLS 1.2 with AEAD ciphers remains strong.
```nginx
ssl_protocols TLSv1.2 TLSv1.3;
```

Disable weak legacy ciphers and protocols
For TLS 1.2, restrict the cipher list to AEAD suites with ECDHE key exchange. This eliminates CBC-mode ciphers (vulnerable to padding oracles), static RSA key exchange (no forward secrecy), and anything based on RC4, DES, or 3DES. TLS 1.3 cipher suites are not configurable in most servers — they are already limited to strong AEAD ciphers by the spec itself.
```nginx
# TLS 1.2 ciphers — TLS 1.3 suites are fixed by the protocol
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
ssl_prefer_server_ciphers off;
```

Why ssl_prefer_server_ciphers off? When every cipher in your list is already strong, letting the client choose allows it to pick the cipher best suited to its hardware (e.g. ChaCha20 on mobile devices without AES-NI). This follows Mozilla's current "Intermediate" guidance.
Enable HSTS
HTTP Strict Transport Security (RFC 6797) tells browsers to refuse plain HTTP connections to your domain for a specified duration. Without it, the first request to your site can be intercepted and downgraded — the classic SSL-stripping attack.
```nginx
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```

A two-year max-age is the minimum for HSTS preload list submission. The always parameter ensures the header ships on error responses too, not just 200s.
Before adding includeSubDomains: Verify that every subdomain has a valid certificate. HSTS with includeSubDomains will hard-fail any subdomain served over plain HTTP or with an expired cert — and the only way to undo it is to wait out the max-age in every affected browser.
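One way to audit coverage before flipping the switch is to list the Subject Alternative Names on each certificate you serve and compare them against your live subdomains. The sketch below generates a throwaway self-signed certificate purely to demonstrate the inspection step; in practice, point -in at your real fullchain.pem (the paths and hostnames here are placeholders):

```shell
# Throwaway cert just to demonstrate the inspection step -- in practice,
# inspect the certificate you actually serve.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 \
  -subj "/CN=yourdomain.here" \
  -addext "subjectAltName=DNS:yourdomain.here,DNS:www.yourdomain.here" 2>/dev/null

# Every subdomain you serve must appear in this list, or carry its own
# valid certificate, before includeSubDomains is safe to enable.
openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName
```

The -addext and -ext options require OpenSSL 1.1.1 or newer, which any distribution current enough to run TLS 1.3 will have.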
Enable OCSP stapling
Without stapling, a client that checks revocation must contact the CA's OCSP responder itself on each new connection, which adds a DNS lookup and an HTTP request to handshake latency and leaks your visitors' browsing patterns to the CA. With stapling, your server fetches the OCSP response periodically and bundles it into the TLS handshake, so clients get a fresh revocation status at no extra cost.
```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/yourdomain.here/chain.pem;
resolver 9.9.9.9 1.1.1.1 valid=300s;
resolver_timeout 5s;
```

The resolver directive is required — without it, Nginx cannot look up the OCSP responder hostname and stapling silently fails. Point it at reliable public DNS resolvers or your own infrastructure DNS.
Force HTTPS redirect
Use a dedicated server block for the HTTP-to-HTTPS redirect rather than an if inside your HTTPS block. The redirect should be a 301 (permanent) so clients and search engines cache it.
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.here;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```

Automate certificate renewal and expiry monitoring
Let's Encrypt certificates expire every 90 days. Certbot's systemd timer handles renewal automatically, but you need a deploy hook to reload the server — otherwise Nginx keeps serving the old certificate from memory.
```bash
# Verify the renewal timer is active
systemctl status certbot.timer

# Add a deploy hook so Nginx picks up the new cert
certbot renew --deploy-hook "nginx -t && systemctl reload nginx"
```

Beyond automated renewal, monitor certificate expiry externally. A simple cron job or monitoring check that runs openssl s_client against your domain and alerts when expiry is under 14 days catches renewal failures before they become outages.
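As a sketch of such a check (the check_expiry helper name and the 14-day threshold are illustrative, not a standard tool), the threshold logic can live in a small shell function that parses the certificate's notAfter date and fails when expiry is too close, which is exactly the signal a cron-driven alert needs:

```shell
# check_expiry NOT_AFTER_DATE THRESHOLD_DAYS
# Succeeds while the certificate has at least THRESHOLD_DAYS remaining,
# fails once expiry is closer than that. Relies on GNU date's -d parsing
# (assumption: a Linux host).
check_expiry() {
  end_epoch=$(date -d "$1" +%s) || return 2
  days_left=$(( (end_epoch - $(date +%s)) / 86400 ))
  [ "$days_left" -ge "$2" ]
}
```

Wired into cron, feed it the live certificate's notAfter field, e.g. `not_after=$(openssl s_client -connect yourdomain.here:443 < /dev/null 2>/dev/null | openssl x509 -noout -enddate | cut -d= -f2)` followed by `check_expiry "$not_after" 14 || <your alerting command>`.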
```bash
# Check certificate expiry date
openssl s_client -connect yourdomain.here:443 < /dev/null 2>&1 \
  | openssl x509 -noout -enddate
```

Retest regularly
TLS configurations drift. Dependencies get updated, server packages change cipher defaults, and new vulnerabilities surface. Run an external scan after every change and on a regular schedule — quarterly at minimum. The Qualys SSL Labs test is the standard baseline. An A+ grade requires a valid HSTS header with at least six months of max-age.
```bash
# Quick local verification after config changes
openssl s_client -connect yourdomain.here:443 < /dev/null 2>&1 \
  | grep -E 'Protocol|Cipher'

# Confirm old protocols are rejected
openssl s_client -connect yourdomain.here:443 -tls1_1 < /dev/null 2>&1 \
  | grep 'alert'
```

Performance Notes
A well-configured TLS stack should not be a performance bottleneck. In most cases, the right TLS settings actively improve latency compared to defaults.
Use HTTP/2 or HTTP/3 where stable
HTTP/2 multiplexes requests over a single TCP connection, eliminating the head-of-line blocking at the HTTP layer that plagued HTTP/1.1. It should be enabled on every HTTPS server — there is no downside for standard web traffic.
HTTP/3 (RFC 9114) goes further by running over QUIC (UDP), which eliminates TCP head-of-line blocking and provides 0-RTT connection resumption. Nginx 1.25+ supports it natively. Enable it alongside HTTP/2 — clients that support HTTP/3 will upgrade automatically via the Alt-Svc header, and the rest fall back to HTTP/2 over TCP.
```nginx
listen 443 ssl;
http2 on;
listen 443 quic reuseport;
add_header Alt-Svc 'h3=":443"; ma=86400' always;
```

Firewall note: HTTP/3 requires UDP port 443 to be open. If your infrastructure blocks UDP by default, HTTP/3 will silently fail and clients will stick with HTTP/2. Verify with curl --http3-only -I https://yourdomain.here.
Serve a clean certificate chain
A bloated or misordered certificate chain adds bytes to every TLS handshake. Serve only your leaf certificate plus the necessary intermediates — do not include the root CA (clients already have it in their trust store). For Let's Encrypt, the fullchain.pem file contains the correct chain.
ECDSA certificates produce smaller signatures and faster handshakes than RSA. Let's Encrypt supports ECDSA via certbot --key-type ecdsa. If your client base allows it, ECDSA P-256 is the best default — the handshake is measurably faster than RSA 2048, and the certificate itself is roughly a third of the size.
```bash
# Request an ECDSA certificate
sudo certbot certonly --nginx --key-type ecdsa --elliptic-curve secp256r1 -d yourdomain.here

# Verify the chain is correct and complete
openssl s_client -connect yourdomain.here:443 < /dev/null 2>&1 | grep -A 2 'Certificate chain'
```

Session resumption
TLS session resumption lets returning clients skip the full handshake, saving a round trip. There are two mechanisms: server-side session caches and session tickets.
Session tickets encrypt session state with a server-held key and send it to the client. If that key is compromised, an attacker can decrypt past sessions — breaking forward secrecy. The safer option is a server-side session cache, which keeps session state in shared memory and does not expose key material to clients.
```nginx
ssl_session_cache shared:TLS:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
```

Load-balanced setups: If you run multiple Nginx instances without shared memory, you either need to distribute session ticket keys securely across nodes (and rotate them at least daily) or accept that session resumption only works when a client hits the same backend. In TLS 1.3, the protocol's built-in PSK mechanism handles resumption more safely than legacy ticket schemes.
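If you do distribute keys, Nginx supports explicit ticket key files via the ssl_session_ticket_key directive: the first file listed is used to encrypt new tickets, any additional files are accepted for decryption only, which is what makes daily rotation possible without invalidating recent sessions. A sketch, with illustrative paths (generating and pushing the key files to every node is your job, Nginx only reads them):

```nginx
# All nodes share the same key files, pushed out by an external rotation job.
# Generate each key with: openssl rand 80 > ticket.key
ssl_session_tickets on;
ssl_session_ticket_key /etc/nginx/tickets/current.key;   # encrypts new tickets
ssl_session_ticket_key /etc/nginx/tickets/previous.key;  # accepted for decryption only
```

A rotation job then shifts current.key to previous.key, writes a fresh current.key, and reloads Nginx on every node.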
Conclusion
Modern TLS reduces both your attack surface and your connection overhead. TLS 1.3 eliminates an entire class of downgrade and cipher-negotiation attacks while shaving a round trip off every new connection. HSTS prevents SSL stripping. OCSP stapling removes a latency penalty and a privacy leak. HTTP/2 and HTTP/3 turn the single-connection TLS handshake into a multiplexed, low-latency transport.
None of this is exotic. The baseline described here — TLS 1.3 preferred, TLS 1.2 with AEAD ciphers as fallback, HSTS with preload, OCSP stapling, HTTPS redirect, automated cert renewal, and HTTP/2+3 — is achievable in an afternoon and maintainable with a quarterly scan. The friction is in the initial audit; once the configuration is in place, it largely takes care of itself.
The cost of not doing this is higher than it looks. A weak TLS configuration does not just fail security audits — it degrades performance for every visitor, erodes trust signals that browsers and search engines rely on, and leaves you exposed to attacks that were solved years ago.