Posts Tagged ‘security’

HTTP Strict Transport Security is subtly confusing

I stumbled into an issue with HTTP Strict Transport Security (HSTS) earlier this year, which led me down a rabbit hole to truly understand what HSTS was, how it should be used, and why it was introduced to begin with.

What is HTTP Strict Transport Security?

HTTP Strict Transport Security (HSTS) is a standard to force users to connect to a website via HTTPS instead of HTTP.

It’s simply a header that tells the browser that, for a period of time, only HTTPS connections should be made (i.e. the browser will not attempt HTTP connections):

Strict-Transport-Security: max-age=<expire-time>; includeSubDomains

A user has to visit the HTTPS site at least once for their browser to see the HSTS header and gain the protection offered by it; HSTS relies on a trust on first use scheme.
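
As a concrete illustration, here’s roughly what emitting the header could look like from a PHP application (a minimal sketch; the one-year max-age is a common choice, not a requirement):

<?php
// Per RFC 6797, the HSTS header must not be sent over non-secure
// transport, so only emit it on HTTPS responses.
if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    header('Strict-Transport-Security: max-age=31536000; includeSubDomains');
}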

HSTS aims to address 3 classes of threats:

  • Passive Network Attackers: an attacker eavesdropping on an insecure (HTTP) connection, allowing the attacker to grab user data (e.g. session identifiers from a non-secure cookie, which may have been set in a prior, secure, visit to the site).
  • Active Network Attackers: an attacker compromises a node on the network, redirecting the user elsewhere or serving the attacker’s content to the user.
  • Web Site Development and Deployment Bugs: a secure (HTTPS) site serving insecure content, allowing an attacker to inject compromised content (e.g. serving a malicious script to the user).

HTTP → HTTPS redirects

You could, as an alternative, configure a web server to redirect all HTTP traffic to HTTPS. However, there is some risk in doing this: as both MDN and cio.gov point out, the initial HTTP request happens before the redirect, giving a man-in-the-middle attacker a window in which to intercept the request and send the user somewhere malicious.

It is worth taking a step back here because, while redirects are a concern, RFC 6797, which details HSTS, isn’t specifically focused on redirects, but on the broader need to push users to secure (HTTPS) connections.

Jackson and Barth proposed an approach, in [ForceHTTPS], to enable web resources to declare that any interactions by UAs with the web resource must be conducted securely and that any issues with establishing a secure transport session are to be treated as fatal and without direct user recourse. The aim is to prevent click-through insecurity and address other potential threats.

This specification embodies and refines the approach proposed in [ForceHTTPS].

To add some context here, also consider that this RFC is from 2012, close to 12 years old at this point, and the web looked quite different then. HTTPS was nowhere near as ubiquitous as it is today. Sites did not always serve HTTPS and, for sites that did, HTTPS was typically provided as an option, not a requirement for all users (see Timeline of HTTPS adoption). Given that context, it’s unsurprising that section 7.2 of the RFC actually has a “SHOULD” behavior recommendation for a redirect:

If an HSTS Host receives an HTTP request message over a non-secure transport, it SHOULD send an HTTP response message containing a status code indicating a permanent redirect…

The takeaway here is that HTTP Strict Transport Security is not necessarily a replacement for an HTTP → HTTPS redirect. An initial connection to the HTTPS version of the site needs to be made for the browser to see the HSTS header and, assuming the risk is acceptable, a redirect is an elegant mechanism to automatically upgrade the connection for users.
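
For illustration, a bare-bones version of such a redirect in PHP might look like the sketch below (in practice this logic often lives in the web server or load balancer configuration rather than application code):

<?php
// Permanently redirect any plain-HTTP request to its HTTPS equivalent;
// RFC 6797 section 7.2 recommends a permanent redirect status code.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}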

You probably need a redirect…

Two mechanisms seemingly eliminate the need for an HTTP → HTTPS redirect:

  • HSTS preloading
  • HTTPS-First Mode in browsers

I say “seemingly” because, in practice, you might find you still need a redirect.

Google has long supported an HSTS preload service. The service maintains a list of HSTS sites, and browsers will only attempt HTTPS connections for sites on the list. However, adding a site to the preload list might not be appropriate for certain sites or site owners, and one of the requirements for a site to be added to the preload list is that it implement an HTTP → HTTPS redirect.
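
Getting on the list also requires serving an HSTS header with a max-age of at least one year and both the includeSubDomains and preload directives, i.e. something like:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload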

Browsers are also making a push towards HTTPS-First Mode: attempting an HTTPS connection even if HTTP is specified by the user, and only falling back to HTTP if the HTTPS upgrade fails. However, the implementation isn’t quite as cut-and-dry as that. For example, there are a number of heuristic-based flags in Chrome that determine how and when HTTPS-First Mode is actually engaged (Balanced HTTPS-First Mode, HTTPS-First Mode For Typically Secure Users, etc.). So if, for example, you migrate a site to HTTPS and drop HTTP support, don’t be surprised if some users are unable to access the site, as an automatic HTTPS upgrade is not performed on bookmarked links (this was a situation I found myself in a few months ago).

The ultimate takeaway here is that, while there is some security risk, you should probably still support an HTTP → HTTPS redirect. In the future, a strict HTTPS-First Mode may eliminate the need for such a redirect, but we’re not there yet.

Preventing fake signups

The problem

One annoying problem I began encountering with ScratchGraph a while ago was fake signups. Every so often I would notice a new account created, but there would be no interaction on the site beyond account creation. I’d also notice some other errors in the application log: the spam bot would attempt to fill in and submit every form on the landing page, so I’d get login failure and reset account errors as well. At first, I figured I could just ignore this; metrics would be a bit off, I’d have a bounced welcome email from time to time, and I could just purge the few fake accounts at some point in the future. Unfortunately, it got to the point where so many fake accounts were being created that figuring out whether an account was real took real effort, there was a lot of garbage in the database and logs, and, perhaps most importantly, welcome emails were being bounced or flagged as spam, dragging down my email reputation and increasing the possibility of legitimate emails going to spam.

A solution

There’s a bunch of blog posts from email marketing services detailing this issue, e.g. Mailchimp, Folderly. A few common solutions are usually mentioned:

  • ReCAPTCHA
  • Email confirmation for new accounts
  • Some sort of throttling
  • Honeypot fields

I opted for honeypot fields:

  • I wanted to minimize dependencies and additional integrations, so no ReCAPTCHA.
  • Email confirmation seemed like too heavy of a lift, and I disliked the idea of having a user jump from the application to their inbox.
  • Throttling kind of makes sense (I could check whether other forms were submitted around the same time, with the same email address, and flag the account); this is possible, but not trivial for a PHP application where you don’t have service-level jobs running in the background.

So, I added a honeypot field to the signup form. In practice, I wrapped a text input in a hidden div and gave the input the name fullname. A user’s full name is not otherwise requested or collected during signup, and I figured the spambot might try to gauge what to do with the field based on its name, so I gave it a realistic one. On the backend, if fullname is non-empty, the request still succeeds (HTTP 200), but an account is not created. Instead, the email and IP address are logged in a table for spam signups (I figure I might be able to do something with this data at some point).
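
Here’s a rough sketch of the idea; the markup and the logSpamSignup helper are illustrative, not ScratchGraph’s actual code. The hidden field on the frontend:

<div style="display:none">
    <input type="text" name="fullname" tabindex="-1" autocomplete="off" />
</div>

…and the check on the backend:

<?php
// Real users never see the hidden "fullname" field; bots tend to fill it in.
if (!empty($_POST['fullname'])) {
    // Log the attempt and pretend everything worked (HTTP 200),
    // but don't create an account.
    logSpamSignup($_POST['email'] ?? '', $_SERVER['REMOTE_ADDR']); // hypothetical helper
    http_response_code(200);
    exit;
}
// ...otherwise, continue with normal account creation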

Efficacy

I was somewhat skeptical as to how well a honeypot field on the signup form would work; I was imagining spambots being incredibly sophisticated, perhaps noticing that the field was hidden or that nothing changed on the frontend. It turns out this is not the case, or at least not the case for the spambots I was facing. From what I can tell, this relatively simple approach has detected and caught all fake signups from spambots.

Fake signup caught via honeypot field

I imagine there’s a shelf life to this solution (happy to be wrong) but, when it starts to fail, I can always integrate another approach such as throttling or ReCAPTCHA.

autocomplete="off"

Something I haven’t thought about much, but very important: for sensitive information, turn off autocomplete on input tags.

<input type="text" name="super-secret-pin-num" autocomplete="off" />

It was originally a non-standard attribute (it has since been standardized in HTML), but all the major browsers implement it, including WebKit/Safari.
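
It can also be set on the form element, which applies it to every field within the form:

<form method="post" autocomplete="off">
    <input type="text" name="super-secret-pin-num" />
</form>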

h/t Pete Freitag

Network security and filthy lies told by Windows XP

Note: Everything below relates to Windows XP Professional with Simple File Sharing turned off.

One of the simple things that can be done to prevent unwanted peer-to-peer network access to data on Windows is to disable the Guest account (you can alternatively give permissions to specific users or groups, but for my situation this is a hassle as I generally don’t need that level of granularity).

By some mechanism unknown to me (perhaps malware or a recent virus), the Guest account on my desktop was turned on. With the Guest account on and shared folders allowing everyone access, any machine connected to the network was able to seamlessly log in and access anything in the shared folders. The situation bugged me for quite a while, as I didn’t realize the active Guest account was the culprit: looking at the User Accounts extension in Control Panel, I saw the following:

win xp guest account off

Unfortunately, this does not mean the account is actually disabled; it simply means it doesn’t appear on XP’s welcome screen. I finally took a look at the Administrative Tools >> Computer Management extension, then navigated to Local Users and Groups >> Users, and saw that the Guest account was enabled. Disabling it here (right-click on Guest >> Properties >> check the “Account is disabled” checkbox) actually disabled the account and prevented automatic authentication as Guest for incoming peer-to-peer connections.

win xp users
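
As an aside, the same thing can be done from the command line (run as an administrator), which sidesteps the misleading Control Panel UI entirely:

net user guest /active:no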

As you can probably guess, my real annoyance here is the discrepancy between what appears in the User Accounts extension vs. the actual state of the account.