The first Amino features were anonymous. We deliberately chose not to collect our users’ personal information because we didn’t have a specific need for it. New features came along to change that, though, and we needed a system to store and access that information.

No existing system met our specific legal and ethical requirements, so we decided to design one for ourselves. This is how we made it, including:

  • The process we created for gathering requirements
  • The requirements the process yielded
  • How we implemented a system that satisfied them

The requirements gathering process

We knew our new user account system would have to be secure by design, architected for safety and security from the start, before we could make it available to our users. This project would be unsuitable for the usual “make something that works, then iterate” agile engineering processes.

Before we could protect the information in the system, we had to establish our threat model: that is, who we were protecting it from and how they might try to access it.

The easy and obvious answer is that our solution should be secure from hackers outside our company, inside our network, and to the extent possible even inside the user account system itself. The first is standard, as everyone thinks about external hackers trying to break into sensitive services. The second is sadly less common. In too many organizations, defense stops at the firewall and anyone gaining access to the corporate network has free rein to explore at their leisure. The third scenario gets little attention and is usually dismissed with “once they’ve invaded the service or database, all bets are off”, but that’s not good enough for a system meant to store a user’s most personal information. Our goal was to design a data store that makes life as difficult as possible for an attacker, even one who’s already gained access to it.

At least as potentially damaging as a Hollywood hacker is a disgruntled employee with valid server credentials. It’s as hard to protect against this case as it is against a successful internal hacker, but we owe it to our users to try. We knew it wasn’t sufficient to just decide this was futile and give up.

By our company’s nature, we expect to store very personal, very private information. We need that to do things for our users, such as booking doctor appointments at their request. Those users might not be using their computer in a safe or controlled environment, so we need to be very mindful of how and when we display a user’s own information to them.

It’s important to clarify what groups we’re not trying to protect against. We specifically don’t expect to shield information from valid subpoenas and similar authorized government requests. First, there’s a high legal barrier against asking for personal medical information. While we would forward any such requests to our legal department for review and advice, we expect that they would likely be narrowly targeted and legitimate. Second, it’s probably not technologically possible to make a system that could resist these inquiries. Unlike, say, an end-to-end encrypted messaging system, we have to be able to access the information we store so we can provide services for our users. If we didn’t need that data, we would not collect it.

Designing with empathy

Our most important value for this process is empathy for our users. This isn’t a buzzword or a warm fuzzy feeling, but an honest desire to consider their needs and perspectives. It’s easy to imagine which features we might want in a system made just for our own purposes. It’s harder—but absolutely critical—to imagine how others unlike ourselves might want or need to use it.

The best way to increase our understanding of our users’ needs is to diversify the pool of people we consult for advice. People of different genders, races, religions, socioeconomic brackets, and other attributes have wildly different ideas of acceptable risk and consequences. We tried to get input from as wide a variety of coworkers, friends, and family as we could.

In tech, though, we too often find ourselves in rooms filled with people who look an awful lot like us. In these situations, we found that a useful planning tool was to continually ask ourselves “what’s the worst that could happen?” The answers very often framed the discussions that followed. Examples included:

  • What if an abusive spouse discovered that our user was searching for information about sexually transmitted diseases?
  • What if unaccepting parents found out that their child was gay after seeing that they’d looked for psychological counseling?

Either of those could have very bad outcomes. For instance, we collectively decided that we never wanted to see our names in the news in connection with an assault case triggered by information exposed through our site.

Another useful empathy-building exercise is to consider the financial effect of a data breach or bad publicity on our company. We like Amino and want it to be successful. We can’t grow it into a respected, successful organization unless we take care of our users and their private information. Money can’t be the only (or even main) motivation for careful design, but it’s helpful to imagine how a serious failure could damage our employer, perhaps beyond recovery.

This emphasis on our users’ needs caused us to reject or largely redesign some product ideas. For instance, our product team made the perfectly reasonable request that we automatically store each user’s searches so that they could easily run them again at a later time. In a quick session of “what’s the worst that could happen?”, we imagined someone’s partner forcing them to log into our site so they could view their search history. That had unacceptable risks, so we negotiated a change such that users could explicitly choose to save a search but that we’d never do it automatically or without their permission.

Our requirements

After a lengthy and thorough process, we arrived at a set of hard requirements that a successful design would have to satisfy. We avoided proposing any specific technological solutions at this point, favoring descriptions of what the system must do over details of how it must do it.

Many of our requirements are imposed by regulation. These were easy to collect because they don’t require much decision making on our part. HIPAA describes limits on what data we can store and how it must be stored. Others aren’t as conveniently listed for us, but arise from security best practices and careful consideration.

  • Protected health information (PHI) and personally identifiable information (PII) must be housed in HIPAA-certifiable storage.
  • We must log access to our users’ information, both to read and to modify it.
  • Data must be encrypted either at the database or system level.
  • We must store only the minimal data necessary to provide services, and avoid keeping especially sensitive information like Social Security numbers.

Other requirements are specific to customer products. For instance, we require email verification before permanently associating any personal information with an account to avoid scenarios like:

  • A user mistypes their email address while registering, then books an appointment.
  • The address’s actual owner clicks the verification link we sent and gains access to the original user’s information.

Session management is crucial in any user account system, and where we have to choose between security and convenience, security always wins.

Our users may only be logged in from one location. If they log into Amino from home, leave for work, then log in from there, the home login must be invalidated. This prevents scenarios like sneaky roommates or coworkers browsing information that a user is adding from another computer.

HIPAA requires that users are automatically logged out after 30 minutes without activity. We cheerfully agree with this.

An old security rule is to “never trust the client.” Although we need to set some kind of identifying information in a user’s logged-in session cookie, we don’t want to expose their actual permanent database identifier, even in encrypted form. If a hijacker were to steal a session, any damage must be limited to the duration of that session so that they can’t continue to access the account afterward.

We trust our employees, but the best security designs don’t require us to. We want to use the principle of least privilege to limit any access to our database, even from ourselves whenever possible.

In particular, this means restricting the ability of any user to run queries like “return a list of all accounts in the system.” In normal operation, the user account service will only retrieve one specifically requested piece of information at a time. Any deviations from that must be logged and brought to our attention.

Since the system never needs to select many users at once, it shouldn’t be able to. Its database access account should not be able to execute “tell me everything!” queries, but should be limited to running stored procedures like “return information about this one specific user.”

Database migrations are not normal operation, and they sometimes do legitimately require access to run bulk queries. We should run migrations with their own specific privileges that are only available during the migration process and not at other times. Even then, we should log those operations to make sure they run as expected.

Implementation details

Our user account service architecture fell naturally out of its requirements. This is the point where our initial planning investment paid off handsomely. Since we had already decided what we needed, designing it was relatively easy.

We chose to implement the system as a new standalone microservice. We could pare its feature set to the minimum and therefore reduce its attack surface, and encapsulate the necessary complications behind an easy-to-use interface.

The components of the new service didn’t need—and therefore shouldn’t have—easy access from other Amino services. We host it by itself in a new Amazon Web Services (AWS) account covered by a Business Associate Agreement (BAA) that certifies it to store HIPAA-protected information. Those components occupy their own AWS Virtual Private Cloud (VPC). Security controls on the VPC deny all access except for connections from other Amino services to its API server’s HTTPS port.
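
As a rough sketch of that control, assuming the rule is managed with boto3 (the security group IDs below are hypothetical, not our real ones): a VPC security group denies all ingress by default, so the only rule we add allows HTTPS from the group our other services belong to.

    # Sketch only: VPC security groups deny all ingress by default, so this one
    # added rule allows HTTPS (443) from the security group that other Amino
    # services run in. All IDs here are hypothetical.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-user-account-service",          # hypothetical group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "UserIdGroupPairs": [{"GroupId": "sg-amino-internal-services"}],
        }],
    )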

The dedicated tenancy EC2 instances hosting the system’s API service have encrypted hard drives, as do our RDS for PostgreSQL database servers. After checking with outside counsel and finding that it wasn’t a legal requirement, we chose not to also require database-level encryption. Our thinking was that it wouldn’t protect the data from the weakest link: disgruntled employees who may have found the encryption keys that the user account service was using. It would also add significant complexity to the service’s code, making it hard to develop, test, and secure.

We run intrusion detection software on the service to monitor for and alert on unusual activity. We perform frequent vulnerability scans on each system, and regularly deploy any available software updates to fix known problems.

Database design

Our databases are the heart of our service and also its last line of defense against unwanted access. We protect them in a few ways:

  • As per HIPAA requirements, we log all database searches and updates to an encrypted S3 bucket. These logs are monitored for unusual or restricted activity.
  • We are in the process of restricting the user account service’s database login to a few specific stored procedure queries (sketched below). A hacker with access to the service hosts could at worst monitor incoming connections to collect a list of currently active user sessions. They would be limited to asking for information about those users and would not be able to make queries about any other Amino users.
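
As a rough sketch of what that restriction could look like in PostgreSQL (the role, function, and connection names here are hypothetical, not our actual schema):

    # Sketch only: revoke direct table access from the service's database role,
    # then grant EXECUTE on a single stored procedure so the service can ask
    # about one specific session at a time but never "tell me everything."
    import psycopg2

    ADMIN_DSN = "postgresql://dba@accounts-db/accounts"   # hypothetical DSN

    with psycopg2.connect(ADMIN_DSN) as conn, conn.cursor() as cur:
        cur.execute("REVOKE ALL ON ALL TABLES IN SCHEMA public FROM account_service")
        cur.execute(
            "GRANT EXECUTE ON FUNCTION get_account_by_session(uuid) TO account_service"
        )

    # The service itself can then only look up one user at a time:
    def fetch_account(conn, session_token):
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM get_account_by_session(%s::uuid)", (session_token,))
            return cur.fetchone()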

Our data is split across three physically separate units:

  • The account database is the top-level store that contains email addresses, hashed passwords, and links to the profile and logging databases.
  • The profile database stores only our users’ PHI and PII, and no links to the account or logging databases.
  • The logging database stores only logging information, and no links to the account or profile databases.

This separation gives the system several nice properties:

  • If only the account database is compromised, attackers at worst will have access to nothing but a list of email addresses and hashed passwords.
  • If only the profile database is compromised, attackers won’t have the links required to associate different pieces of information together. For example, it wouldn’t be possible to tell which “first name, last name” collections correspond to insurance card objects or appointment booking requests. They would be able to see that someone in the system has a particular insurance policy but not who that person is, and not which user made an appointment to see a doctor for a sore back.
  • If only the logging database is compromised, attackers would see that someone searched for a family practice doctor, while someone else searched for an orthopedic surgeon. It would not be possible to tell who performed those searches.

The account database

The account database stores the bare minimum necessary to create and authenticate users. It deliberately stores no information about their real names, phone numbers, or other identifying information. Each kind of data we’d need to store about a user—their name and address, insurance information, communication preferences, etc.—is assigned a different random UUID that refers to its own record in the profile database. If a user starts to store personal information before their account is verified, and they or someone else tries to register their email address again, those records and their links are destroyed so that the new account owner starts over with a blank slate. Passwords are hashed with the Argon2 key derivation function (the winner of the Password Hashing Competition) and never stored in plaintext.
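
As a sketch of the password handling, here’s one way it could look in Python using the argon2-cffi library (not our exact code):

    # Sketch of Argon2 password handling with the argon2-cffi package: only the
    # encoded hash is ever stored, never the plaintext password.
    from argon2 import PasswordHasher
    from argon2.exceptions import VerifyMismatchError

    hasher = PasswordHasher()

    def hash_password(password):
        return hasher.hash(password)        # encoded Argon2 hash, safe to store

    def check_password(stored_hash, password):
        try:
            hasher.verify(stored_hash, password)
            return True
        except VerifyMismatchError:
            return False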

Email verification tokens are also UUIDs. To verify their email address, users must supply both the token and their password. This avoids the problem of a user’s mistyped address landing their personal information in the wrong hands.
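
Continuing the sketch above (field names are hypothetical), the verification check requires both pieces before the account is marked as verified:

    # Sketch: verification succeeds only when the caller presents the emailed
    # token AND the account password (check_password is from the sketch above).
    def verify_email(account, presented_token, password):
        if account.verification_token != presented_token:
            return False
        if not check_password(account.password_hash, password):
            return False
        account.email_verified = True
        return True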

When a user logs into the service, we generate a new random UUID session token, store it in place of whatever token might have been on the user’s account record, and set an expiration timestamp on their account to thirty minutes in the future. This immediately logs out any other sessions at home or work, because each account can only physically store a single session token. That session token is returned to the user in place of any usernames, ID objects, or other permanently identifying information.
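
A login sketch along those lines, again with hypothetical field names and reusing check_password from the earlier sketch:

    # Sketch of login: a fresh random UUID token overwrites any existing one, so
    # a session left open elsewhere is implicitly invalidated.
    import uuid
    from datetime import datetime, timedelta

    SESSION_LIFETIME = timedelta(minutes=30)

    def log_in(account, password):
        if not check_password(account.password_hash, password):
            return None
        account.session_token = str(uuid.uuid4())      # replaces any old token
        account.session_expires = datetime.utcnow() + SESSION_LIFETIME
        return account.session_token                   # the only ID the client sees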

We also provide API endpoints to check and refresh a session’s validity: if the token exists and hasn’t expired, it updates the expiration time to thirty minutes from the current time and returns “true.” This allows users to stay logged in as long as they’re actively using our services, but invalidates their session after thirty idle minutes.
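
The check-and-refresh behavior is essentially a sliding window, continuing the login sketch above:

    # Sketch: a valid, unexpired token has its expiration pushed thirty minutes
    # ahead; anything else counts as logged out.
    def check_session(account, presented_token):
        now = datetime.utcnow()
        if account.session_token != presented_token:
            return False
        if account.session_expires is None or account.session_expires < now:
            return False
        account.session_expires = now + SESSION_LIFETIME   # sliding expiration
        return True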

The result of all this is that tokens are inherently ephemeral and unguessable: it’s not possible for a hacker with access to the API to formulate a request like “give me all of Joe Smith’s information.” At worst, they can say “give me the information for this token, as long as it hasn’t been idle for more than half an hour and the user hasn’t logged out or logged back in on another device.” This avoids entire classes of security problems.

The profile database

The profile database stores validated, well-formed records of many types. Each record has a separate “link key” as described in the account database section, and that link key is never stored in another profile record. This creates relationships like “user has an insurance card” and “user has a personal information record,” but never “this insurance card is related to that personal information record.” We expose these records with REST operations like “create an insurance card for the user with this session token” or “patch the communication settings to update the phone number for the user with that session.”
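
As a sketch of that shape (the storage interfaces and field names here are hypothetical): resolving the session gives the service a link key, and the profile record we write carries only that key, never the account’s own identifier.

    # Sketch: the insurance card record is keyed by a random link UUID stored on
    # the account record, so the profile database never references the account.
    import uuid

    def create_insurance_card(accounts_db, profile_db, session_token, card_fields):
        account = accounts_db.find_by_session(session_token)   # hypothetical lookup
        if account is None:
            raise PermissionError("invalid or expired session")
        link_key = account.insurance_card_link or str(uuid.uuid4())
        account.insurance_card_link = link_key      # persisting this change is elided
        profile_db.upsert("insurance_cards", key=link_key, fields=card_fields)
        return link_key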

The logging database

We don’t currently log user actions, but it’s possible that we may decide to later. If we do, the separate logging database will:

  • Be append-only so that the user account service only has permission to add new log data, and not to edit, delete, or fetch existing log entries, and
  • Contain only de-identified data so that at most we’d be able to see that “the user with this random UUID logging token wanted to find a pulmonologist.”

No system would have simultaneous read access to the account database and the logging database to be able to resolve those tokens. We might want to see, for example, that people who searched for allergy specialists also searched for dermatologists. By design, we don’t want to be able to see who those people are.
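
If we do build it, the grants for the service’s logging role would look something like this sketch, assuming another PostgreSQL database (the table and role names are hypothetical):

    # Sketch: the logging role may only INSERT de-identified rows; it cannot
    # read, change, or delete existing log entries.
    import psycopg2

    conn = psycopg2.connect("postgresql://dba@logging-db/logs")   # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("REVOKE ALL ON search_log FROM logging_service")
        cur.execute("GRANT INSERT ON search_log TO logging_service")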

Conclusion

It’s not easy to plan, design, and implement a user account service that both gives users ready access to their own information and keeps others from being able to do so. It’s especially hard to build one designed to keep even our own employees from doing so without anyone noticing.

However, by using real empathy for our users and a policy of “first, do no harm,” we believe we’ve accomplished that. We want to share these ideas to help other engineers do the same for their own users.