Between Facebook controversies and mass data leaks, headlines have been dominated by questions about technological advances and ethical principles. Users are pushing back and demanding regulations to protect their right to privacy. How does that impact those of us who deliver technical solutions?
How Privacy is Changing
The GDPR (General Data Protection Regulation) is an initiative that the European Union will enforce as regulation starting May 25th, 2018 to contain what is happening and attempt to make a change. Google lost a first-of-its-kind 'right to be forgotten' case in the UK against a businessman with a criminal conviction who wanted his data history removed from search engines.
What will happen next? Will North America follow? What about the rest of the world? These are questions that I can’t answer. What I can say is that we, the technical professionals working in the industry, have a duty as a community to consider ethical boundaries in the technology and software we create.
My Evolving Approach to Technology
In my professional role as a Solutions Architect, I spend time with clients, listen to their ideas and guide them on how technology can be used to make their concept a reality. In a nutshell, we go over what we plan to build, what problem we want to solve, what value the solution can bring to the user, and specifically focus on the user journey. We discuss their long term vision, how they will monetize their work and determine the high-level phases of the solution. Finally we discuss any apparent risks and security implications (especially when we are dealing with sensitive data).
After the meeting, equipped with the client's perspective, I begin architecting a solution. The solution describes the composition of the technical team, the technology stack to use, and the scope of the project, and includes a high-level breakdown of phases. The solution also provides the client with a cost estimate.
At least, that's how I did things until one meeting changed my entire perspective on how to approach privacy and security.
The Client Who Changed My Processes
It began as a typical client meeting. I made notes on my laptop as my clients introduced themselves as seniors in the medical community.
Then, one woman on the phone mentioned that she would be the first user of the software. She explained that she used to have a fulfilling job in the workforce, but a debilitating illness now leaves her unable to work.
Time stopped in that moment for me and my entire approach changed. The first thing that came to mind was: what if it was me going through what she went through? What if I couldn’t work anymore? I felt an overwhelming urge to build this software in the best way possible, so that this woman’s lifestyle could improve. I wanted to ensure that her pain and struggle are respected, that her day to day activity is secure and private, and that her data is portable and available to her when she needs it.
She was willing to draw on her condition to participate in building a tool that could help others struggling with the same illness. In doing so, she helped me rethink my approach and consider the human aspect in order to build a better technological solution.
This goes beyond my ‘job.’ This is about ethics and humanity.
But then again, what about other software? Other clients? What about other users? Don’t all users, all people, require software that is safe, secure, and accessible?
The Impact of Technology and The Human Side of Privacy
When you are architecting software or any piece of technology, you don't necessarily think about how it will affect the person using it; I mean really affect the person. I now realize that our user stories and use cases must account for users' privacy and security needs.
It is important to consider those needs from the inception of the project and to ensure that the client understands their importance. A person's data should belong to them and them alone; they should be able to access, move, modify, and delete it at their own discretion, as long as doing so doesn't break the law.
Now, when architecting a solution, I go through specific use cases that help me understand not only how the end users will interact with the data, but why and how they will use it. For instance, a person using software to track their health might want to be able to extract that information to either show it to doctors or get second opinions. In that case, how do we ensure that the data is secure and private before the user decides to extract it? How do we make sure the user has the right of access to their data? And what are the implications of a breach of that data? By considering the human side, I found myself agreeing with the GDPR's regulations.
Embracing Privacy by Design
Taking a privacy by design approach ensures that the entire team, from DevOps to designers to developers to quality analysts, is on board. We should all be aligned to make sure that every use case is included in the design of the software.
What IS privacy by design?
Privacy by design is not just about data protection; it is about designing so that data doesn't need protection in the first place. Its root principle is enabling a service without transferring control of data from the citizen to the system (i.e., the citizen is not identifiable or recognizable).
Two related principles follow from this:
1. Data protection by design: data controllers must put technical and organizational measures such as pseudonymisation in place to minimize personal data processing.
2. Privacy by default: data controllers must only process data that is necessary, to the extent that it is necessary, and must only store data as long as necessary.
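Pseudonymisation, named in the first principle, can be as simple as replacing direct identifiers with keyed hashes so records stay useful for processing without revealing who they describe. A minimal sketch in Python; the key value and the record fields here are purely illustrative, not a production scheme:

```python
import hmac
import hashlib

# Secret key kept outside the analytics store (e.g. in a vault).
# Whoever holds only the pseudonymised records cannot reverse them.
PSEUDONYM_KEY = b"example-secret-key"  # illustrative value only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (email, patient ID) with a keyed hash."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "jane.doe@example.com", "steps_today": 8200}
safe_record = {
    "patient_ref": pseudonymise(record["patient_id"]),
    "steps_today": record["steps_today"],
}
# Analysts can still group by patient_ref across records,
# but never see the underlying email address.
```

The same input always maps to the same pseudonym, so joins and aggregations still work, while the raw identifier never leaves the intake layer.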
Aside from these formal definitions, privacy by design is really a culture. It’s a mindset that will let your company stand out and cultivate respect and trust from its users.
Best practices for designing software with privacy and security in mind
As for designing software for privacy, after thorough research I’ve come up with six things to remember.
1. Ensure setup preserves the privacy of the data, its integrity and availability.
When building new IT systems that store personal data, ensure that the setup preserves the privacy of that data, along with its integrity and availability. For example, use TLS/SSL certificates and encryption, and implement proper access logging and monitoring of the application so that alerts fire in case of a breach. This helps ensure that affected users are notified in time.
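The logging-and-alerting idea can be sketched in a few lines. This is a toy illustration, not a monitoring product: the threshold and in-memory counter are assumptions, and in production the audit log would ship to a SIEM and the alert would page a human:

```python
import logging

# Dedicated audit logger; in production this would ship to a
# monitoring system rather than stay on the local machine.
audit_log = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO)

FAILED_LOGIN_THRESHOLD = 5  # illustrative threshold
failed_logins = {}  # user_id -> consecutive failure count

def record_login_attempt(user_id: str, success: bool) -> bool:
    """Log every access attempt; return True when an alert should fire."""
    audit_log.info("login user=%s success=%s", user_id, success)
    if success:
        failed_logins.pop(user_id, None)  # reset on successful login
        return False
    failed_logins[user_id] = failed_logins.get(user_id, 0) + 1
    return failed_logins[user_id] >= FAILED_LOGIN_THRESHOLD
```

The point is the shape of the practice: every access is recorded, and a simple rule over the records turns suspicious patterns into timely notifications.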
2. Privacy from inception of the project through policies, terms and conditions.
At Rangle, we have policies, terms and conditions in place about anything that has privacy implications. We communicate to the user the intended use of the data, if it needs to be shared, who it is shared with, as well as why and when it will be disclosed. We also notify them if the data usage has changed at any point during their engagement with the platform. This all should be visible to the user before they become part of the platform as well as while they are using the platform.
3. Use the least amount of data possible.
A simple way to remember this "rule" is to only ask for information that's absolutely necessary. An example of this is using 'nicknames' instead of the user's full name, provided it's not a legal platform dealing with legal data. You can also streamline the amount of data you gather by refraining from asking the user for their address unless the platform is used to ship goods or deliver home services. Refrain from sharing or storing the user's geolocation, even if the application needs to be geolocation-aware. All these decisions need to be addressed carefully when designing the platform. Overall, remember that the less data you have, the less damaging a breach will be. (This point is debatable when the company is built on AI and machine learning.)
4. Have end-to-end security with strong encryption.
It's important to make sure that the company's software development team has had proper training on how to write secure code. It's equally important that developers include the hours they spend on security efforts in their estimates; that way, they won't rush to build core features to meet deadlines at security's expense. An example of this is testing for SQL injection, one of the most common exploitable vulnerabilities in software.
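To make the SQL injection point concrete, here is a small sketch using Python's built-in sqlite3 module. The table and the malicious input are invented for illustration; the takeaway is the parameterized query, which treats user input as data rather than as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern (don't do this): string formatting lets the
# input rewrite the query and match every row.
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: the ? placeholder binds the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
# rows is empty: no user is literally named "alice' OR '1'='1"
```

A test suite that feeds inputs like this one into every query path is a cheap way to catch the vulnerability before an attacker does.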
5. Be transparent with end user about data usage and make privacy settings visible.
As I mentioned, it’s important to communicate the intended use of the data at the same moment the user is providing personal information. When providing this information, users should actively opt-in to targeted, direct or indirect marketing. Users should be able to easily access their privacy and security settings, and they should be given control over the data they’ve provided. Finally, it’s important to make the terms regarding their data understandable. If you’d like more information on designing with visibility and transparency, read this article about best practices for obtaining marketing consent.
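Active opt-in can be encoded directly in the data model so that consent is impossible to grant by accident. A minimal sketch (the class and field names are my own, not a standard API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MarketingConsent:
    """Consent is off by default; the user must actively opt in."""
    granted: bool = False
    granted_at: Optional[datetime] = None  # recorded for audit purposes

    def opt_in(self) -> None:
        self.granted = True
        self.granted_at = datetime.now(timezone.utc)

    def withdraw(self) -> None:
        # Withdrawing must be as easy as granting.
        self.granted = False
        self.granted_at = None

consent = MarketingConsent()  # a new user starts with no consent given
```

Defaulting `granted` to `False` is the code-level equivalent of "no pre-ticked boxes," and keeping a timestamp lets you show when and whether consent was actually given.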
6. Think of the user in the event of a breach.
The user has a right to claim compensation for damages caused by a breach. While this is often an overlooked topic when building software, it's important to consider what happens and how the user is affected in the case of a breach. Remember to ask yourself, "what are the action points?" and "how will the user be compensated?"
Overall, privacy by design is a mindset that will let your company stand out and cultivate respect and trust from its users.
All the technology we create touches lives. It can improve them or, worse, harm them. Privacy by design goes beyond working through a static checklist when building software. It's about thinking of the 'human' experience of the software. It's about the empathy that connects us to other humans with feelings, thoughts, stories, and unique journeys. It's about forgiveness, being free of judgment, being helpful, and promoting awareness about things that matter. It's about the right to privacy, security, and most importantly, humanity.