
Automated Decisions, Algorithms, Profiling and AI: EU Data Protection Lessons


Companies and organisations should ensure that their data protection compliance is not reduced to a set of policies and procedures, quarterly reports and annual reviews. Data protection outcomes should not become synonymous with the introduction of enterprise privacy software, compliance-team updates of controls, or the treatment of data privacy as an intractable legal and IT add-on to be overcome. Effective data protection should be dynamic and integral to day-to-day activities, in the way that workplace health and safety, financial probity and corporate good conduct flow through organisations, affecting almost every decision. Data protection should not play catch-up to digital transformation initiatives, IT strategy changes, research and development priorities or expansions of the supply chain. Data protection principles should be applied consciously to strengthen an organisation’s core DNA and operating model. As a result, whenever personal data are collected, stored or used, data protection should become a byword for responsible data management, excellent data ethics, the protection of individuals’ personal data, accountability, security, resilience, profitability, trust and innovation.

Data Protection by Design and by Default

In the same way that financial transparency, environmental impacts and board accountability are key measures for listed companies, data protection should be designed into an organisation’s way of doing business, so that it becomes second nature. The EU’s General Data Protection Regulation (GDPR) has increased the prominence and status of Data Protection by Design, Security by Design and Privacy by Design (PbD) practices. The data protection principles of transparency, accountability and data minimisation are crucial. The Data Protection Impact Assessment (DPIA) is a practical tool to practise high-level data governance, demonstrate compliance and add vital data intelligence to an organisation’s knowledge base. Data protection should be operationalised at the beginning of decision-making processes and information life cycles to maximise the planned outcomes. Poor data governance should be considered as problematic as poor workplace health and safety, poorly trained staff and financial mismanagement.

Automated Decisions

Automated decisions are assessments, judgements, results and outcomes made by computers without human intervention. These decisions are often produced by computer calculations whose outcomes are not checked or verified by humans, yet they can have serious economic, financial, political and social implications for individuals. Companies and organisations may carry out automated decision-making without fully assessing its impact or realising that specific data protection rules apply. The outsourcing of Human Resources functions and other business processes has redirected some automated decisions away from organisations’ direct internal management structures, creating greater risks. However, legal responsibilities and liabilities remain with the organisation that acts as the personal data controller. Automated decisions can be based on assumptions about a person’s skills, decisions, actions, intentions or characteristics. Assumptions can be wrong, out of date or incomplete and cause discrimination, financial loss, loss of opportunity, distress or other damage. Companies and organisations should be transparent about the assumptions made by automated decisions and apply internal quality checks, testing and outcome verification. Individuals affected should also be given a way to intervene in the decision-making process, request human involvement, express their views or question the outcome.
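As a minimal illustration of this principle (not drawn from the article; the names, scores and thresholds below are entirely hypothetical assumptions), an automated decision pipeline can be built so that borderline outcomes, or any case where the individual asks for human involvement, are routed to a human reviewer rather than applied automatically:

```python
# Hypothetical sketch: an automated decision with a built-in human-review route.
# All field names, scores and thresholds are illustrative assumptions, not a real system.

from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str            # "approved", "declined" or "needs_human_review"
    reason: str             # recorded so the individual can question the outcome
    reviewed_by_human: bool


def automated_decision(score: float, review_requested: bool) -> Decision:
    # Route low-confidence results, and any case where the individual has asked
    # for human involvement, to a person instead of deciding automatically.
    if review_requested or 0.45 <= score <= 0.55:
        return Decision("needs_human_review", "borderline score or review requested", False)
    if score > 0.55:
        return Decision("approved", f"score {score:.2f} above threshold", False)
    return Decision("declined", f"score {score:.2f} below threshold", False)


print(automated_decision(0.50, review_requested=False))  # escalated to a human
print(automated_decision(0.80, review_requested=True))   # escalated on request
```

The design choice being illustrated is simply that the escalation route and the recorded reason exist by design, so that outcomes can be verified and questioned later.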

Algorithms and Strategy

An algorithm is a sequence of defined, computer-implementable instructions used to solve a type of problem or to perform a computation. Algorithms are present wherever computers operate. As a result of the exponential growth of computing power, the enormous increase in data and the rise of artificial intelligence, algorithms have become more prominent in everyday business and in how organisations operate. Companies and organisations should therefore ensure that they have a clear strategy for the use of algorithms that affect individuals. That strategy should sit alongside overall business strategies for growth, efficiency, profits and innovation. All strategic outcomes should be quality-tested against how they protect individuals’ personal data, promote information security (and cybersecurity), encourage data transparency, and create data accountability and data fairness (quality and data minimisation).
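To make the definition concrete, a deliberately simple sketch (an example of ours, not taken from the article) shows an algorithm as a fixed sequence of steps a computer can follow to solve a defined problem, here finding the largest value in a list:

```python
# A deliberately simple algorithm: find the largest number in a list.
# Each step is defined and computer-implementable, matching the definition above.

def largest(values: list[float]) -> float:
    best = values[0]             # start with the first value
    for value in values[1:]:     # examine every remaining value in order
        if value > best:         # keep the larger of the two
            best = value
    return best


print(largest([3.2, 7.5, 1.0, 4.4]))  # prints 7.5
```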

Profiling

The rise of information technology, online transactions, social media and internet usage around the world has created an explosion of profiling. Companies and organisations may carry out profiling without fully assessing its impact or realising that specific data protection rules apply to the practice. Profiling is the use of mathematical formulas, computations or algorithms to categorise individuals into one or more classes or groups. Profiling can also be used to evaluate individual characteristics such as performance at work, economic standing, health, personal preferences, interests, reliability (skill or competence), behaviour, location, movement, intention or priorities. The most intrusive elements of profiling can be the ability to infer information from data and the ability to predict an individual’s future choices or actions. Inferences and predictions can be wrong, biased, incomplete or based on irrelevant data, yet have a substantial effect on individuals, including discrimination, financial loss, loss of opportunities, distress or other damage. Companies and organisations must be transparent about their use of profiling, apply internal quality checks, and practise data minimisation and verification. Individuals affected must be able to seek information about their profiles and question the decisions made about them.
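A minimal, entirely hypothetical sketch of profiling (the rules, categories and data below are illustrative assumptions only) shows how an algorithm places an individual into a group from limited inputs, and why recording the basis of that inference matters for transparency and correction:

```python
# Hypothetical profiling sketch: categorise an individual into a segment and record
# the inputs used, so the inference can be explained, checked and corrected.
# Rules, segment names and data are illustrative only.

def profile_customer(purchases_per_month: int, average_basket: float) -> dict:
    if purchases_per_month >= 8 and average_basket >= 50:
        segment = "high_value"
    elif purchases_per_month >= 3:
        segment = "regular"
    else:
        segment = "occasional"
    return {
        "segment": segment,
        # Recording the inputs supports transparency, verification and later correction.
        "basis": {
            "purchases_per_month": purchases_per_month,
            "average_basket": average_basket,
        },
    }


print(profile_customer(purchases_per_month=2, average_basket=120.0))
```

Even in this tiny example, the inference can be wrong or based on incomplete data (a high-spending but infrequent customer is labelled "occasional"), which is exactly why quality checks and a route for individuals to question their profile are needed.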

The GDPR has one of the most sophisticated regulatory frameworks for dealing with profiling and automated decision-making. In most cases, automated decision-making involves profiling. EU policy makers anticipated the growth of profiling by ensuring that foreign companies (with or without an EU presence) that profile the behaviour of individuals in the EU fall within the scope of the GDPR, even where the profiling operations take place outside the EU. This is not always well understood and is often ignored by organisations. As well as compliance with the GDPR’s main principles and provisions, profiling should always be accompanied by a DPIA. These DPIAs must also comply with the requirements of the relevant EU member state’s data protection regulator and local laws. Depending on the nature of the profiling, consulting the individuals affected and the data protection regulator may also be required. The Data Protection Officer should support and drive the production of high-quality DPIAs that are well written, honest, easy to understand, effectively reviewed and kept up to date.

Artificial Intelligence

Artificial Intelligence (AI) is the ability of computer systems or computer-controlled robots to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, translation between languages, performing manual tasks, interacting with other computers and interacting with humans. AI is big business and is set to transform the global economy, work, home, education, healthcare and security. The global artificial intelligence market is expected to reach USD 390.9 billion by 2025, according to a report by Grand View Research, Inc., and is anticipated to expand at a Compound Annual Growth Rate (CAGR) of 46.2% from 2019 to 2025. Companies and organisations building AI systems should ensure that algorithms are tested and reviewed and that outputs are verified. Data sources should be quality checked to remove incomplete data, bias and out-of-date information. Assumptions and inferences should be robustly tested. These steps are basic data hygiene and reflect similar GDPR requirements; however, compliance with the GDPR and other relevant data protection and privacy laws should be specifically incorporated into AI data life cycles.
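As a rough arithmetic check of the quoted forecast (a sketch only; the implied 2019 starting value is our inference, not a figure from the report), compound annual growth links the two numbers as end = start × (1 + CAGR)^years:

```python
# Compound Annual Growth Rate arithmetic for the figures quoted above.
# The implied 2019 starting value is an inference, not a figure from the report.

end_value_2025 = 390.9   # USD billions, per the quoted forecast
cagr = 0.462             # 46.2% per year
years = 6                # 2019 to 2025

implied_start_2019 = end_value_2025 / (1 + cagr) ** years
print(f"Implied 2019 market size: ~{implied_start_2019:.1f} USD billions")  # roughly 40
```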

Companies and organisations should ensure that AI is explainable, so that the individuals affected can understand it better and trust can be built. This requirement maps across to the GDPR’s principles of fairness, lawfulness, transparency, purpose limitation, accuracy, integrity, confidentiality and accountability. Frameworks have been published to help organisations manage and explain AI to improve accountability. The European Union’s High-Level Expert Group on AI has published Ethics Guidelines for Trustworthy Artificial Intelligence. The United States National Institute of Standards and Technology (NIST) has published Four Principles of Explainable Artificial Intelligence. The UK’s data protection regulator, the Information Commissioner’s Office (ICO), and the Alan Turing Institute have published joint guidance on Explaining Decisions Made with AI.

Data Protection Lessons

Data protection maturity can improve companies’ and organisations’ key strategic goals of profitability, growth, efficiency, trust, innovation and resilience. Organisations that attempt to grow without robust data protection find that several of their key strategic goals remain uncertain. Their longevity can be at risk because trust among users, customers and the supply chain is low. Their efficiency and growth are precarious because, at any time, a data protection regulator, a markets regulator, privacy activists, civil society groups, governments or individuals could start campaigns against their poor data protection practices. Fines, bad publicity, internal staff protests, political interventions and whistleblowers can create a state of inherent instability. Excellence in data protection, and data protection by design, should be positive and proactive advancements rather than reactive responses. For the future, agility and trust will be important economic drivers. Organisations that understand their data and personal data, explain their data uses, embed data protection by design and engage with stakeholders about data governance issues will thrive, remain resilient and fulfil their key strategic objectives.

PS082020
