Right to Privacy and Artificial Intelligence as a Public Health Tool

By Mussa Ohomorihiomayi Alexandra
Begin by taking a few seconds to pause and think about why, every time you view a post on Facebook, your timeline keeps bringing up similar content. Better still, have you ever searched for an item on AliExpress or Jumia? If you have, is it not striking how you then open Facebook and come across the very things you searched for, in the form of ads?
Do you know why this happens? These systems record your interests, what you watch, what you like to buy and so on, all for the purpose of keeping you continually engaged.
All of this raises privacy concerns. With Covid-19 declared a public health emergency and the public interest at stake, gathering personal data is essential, but it raises serious privacy issues that underscore the need to protect the right to privacy.
Fighting the Pandemic Using AI and Its Benefits
Artificial intelligence systems, which are software designed by humans with the capability to collect and analyze data in order to make decisions, are currently being deployed by countries and organizations as a tool for detecting and predicting outbreaks of Covid-19.
Among other things, AI is being used to track patients and potentially exposed people, detect symptoms, provide interactive voice response systems and chatbots for patient self-triage, monitor public areas and transportation systems to detect situations where people are not complying with public order rules, curb fake news through fact-checking systems, and forecast the epidemic’s spread over time and space.
In December 2019, BlueDot, a Canadian start-up, used its AI technology to detect early warning signs of an unknown form of pneumonia spreading in Wuhan, China. Still in China, public transportation systems are reportedly deploying facial recognition platforms to detect people presenting a risk of Covid-19 infection.
In the United States, US-based think tanks and UN agencies have formed the Collective and Augmented Intelligence Against Covid-19 (CAIAC) to help policymakers around the globe leverage AI in the battle against the pandemic. Software is being developed that analyzes camera images taken in public places to detect social distancing and mask-wearing practices, cell phone data is being used for contact tracing, and drone data is being used to detect fevers and gather other personal health data. Likewise, the European Data Protection Supervisor (EDPS) has called for a pan-European mobile app to track the spread of the virus across European Union countries.
In Africa, Fraym, a start-up, is using artificial intelligence and machine learning to help organizations in Africa and South Asia (Nigeria’s CDC, the Kenyan presidential office, Zambian public health policymakers and aid organizations in Pakistan) identify populations at risk of Covid-19. Also, Hadiel, an African SaaS company headquartered in Lagos, has launched an AI symptom checker that leverages health data to generate disease trends and map behavioural health patterns, via data science and AI, across Nigeria and other African cities.
Overall, the use of AI has significantly improved the treatment of Covid-19 patients and proper health monitoring. It has also helped facilitate research on the virus by analyzing available data and supporting the development of proper treatment regimens, prevention strategies, and drugs and vaccines.
The Risks and Privacy Issues
As a result of the huge amount of data being processed by AI (citizens’ cell phone records, social behaviour, health records, travel patterns and social media content), there is a real likelihood of personal privacy being infringed. This raises questions as to who holds the data being collected, the government or a private entity, how accurate that data is, and whether our data privacy laws can adequately address privacy breaches caused by AI.
There is also the fear that public health concerns would override data privacy concerns as a result of the Covid-19 emergency and that the government may continue to use this data long after the pandemic is over.
The Response of Data Protection Laws
Responding to these issues, the Organization for Economic Co-operation and Development (OECD) has released recommendations for policymakers to ensure that AI systems deployed to help combat the pandemic are used in a manner that does not undermine personal privacy rights.
Also, U.S. lawmakers have offered legislative proposals to reform data privacy protections and prevent the misuse of health and other personal data. One of these is the Covid-19 Consumer Data Protection Act (CDPA), which would set rules on how businesses use data during health emergencies, require them to obtain consent and employ data minimization measures such as encryption or anonymization, and force companies to delete collected data once the crisis is over.
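To make the idea of data minimization and anonymization a little more concrete, the following is a minimal Python sketch of how a data controller might strip a record down to what is strictly needed for a public health purpose and replace the direct identifier with a pseudonym. The record fields, the salt and the salted-hash approach are purely illustrative assumptions, not requirements drawn from the CDPA or any particular system.

import hashlib

# Hypothetical contact-tracing record with illustrative fields.
raw_record = {
    "full_name": "Jane Doe",
    "phone_number": "+2348012345678",
    "home_address": "12 Example Street, Lagos",
    "test_result": "positive",
    "test_date": "2020-06-01",
    "locations_visited": ["market", "bus terminal"],
}

# Only the fields actually needed for epidemiological analysis (an assumption).
NEEDED_FIELDS = {"test_result", "test_date", "locations_visited"}


def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()


def minimize(record: dict, salt: str) -> dict:
    """Keep only the fields needed for the stated purpose and
    substitute a pseudonym for the direct identifier."""
    minimal = {key: value for key, value in record.items() if key in NEEDED_FIELDS}
    minimal["subject_id"] = pseudonymize(record["phone_number"], salt)
    return minimal


print(minimize(raw_record, salt="salt-held-only-by-the-controller"))

It is worth noting that a salted hash of this kind is pseudonymization rather than full anonymization: whoever holds the salt can still re-identify the data subject, which is one reason legal obligations such as deletion after the crisis remain important.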
In Nigeria, the right to privacy is a fundamental human right protected under Section 37 of the 1999 Constitution. The Nigeria Data Protection Regulation (NDPR) sets out the circumstances in which personal data can be processed, that is, collected, disclosed and shared.
It places a duty on a data controller to handle and process data in line with the conditions provided by the NDPR. One of the instances in which a person’s data can be collected and disclosed is where it is necessary for the performance of a task carried out in the public interest. The Regulation also provides that data processing should be adequate and accurate and should not prejudice a person’s dignity or fundamental rights.
In Europe, privacy is also a fundamental right and the right to data protection is guaranteed. The EU data protection authorities have issued a Recommendation on the use of technology and data to combat and exit from the Covid-19 crisis. The Recommendation entrenches respect for all fundamental rights, privacy as well as data protection.
Under Article 9, the Union’s General Data Protection Regulation (GDPR) allows personal data to be collected and analyzed, as long as there is a clear and specific public health aim. It provides that the consent of the data subject must be sought before personal data can be processed. This is not absolute, however, as the Regulation also permits processing where it is necessary for reasons of substantial public interest.
The safeguards provided by the GDPR guard against uncontrolled data processing by an AI system: a data subject has the right not to be subject to decisions based merely on automated processing, and data processing must adhere to protection principles such as lawfulness, fairness and transparency, accuracy, data minimization and purpose limitation.
What can be deduced is that information regarding persons such as employees or patients who have been infected by the virus can be collected, disclosed and disseminated without their consent because it is in the public interest. This also means that, notwithstanding the right of data subjects to give consent to data processing, where the public interest is at stake they are under an obligation to allow collection and disclosure.
Both Section 2.13 of the NDPR and Article 17 of the GDPR provide, amongst other things, that the data controller shall delete personal data where the data is no longer necessary for the purposes for which it was collected or processed and there are no overriding legitimate grounds for the processing.
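As a rough illustration of how a controller might operationalize this deletion obligation, here is a minimal Python sketch in which each stored record carries the purpose for which it was collected and a retention deadline, and records past that deadline are removed unless an overriding legitimate ground has been documented. The data model and field names are assumptions made for the example, not requirements taken from the NDPR or the GDPR.

from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class StoredRecord:
    subject_id: str
    purpose: str                  # e.g. "covid19_contact_tracing"
    retain_until: date            # deadline tied to the stated purpose
    overriding_ground: str = ""   # documented legitimate ground, if any


def purge_expired(records: List[StoredRecord], today: date) -> List[StoredRecord]:
    """Keep only records that may still lawfully be held: those within
    their retention period or with a documented overriding ground."""
    return [
        record for record in records
        if record.retain_until >= today or record.overriding_ground
    ]


store = [
    StoredRecord("subject-001", "covid19_contact_tracing", date(2020, 9, 30)),
    StoredRecord("subject-002", "covid19_contact_tracing", date(2021, 3, 31)),
]

store = purge_expired(store, today=date(2020, 12, 1))
print(store)  # only the record still within its retention period remains

Running a check of this kind on a schedule is one way a controller could demonstrate that data collected during the pandemic is not quietly retained once the purpose has lapsed.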
An identifiable lapse in the NDPR and the GDPR is that they do not provide clear guidelines for determining what amounts to a legitimate interest or ground. In practice, this turns on how well the rights of data subjects can be balanced against the legitimate interests of the data controller. They also do not address the question of who holds the data being collected.
Conclusion
It behoves the government to bring data protection laws in tune with the realities of AI. Data protection laws should define the risks associated with the deployment of AI systems, as well as determine the key safeguards that should be implemented to ensure that data subjects’ rights are respected.
They should also require AI systems to be lawful and ethical, and more particularly to comply with all applicable laws and regulations, adhere to ethical principles and values, and be designed in a way that does not cause unintentional harm.
Data gathered by AI systems must be used responsibly and with the utmost care to avoid privacy breaches. In all, governments and organizations must find a way to balance the use of AI as a tool for public health with safeguarding the privacy rights of citizens. One must not be sacrificed on the altar of the other.
Mussa Ohomorihiomayi Alexandra is a lawyer with interests in tech, copyright law, sports and entertainment.
Lawyard is a legal media and services platform that provides enlightenment and access to legal services to members of the public (individuals and businesses) while also availing lawyers of needed information on new trends and resources in various areas of practice.