
AI and Data Protection – How do they work?


The quality of data hugely affects how well AI functions. As the saying goes, "garbage in, garbage out": feed even an excellent AI algorithm a poor dataset, and it will only produce poor results. Therefore, if the input data is defective (e.g. discriminatory), the AI algorithm will likewise produce discriminatory results and advice.
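To make this concrete, below is a minimal sketch, using entirely hypothetical data in Python with scikit-learn, of how a model trained on discriminatory labels reproduces that discrimination in its own predictions:

```python
# Hypothetical sketch: a model trained on discriminatory labels
# reproduces the discrimination in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)          # genuinely relevant feature

# Defective historical labels: group 1 was approved far less often
# than group 0 at the same skill level.
label = ((skill + np.where(group == 1, -1.5, 0.0)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The trained model inherits the bias: approval rates differ by group.
for g in (0, 1):
    mask = group == g
    rate = model.predict(X[mask]).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```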

Big data and purpose limitation

AI algorithms work like detectives, finding correlations in data to make predictions and generate sets of rules. Undeniably, our digital footprints on social media, search engines and instant messaging applications create rich sources of big data to fuel AI algorithms. Google, the most commonly used search engine, has a dark history of installing an eavesdropping tool on computers in order to make search predictions and generate relevant advertisements targeting interested users. This potentially exposes real-life, private conversations without users' consent, albeit with a user-friendly motive. In Hong Kong, there is not yet any legislation addressing the feeding of big data into AI systems. In guidelines issued by the Privacy Commissioner for Personal Data ("PCPD"), the principles of "Privacy by Design" and "Privacy by Default" were introduced. They encourage companies to embed privacy in all business processes rather than treating it as an afterthought, but these remain recommended practices and carry no legal effect. The grey area remains a loophole, as big data collected from users is still one of the major sources of marketing information for companies today.
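As an illustration of the correlation-mining described above, here is a short sketch, with entirely hypothetical data and column names, of ranking digital-footprint signals by how well they predict a target behaviour, much as an ad-targeting pipeline might:

```python
# Hypothetical sketch of correlation mining over a digital footprint:
# rank each signal by how strongly it correlates with a behaviour.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
footprint = pd.DataFrame({
    "travel_searches":    rng.poisson(3, n),
    "messages_per_day":   rng.poisson(20, n),
    "social_media_posts": rng.poisson(5, n),
})
# Hypothetical behaviour to predict: clicking a travel advertisement.
footprint["clicked_travel_ad"] = (
    footprint["travel_searches"] + rng.normal(0, 1, n) > 4
).astype(int)

# The strongest correlates become the "rules" used to target users.
correlations = footprint.corr()["clicked_travel_ad"].drop("clicked_travel_ad")
print(correlations.sort_values(ascending=False))
```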

Monitoring and Profiling

The GDPR recognizes that AI may be susceptible to abuse; Recital 71 therefore calls for the implementation of technical measures that prevent discriminatory effects on natural persons. These concerns are not without merit. Hewlett-Packard's facial-tracking software revealed that the computer could not recognize darker faces in certain lighting conditions. Similarly, in 2015, Google Photos' image recognition came under fire for classifying black faces as gorillas. These data missteps are just shadows of larger issues. Last year, it was revealed that Amazon's recruiting tool showed signs of human bias: the system taught itself that male candidates were preferable and discriminated against female candidates. Amazon changed the algorithm and later abandoned the project altogether.
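One common way to surface such discriminatory effects, sketched below with hypothetical data (this is a standard bias check, not Amazon's actual method), is to compare selection rates across groups, as in the "four-fifths" rule used in US employment-discrimination analysis:

```python
# Hedged sketch of a common bias check: compare selection rates
# across groups ("four-fifths" rule). Data here is hypothetical.
def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.8 are commonly treated as a red flag."""
    def rate(g):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected).
decisions = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
groups    = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]
ratio = disparate_impact(decisions, groups, protected="F", reference="M")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -- well below 0.8
```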

There is a growing concern that virtual banking software will likewise teach itself discriminatory patterns and quote higher interest rates to people with certain ethnic names or profiles. Hong Kong's banking sector changed drastically in 2019 with the issuance of eight virtual bank licenses, and discriminatory practices in these new developments are a real possibility, particularly without proper regulations to address them.

Transparency and Consent

Each time an individual logs on to an electronic device, the user is tracked. Although the collection of data requires users' consent, users are unable to control or influence how their data is then used. There have been many attempts in the United States to pass "Do Not Track" legislation allowing individuals to prohibit websites from collecting their data. California enacted a do-not-track disclosure law which requires websites to disclose how consumers' identifiable information is collected by third parties. Hong Kong has no such legislation yet. The Office of the PCPD has released a guidance note titled "New Guidance on Direct Marketing", which walks through high-level business practices to be followed, such as not sending unnamed emails to a consumer and ensuring that consent is received before any further contact is made.
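At the technical level, "Do Not Track" is simply an HTTP request header (DNT: 1) sent by the browser; honouring it is voluntary for the website, which is why the laws above focus on disclosure. Below is a minimal illustrative sketch (the Flask app and its responses are hypothetical) of a site that chooses to honour the signal:

```python
# Illustrative sketch: check the browser's "Do Not Track" header
# before enabling tracking. The app and responses are hypothetical.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers send "DNT: 1" when the user has opted out of tracking.
    if request.headers.get("DNT") == "1":
        return "Do Not Track received: analytics and ad tracking disabled."
    # Placeholder for the site's usual tracking/analytics logic.
    return "No Do Not Track signal: tracking proceeds as disclosed."
```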

Authors: Alan Chiu, Managing Partner
         Charles To, Partner

Date: 1 November 2019