Why not listen to some good music while reading? My inspiration for this article on digital ethics is What a Wonderful World by Louis Armstrong. Play it now!
On the one hand, we have the General Data Protection Regulation (GDPR), in effect in the European Union since May 2018. The purpose of the GDPR is to give people the power to know how their personal data is processed. To do so, businesses need to provide a single access point, which implies having their data centralized and managed in a secure place. On the other hand, we have blockchain technology, a decentralized data exchange and validation protocol that relies on a distributed ledger managed by a peer-to-peer network. It is therefore difficult to imagine a world where GDPR privacy rules and blockchain technology would be compatible. Does this mean European businesses have to put an end to their blockchain-related projects to ensure GDPR compliance?
In September 2018, the French data protection authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), published a report on the GDPR and the use of blockchain technologies. In this paper, the CNIL provides study results and solutions for businesses that aim to use blockchain technologies. As the saying goes, opposites attract…
Is there a data controller on your blockchain?
According to the CNIL, businesses should first make sure there is no better solution for processing their data; blockchain technology is not always the most suitable option. If they do opt for a blockchain, businesses can remain GDPR-compliant provided they embrace the "privacy by design" framework when designing the technical solution that collects the data. In that case, they will also have to identify and designate a data controller in their organisation.
The CNIL identifies two distinct scenarios: the data controller is either a natural person who needs to process data for business purposes, or a legal entity that collects personal data on a blockchain (e.g. a bank with customer data). Any participant with the right to write on the blockchain might be considered a data controller. That is why the CNIL recommends assigning a role to each category of participants in the chain, so that people know who their main contact point is when exercising their rights.
When blockchain meets the right to be forgotten
The General Data Protection Regulation gives European Union citizens the right to request the erasure of their personal data, giving individuals more control over the way organisations collect, store and process it. Article 17 of the GDPR states that "the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay". Because of the properties of the hash functions used in a blockchain, the slightest change to the data changes the hash of its block. And since each block contains the hash of the previous block in the chain, removing personal data from the blockchain is effectively impossible: the change would invalidate every subsequent block.
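This chaining of hashes can be illustrated with a minimal sketch. The toy blocks, the transaction strings, and the use of SHA-256 over a JSON payload are all illustrative assumptions, not how any particular blockchain encodes its blocks, but the tamper-evidence mechanism is the same:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# A toy three-block chain: each block stores the hash of its predecessor.
genesis = {"prev": "0" * 64, "data": "alice -> bob: 5"}
block1 = {"prev": block_hash(genesis), "data": "bob -> carol: 2"}
block2 = {"prev": block_hash(block1), "data": "carol -> dave: 1"}

# "Erasing" personal data in the genesis block changes its hash...
genesis["data"] = "[erased]"

# ...so block1's stored pointer no longer matches, and verification of the
# chain fails from that point onward: the modification is immediately visible.
assert block_hash(genesis) != block1["prev"]
```

Honouring an erasure request in place would therefore require rewriting every block after the one that holds the data, which the distributed network is precisely designed to prevent.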
The CNIL, however, acknowledges that under some circumstances blockchain could be compliant with the GDPR as regards the data subject's rights. Some of these rights demand technical solutions so that individuals can exercise them properly. The right to erasure, for example, seems at first glance technically impossible to apply here. But if the data controller implements cryptographic algorithms to make personal data inaccessible, the CNIL recognizes that this anonymisation process comes close enough to the right to erasure, even though the information is not, strictly speaking, erased.
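One way to achieve this inaccessibility, sketched below under illustrative assumptions (the email address is made up, and a keyed hash stands in for whatever commitment scheme a real deployment would use), is to write only a keyed fingerprint of the personal data on the chain and keep the key off-chain. Destroying the key then makes the on-chain value permanently unlinkable to the person, without touching the chain itself:

```python
import hmac
import hashlib
import secrets

# Off-chain: the data controller keeps the personal data and a secret key.
secret_key = secrets.token_bytes(32)
personal_data = b"jane.doe@example.com"  # hypothetical data subject

# On-chain: only a keyed hash (HMAC-SHA256) of the data is ever recorded.
on_chain_commitment = hmac.new(secret_key, personal_data,
                               hashlib.sha256).hexdigest()

# To honour an erasure request, the controller destroys the key and the
# off-chain copy of the data. The commitment stays on the immutable chain,
# but without the key it can no longer be recomputed or linked to anyone.
secret_key = None
personal_data = None
```

Whether such crypto-shredding fully satisfies Article 17 is exactly the legal question the CNIL leaves open; technically, the record remains, but the personal information it pointed to is gone.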
Are subcontractors the weak link in the blockchain?
Despite these promising first results, European authorities still need to examine the responsibility of subcontractors in the blockchain network. According to the CNIL, there are two types of subcontractors in a blockchain: the "smart contract" developers and the miners who validate new transactions and record them on the distributed ledger. Their role is still unclear, legally speaking, and needs to be addressed. Article 28 of the General Data Protection Regulation states: "Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures". If these subcontractors fail to be GDPR-compliant, the data controller may be held responsible as well.
The possibilities seem endless for GDPR/blockchain partnerships. The European Union encourages businesses to create innovative solutions that reconcile data privacy, and the erasure the GDPR requires, with new technologies. If law has certainly challenged technology this year, it may also be thanks to the General Data Protection Regulation that we will soon witness major innovation in blockchain technologies. All's well that ends well.
Sources – General Data Protection Regulation vs. Blockchain: an impossible love?
Commission Nationale de l'Informatique et des Libertés website, https://www.cnil.fr/.
Information Commissioner’s Office website, https://ico.org.uk/.
PrivazyPlan Website, http://www.privacy-regulation.eu.
EU Blockchain Observatory and Forum, https://www.eublockchainforum.eu/.
Coinext, https://coinext.io/2018/06/blockchain-technology-incompatible-europes-gdpr/, 19 June 2018.
Nowadays, connected devices and objects are challenging Big Data on new fronts, from the quick processing of the many data sources of the Internet of Things to cybersecurity. Internet of Things innovations are still at a low maturity stage, but they represent a real potential to deeply change the way data is processed. According to a report by IDC (International Data Corporation), the Big Data market will reach $125 billion in 2015. As some tech experts point out, we can already see major technological challenges emerging for Big Data in the coming years: how can we process such an amount of data, how can we measure its reliability, and how can we secure it?
#1 Internet of Things challenge: Processing Capacity & Reliability
The large number of sensors integrated into our connected objects (an estimated 80 billion units by 2020) generates a mass of data that must be stored, but that must also remain accessible to those who use it. The reliability of this access, and the storage of all this data, therefore become an important issue. The Internet of Things (IoT) will confront companies with a huge increase in the volume of generated data.
With IT infrastructure of limited capacity for storing and processing data, companies must start thinking about solutions that will enable them to optimize their data centers and make them more elastic. Network virtualization, including service chaining and the dynamic management of business flows, can play a decisive role in exploiting the full potential of the Internet of Things.
The wide variety of IoT products and applications, whether they already exist or not, will lead to a hierarchy of data centers. In the coming years, we expect to see the creation of new small and medium-sized data centers that will feed the main data centers. Managing these resources will no longer be as simple as it is today: a large part of the network will have to be automated to handle constraints such as real-time processing. It also means making software adjustments to optimize the processing, storage and delivery of data. Companies that choose a purely hardware approach will face longer service deployments and will find it almost impossible to maximize the use of available resources. But processing this flood of data will not be the only issue: the data will also have to be secured.
#2 Internet of Things challenge: Cybersecurity
All these new objects that will invade our daily lives, our homes, our cars, our clothes, our bodies, etc. can communicate with each other, yet they have limited memory, processing power and energy, which makes them vulnerable to cyber attackers.
IoT security threats may target the objects themselves, but also the underlying information systems, which can be hosted on remote servers or in the cloud. Connected objects are largely vulnerable if they are not strongly protected, and these days some companies are even looking to hire hackers to test their security.
There are now even search engines that index poorly secured or poorly protected connected objects, which reinforces the need for secure cloud environments and data centers. This is exactly why Orange launched Datavenue last fall, a secure platform for IoT creators.
For Big Data, the challenge is also to secure data processing itself, by checking that systems are not vulnerable and contain no flaws. Another type of security platform, Frama-C, allows the analysis of source code to detect potential defects, but also to check whether the computer's memory is used wisely and whether the machine records its data in a secure part of its memory. These technological challenges will accelerate the automation of data analysis, in step with the rapid development of connected objects around the world.
We must treat connected objects with the same consideration as computers, smartphones and tablets. Yet organizations often underestimate the security of their IoT projects, at the risk of exposing sensitive data. Companies must carefully evaluate the number of devices they install on their networks in order to monitor the huge volumes of data traveling across those devices and networks. Without proper preparation and consistent administration, the explosion of data produced by IoT devices may slow networks and overwhelm existing security infrastructure.
Companies must therefore be extremely vigilant and must not launch a major IoT program without understanding the impact of these devices on their security. In this context, they should seek help from service companies to support them in their projects and to draw their attention to potential risks and existing solutions.
The world of the Internet of Things offers fascinating prospects for economic growth and for improving everyday life, but only if its risks are fully under control. Users and companies should become fully aware of all structural risks and should be informed of their rights, so that they can control their data at any time.
On 6 October 2015, the ECJ (European Court of Justice) invalidated the "Safe Harbor" agreement, the cornerstone of personal data transfers between the European Union and the United States.
But what are the consequences of such a decision for us, daily users of Google and Facebook?
Safe Harbor’s limitations
The Safe Harbor agreement was established in 2000 to protect the private lives of European users of American companies' online services. Users had to provide personal data without any assurance that it would not be used for purposes other than those intended; the agreement was meant to guarantee that their data would be protected.
But it showed its limitations, especially in 2013 with Edward Snowden's disclosures about the NSA's mass surveillance programs, and with several complaints filed against Facebook. European citizens' data was not actually well protected.
These first warnings led to this judgment by the European Court of Justice. But what does it really mean? Will Facebook really be affected by this decision?
An ocean of agreements and clauses
In reality, it changes nothing for big companies. They do not even need to store the European data they collect in Europe. But they can no longer hide behind this agreement if the legality of their data flows between Europe and the United States is challenged.
But Safe Harbor is not the only mechanism, and Facebook says it relies not only on this agreement but also on other methods recommended by the EU to transfer data legally. It is common for companies of this size to use other types of contracts, such as binding corporate rules or contractual clauses, which are often more comprehensive than Safe Harbor.
The need to revise this agreement
The potential issue with this decision is precisely the proliferation of this kind of contract between companies and European countries, with the risk of losing the global framework inspired by Safe Harbor.
To avoid this, the agreement will be renegotiated; in fact, renegotiation had started even before the ECJ decision. But as with every decision of this magnitude, it will take a long time before a definitive new agreement is approved, especially in a world where data circulates fast and where it is nearly impossible to ensure it will not leak at some point.
A truly definitive solution would mean a complete change in American law, which is not likely to happen. And even if it did, we would probably have to wait for the next annulment from the ECJ.
Who's the main loser in this nullification?
Safe Harbor does not only involve big companies but smaller ones as well (around 4,000 companies were subject to the agreement). These smaller firms now find themselves in what could be called a legal vacuum until the next decision following this judgment. All the web players are now pressuring the European Commission and the US government to reach a new agreement as soon as possible.
More than just a decision against big American firms, the ruling is a strong means of protesting against NSA mass surveillance, which runs completely counter to European law principles.
For its part, the US government expressed disappointment after the decision, pointing to the uncertainty it creates for the booming transatlantic digital economy. The matter no longer involves only firms; it has become an issue between governments, one in which Facebook does not want to take part.
Smaller companies will have to follow the next steps very carefully. Unlike bigger companies such as Facebook, it will be harder for them, and will require investment, to find a way to store data in Europe. Facebook is already planning for the worst, investing in data storage servers in Europe through its Irish subsidiary.
With the rise of Cloud Computing, we can wonder how this data will be regulated to fit the new protection requirements…