Update: Meta’s behavioural advertising vs GDPR

This is blog post #20 in our series on the GDPR, and a continuation of blog post #19, which ended with this:

  • After the Norwegian (NO) DPA (Data Protection Authority) imposed a ban on Meta’s behavioral advertising and fined the company, Meta brought the case to Oslo District Court asking for a provisional injunction – and lost.
  • Starting on 14 August, the fine of NOK 1 million per day could last until 3 November, unless the European Data Protection Board (EDPB) decided otherwise, as requested by the NO DPA.

And the story continues, currently in (at least) four different processes:

Scroll down to section 3 to read about the GDPR complaint against Meta’s “Pay or Okay” model that noyb filed on 28 November.

1. Enforcement notice against Meta – and a challenge in Ireland’s High Court

EDPB

Following the request by the NO DPA, the EDPB published its conclusion on 27 October and adopted an urgent binding decision, instructing the Irish (IE) DPA to take final measures within two weeks to stop Meta’s processing of personal data for behavioral advertising across the entire European Economic Area (EEA). The EDPB argued that neither the user contract nor legitimate interest is a valid legal basis for using personal data for behavioral advertising.

The GDPR states that consent to the processing of personal data is not freely given when it is bundled with access to a service and the processing is not necessary for that service.

IDPC

The Irish (IE) DPA notified Meta of the EDPB’s binding decision on 31 October, giving Meta two weeks to comply.

Ahead of this, Meta announced on 30 October a new subscription model under which users can pay monthly for ad-free Facebook and Instagram services in the EEA and Switzerland. However, the NO DPA has informed Meta that it has strong concerns regarding Meta’s “consent” mechanism. The EDPB is evaluating the model and, to our knowledge, has not yet concluded. The NO DPA is of course active in this process. [source]

The Irish (IE) DPA took action on 10 November and served Meta with an enforcement notice giving the company seven days to cease processing data for behavioral advertising. If it does not, the company will be fined.

However, Meta has brought a High Court challenge: the court first gave Meta permission to bring its judicial review action, and later also granted Meta a temporary stay preventing the enforcement notice from coming into effect. When the matter will return to court is unclear.

2. Meta is taking a new case against the Norwegian Data Protection Authority

Datatilsynet

Following up on its decision of 14 July this year, in which the NO DPA notified Meta that it could impose a fine of up to NOK 1 million (approximately USD 100 000) per day if Meta did not comply with the GDPR’s consent requirements when using personal data from Facebook and Instagram users for behavioral advertising, the fine started accruing on 14 August.

The NO DPA confirms that it has sent a claim of NOK 82 million against Meta to the State Collection Agency, a unit within the Norwegian Tax Administration.

Meta claims that the ban is invalid, and for the second time Meta has taken the case to Oslo District Court. The company also demands that the compulsory fine be annulled.

However, Meta later requested that the case be dismissed, and the NO DPA agreed to this. But the case is not dead, because Meta has kept open the possibility of raising the matter again, pending the outcome of the proceedings before the EDPB [source].

The NO DPA (Datatilsynet) wrote in an email to a Norwegian newspaper (6 December 2023) that Meta has now reluctantly paid the fine. But it is not hard to guess that the last word has not been said.

3. noyb files GDPR complaint against Meta over “Pay or Okay”

noyb

Recently (28 November), noyb – the European Center for Digital Rights, a non-profit organization based in Vienna, Austria – filed a complaint against Meta with the Austrian data protection authority on behalf of an anonymous complainant who is unemployed, receives benefits, and lacks the financial means to pay Meta’s subscription fee of €20.99 a month to access Facebook and Instagram. [source; source]

noyb claims that paying up to €251.88 a year to retain one’s fundamental right to data protection on Facebook and Instagram is unacceptable, and warns that if such arrangements are not stopped, other tech companies will soon follow. [source]

With this, noyb has opened a wider perspective on the matter, one that may deserve its own blog post. So stay tuned.

4. Another complaint process: Meta’s “pay-or-consent” model contravenes consumer legislation

BEUC

The European Consumer Organization (BEUC, Bureau Européen des Unions de Consommateurs), which organizes Forbrukerrådet (the Norwegian Consumer Council) and similar organizations across Europe, has filed a complaint against Meta’s changes to its service in the EU, saying that the “pay-or-consent” model is “… an unfair choice for users, which runs afoul of EU consumer law on several counts and must be stopped.”

Forbrukerrådet

The complaint is filed with the network of Consumer Protection Authorities (the Consumer Protection Cooperation network, CPC) “on the grounds of Meta engaging in unfair commercial practices in multiple ways.”

Further, the BEUC press release contains a very to-the-point list of issues identified under consumer protection law that put Meta in trouble:

  • Aggressive practices, including creating a sense of urgency.
  • Misleading consumers into believing that there will be less tracking and profiling.
  • Misleading consumers into believing that the service is “free” when they don’t pay, while they are in fact paying through the provision of their data.
  • Depriving consumers of a real choice, because quitting the service means losing their contacts and interaction history.

Forbrukertilsynet

In Norway it is Forbrukertilsynet (the Norwegian Consumer Authority) that is entitled to impose a compulsory fine if consumer legislation has been breached.

In addition, BEUC is also assessing whether Meta is infringing the GDPR.

Wrapping up the whole thing

There is an intense battle going on: The power of the big technology companies over people and society, versus democratic principles and how they are embodied in European legislation.

Because we at Runbox fly the privacy flag high, we will continue to follow what is happening in this field, and continue to keep our customers updated.

The content of this article is intended to provide a general guide to the subject matter, and Runbox takes no responsibility for its accuracy. If you use this information for any purpose other than personal, you are advised to verify the sources provided. Expert advice should be sought about your specific circumstances.


In the case of GDPR vs Meta’s illegal behavioral advertising, the Norwegian DPA plays an important role

This is blog post #19 in our series on the GDPR.

Runbox takes a clear stand against big tech companies’ use of personal information for advertising purposes, and we are critical of their huge influence on society in general.

At the same time, we are proud of the Norwegian government agencies’ effort to crack down on companies breaking privacy legislation, by applying the legislation provided by the EU’s GDPR (General Data Protection Regulation).

This monitoring of privacy has its roots as far back as 1978, when Norway, as the second country in the world (shortly after Sweden), adopted a law on the processing of personal data and established Datatilsynet (the Norwegian Data Protection Authority; NDPA).

For instance, in October 2022 we wrote about Google Analytics (GA) vs privacy, following up with a blog post about action taken by the NDPA against a Norwegian company’s use of GA, which involved unlawful transfer of personal data to the United States via GA.

In 2021 we published a couple of blog posts about reports from Forbrukerrådet (the Norwegian Consumer Council; NCC) about how the extensive AdTech and MarTech industry uses personal data for targeted advertising.

The NDPA was then prompted (by the NCC) to impose a fine of NOK 65 million (approximately USD 6.5 million) on the dating app Grindr for breaching the consent requirement in the GDPR. (Read our update of 30 September 2023 on the Grindr case here.)

The Norwegian DPA case against Meta – and personal data as a commercial product

NDPA logo [source]

Meta Platforms, Inc. is the umbrella company that owns Facebook, Instagram, WhatsApp, and more. Currently, the Norwegian DPA has a lawsuit going against Meta Platforms Ireland Ltd and Facebook Norway AS because of illegal behavioral advertising that uses personal data which, according to the GDPR, they are not allowed to use for such purposes [source, source].

When Meta (like Google and other tech companies) uses personal data for targeted advertising, it creates plenty of opportunities for advertisers to pay and get your personal information in return. [source]

In addition, they share access to users’ data with other tech firms when doing business together; Facebook, for instance, argues that such firms, defined as “service providers” or “partners”, are essentially an extension of itself [source, source, source, source].

If that weren’t enough, real-time bidding (RTB) results in the average Norwegian internet user’s data being shared 340 times per day, according to a study from the Irish Council for Civil Liberties (ICCL) [source]. The fact that personal data has become commercial merchandise could be a theme for a separate blog post, but for now we’ll stick to what the headline indicates.

The NDPA has taken a leading role and has been involved in this legal issue for many years, precisely because it has such major implications for Norwegians’ privacy. [Source: Datatilsynet]

Meta’s shifting legal bases for using personal data in its advertising business

The NDPA versus Meta is the provisional culmination of a long process starting on 25 May 2018, the day the GDPR came into force in the EU.

On that day the Austrian non-profit European Center for Digital Rights (NOYB) filed four complaints – against Google (Android), Facebook, WhatsApp, and Instagram respectively – over “forced consent”: the services would not be accessible unless users agreed to their terms of use [source], which is a breach of GDPR Article 6.

The complaint against Meta, lodged on behalf of a data subject from Austria, was filed on 25 May 2018 with the Österreichische Datenschutzbehörde [source], which forwarded it to the Irish DPC as lead authority for Facebook Ireland Ltd.

Irish DPC logo [source]

Because Meta’s regional headquarters in Dublin serves the European countries, the Irish Data Protection Commission (DPC) is Meta’s lead European data privacy regulator (Lead DPA).

Since NOYB’s complaints in 2018, the cases have been through the European Data Protection Board (EDPB) and the Court of Justice of the European Union (CJEU), and the conclusion is unanimous: Meta can’t use personal data for targeted advertising based solely on its Terms of Service (ToS). The GDPR’s Article 7 and Recitals 32, 42, and 47 make this very clear.

The bone of contention has been whether Meta uses the correct basis for processing personal information when it collects data about what users do on the platform and uses it to display targeted advertising. The dispute concerns the terms contractual necessity, legitimate interest, and consent, referring to GDPR Article 6.

Meta first argued to the Irish DPC that contractual necessity, as stated in the Facebook and Instagram ToS from 2018 (updated after the introduction of the GDPR), was a sufficient legal basis for its advertising business – claiming that users of Facebook and Instagram are in a contract with Meta to receive targeted ads. This effectively means that Meta admits that behavioral advertising is a core service [source].

But after the EDPB ruling of 5 December 2022, and financial penalties totaling EUR 390 million from the DPC on 4 January 2023, Meta on 5 April 2023 moved to “legitimate interest” in its ToS. The fines are set according to GDPR Article 83 and seem significant, but they are a small amount compared to the advertising practices that helped Meta generate USD 118 billion in revenue in 2021.

The penalty of EUR 390 million was imposed because contractual necessity in Meta’s ToS as a legal basis for targeted ads was deemed to violate the GDPR. However, Meta’s move to argue legitimate interest did not help, even when Meta provided an “opt-out tool”. Under GDPR Articles 21(1) and (2), users have the right to object to companies claiming that they have a “legitimate interest” in the processing of their personal data.

A new player in the field: Das Bundeskartellamt

Bundeskartellamt logo [source]

Then on 7 February 2019, the German Federal Cartel Authority (“Bundeskartellamt”), with support from the German Consumer Rights Organization (“VZBV”), entered the arena. They brought German competition legislation into play with a decision arguing that Meta’s terms of use for Facebook violated German law: by merging and utilizing the data in user accounts, Facebook abused its dominant market position.

Facebook’s terms were said to violate the GDPR, as using Facebook required that Meta could collect and process user data from various sources without actual user consent. On this basis the Bundeskartellamt prohibited Facebook from combining user data from different sources, including Facebook-owned services and third-party websites.

CJEU logo [source]

In the case between Germany and Meta that followed, the Higher Regional Court, Düsseldorf (Oberlandesgericht Düsseldorf), referred the case to the CJEU, which decided on 4 July 2023 that legitimate interest (Article 6(1)(f)) is not an adequate basis for targeted advertising, and that the user’s explicit consent is necessary to be in line with the GDPR. With this, the CJEU agreed with noyb, and Meta is not allowed to use personal data beyond what is strictly necessary to provide its core social media products.

That said, the CJEU recognizes that legitimate interest may be used as a basis for direct marketing processing, but that this interest will generally not outweigh the interests and rights of individuals.

Is the Irish DPC dragging its feet?

Here we have to mention that the Irish DPC has been unwilling to fully support the claim that Meta violates the GDPR with its targeted advertising. Instead, in their draft decision of 6 October 2021, they initially sided with Meta and focused on Meta’s lack of transparency, and thereby on violations of the GDPR’s transparency requirements (Articles 12 and 13(1)(c)). Accordingly, the Irish DPC proposed a modest penalty of EUR 28–36 million.

“The GDPR countries” [source]

Following the GDPR procedure, the draft decision was sent to the other DPAs within the EU/EEA that may have a legal interest in the decision. Ten of 47 raised objections against the DPC’s reasoning that the personalized service could legally include personalized advertising. The disagreement led the Irish DPC to refer the point of dispute to the EDPB.

As noted above, the EDPB took the view that Meta Ireland could not rely on contractual necessity as the legal basis for its targeted advertising, and following the binding decision by the EDPB on 5 December 2022, the Irish DPC had one month to reach a final decision.

The story didn’t end there, as is explained in the EDPB press release of 12 January 2023: the Irish DPC was instructed to increase the penalties on Meta Ireland roughly tenfold – both for the lack of transparency and for the breach of the GDPR – to €210 million in the case of Facebook and €180 million in the case of Instagram [source]. The Irish DPC had to follow the EDPB instruction, which it did in its final decisions of 31 December 2022 regarding Facebook and Instagram.

In the binding decision the EDPB also directed the Irish DPC to conduct a fresh investigation into Facebook and Instagram regarding the different kinds of personal data they collect, including an assessment of whether processing of sensitive data is taking place [source].

The Irish DPC did not agree and said that “the DPC considers it appropriate that it would bring an action for annulment before the Court of Justice of the EU in order to seek the setting aside of the EDPB’s directions” [source]. And so it has done. The details are not known as of 23 March 2023 [source], but the claims probably refer to Article 263 of the Treaty on the Functioning of the European Union, which allows the CJEU to examine the legality of the legal acts of bodies, offices or agencies [source].

The Irish DPC is Lead DPA for many Big Tech companies [source].

The Irish DPC has been criticized as an enforcement bottleneck for GDPR cross-border complaints concerning the eight big tech companies (Meta, Google, etc.) that have their European headquarters in Ireland. According to the report by the Irish Council for Civil Liberties (ICCL), and counting the new cases since the report was published, some 80% of the DPC’s cases have been overruled by the EDPB with demands for tougher enforcement action.

Back in 2020 the Austrian non-profit European Center for Digital Rights (NOYB) sent an open letter to the EU authorities that brought the Irish DPC’s weaknesses to light, referring to secret meetings between Meta and the Irish DPC to find ways to bypass GDPR requirements [source].

For the sake of balance we will refer to an article in The Irish Times where Irish Data Protection Commissioner Helen Dixon defended the work of the DPC and rejected claims that Ireland is a ‘bottleneck’ for enforcement [source].

The Norwegian DPA is taking action and imposes daily fines

The Irish DPC’s delay in the Meta case triggered the Norwegian Data Protection Authority to intervene: on 14 July 2023, the Norwegian DPA notified Meta that it might impose a coercive fine of up to NOK 1 000 000 (approximately USD 100 000) per day for non-compliance with GDPR Article 6, which in this case requires consent (ref. Article 6(1)(a)). Meta had until 4 August 2023 to either stop the use of personal data or start receiving daily fines.

On 4 August 2023 the NDPA put a temporary ban on Meta’s practice of processing personal data for behavioral marketing. “Temporary” meant three months (from 4 August 2023), or until Meta showed that it had brought itself into legal compliance. That didn’t happen, the time limit was exceeded, and the NDPA did what it had warned Meta about by imposing a coercive fine of NOK one million per day [source], starting on 14 August and lasting until 3 November 2023.

It may seem strange that the NDPA can do this since Meta has its European headquarters in Dublin, and normally it is the Irish Data Protection Commission as Lead DPA that supervises the company in the EEA.

However, since the NDPA’s concern is Norwegian users, it did this with reference to GDPR Article 66, which allows data protection authorities to enact measures immediately when “there is an urgent need to act in order to protect the rights and freedoms of data subjects.” The NDPA had asked the Irish Data Protection Commission to impose a ban in May, but they didn’t, without saying why [source].

It follows that the decision from the Norwegian Data Protection Authority only applies to users in Norway.

Meta took the NDPA decision to Oslo District Court – and lost

It was no surprise that Meta didn’t accept the ban, and their reaction was to take the ban and the fine to Oslo District Court on 4 August 2023, applying for a temporary injunction in an attempt to invalidate the decision. The reason: “This decision is invalid and causes significant damage to the company” [source].

“Meta Ireland and Facebook Norway have further stated that the decision is disproportionate, unclear, impossible to fulfill, contrary to other legislation (including the European Convention on Human Rights, ECHR), and that it has already been fulfilled” [from the court’s ruling]. None of these statements were given weight, and Meta lost according to the court’s ruling of 6 September 2023.

In court, Meta stated that they would have to suspend the Facebook and Instagram services in Norway to comply with the order. This seems strange, because in a blog post update on 1 August 2023 they announced the following:

“Today, we are announcing our intention to change the legal basis that we use to process certain data for behavioral advertising for people in the EU, EEA and Switzerland from Legitimate Interests to Consent.”

It is to be noted that the UK is excluded, Norway is not mentioned, and not a word is said about when and how the change will take place (more on this below).

In addition to the case in the legal system, Meta has submitted several administrative complaints against the Norwegian Data Protection Authority’s decision. These processes are ongoing. [Source: NDPA won against Meta]

NDPA asks EDPB to make the ban permanent, also for the EU/EEA area

The Norwegian DPA is only authorized to make a temporary decision in this case, and the decision expires on 3 November 2023. Because of the urgency of the matter, the NDPA has, according to a press release of 28 September 2023, asked the central European Data Protection Board (EDPB) for a European binding decision in the case against Meta.

In the request, the NDPA asked that the Norwegian temporary ban on behavioral advertising on Facebook and Instagram be made permanent and extended to the entire EU/EEA.

Referring to Meta’s announced intention to change the legal basis to consent, the NDPA says in the press release: “It is uncertain whether and when a valid consent mechanism may be in place. The Norwegian DPA believes that we cannot tolerate illegal activity in the meantime.”

It is just about one month until the Norwegian ban expires, and one can only await the EDPB decision. It would seem strange if the EDPB decided against making the ban permanent, as it is preferable that the GDPR be interpreted consistently throughout the EU/EEA, and the rest of Europe as well.

Meta’s last move: “Pay for your Rights”

In September this year Meta proposed to GDPR regulators a plan to charge Europeans monthly subscription fees if they do not agree to let the company expose them to targeted advertising.

According to the Wall Street Journal on 3 October, Meta hopes to roll out the plan – Subscriptions No Ads (SNA) – for European users in the coming months. This would hit users with fees in the range of EUR 10 to 20 per month, depending on the platform used and on whether the account covers mobile devices.

With this, Meta is attempting a clever move to circumvent the requirement of explicit consent before processing user data for targeted ads. The company points to other companies, such as Spotify, that offer users a paid subscription as a way to avoid ads. But there is a difference, as TechCrunch points out: Spotify has to pay to license the songs it delivers ad-free to subscribers, while Meta gets content from its users for free.

In addition, Meta has pointed to paragraph 150 of the CJEU’s 4 July 2023 decision, where “… if necessary for an appropriate fee …” could be an alternative for users who decline to let their data be used for ad-targeting purposes – which opens the door to a subscription service. However, as NOYB points out, these six words are not directly related to the case and should not be part of the binding decision – and as Max Schrems, founder and chair of NOYB, put it:

noyb logo [source]

“The CJEU said that the alternative to ads must be ‘necessary’ and the fee must be ‘appropriate’. I don’t think € 160 a year is what they had in mind. These six words are also an ‘obiter dictum‘, a non-binding element that went beyond the core case before the CJEU. For Meta this is not the most stable case law and we will clearly fight against such an approach.” (our highlighting)

As of 3 October it is not clear whether the Irish DPC will deem the SNA plan compliant with the GDPR, and it is also a question whether the CJEU will stick to its ruling of 4 July 2023.

Here it is also worth mentioning that Meta’s advertising network will fall under the EU’s Digital Markets Act, which requires user consent before mixing user data among its services or combining it with data from other companies [source].

The case of Meta vs GDPR will obviously roll on.

The content of this article is intended to provide a general guide to the subject matter, and Runbox takes no responsibility for its accuracy. If you use this information for any purpose other than personal, you are advised to verify the sources provided. Expert advice should be sought about your specific circumstances.

ADDENDUM: Why is it urgent to stop behavioral advertising?

Behavioral advertising is one of the largest risks to privacy: Statement from Datatilsynet

“Meta, the company behind Facebook and Instagram, holds vast amounts of data on Norwegians, including sensitive data. Many Norwegians spend a lot of time on these platforms, and therefore tracking and profiling can be used to paint a detailed picture of these people’s private life, personality, and interests.

Many people interact with content such as that related to health, politics and sexual orientation, and there is a danger that this is indirectly used to target marketing to them. 

“Invasive commercial surveillance for marketing purposes is one of the biggest risks to data protection on the Internet today”, says Tobias Judin, head of the international department at the NDPA.

When Meta decides which advertisements will be shown to a user, they also decide what not to show someone. This affects freedom of expression and freedom of information in a society. There is a risk that behavioral advertising strengthens existing stereotypes or could lead to unfair discrimination of various groups.

Behavioral targeting of political adverts in election campaigns is particularly problematic from a democratic perspective. Since tracking is hidden from view, most people find it difficult to understand.

There are also many vulnerable people who use Facebook and Instagram and need extra protection, such as children, the elderly, and people with cognitive disabilities.”


Privacy, GDPR, and Google Analytics – Revisited

This is blog post #17 in our series on the GDPR.

Summary of the case

In our blog post on 23 October 2022, we referred to the Data Protection Authorities (DPAs) of Austria, Denmark, France, and Italy, which concluded that the use of Google’s Universal Analytics (UA or GA3) is not compliant with the EU’s General Data Protection Regulation (GDPR).

The reason for this is that the use of GA3 implies that personal data is transferred to the US, which at that point in time was not on the EU’s list of countries with an adequate level of protection of personal data. This means that the US was not fulfilling the requirements set by the EU/GDPR regarding ‘the protection of fundamental rights and freedoms of natural persons’, which is a key expression in the GDPR.

Furthermore, the Norwegian DPA (Datatilsynet) had up until 23 October 2022 received one (1) complaint regarding Google Analytics. Before any final decision is made, they have to confer with other supervisory authorities in the EEA that have also received similar complaints, according to GDPR Article 60 (the One-Stop-Shop mechanism).

(We regret that links in italics in this article point to web pages in Norwegian.)

Universal Analytics (GA3) replaced by GA4

In October 2020, Google released Google Analytics 4, the new version of Google Analytics. In March 2022, Google announced that the Google Universal Analytics tool would be sunset in July 2023 and that Google would only provide the GA4 tool after 1 July 2023.

The Danish DPA has analyzed GA4 with regard to privacy and concludes on their website that even though improvements have been made, it is still the case that “law enforcement authorities in the third country can obtain access to additional information that allows the data from Google Analytics to be assigned to a natural person.” Consequently, GA4 is also illegal in terms of the GDPR because servers in the US are involved in the process, as long as no EU–US adequacy decision is in place.

The Norwegian DPA decision

The Norwegian DPA reported on their website on 27 July 2023 that they have concluded the complaint mentioned above. The complaint stems from noyb, which lodged complaints against 101 European websites with the data supervisory authorities in the EEA over the use of GA. One of these websites belonged to the Norwegian telecom company Telenor, which was using GA at the time.

The conclusion is that personal data was transferred to the US in violation of GDPR Article 44. In other words, the use of Google Analytics was illegal. Because Telenor discontinued its use of GA on 15 January 2021, the Norwegian DPA in a letter of 26 July 2023 finds a reprimand “to be an adequate and proportionate corrective measure”.

The Norwegian DPA concurs with the Danish authority in concluding that the outcome would be the same regardless of whether Google Analytics 3 or 4 was used (see above).

What about adequacy EU/US?

On 10 July 2023 the European Commission adopted its adequacy decision for the EU-US Data Privacy Framework and announced a new data transfer pact with the United States.

Accordingly, companies from the EEA area should be able to legally use GA as long as Google enters into so-called Standard Contractual Clauses that provide data subjects with a number of safeguards and rights in relation to the transfer of personal data to Google LLC (Limited Liability Company) in the US.

However, there is a big “but”. Max Schrems at noyb writes: “We have various options for a challenge already in the drawer, …. We currently expect this to be back at the Court of Justice by the beginning of next year. The Court of Justice could then even suspend the new deal while it is reviewing the substance of it.”

To use the same phrase as in the recent update of our blog post On the EU-US data transfer problem: the last word has obviously not been said.


Privacy, GDPR, and Google Analytics

This is blog post #15 in our series on the GDPR.

GDPR

Four European Data Protection Authorities (DPAs) have thus far concluded that the transfer of personal data to the United States via Google Analytics is unlawful according to the General Data Protection Regulation (GDPR).

It is quite certain that other European DPAs, including the Norwegian Data Protection Authority, will follow suit because all members of the EU/EEA are committed to complying with the GDPR.

Website analytics vs privacy

Everyone who manages a website is (or should be) interested in the behavior of users across web pages. For this purpose there are analytics platforms that measure activities on a website, for example how many users visit, how long they stay, which pages they visit, and whether they arrive by following a link or not.

To help measure those parameters (and a lot of others) there exists a market of web analytics tools of which Google Analytics (GA), launched in 2005, is the dominant one. In addition, GA includes features that support integration with other Google products, for example Google Ads, Google AdSense and many more.

The use of GA implies collecting data that is personal by the GDPR’s definition, for instance IP addresses, which can be used to identify a person, even if indirectly. GA may apply pseudonymization, using its own identifiers, but the result is still personal data.
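To illustrate why pseudonymization does not take such data out of the personal data category, consider this minimal sketch in Python. It is a hypothetical illustration, not Google’s actual mechanism: the identifier is derived from the IP address, so repeat visits by the same person remain linkable.

    # Hypothetical sketch: deriving a "pseudonymous" visitor ID from an IP
    # address. Not Google's actual mechanism -- an illustration of the principle.
    import hashlib

    SALT = "analytics-salt"  # fixed salt; whoever holds it can re-link identifiers

    def pseudonymous_id(ip_address):
        """Derive a stable pseudonymous visitor ID from an IP address."""
        return hashlib.sha256((SALT + ip_address).encode()).hexdigest()[:16]

    visit_monday = pseudonymous_id("203.0.113.42")
    visit_friday = pseudonymous_id("203.0.113.42")

    # Both visits map to the same identifier, so the activity of the person
    # behind 203.0.113.42 can still be profiled over time. Under the GDPR this
    # is pseudonymized -- not anonymized -- and therefore still personal data.
    assert visit_monday == visit_friday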

The fact that data collected by GA, some of which is personal, is transferred to the USA and processed there has led the DPAs of Austria, Denmark, France, and Italy to conclude that the use of Google Analytics is not compliant with the GDPR.

None Of Your Business

This conclusion has been reached after complaints submitted by the Austrian non-profit organization NOYB (“my privacy is None Of Your Business”) to a number of European DPAs.

The complaints are based on the Court of Justice of the European Union (CJEU) concluding that the transfer of personal data to the US, without special measures, violates the GDPR.

According to NOYB, the Executive Order recently signed by US President Joe Biden will not solve the problem with EU–US data transfers with regard to the potential for mass surveillance.

DPAs on the case

The Danish DPA writes that even if Google has indicated that they have implemented such measures, these measures are not satisfactory in order “to prevent access to transferred personal data by US law enforcement authorities”.

Datatilsynet logo

The Norwegian DPA has thus far received one complaint regarding Google Analytics, and they say on their website that the case is being processed.

They “will place great emphasis on what other countries have come up with”, they say in an email conversation.

Runbox will continue following these developments and keep you updated.

Note: Runbox used GA during a short period between 2011 and 2013. When we became aware of how Google collects data and how they could potentially use these data across their various services, we terminated the use of GA in October 2013. Since then we have used only internal statistics to monitor our service and visitor traffic on our website, and in accordance with our Privacy Policy these data are not shared with anyone.


GDPR in the Wake of COVID-19: Privacy Under Pressure

Tech companies all over the world are rushing to support health authorities in combating the spread of the SARS-CoV-2 virus, which causes the better-known COVID-19 disease. Whether those companies do so by invitation, by commitment, or by sheer self-interest, country after country is embracing mobile phone tracking and other technological means of tracking their citizens.

It might be worthwhile to take a deep breath and understand what’s currently technologically possible, and what might be at stake.

Tracking the infection

Everyone wants to avoid infection, and every government wishes to decrease the consequences of the pandemic within their country. And modern technology makes it possible to impose on citizens surveillance systems that represent a significant step towards realizing a Big Brother scenario.

In fighting the spread of the virus, it is crucial to know who is infected, track where the infected are located, and inform others that have been, or may come, in contact with the infected. It is precisely in this context that mobile phone tracking is playing a role, and this is currently being explored and implemented in some countries, raising ethical and privacy related questions.

Smartphone tracking apps

Once tracking of individuals’ phones is established for this particular and possibly justifiable reason, it could be tempting for a government or company to use it for other purposes as well. For instance, tracking data could be combined with other personal data such as health data, travel patterns, or even credit card records. Or the location of the infected individuals could be presented on a map along with the persons’ recent whereabouts, perhaps supplemented with warnings to avoid that area. Privacy is under pressure.

A smartphone can also be used as an “electronic fence” to alert authorities when someone who is quarantined at home leaves their premises, or to fulfill an obligation from the authorities to send geolocated selfies to confirm the quarantine. Some authorities even provide individuals with wristbands that log their location and share it with the relevant authorities. The examples are many, and they are real, underlining the ongoing pressure on privacy.

Big tech gets involved

Very recently two of the world’s biggest tech companies, Apple and Google, announced that they are joining forces to build an opt-in contact-tracing tool using Bluetooth technology, drawing on beacon technology as well. The tool will work between iPhones and Android phones, and opens up for future applications one cannot currently imagine.

In the first version, the solution is announced as an opt-in API (application programming interface) that will let iOS and Android applications become interoperable, and — now comes crux no. 1 — the API will be open for public health authorities to build applications that support Bluetooth-based contact tracing. A second step is planned — here is crux no. 2 — in which an upcoming update of both iOS and Android will make the API superfluous. Of course, you can opt out, but then you cannot download the operating system update at all.

It is a double-edged sword: It is great that big tech companies are mobilizing resources to help in a public health crisis, but do we really want these companies to potentially know even more about our personal lives (in the name of the common good)? Privacy is under pressure.

Norway’s privacy oriented approach

Norway has also launched a mobile phone application to help limit the spread of the infection, but this development is done under a strict regime of privacy regulations and in accordance with the GDPR. The decision to implement the app was taken by the Government in a regulation containing specifications and strict requirements that ensure the GDPR is adhered to, including a limited period of use until 1 December 2020.

It should be added that some of the GDPR’s exceptions for authorities have been put into effect because of the extraordinary situation. However, the Norwegian parliament (Stortinget) may terminate the law supporting the regulation at any time if one third of the members of parliament so decide.

Even if, at least in theory, it might be feasible to use a similar app from another country, it is crucial that the software is developed from scratch in Norway. This ensures that the Norwegian authorities maintain control over all functions and data, and that the privacy regulations in the GDPR are respected.

It is also comforting that the app is developed in cooperation with The Norwegian Data Protection Authority (Datatilsynet). Other countries allow similar apps to store health information, access images or video from cameras, or even establish direct contact with the police. Such functionality is naturally out of the question in Norway’s case.

The app is designed and will be used for purposes of tracking the pandemic only, and installation and usage is voluntary. When installed and activated the app collects location data using GPS and Bluetooth, which is encrypted and stored in a registry.

If an individual is diagnosed as infected, health personnel will check whether the person has installed the app. Individuals who have been in closer contact than two meters for more than 15 minutes with the “infected phone” will be notified by text message. The location data is kept for up to 30 days, and when the virus is no longer a threat the app will stop collecting data. App users may at any time delete the app and all personal data that has been collected.
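As a minimal sketch (our own illustration of the criterion described above, not the actual code of the Norwegian app), the proximity rule could look like this in Python:

    # Hypothetical sketch of the notification criterion described above:
    # closer than two meters for more than 15 minutes. Not the actual app code.
    from datetime import datetime, timedelta

    def is_close_contact(observations, max_distance_m=2.0,
                         min_duration=timedelta(minutes=15)):
        """observations: time-ordered list of (timestamp, distance_in_meters)
        pairs between two phones. Returns True if they stayed within
        max_distance_m for at least min_duration without interruption."""
        contact_start = None
        for timestamp, distance in observations:
            if distance < max_distance_m:
                if contact_start is None:
                    contact_start = timestamp       # contact period begins
                if timestamp - contact_start >= min_duration:
                    return True                     # threshold reached: notify
            else:
                contact_start = None                # contact broken; reset
        return False

    t0 = datetime(2020, 4, 20, 12, 0)
    samples = [(t0 + timedelta(minutes=m), 1.5) for m in range(16)]
    print(is_close_contact(samples))  # True: 16 minutes at 1.5 meters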

What does it take to succeed?

In order for the tracking to have any impact on the spread of infections, around 60% of the population* must use the application. At the time of writing (late April), 1,218,000 inhabitants had downloaded the application, that is about 30% of the population for which downloading is allowed (age limit 16 years).

However, the number of downloads is not a good metric, and there are a few obstacles to making the app operable. For instance, the app must be installed on the phone, permission to use GPS and Bluetooth must be given, the four-page privacy declaration* has to be accepted, and the battery must provide sufficient power at all times.

The battery issue turns out to be a problem because of GPS-positioning* and the simultaneous use of Bluetooth, which seems necessary to obtain precise location data.

Furthermore, not everyone is accustomed to using the smartphone functionality that is needed, depending on the user interface. For instance, elderly people and people with vision impairments* may find it difficult to use the app. And will the criterion of two meters for more than 15 minutes be too coarse a filter to provide useful results and subsequent notification to the user?

For these reasons, the skeptical may wonder if using the app implies that privacy is traded for uncertain and unreliable results from infection tracking.

What the application will provide, even if 60% adoption is not realized, is data for later research. For instance, data from mobile phone operators, who can trace mobile phones’ movements between base stations, could be correlated with instances of infection.

In the name of fighting the pandemic, the main telecommunication companies* are now, with strict privacy considerations, cooperating with The Norwegian Institute of Public Health to analyze movement patterns of the population compared with reported infections. Data is collected in groups of at least 20 people (phones), and identification of individual persons (phones) is not possible*.

Bottom Line

At Runbox we are very concerned about privacy and any type of user tracking that may infringe on this right. While various nations are developing and implementing technological solutions to combat the spread of the disease, we are grateful that we reside in a country with strong privacy traditions. In fact, the first version of personal data protection legislation was implemented in Norway as early as 1978.

It is crucial that The Norwegian Institute of Public Health and The Norwegian Data Protection Authority ensure that the app developers at Simula Research Laboratory (a Norwegian non-profit research organization) attend to both privacy and information security issues in a responsible manner according to the well established tradition in Norway.

When privacy is under threat, as in this case, it is absolutely justified that objections arise. It is often too easy to accept privacy intrusions in the name of a perceived common good.

But one related point could be made as a final remark: Perhaps it would be more appropriate to be concerned about personal data that is collected and shared through one’s use of social media, where personal data is traded and used for purposes that are literally out of control.

* Article unfortunately only available in Norwegian.


GDPR implementation part 8: “Personal data” in the EU and the US is not the same

We usually think of “personal data” as a term that contains for instance a person’s full name, home address, email address, telephone number, and date of birth.

These are ordinary data that can obviously identify a specific person. But the personal data category of linked personal information also includes data such as social security numbers, passport numbers, and credit card numbers – data that can identify us, and data we usually feel more protective of.

Linkable and non-linkable information

But there is another category of data that on its own may not be able to identify a person, but combined with other information could identify, trace, or locate a person. Such data are gender, race, sexual orientation, workplace, employment etc. These are examples of linkable personal information.

Then we have the category non-personally identifiable information. That is data that cannot be used on its own to identify or trace a person, for example IP addresses, cookies, device IDs, and software IDs (non-linkable personal information).
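As a toy illustration of this three-part taxonomy (with hypothetical field names; real classification requires legal judgment):

    # Toy illustration of the taxonomy above; field names are hypothetical.
    CATEGORIES = {
        "linked": {          # identifies a person on its own
            "full_name", "home_address", "email_address", "telephone_number",
            "social_security_number", "passport_number", "credit_card_number",
        },
        "linkable": {        # identifies a person only combined with other data
            "gender", "race", "sexual_orientation", "workplace", "employment",
        },
        "non_linkable": {    # cannot identify a person on its own (US view);
            "ip_address",    # note that the GDPR still treats several of
            "cookie_id",     # these as personal data, as discussed below
            "device_id", "software_id",
        },
    }

    def classify(field):
        for category, fields in CATEGORIES.items():
            if field in fields:
                return category
        return "unclassified"

    print(classify("ip_address"))  # "non_linkable" in the US taxonomy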

Privacy regulations differ in the EU and the US

Now, we know that there are industries that exist almost under the radar while taking advantage of personal data. For instance, companies in the AdTech and MarTech industry base their business on collecting and trading personal data for targeted advertising and marketing.

Many of these actors try to take protection of personal data seriously, and refer to the rules and regulations for processing personal data. In Europe this is the GDPR (General Data Protection Regulation) within the EU/EEA-area1, and in the US it is the responsibility of the FTC (Federal Trade Commission).

However, what the EU/GDPR and US government agencies mean by “personal data” differs. Specifically, the definition in the EU/GDPR is more comprehensive than the definitions often referenced by US agencies, such as that of NIST (the National Institute of Standards and Technology).

For example, the EU concept of personal data includes information such as cookies and IP addresses, which are not considered as personal data in a US setting.2

This means that US websites that state in their privacy policy that they are GDPR compliant, but combine their data with other data sets, may breach the GDPR. For example, under the GDPR they must have the user’s consent to collect their IP address.

Definitions of “personal data”

National Institute of Standards and Technology’s definition

NIST’s definition of personal data is contained in the definition of Personal Identifiable Information (PII):

PII is any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual‘s identity, such as name, social security number, date and place of birth, mother‘s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

US Office of Privacy and Open Government’s definition

Another PII-definition is from the US Office of Privacy and Open Government (OPOG) as follows:

The term personally identifiable information refers to information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc.

EU’s GDPR definition

Compare these PII-definitions with the GDPR Article 4(1)’s definition of personal data:

‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

It is obvious that the GDPR defines personal data much more broadly than both NIST’s and OPOG’s PII, and this is underlined by this statement found in the GDPR’s Recital 30:

Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.

The US is lacking comprehensive regulation

That said, US authorities are moving towards stronger protection of privacy and personal data, but as late as March 2019, the US Congressional Research Service says:

Despite the increased interest in data protection, the legal paradigms governing the security and privacy of personal data are complex and technical, and lack uniformity at the federal level. The Supreme Court has recognized that the Constitution provides various rights protecting individual privacy, but these rights generally guard only against government intrusions and do little to prevent private actors from abusing personal data online. At the federal statutory level, while there are a number of data protection statutes, they primarily regulate certain industries and subcategories of data. The Federal Trade Commission (FTC) fills in some of the statutory gaps by enforcing the federal prohibition against unfair and deceptive data protection practices. But no single federal law comprehensively regulates the collection and use of personal data (our emphasis).

Conclusion

When US websites claim to follow the rules for processing personal data, the claim is dubious at best compared to the regulations in the EU/EEA – regulations which Norwegian legislation is based on and which Runbox adheres to.

However, it should be mentioned that some US states, for instance California, do classify some otherwise anonymous data (e.g. IP addresses, aliases, and account data) as PII.

In addition, as stated in our Privacy Policy, the personal data we ask customers to register in order to use our service is very limited. We are conscious about the trust our customers place in us when they register personal data in our systems, and in return we can demonstrate that we are compliant with the regulations.

Addendum

Above we referred to the AdTech and MarTech industries and their usage of personal data to identify, trace, or locate a person for advertising and marketing purposes. That topic is outside the scope of this blog post, but is absolutely worth writing about in a later post.

1 EEA = European Economic Area, that is the EU and three additional countries: Iceland, Liechtenstein, and Norway.

2 https://www.forbrukerradet.no/out-of-control/ footnote on page 102.


GDPR implementation part 7: Information and Tools for Implementation of Users’ Rights

GDPR

One of the main objectives for the European Union (EU) when they developed the replacement for the Data Protection Directive 95/46 (from 1995), was to expand individual control over the use of personal data.

This can be seen in a broader view as an implementation of the right to one’s private life, as laid down in the European Convention on Human Rights (Article 8). The right to respect for one’s private and family life is also stated in the EU Charter of Fundamental Rights (Article 7).

Norway has signed both of these agreements, and implements these rights in Articles 100 and 102 of the Constitution of Norway and in the Norwegian Human Rights Act.

Already in GDPR1 Article 1 we see the connection between the GDPR and especially the Charter of Fundamental Rights:

This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data

Article 1-2 of the GDPR

Observe the expression “rights and freedoms of natural persons“, which is very important throughout the Regulation and is used 31 times in all.

Before we go further into the subject of this post, it is important to state that Norway’s legislation on the processing of personal data was already compliant with the GDPR before the latter was declared the new framework for the legislation in Norway. The Norwegian Personal Data Act (PDA2), as compliant with the GDPR, took effect on 20 July 2018.

First and foremost, the GDPR states that no processing of personal data shall be done unless the data subject has given consent (Article 6-1, a). Runbox obtains consent to registration of our users’ personal data when they sign up for an account and accept our Terms of Service.

The GDPR (Article 6-1, ff.) allows a controller – that is Runbox in our context – to process personal data when there is a legitimate reason for doing so, i.e. something that is necessary to use our services.

It is an important objective for the GDPR to secure one’s control of one’s own personal data. In this respect, the GDPR has given data subjects eight fundamental rights (Articles 15–22).

When implementing these rights in Runbox, we found that most of those were already there. However, the introduction of the GDPR provided us with a checklist and the opportunity to analyze our status, and to improve our services in this respect.

Our Privacy Policy provides exhaustive information about how we process personal data, but here is an overview of the data subject’s rights, and our implementation of them:

  • The right to access (Article 15): Since Runbox does not collect other types of information than what the users register by themselves, they can easily check which personal data is processed. The data processing is only done in order to process your emails, and optionally your web site and domain name.
  • The right to rectification (Article 16): You may at any time log in to your email account and change your personal information.
  • The right to erasure (‘right to be forgotten’) (Article 17): You may terminate your subscription at any time, and your account contents will subsequently be deleted after 6 months. Your personal details will be deleted after 5 years in accordance with Norwegian accounting regulations. However, you may send a request to dataprotectionofficer@nullrunbox.com for immediate erasure of your account contents.
  • The right to restriction of processing (Article 18): Runbox will never use your personal information for purposes other than providing our services to you, so restrictions are not necessary in our context.
  • The right to be informed (Article 19): Runbox uses your personal information only in order to provide our services to you.
  • The right to data portability (Article 20): In case that you wish to move to another email service provider and export your data, you will find information on how to do this through our services and documentation.
  • The right to object (Article 21): Since we never will use your personal data for other purposes than to deliver the services you have agreed to, this right is implicitly fulfilled.
  • The right to individual decision-making (Article 22): This article is intended to protect data subjects against automated data-processing that might involve profiling them based on personally identifiable information, which is something Runbox doesn’t do.

Regarding questions or concerns about our implementation of the GDPR, customers may use the email address dataprotectionofficer@nullrunbox.com as a direct channel to our appointed Data Protection Officer.

Some final remarks about consent: Runbox uses cookies in order to provide our services, and new users must give express consent to this on our signup page. On this page, and on the Account page once logged in, you may also give/revoke consent to future news and offers from Runbox.

In our next post in this series, we will consider our contractual situation regarding GDPR requirements. Stay tuned.

Footnotes

1. The GDPR means Regulation (EU) 2016/679 of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). “Article” refers to an Article in the GDPR, unless stated otherwise.

2. The Personal Data Act (the PDA) means the regulations that are currently in force in Norway for the protection of individuals in connection with the processing of personal data, which includes the implementation of GDPR in Norway (2018-07-20).


GDPR implementation part 6: Access Control and Permissions

In part 3 of this blog series we described how we mapped the “world” of our operations, including the following components:

  • Server infrastructure, including all servers and other hardware as well as the links between these.
  • Software components that comprise our application stack from the operating system level to the front-end application level.
  • Data networks, including how and where our servers are connected to the Internet, but also the Local Area Network at our premises.
  • Data inventory, i.e. all personal data including customer and employee data, financial records, information about partners/associates, etc.
  • Applications necessary to run the company itself, meaning software that is managerial in nature.

Access control concerns permissions attached to system-related objects. Within each of the components listed above, there may be several sub-objects — servers, software modules, data files, catalogues etc., to which restricted access should be implemented.

Creating an Access Control Table

These objects then form one axis of an Access Control matrix or table (ACT). The other axis of the table includes organizational units, broken down into person-related objects, for instance segments or groups, but also individuals, for each unit.

After breaking these objects down to an appropriate level, we attached roles to each of these components. In terms of the GDPR, data processor and data controller are examples of roles to use in this context.

To each of the defined roles, we attached categories of tasks, for instance sysadmin, developer, and support staff tasks.

For our email service systems we found it convenient to structure the system-related objects into three main categories:

  • General software.
  • Application software.
  • Personal data.

Within each of these categories there is a number of objects with access permissions attached, and together these comprise the Access Control Table for the realm in question. For the other realms of our “world” we used a similar approach, resulting in a set of ACTs that implement the principle of least privilege.
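
To make this concrete, here is a minimal sketch of what one such ACT could look like in code, using the three object categories above and the task categories mentioned earlier. The specific grants are invented for illustration and do not reflect Runbox’s actual tables.

```python
# Illustrative Access Control Table (ACT): roles on one axis, system-related
# objects on the other, explicit permissions in the cells. The grants shown
# are hypothetical examples, not Runbox's actual configuration.
ACT = {
    "sysadmin": {
        "general_software":     {"read", "write"},
        "application_software": {"read", "write"},
        "personal_data":        {"read"},
    },
    "developer": {
        "general_software":     {"read"},
        "application_software": {"read", "write"},
        "personal_data":        set(),   # no access by default
    },
    "support": {
        "general_software":     set(),
        "application_software": {"read"},
        "personal_data":        {"read"},
    },
}

def is_permitted(role: str, obj: str, action: str) -> bool:
    """Default deny: an action is allowed only if explicitly granted,
    which is exactly the principle of least privilege."""
    return action in ACT.get(role, {}).get(obj, set())

assert is_permitted("developer", "application_software", "write")
assert not is_permitted("support", "personal_data", "write")
```

The important design choice is the default-deny lookup: any role/object combination not explicitly listed yields no permissions at all.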

With this, the groundwork was laid for establishing the various mechanisms that implement the access control regime, in order to secure our most precious pieces of hardware, software, and data.

In our next blog post in this series we will look at Information and Tools for Implementation of Users’ Rights.


GDPR implementation part 5: Risk Assessment and Gap Analysis

In previous posts in this blog series we have referred to our main planning document, Rules and Regulations for Information Security Management (RRISM for short), where our road to GDPR compliance started. In that document we worked out the structure of the project, based on descriptions and definitions of its various components.

Obviously, risk management has to be taken very seriously, and the RRISM lays the groundwork for how we handle this aspect of information security. The baseline is that risk management is an essential part of the company’s life, one that encompasses all its assets.

Defining and assessing risks

As usual, we first had to agree upon some definitions, and we found the following, taken directly from NIST (the National Institute of Standards and Technology), to be adequate for our purpose:

Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level.

Risk is a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization.

In order to assess risks, we first have to identify possible threats that may exploit vulnerabilities in our systems or our organization.

In short: the first and foremost objective of risk management is to protect the assets that are at potential risk.

Analyzing assets

Then we outlined the methodology we adopted:

  1. Identify the assets that could be at risk.
  2. Identify possible threats and vulnerabilities.
  3. Identify the possible consequences of each potential vulnerability.

Each threat was characterized by probability and criticality, which together give one of four risk levels: Very High (red), High (orange), Medium (yellow), and Low (green). This helped us prioritize improvements, measures, and other actions.
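
As an illustration, here is one way such a probability times criticality matrix could be computed. The three-point scales and cut-off scores are assumptions made for the sketch; the post does not specify the actual scheme.

```python
# Hedged sketch of a probability x criticality risk matrix that yields the
# four levels above. Scales and thresholds are illustrative assumptions.
PROBABILITY = {"low": 1, "medium": 2, "high": 3}
CRITICALITY = {"minor": 1, "moderate": 2, "severe": 3}

def risk_level(probability: str, criticality: str) -> str:
    score = PROBABILITY[probability] * CRITICALITY[criticality]
    if score >= 9:
        return "Very High (red)"
    if score >= 6:
        return "High (orange)"
    if score >= 3:
        return "Medium (yellow)"
    return "Low (green)"

print(risk_level("high", "severe"))    # Very High (red)
print(risk_level("medium", "severe"))  # High (orange)
print(risk_level("low", "moderate"))   # Low (green)
```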

When analyzing our assets we actually found more of them than anticipated: 21 different asset types, ranging from our customer base, general software in use, and our own key business systems, through hardware and communication lines, to employees and partners, and more.

Threat, vulnerability, and gap analysis

Then we reviewed the vulnerability potentials (what could go wrong) for each asset and created scenarios for possible consequences if something happened that exploited a vulnerability.

The question raised thereafter was: Do we have the necessary measures and remedies in place to eliminate the potential vulnerabilities, or mitigate the consequences if things went wrong — or is there a gap?
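
The gap question can be made quite mechanical: for each threat scenario, compare the controls it requires with those actually in place. The control names in this sketch are hypothetical examples.

```python
# Minimal gap-analysis sketch: the gap is whatever a scenario requires
# that is not yet implemented. Control names are hypothetical.
required = {"offsite_backups", "disk_encryption", "2fa_admin_access"}
implemented = {"offsite_backups", "disk_encryption"}

gap = required - implemented
print(gap)  # {'2fa_admin_access'} -> an action item for closing the gap
```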

The next step was to find out what actions should be taken in order to close the gaps in cases where we were not satisfied with the situation, and this will be the topic of future blog posts in this series.

Conclusion

Our mantra throughout this process has been: threats we can imagine will sooner or later become reality, but never in the way we expect them to happen, and never where we expect them.

We live in an ever-changing environment, which means that risks have to be monitored continuously, and so our risk assessment and gap analysis is continually evolving as well.


GDPR implementation part 4: Information Security Policy

The groundwork for compliance

Privacy and security have always been part of the Runbox culture. However, the GDPR project made it clear to us that we had to work systematically through how to implement the various aspects of data protection and information security.

Let’s start by recalling the meaning of some important terms:

Privacy is about individuals’ right to a private life, and the right to control all information about themselves. Grounded in the European Convention on Human Rights (1950), the Norwegian Constitution § 102 states that “Everyone has the right to the respect of their privacy and family life, their home and their communication”, followed by “The authorities of the state shall ensure the protection of personal integrity.”

Norway’s privacy law, the Personal Data Act (PDA1), was introduced as early as 1978, so we have a long tradition of this kind of legislation. That is why the GDPR2, in principle, did not result in significant changes for us.

In order to protect privacy, Information Security (IS) is crucial. It is mainly about preventing personal data from going astray, but we adopted a more stringent definition: to secure the confidentiality, integrity, authenticity, availability (for the approved purpose only), reliability, resilience (the ability to recover), possession (ownership), and utility (readability for the approved purpose) of the data.

With this in mind, we developed our Information Security Policy (ISP), which documents our GDPR compliance practices, sets out GDPR requirements for employees, and states the company’s commitment to compliance. Article 24 of the GDPR requires controllers (such as Runbox) to implement appropriate data protection policies, and our ISP is an important part of our response to that requirement.

The purpose of Runbox’ Information Security Policy is to provide rules and guidance for Runbox’ employees, Runbox’ contractual employees/consultants, and everyone else working for Runbox, voluntarily or according to contract/agreement, so that they in all respects act

  • to comply with the company’s information security policies,
  • to comply with the company’s Privacy Policy and Terms of Service regarding our obligations to our customers,
  • to ensure that the processing of Personal Data is in accordance with the PDA/GDPR and ensure that appropriate technical and organizational measures are adapted to the purpose, extent and context of the processing, and ensure that such measures are adapted to the risks for the rights and freedoms of natural persons3.

The ISP is a very comprehensive document, stating our commitment to the protection of our customers’ data and defining technical and organizational measures to fulfill this obligation.

For instance, we will not store customers’ data in any third-party “cloud” (we use our own servers), we shall never disclose account information or email data to authorities (unless presented with a court order from the Norwegian prosecuting authority), and we shall never scan customers’ data in order to display ads. More information about this can be found on our Privacy Protection page.

An important aspect of the ISP is defining the responsibilities of two roles: the Managing Director, who personifies the Data Controller and is responsible for GDPR compliance on behalf of the company, and the appointed Data Protection Officer, who acts as a watchdog over the company’s GDPR status.

The ISP imposes strict rules on how employees, partners, consultants, etc. may handle systems and data, anchored in a Non-Disclosure Agreement and an Agreement on Protection of Personal Data. This includes rules for how to process and store data and how to protect digital devices.

Finally, let us mention that the ISP provides rules for contractual agreements with partner organizations, consultants, etc., so that appropriate technical and organizational measures are implemented to ensure GDPR-compliant data processing and systems development.

Altogether, we have developed two documents that guide and control our practices regarding the GDPR: the RRISM (our planning document, mentioned in an earlier post) and the ISP. It is worth mentioning that these documents are continuously updated as new privacy and security issues arise.

1 The Personal Data Act (the PDA) means the regulations currently in force in Norway for the protection of individuals in connection with the processing of personal data, including the implementation of the GDPR in Norway (in force 2018-07-20).

2 The GDPR means Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). “Article” refers to an article of the GDPR.

3 See GDPR Article 4(1).
