Runbox is not on Meta or X (Twitter) – Because Privacy Matters

Social media platforms like Meta (Facebook, Instagram) and X (Twitter) are huge parts of our online lives. They’re where we catch up with friends, get our news, and share ideas. But while these platforms bring us together in a lot of ways, they also come with big problems—especially when it comes to privacy and misinformation. For a company like Runbox, being part of these platforms just doesn’t make sense. Here’s why.

1. Privacy Comes First

Runbox has built its reputation on privacy. It’s not just a feature—it’s the whole point. We don’t collect or sell your data to third-party advertisers. In fact, privacy is baked into everything we do.

Meta and X (Twitter), on the other hand, make much of their money by gathering huge amounts of data about their users – everything from your location to your browsing history. They use that data to target ads and sell you things, often without you fully understanding how much information they’re collecting. For Runbox, which is all about user privacy, being associated with platforms that profit off personal data just doesn’t align with our values.


Continue Reading →

Malware and Your Privacy: How to Defend Against Digital Threats

Malware poses a significant threat to our personal information and security. From ransomware to keyloggers, malicious software programs can infiltrate our devices and compromise our most sensitive data, including contact lists. In this post, we’ll explore how malware works, the risks it presents, and the potential consequences of a breach.

What is Malware?

Malware, short for malicious software, is any software designed to harm, exploit, or compromise the functionality of a computer or network. One of the primary goals of many malware types is to steal sensitive information. Here’s how it works:

Malware often enters a device through infected downloads, email attachments, or compromised websites. Users may inadvertently install it by clicking on malicious links or accepting untrustworthy downloads.
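
One practical defense at the download stage is to verify a file’s checksum against the value the vendor publishes. Here is a minimal Python sketch; the file name and expected hash are placeholders, not real values:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: compare against the checksum published by the vendor.
expected = "0123abcd..."  # hypothetical hash, truncated for illustration
if sha256_of("installer.pkg") == expected:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not open this file.")
```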

Once installed, the malware can access various parts of the system. Spyware, for instance, can monitor keystrokes and capture personal information, while other types may directly search for files containing sensitive data.

Many malware types can scan for contact lists stored on devices and extract names, phone numbers, and email addresses. This information can then be used for spam or phishing attacks, or sold on the dark web.

Types of Malware


Continue Reading →

Runbox Ensures User Security and Privacy through Unique Email Addresses

Runbox is dedicated to maintaining user security and privacy, and a fundamental part of this commitment is our policy of never recycling email addresses: we do not assign old addresses to new users.

We understand that your email address is a critical component of your digital identity, which is why we never reassign old addresses. This practice safeguards your personal information and ensures that your communications remain private and secure.

By avoiding the reuse of email addresses, we help eliminate the risks associated with recycled addresses. This includes potential unauthorized access to sensitive information or confusion from multiple users sharing the same address. This approach not only protects your identity but also fosters a more reliable and trustworthy communication environment.

Using unique email addresses offers several benefits:


Continue Reading →

Privacy Matters – How Our Personal Data Is Used

When we go online or use apps, we are being tracked. Companies collect our personal data by tracking us across the websites we visit. They build profiles on us based on our browsing history and online behavior. They want to sell us their products and services, and the more they know about us, the better they can use this data to manipulate our behavior.

You know those ads that pop up everywhere after you’ve looked up something? After you’ve looked up a new car, car ads follow you around all day. You research a vacation to Alaska, and travel ads show up everywhere. This is the result of targeted advertising, which is based on data collected about you. Some call it surveillance capitalism, and it’s big business.

Privacy is about how your data is collected, processed, stored, and used. It’s about maintaining control over your personal information and your identity. Privacy isn’t about hiding secrets; it’s about keeping your personal information safe from people who can do harm.

Continue Reading →

Be privacy concerned when using ChatGPT (and other AI chatbots)

This is blog post #18 in our series on the GDPR.

“Don’t tell anything to a chatbot you want to keep private.” [source]

Writing about AI in general, and about chatbots specifically, is like shooting at a moving target because of the speed of development. However, at Runbox we are always concerned about privacy, and we must examine the case of chatbots in that respect.

Due to its popularity, we have mainly used ChatGPT from OpenAI as the subject of our examination. NOTE: ChatGPT and DALL-E, which generates images from text captions, are both consumer services from OpenAI.

This blog post is a summary of our findings, leading to advice on how to avoid putting your privacy at risk when using the Natural Language Processing (NLP)-based ChatGPT.

Our examination is based on OpenAI’s Privacy Policy, Terms of Use, and FAQ, as well as a number of documents resulting from hours of Internet browsing.

The blog post consists of two parts: PART I summarizes our understanding of the technology behind language models, in order to grasp the concepts and better understand their implications for privacy. PART II discusses the relevant privacy issues. It is written as a standalone piece and can be read without necessarily having read PART I.

PART I: Generative AI technology

The basics

GPT stands for Generative Pre-trained Transformer, and GPT-3 is a 175-billion-parameter language model that can compose fluent original writing in response to a short text prompt from a user. The current version of ChatGPT is built upon GPT-3.5 and GPT-4 from OpenAI.

ChatGPT was launched publicly on November 30, 2022 as a freely available research preview, but due to its popularity, OpenAI now operates the service on a freemium model [source].

The GPTs are the result of three main steps: 1) development and use of the underpinning technology, Large Language Models (LLMs); 2) collection of a very large amount of data/information; and 3) training of the model.

Let us also keep in mind that all this is possible only because of today’s advances in computational power.

Language models

A language model is, in essence, mathematics “converted” into computer programs that predict the next word or words in a sentence, or a complete sentence, based on probabilities. The model is a mathematical representation of the principle that words in a sentence depend on the words that precede them.

Since computers fundamentally can only process numbers (in fact, only additions and comparisons), text input to the model (the prompt) must be converted to numbers, and likewise the output numbers have to be converted back to text (the response). Text in this context consists of phrases, single words, or parts of words called tokens.
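
To make this concrete, here is a small illustration using tiktoken, OpenAI’s open-source tokenizer library (whether ChatGPT uses this exact encoding internally is an assumption on our part; the principle is what matters):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of tiktoken's published encodings

tokens = enc.encode("Privacy matters.")
print(tokens)              # a short list of integers
print(enc.decode(tokens))  # converted back to text: "Privacy matters."

# Longer or rarer words are often split into several sub-word tokens:
print([enc.decode([t]) for t in enc.encode("pseudonymization")])
```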

When you prompt a GPT, your query is converted to tokens (represented by numbers) and passed to the transformer, where the attention mechanism generates a score matrix that determines how much weight should be put on each word in the input (the prompt). This is used to produce the answer, using the model’s generative capability: predicting the next word in a sentence by selecting relevant information from the pre-processed text, with a high probability of being fluent and similar to human-like text [source].
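
The score matrix mentioned above can be sketched in a few lines of numpy. This is a bare-bones illustration of scaled dot-product attention with random stand-in vectors, not OpenAI’s actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Score each query against all keys, then mix the values by those weights."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # the score matrix over input tokens
    weights = softmax(scores)                # normalized attention weights
    return weights @ V, weights

# Toy example: 4 tokens with embedding dimension 8.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
output, weights = attention(Q, K, V)
print(weights.round(2))  # each row sums to 1: how much each token attends to the others
```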

The learning part of the model is handled by a huge number of parameters representing the weights, along with statistical biases intended to prevent unwanted associations between words. For instance, GPT-3 has 175 billion parameters, and GPT-4 is estimated to have around 1 trillion.

(The label “large” in LLM refers to the number of values (parameters) the model can change autonomously as it learns.)

Collecting the data

The texts the GPT model generates stem from OpenAI’s scraping of some 500 billion words (in the case of GPT-3, the predecessor of the models behind the current version of ChatGPT) systematically from the Internet: books, articles, websites, blogs – all open and available information, from libraries to social media – without any restriction regarding content, copyright, or privacy.

The scraping includes pictures and program code as well, and the data is filtered, resulting in a subset where “bad” websites are excluded.

The pre-training process

All that data is fundamental to pre-training the model. This process analyzes the huge volume of data (the corpus) for linguistic patterns, vocabulary, grammatical properties, etc. in order to assign probabilities to combinations of tokens and combinations of words. The aforementioned transformer architecture is used in the training process, where the attention mechanism makes it possible to capture the dependencies between words independent of their position in a sentence.
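
The idea of assigning probabilities to word combinations can be illustrated with a toy bigram model: count which word follows which in a corpus, then turn the counts into probabilities. Real LLMs are vastly more sophisticated, but the principle is the same:

```python
from collections import Counter, defaultdict

corpus = "privacy matters . your privacy matters to us .".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# P(next word | previous word), estimated from the counts.
def next_word_probs(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("privacy"))  # {'matters': 1.0}
print(next_word_probs("matters"))  # {'.': 0.5, 'to': 0.5}
```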

The result of the pre-training process is an intermediate stage that has to be fine-tuned to the specific task the model is intended for, for instance producing text, program code, or speech translation in response to a prompt. The fine-tuning process uses appropriate task-specific datasets containing examples typical of the task in question, and the weights and parameters are adjusted accordingly.
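
To illustrate what such a task-specific example can look like, here is a hypothetical training record in the chat-style JSONL format OpenAI documents for fine-tuning at the time of writing; the content itself is invented:

```python
import json

# One invented training example for a customer-support task.
example = {
    "messages": [
        {"role": "system", "content": "You answer billing questions politely."},
        {"role": "user", "content": "Why was I charged twice this month?"},
        {"role": "assistant", "content": "Sorry about that! Let's review your invoices together."},
    ]
}
print(json.dumps(example))  # a training file holds one such JSON object per line
```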

Of course, a ChatGPT response to a prompt is not “burdened” with the ethical, contextual, or other considerations a human would apply. To prevent undesired responses (toxicity, bias, or incorrect information), the fine-tuning process is supervised by humans who correct inappropriate or erroneous responses, using prompt-based learning. Here the responses are given a “toxicity” score that incorporates human feedback [source].

ChatGPT usage training

The learning process continues when the responses generated from users’ prompts are saved and subjected to the training process – for at least 30 days, but “forever” if chat history isn’t turned off. In any event it is not possible to delete specific prompts from your user history [source], only entire conversations.

In the world of AI and LLMs, “hallucination” is the term used when responses appear to be “pulled from thin air”.

OpenAI offers an API that makes it possible for “anyone” to train GPT-n models for domain-specific tasks [source], that is, to build a customized chatbot. In addition, they have launched a feature that allows GPT-n to “remember” information that would otherwise have to be repeated [source, source].
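
As an illustration, a customization workflow along these lines can be sketched with OpenAI’s published Python client. Treat this as a sketch based on the client library at the time of writing; model names and endpoints change quickly:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Upload a JSONL file of task-specific examples (like the record shown earlier)...
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# ...and start a fine-tuning job that produces a customized model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)
```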

Takeaways

  • The huge volume of data scraped is obviously a cacophony of content and quality that affects the corpus, and thus also the probability patterns and the responses produced [source].
  • ChatGPT has limited knowledge of events that have occurred after September 2021, the cutoff date for the data it was trained on [source].
  • The response you get from ChatGPT is based on probabilities, and as such you have no guarantee of its validity [source].
  • A prompt starts a conversation, unlike a query to a search engine like DuckDuckGo or Google, which gives you a list of websites matching your search [source].
  • ChatGPT uses information scraped from all over the Internet, without any restrictions regarding content, copyright, or privacy. However, manual training of a model was introduced to detect harmful content [source]. Copyright violations have resulted in lawsuits [source], and more than 10,000 authors have signed an open letter to the CEOs of prominent AI companies [source].
  • Your conversations are normally used to train the models that power ChatGPT, unless you specifically opt out [source].

PART II: Chatbot privacy considerations

“The privacy considerations with something like ChatGPT cannot be overstated” [source]

The following introduction is mainly intended for readers who have skipped PART I of this blog post.

Generative AI systems, such as ChatGPT, use information scraped from all over the Internet, without permission or restrictions regarding content, copyright, or privacy (more on this in PART I). This means that what you have written on social media, blogs, comments on online articles, etc. may have been stored and used by AI companies to train their chatbots.

Another source for training generative AI systems is prompts, that is, the information users provide when asking the chatbot something. What you ask ChatGPT, the sentences you write, and the generated text as well, are “taken care of” by the system and could become available to other users through the answers to their prompts.

However, according to OpenAI’s help center article, you can opt out of training the model – but opting in is obviously the default.

So, both the Internet scraping and any personal information included in your prompts can result in personal information turning up in a generated answer to another arbitrary prompt.

This is very problematic for several reasons.

Is OpenAI breaching the GDPR?

First, OpenAI (and others scraping the Internet) never asked for permission to use the collected data, which could contain information that may be used to identify individuals, their location, and all kinds of sensitive information from hundreds of millions of Internet users.

Even if Internet scraping is not prohibited by law, it is ethically problematic because data can be used outside the context in which it was produced, and so can breach contextual integrity. This principle has de facto been manifested in the EU’s General Data Protection Regulation (GDPR) Article 6 (1)(a) as a prerequisite for lawful processing of personal data:

“…the data subject has given consent to the processing of his or her personal data for one or more specific purposes”

Here language models like OpenAI’s ChatGPT are in trouble: personal data can be used for any purpose – a clear violation of Article 6.

Second, there are no procedures provided by OpenAI for individuals to check whether their personal data is stored and thereby can potentially be revealed by an arbitrary prompt, much less can data be deleted on request. This “right to erasure” is set forth in GDPR Article 17 (1):

“The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay …” on the grounds that “(d) the personal data have been unlawfully processed”

It is inherent in language models that data can be processed in ways that are not predictable and presented or stored anywhere, and therefore the “right to be forgotten” is unattainable.

Third, and without going into detail, the GDPR gives data subjects (individuals) the right to be informed, the right of access, the right to rectification, the right to object, and the right to data portability regarding their personal data. It is questionable whether generative AI systems can ever accommodate such requirements, since an individual’s personal data could be replicated arbitrarily in the system’s huge dataset.

Fourth, OpenAI stores all their data, including the personal data they collect, one way or another on servers located in the US. That means they are subject to the EU-US Data Privacy Framework (see our blog post Privacy, GDPR, and Google Analytics – Revisited) and the requirements set there.

To answer the question posed in the heading of this section – Is OpenAI breaching the GDPR? – it is very difficult to understand how ChatGPT, and other language models for generative use (generative AI systems) as well, can ever comply with the GDPR.

What about the privacy regulations in the US?

Contrary to the situation in Europe, there is no general federal privacy law in the United States – each state has its own jurisdiction in this area. There are only federal laws such as HIPAA (Health Insurance Portability and Accountability Act) and COPPA (Children’s Online Privacy Protection Act), which regulate the collection and use of personal data categorized as sensitive. However, there are movements towards regulation of personal information in several states, as tracked by the IAPP (the International Association of Privacy Professionals).

How does OpenAI use the data they collect?

When signing up for ChatGPT, you have to agree to OpenAI’s Privacy Policy (PP) and allow them to gather and store a lot of information about you and your browsing habits. Of course, you have to submit all the usual account information, and allow them to collect your IP address, browser type, and browser settings.

But you also allow them to automatically collect information about, for instance,

“… the types of content that you view or engage with, the features you use and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, and your computer connection”.

All this data makes it possible to build a profile of each user – bare facts, but also more tangible information such as interests, social belonging, concerns, etc. This is similar to what search engines do, but ChatGPT is not a search engine – it is a “conversational” engine, and as such is able to “learn” more about you depending on what you submit in a prompt, that is, how you engage with the system. According to their PP and the citation above, that information is collected.

The PP acknowledges that users have certain rights regarding their personal information, with indirect reference to the GDPR, for instance the right to rectification. However, they add:

“Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

OpenAI reserves the right to provide personal information to third parties, generally without notice to the user, so your personal information could be spread to actors in OpenAI’s economic infrastructure and is very difficult to control.

Misuse of your personal information – what are the risks?

It is reasonable to assume that OpenAI will not knowingly and willfully set out to abuse your personal information because they have to adhere to strict regulations such as GDPR, where misuse could result in fines of hundreds of millions of dollars.

The biggest uncertainty is linked to how the system responds to input in combination with the system’s “learning” abilities.

If asked the “right” questions, the system can expose personal information, and it may combine information about a person, e.g. a person’s name, with characteristics and histories that are untrue and may be very unfortunate for that individual. For instance, asking the system about a person by name can result in an answer that “transforms” a credit card fraud investigator into a person involved in credit card scams.

Takeaways

Using generative AI systems, for example ChatGPT, is like chatting with a “black box” – you never know how the “box” utilizes your input. Likewise, you will never know the sources of the information you get in return. Also, you will never know if the information is correct. You may also receive information about other individuals that you shouldn’t have, potentially even sensitive and confidential information.

Similarly, other individuals chatting with the “box” may learn about you, your friends, your company, etc. The only way to avoid that is to be very careful when writing your prompts.

That said, OpenAI has introduced some control features in ChatGPT: you can disable your chat history through the account settings – however, the data is only deleted after 30 days, which means that your data can be used for training ChatGPT in the meantime.

You can object to the processing of your personal data by OpenAI’s models by filling out and submitting the User Content Opt Out Request form or the OpenAI Personal Data Removal Request form, if your privacy is covered by the GDPR. However, when they say that they reserve the right “to determine the correct balance of interests, rights, and freedoms and what is in the public interest”, it is an indication of their reluctance to accept your request. The article in Wired is recommended in this regard.

Valuable sources

  1. GPT-3 Overview. History and main concepts (The Hitchhiker’s Guide to GPT3)
  2. GPT-3 technical overview
  3. Transformers – step by step explanation
  4. LLM training and fine-tuning

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Continue Reading →

Privacy, GDPR, and Google Analytics – Revisited

This is blog post #17 in our series on the GDPR.

Summary of the case

In our blog post of 23 October 2022, we referred to the Data Protection Authorities (DPAs) of Austria, Denmark, France, and Italy, which concluded that the use of Google’s Universal Analytics (UA or GA3) is not compliant with the EU’s General Data Protection Regulation (GDPR).

The reason for this is that the use of GA3 implies that personal data is transferred to the US, which at that point in time was not on the EU’s list of countries with an adequate level of protection of personal data. This means that the US was not fulfilling the requirements set by the EU/GDPR regarding ‘the protection of fundamental rights and freedoms of natural persons’, which is a key expression in the GDPR.

Furthermore, the Norwegian DPA (Datatilsynet) had as of 23 October 2022 received one (1) complaint regarding Google Analytics. Before any final decision is made, they have to confer with other supervisory authorities in the EEA that have also received similar complaints, in accordance with GDPR Article 60 (the One-Stop-Shop mechanism).

(We regret that links in italics in this article point to web pages in Norwegian.)

Universal Analytics (GA3) replaced by GA4

In October 2020, Google released Google Analytics 4 (GA4), the new version of Google Analytics. In March 2022, Google announced that the Universal Analytics tool would be sunset in July 2023, and that Google would only provide the GA4 tool after 1 July 2023.

The Danish DPA has analyzed GA4 with regard to privacy and concludes on their website that even if improvements have been made, it is still the case that “law enforcement authorities in the third country can obtain access to additional information that allows the data from Google Analytics to be assigned to a natural person.” In other words, GA4 is illegal in terms of the GDPR because servers in the US are involved in the processing, as long as an EU/US adequacy decision is not in place.

The Norwegian DPA decision

The Norwegian DPA reported on their website on 27 July 2023 that they have concluded on the complaint mentioned above. The complaint stems from noyb, which lodged complaints against 101 European websites with the data supervisory authorities in the EEA for their use of GA. One of these was the Norwegian telecom company Telenor, which at that time was using GA.

The conclusion is that personal data was then transferred to the US in violation of GDPR Article 44. In other words, the use of Google Analytics was illegal. Because Telenor discontinued its use of GA on January 15, 2021, the Norwegian DPA in a letter of 26 July 2023 finds a reprimand “to be an adequate and proportionate corrective measure”.

The Norwegian DPA concurs with the Danish authority in concluding that the outcome will be the same regardless of whether Google Analytics 3 or 4 has been used (see above).

What about adequacy EU/US?

On 10 July 2023 the European Commission adopted its adequacy decision for the EU-US Data Privacy Framework and announced a new data transfer pact with the United States.

Accordingly, companies in the EEA should be able to legally use GA as long as Google enters into so-called Standard Contractual Clauses that provide data subjects with a number of safeguards and rights in relation to the transfer of personal data to Google LLC in the US.

However, there is a big “but”: Max Schrems at noyb writes: “We have various options for a challenge already in the drawer, …. We currently expect this to be back at the Court of Justice by the beginning of next year. The Court of Justice could then even suspend the new deal while it is reviewing the substance of it.”

To use the same phrase as in the recent update of our blog post On the EU-US data transfer problem: the last word has obviously not been said.

Continue Reading →

Privacy, GDPR, and Google Analytics

This is blog post #15 in our series on the GDPR.

GDPR

Four European Data Protection Authorities (DPAs) have thus far concluded that the transfer of personal data to the United States via Google Analytics is unlawful according to the General Data Protection Regulation (GDPR).

It is quite certain that other European DPAs, including the Norwegian Data Protection Authority, will follow suit, because all EU/EEA members are committed to complying with the GDPR.

Website analytics vs privacy

Everyone who manages a website is (or should be) interested in the behavior of users across web pages. For this purpose there are analytics platforms that measure activities on a website, for example how many users visit, how long they stay, which pages they visit, and whether they arrive by following a link or not.

To help measure those parameters (and many others) there exists a market of web analytics tools, of which Google Analytics (GA), launched in 2005, is the dominant one. In addition, GA includes features that support integration with other Google products, for example Google Ads, Google AdSense, and many more.

The use of GA implies collecting data that is personal by the GDPR’s definition, for instance IP addresses, which can be used to identify a person even if in an indirect way. GA may use pseudonymization, using its own identifier, but the result is still personal data.
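
As an aside, the limits of such pseudonymization are easy to see. Analytics tools have offered IP “anonymization” by truncating the last part of the address; here is a sketch of that technique (the address is from a documentation range):

```python
import ipaddress

def truncate_ipv4(ip: str) -> str:
    """Zero the last octet, as some analytics tools do before processing."""
    network = ipaddress.ip_network(f"{ip}/24", strict=False)
    return str(network.network_address)

print(truncate_ipv4("203.0.113.42"))  # -> 203.0.113.0
```

The truncated address is less precise, but combined with other data points it can still contribute to identifying a person – which is why such data remains personal data.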

The fact that data collected by GA – some of which is personal – is transferred to the US and processed there has brought the DPAs of Austria, Denmark, France, and Italy to conclude that the use of Google Analytics is not compliant with the GDPR.

None Of Your Business

This conclusion has been reached after complaints submitted by the Austrian non-profit organization NOYB (“my privacy is None Of Your Business”) to a number of European DPAs.

The complaints are based on the Court of Justice of the European Union (CJEU) concluding that the transfer of personal data to the US, without special measures, violates the GDPR.

According to NOYB, the Executive Order recently signed by US President Joe Biden will not solve the problem with EU-US data transfers with regard to the potential for mass surveillance.

DPAs on the case

The Danish DPA writes that even if Google has indicated that they have implemented such measures, these measures are not sufficient “to prevent access to transferred personal data by US law enforcement authorities”.


The Norwegian DPA has thus far received one complaint regarding Google Analytics, and they say on their website that the case is being processed.

They “will place great emphasis on what other countries have come up with”, they say in an email conversation.

Runbox will continue following these developments and keep you updated.

Note: Runbox used GA during a short period between 2011 and 2013. When we became aware of how Google collects data and how they could potentially use these data across their various services, we terminated our use of GA in October 2013. Since then we have used only internal statistics to monitor our service and visitor traffic on our website, and these data are not shared with anyone, in accordance with our Privacy Policy.

Continue Reading →

GDPR in the Wake of COVID-19: Privacy Under Pressure

Tech companies all over the world are rushing to support health authorities in combating the spread of the SARS-CoV2 virus, which is causing the more well-known COVID-19 disease. Whether those companies do so by invitation, by commitment, or by sheer self-interest, country after country is embracing mobile phone tracking and other technological means of tracking their citizens.

It might be worthwhile to take a deep breath and understand what’s currently technologically possible, and what might be at stake.

Tracking the infection

Everyone wants to avoid infection, and every government wishes to decrease the consequences of the pandemic within their country. And modern technology makes it possible to impose on citizens surveillance systems that represent a significant step towards realizing a Big Brother scenario.

In fighting the spread of the virus, it is crucial to know who is infected, track where the infected are located, and inform others that have been, or may come, in contact with the infected. It is precisely in this context that mobile phone tracking is playing a role, and this is currently being explored and implemented in some countries, raising ethical and privacy related questions.

Smartphone tracking apps

Once tracking of individuals’ phones is established for this particular and possibly justifiable reason, it could be tempting for a government or company to use it for other purposes as well. For instance, tracking data could be combined with other personal data such as health data, travel patterns, or even credit card records. Or the location of the infected individuals could be presented on a map along with the persons’ recent whereabouts, perhaps supplemented with warnings to avoid that area. Privacy is under pressure.

A smartphone can also be used as an “electric fence” to alert authorities when someone who is quarantined at home leaves their premises, or to fulfill an obligation from the authorities to send geolocated selfies to confirm the quarantine. Some authorities even provide individuals with wristbands that log their location and share it with the relevant authorities. The examples are many, and they are real, underlining the ongoing pressure on privacy.

Big tech gets involved

Very recently, two of the world’s biggest tech companies, Apple and Google, announced that they are joining forces to build an opt-in contact-tracing tool using Bluetooth technology, drawing on beacon technology as well. The tool will work between iPhones and Android phones, and opens up future applications one cannot currently imagine.

In the first version, the solution is announced as an opt-in API (application programming interface) that will let iOS and Android applications become interoperable, and – here comes crux no. 1 – the API will be open for public health authorities to build applications that support Bluetooth-based contact tracing. A second step is planned – here is crux no. 2 – in which an upcoming update of both iOS and Android will make the API superfluous. Of course, you can opt out, but then you can’t download the operating system update at all.

It is a double-edged sword: It is great that big tech companies are mobilizing resources to help in a public health crisis, but do we really want these companies to potentially know even more about our personal lives (in the name of the common good)? Privacy is under pressure.

Norway’s privacy oriented approach

Norway has also launched a mobile phone application to help limit the spread of the infection, but this development is done under the strict regime of privacy regulations and in accordance with the GDPR. The decision to implement the app was taken by the Government in a regulation containing specifications and strict requirements that ensure adherence to the GDPR, including limiting the app’s use until December 1, 2020.

It should be added that some of the GDPR’s exceptions for authorities have been put into effect because of the extraordinary situation. However, the Norwegian parliament (Stortinget) may terminate the law supporting the regulation at any time if one third of its members so decide.

Even if it might, at least in theory, be feasible to use a similar app from another country, it is crucial that the software is developed from scratch in Norway. This ensures that Norwegian authorities maintain control over all functions and data, and that the privacy regulations in the GDPR are respected.

It is also comforting that the app is developed in cooperation with The Norwegian Data Protection Authority (Datatilsynet). Other countries allow similar apps to store health information, access images or video from cameras, or even establish direct contact with the police. Such functionality is naturally out of the question in Norway’s case.

The app is designed for, and will be used for, tracking the pandemic only, and installation and usage are voluntary. When installed and activated, the app collects location data using GPS and Bluetooth, which is encrypted and stored in a registry.

In the case of a diagnosed infection, health personnel will check whether the person has installed the app. Individuals who have been closer than two meters for more than 15 minutes to the “infected phone” will be notified by text message. The location data is kept for up to 30 days, and when the virus is no longer a threat the app will stop collecting data. App users may at any time delete the app and all personal data that has been collected.
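
The notification criterion (closer than two meters for more than 15 minutes) can be pictured as a simple rule over timestamped proximity observations. This is a hypothetical sketch of the matching logic only; the real app works on encrypted GPS and Bluetooth data and is of course more involved:

```python
from datetime import datetime, timedelta

# Hypothetical observations: (timestamp, estimated distance in meters) between two phones.
observations = [
    (datetime(2020, 4, 20, 12, 0), 1.5),
    (datetime(2020, 4, 20, 12, 5), 1.8),
    (datetime(2020, 4, 20, 12, 10), 1.2),
    (datetime(2020, 4, 20, 12, 16), 1.6),
]

def close_contact(observations, max_dist=2.0, min_duration=timedelta(minutes=15)):
    """True if the phones were within max_dist over a span of at least min_duration
    (naively assuming contact was continuous between observations)."""
    close = [t for t, d in observations if d < max_dist]
    return bool(close) and max(close) - min(close) >= min_duration

print(close_contact(observations))  # True: within 2 m from 12:00 to 12:16
```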

What does it take to succeed?

In order for the tracking to have any impact on the spread of infection, around 60% of the population* must use the application. At the time of writing (late April), 1,218,000 inhabitants had downloaded the application – about 30% of the population for which downloading is allowed (age limit 16 years).

However, the number of downloads is not a good metric, and there are a few obstacles to making the app operable. For instance, the app must be installed on the phone, permission to use GPS and Bluetooth must be given, the four-page privacy declaration* has to be accepted, and the battery must provide sufficient power at all times.

The battery issue turns out to be a problem because of GPS-positioning* and the simultaneous use of Bluetooth, which seems necessary to obtain precise location data.

Furthermore, not everyone is accustomed to using the smartphone functionality that is needed, depending on the user interface. For instance, elderly people and people with vision impairments* may find the app difficult to use. And will the criterion of two meters for more than 15 minutes be too coarse a filter to provide useful results and subsequent notifications to users?

For these reasons, skeptics may wonder whether using the app means trading privacy for uncertain and unreliable infection tracking results.

What the application will provide, even if 60% adoption is not realized, is data for later research. For instance, data from mobile phone operators, who can trace mobile phones’ movements between base stations, could be correlated with instances of infection.

In the name of fighting the pandemic, the main telecommunication companies* are now, with strict privacy considerations, cooperating with The Norwegian Institute of Public Health to analyze movement patterns of the population compared with reported infections. Data is collected in groups of at least 20 people (phones), and identification of individual persons (phones) is not possible*.

Bottom Line

At Runbox we are very concerned about privacy and any type of user tracking that may infringe on this right. While various nations are developing and implementing technological solutions to combat the spread of the disease, we are grateful that we reside in a country with strong privacy traditions. In fact, the first version of personal data protection legislation was implemented in Norway as early as 1978.

It is crucial that The Norwegian Institute of Public Health and The Norwegian Data Protection Authority ensure that the app developers at Simula Research Laboratory (a Norwegian non-profit research organization) attend to both privacy and information security issues in a responsible manner according to the well established tradition in Norway.

When privacy is under threat, as in this case, it is absolutely justified that objections arise. It is often too easy to accept privacy intrusions in the name of a perceived common good.

But one related point could be made as a final remark: Perhaps it would be more appropriate to be concerned about personal data that is collected and shared through one’s use of social media, where personal data is traded and used for purposes that are literally out of control.

* Article unfortunately only available in Norwegian.

Continue Reading →

Data Privacy Day

January 28th is Data Privacy Day, which was initiated by the Council of Europe in 2007. Since then, many advances in protecting individuals’ right to privacy have been made.

The most important of these is the European Union’s General Data Protection Regulation (GDPR) which was implemented on May 25, 2018. Runbox has promoted data privacy for many years, anchored in Norway’s strong privacy legislation.

At Runbox, which is located in the privacy bastion of Norway, we believe that privacy is an intrinsic right and that data privacy should be promoted every day of the year.

Your data is safe in the privacy bastion of Norway

We’re pleased that Data Privacy Day highlights this important cause. Many who use the Internet and email services in particular may think they have nothing to hide, not realizing that their data may be analyzed and exploited by corporations and nation states in ways they aren’t aware of and can’t control.

While threats to online privacy around the world are real and must be addressed, we should not be overly alarmed or exaggerate the problem. Therefore we take the opportunity to calmly provide an overview of Norway’s and Runbox’ implementation of data privacy protection.

Norway enforces strong privacy legislation

First of all, Norway has enacted strong legislation regulating the collection, storage, and processing of personal data, mainly in The Personal Data Act.

The first version of Norway’s Personal Data Act was implemented as early as 1978. This was a result of the pioneering work provided by the Department of Private Law at the University of Oslo, where one of the first academic teams within IT and privacy worldwide was established in 1970.

Additionally, the Norwegian Data Protection Authority, an independent authority, facilitates protection of individuals from violation of their right to privacy through processing of their personal data.

For an overview of privacy related regulations in the US, Europe, and Norway, and a description of how Runbox applies the strong Norwegian privacy regulations in our operations, see this article: Email Privacy Regulations.

Runbox enforces a strong Privacy Policy

The Runbox Privacy Policy is the main policy document regulating the privacy protection of account information, account content, and other user data registered via our services.

If you haven’t reviewed our Privacy Policy yet, we strongly encourage you to do so, as it describes how data is collected and processed while using Runbox, explains what your rights are as a user, and helps you understand what your options are with regard to your privacy.

Runbox is transparent

Runbox believes in transparency and we provide an overview of requests for disclosure of individual customer data that we have received directly from authorities and others.

Our Transparency Report is available online to ensure that Runbox is fully transparent about any disclosure of user data.

Runbox is GDPR compliant

Runbox spent 4 years planning and implementing the EU’s General Data Protection Regulation, starting the process as early as 2014.

We divided the activities implementing the GDPR at Runbox into 3 main areas:

  • Internal policies and procedures
  • Partners and contractors
  • Protection of users’ rights

This blog post describes how we did it: GDPR and Updates to our Terms and Policies

Runbox' GDPR Implementation

More information

For more information about Runbox’ commitment to data privacy, we recommend reviewing the Runbox Privacy Commitment.

Continue Reading →

Profiles, Identities, Privacy or just a different look!

Whether you need to run personal and business email from the same account, or just want to have a different identity for some purposes, Runbox has always provided customisation tools that let you adapt the name and email address on your outgoing messages to suit any occasion. We call these Profiles, and they are based on folders.

Profiles in Runbox 6

In the original design of Runbox it was intended that, where necessary, you could move or automatically filter incoming messages to folders for different purposes, or to help you organize your email better. Along with folders comes a set of preferences for each folder. By default, new folders are created with the same preferences as your Inbox, but you can change this setting so that you can customise these preferences on a per-folder basis.

By far the most commonly customised settings are the Name, From, Reply-to and Signature settings. These in particular allow you to create new “Profiles” so that you can send mail as if you have more than one email account. When you are reading email in a particular folder and you reply or create a new message while that folder is selected, your preferences for that folder are automatically applied to the message you are creating, as sketched below.
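
Conceptually, the per-folder lookup is a fallback chain: use the folder’s own settings where they are set, and the Inbox defaults otherwise. A hypothetical sketch, not Runbox’s actual code:

```python
# Hypothetical data; in Runbox these are your folder preferences.
INBOX_DEFAULTS = {"name": "Jane Doe", "from": "jane@example.com",
                  "reply_to": "jane@example.com", "signature": "Best, Jane"}

FOLDER_PREFS = {
    "Work": {"name": "Jane Doe (Acme)", "from": "jane@acme.example",
             "signature": "Jane Doe, Acme Inc."},
}

def compose_settings(folder: str) -> dict:
    """Folder-specific settings override the Inbox defaults."""
    return {**INBOX_DEFAULTS, **FOLDER_PREFS.get(folder, {})}

print(compose_settings("Work")["from"])    # jane@acme.example
print(compose_settings("Travel")["from"])  # falls back to jane@example.com
```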

As mentioned in a previous blog post, aliases are an excellent way to keep mail separate for different purposes, and they can potentially help you manage any unsolicited mail. Profiles let you take this further and create a whole new identity, including a different name to go along with the alias address. Whenever you are using the Compose window, your aliases and profiles are listed in the drop-down box at the top of the window so you can easily select the one you need.

Identities in Runbox 7

One of the drawbacks of the flexibility the existing interface offers is that it can be quite time-consuming to set up an alias and then have to create a folder for a profile just so you can set up a different “from” name or signature. You might not even want to move or filter messages to a folder, but you would still need to create one if you wanted a different profile.

In Runbox 7 we are going to simplify and streamline this process, and all aliases will automatically become part of an “Identity”. When you create an alias, you will at the same time have the option to update other details attached to that alias to create a different identity, or to accept the default values that will automatically be pre-filled for you, as illustrated below.
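
In other words: create an alias, and an identity record is pre-filled from your existing details, ready to be edited. A hypothetical sketch of that flow, not Runbox’s actual code:

```python
from dataclasses import dataclass, replace

@dataclass
class Identity:
    email: str      # the alias address
    name: str       # display name on outgoing mail
    signature: str

def identity_from_alias(alias: str, defaults: Identity) -> Identity:
    """Pre-fill a new identity from the account defaults; the user may edit it."""
    return replace(defaults, email=alias)

defaults = Identity("jane@example.com", "Jane Doe", "Best, Jane")
print(identity_from_alias("shopping@example.com", defaults))
```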

We are also planning to eventually let you create a folder from the identities interface, together with a corresponding filter, so that when you create an alias and decide to use it as an identity, you can complete all the necessary steps at the same time.

In Runbox 7 these identities will replace profiles, improving on a feature we have offered for a long time – one that is a key part of the email service Runbox offers.

For more information about Runbox 7, see some of our previous blog posts below:

We still have some open spots in the beta testing, so if you would like to participate send an email as soon as possible to support@runbox.com with the subject “Runbox 7 Webmail beta test”.

Continue Reading →