AI, LLMs, and Boxy

Understanding what they are — and what they are not

At Runbox we’ve been thinking a lot about what we hear in the news about Artificial Intelligence (AI), and what it means for us as individuals, as a privacy-conscious business, and for our customers. We’ve had many discussions and debates about whether we could, or should, use AI to improve what we do.

For many years our website has stated that:

“We believe that communication is a fundamental principle, and inherently good.”

Communication, done well, builds understanding and brings people together. That matters to us.

With that in mind we set out to understand where AI might fit into what we do, and where it can’t, or shouldn’t, play a part.

LLM and AI

We decided that one area where AI could help us and our customers would be making it easier to find information about Runbox and our services: how to get set up as a new customer, or how to troubleshoot problems as an existing one. This seemed like a good way to improve communication and understanding, and is very much in line with the values stated above.

There is understandable public concern about artificial intelligence — what it can do, what it knows, and what it might learn about us. This blog post sets out some clear distinctions to help make sense of those concerns: where they are exaggerated, and where they are real.

Artificial Intelligence (AI)

Artificial Intelligence is a broad umbrella term covering any computer system designed to mimic aspects of human intelligence. Most AI that people encounter every day is narrow, task-specific, and largely invisible — spam filters, fraud detection, music recommendations, and the autocomplete on your phone are all forms of AI.

The word “AI” has become a catch-all in popular culture, which is part of why it generates so much anxiety. It is worth remembering that not all AI is the same, and most of it is doing something far more mundane than the headlines suggest.

Search Engines

In 2019, Google launched BERT (Bidirectional Encoder Representations from Transformers), which enhanced web search by recognising what someone meant when they entered a natural-language question such as “Why does my router have a flashing red light?”. Previously, search largely matched the words in a query to websites containing those same words; BERT instead allowed search engines to recognise what you were asking for and adapt the search accordingly, finding the websites that best matched your needs. In other words, search engines have been interpreting natural language from humans for years.

Large Language Models (LLMs)

Large Language Models are a specific and relatively recent type of AI. They are trained on vast quantities of text — books, websites, articles — and in doing so develop a sophisticated statistical model of language. Put simply, they learn what words and ideas tend to follow other words and ideas. This allows them to interpret human input and also generate plausible-sounding responses (whether they are correct or not).
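The idea of learning “what words tend to follow other words” can be illustrated with a toy sketch. This is not how a real LLM is built — real models use neural networks with billions of parameters — but the statistical intuition is the same, and the tiny “training corpus” here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows which in a tiny "training corpus".
# Real LLMs learn far richer patterns, but the core idea -- predicting a
# likely next word from what came before -- is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# After "the", this corpus contains "cat" twice, "mat" once and "fish" once,
# so "cat" is the most likely continuation.
most_likely = next_word_counts["the"].most_common(1)[0][0]
```

Scaled up by many orders of magnitude, counting patterns like this is what lets a model produce fluent, plausible-sounding text — without any guarantee that the text is true.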

Think of it this way. A search engine is like a librarian who finds books you need. An LLM is more like someone who has read millions of books and will write you a brand new answer based on everything they’ve read. The search engine points you to sources you can check. The LLM just writes the answer for you — which means you have to trust it more.

And that’s where things get tricky. LLMs are very good at sounding correct. But because they work on patterns — not actual understanding — they can sometimes get things wrong and still sound completely convincing. Some LLM apps will show you the sources they used, but most people don’t bother checking those sources.

It’s in this difference that the dangers lie. We often use search engines and LLMs (such as Copilot, Gemini and ChatGPT) in the same way: to find answers to our questions. We rely on those answers being correct, and for the most part LLMs are very good at producing factual ones. However, they remain statistical models that can occasionally misinterpret what we ask for, or produce a reasonable-sounding reply that is entirely wrong.

Two things are important to understand about LLMs:

  • Training happened once, in the past. The model’s knowledge was fixed at the point its training ended. It is not continuously reading the internet or absorbing new information.
  • After training, the model’s knowledge is frozen. When you have a conversation with an LLM-powered tool, the model itself is not being updated or changed by what you say.

How LLMs learn about us

In short, they don’t.

Some implementations of LLMs that you may have come across will appear to learn about you or your preferences, or may refer back to previous things said in conversations with them. This kind of behaviour is a feature of the app, program, or system in which the LLM is embedded. For example, many familiar apps like Copilot, Gemini and ChatGPT will store information in the account you have with them or inside the app so that responses can be personalised.

This is necessary because, as mentioned above, the LLM (the AI, if you prefer) is not learning about you; the app has to supply this personalised information and the previous conversation as context. It is sent every time you send a message, because the LLM can’t learn anything new and needs the full context of the current conversation, plus your preferences, in order to respond usefully.
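The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the API of any particular product: the model is stateless, so the app stores the conversation and preferences itself and resends all of it with every new message.

```python
# Hypothetical sketch of how a chat app provides "memory": the model itself
# is stateless, so the app resends the whole conversation (plus any saved
# preferences) with every new message.
def build_request(preferences, history, new_message):
    """Assemble the full context the model needs to produce one reply."""
    history = history + [{"role": "user", "content": new_message}]
    return {
        "system": f"User preferences: {preferences}",  # personalisation lives in the app
        "messages": history,                           # the whole conversation, every time
    }

history = [
    {"role": "user", "content": "What is IMAP?"},
    {"role": "assistant", "content": "IMAP is a protocol for reading mail on the server."},
]
request = build_request({"language": "en"}, history, "How do I enable it?")
# The request now contains all three messages, not just the newest one.
```

Nothing about the user persists inside the model itself; delete the stored history and the “memory” is gone.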

In many apps, you can turn this memory off, but the responses from the LLM may become less useful. As is often the case, you have to make informed decisions about how you want your data used.

Boxy — the Runbox Support Assistant

Boxy is a chatbot built on top of an AI/LLM, configured specifically to help Runbox customers with support questions. Understanding what Boxy is — and is not — should provide some reassurance.

What Boxy does NOT do

  • It does not learn from your conversations. Each session is independent; nothing is retained between them.
  • It has no access to your Runbox account, emails, or personal data.
  • It is not browsing the web or cross-referencing information between users.
  • It cannot remember anything you told it in a previous session.

Boxy’s knowledge comes from a curated knowledge base that Runbox controls and can inspect at any time. It works only with what is in the current conversation and that knowledge base — nothing more.

Boxy works by matching your question against Runbox’s extensive support knowledge base and generating a response based on patterns in that information. It can ask clarifying questions and follow a conversation (it gets sent up to the last 10 messages in your conversation with it each time you reply), but it does not reason about your situation the way a person would — it is responding only to what you have written, not drawing on the judgement or experience a person would apply. This means Boxy is reliable for common, well-documented questions and less so for unusual or account-specific ones. We think that distinction matters, and we would rather be clear about it than not.
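The “up to the last 10 messages” behaviour is a simple sliding window. A minimal sketch — the limit of 10 comes from the text above; everything else is illustrative:

```python
WINDOW = 10  # Boxy receives up to the last 10 messages, per the text above

def context_window(conversation, limit=WINDOW):
    """Keep only the most recent messages; older ones are simply not sent."""
    return conversation[-limit:]

conversation = [f"message {i}" for i in range(1, 26)]  # a 25-message conversation
sent = context_window(conversation)
# Only messages 16-25 are included; messages 1-15 fall outside the window.
```

Anything outside the window is invisible to Boxy on that turn, which is another reason it cannot accumulate knowledge about a user.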

The AI model that powers Boxy uses probability to determine which words to use in a response. A small element of randomness is introduced to make replies feel more natural rather than mechanical. This means that if you ask Boxy the same question more than once, it may produce differently worded answers that contain the same information.
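This can be illustrated with weighted random choice. The candidate words and their probabilities below are invented for illustration; the point is only that sampling, rather than always taking the single most likely word, produces varied wording:

```python
import random

# Illustrative sketch: the model assigns probabilities to candidate words and
# samples from them, rather than always picking the single most likely one.
# These candidates and weights are invented for the example.
candidates = ["password", "passphrase", "login"]
weights = [0.6, 0.3, 0.1]

def pick_word(rng):
    """Sample one word according to the weights."""
    return rng.choices(candidates, weights=weights, k=1)[0]

# The same question can therefore yield differently worded answers:
rng = random.Random(0)
words = [pick_word(rng) for _ in range(5)]
```

Over repeated runs the most likely word appears most often, but not every time — which is why two replies to the same question rarely match word for word.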

To keep an eye on quality and make sure Boxy is behaving as expected, Runbox retains a log of Boxy’s replies for 30 days. These are stored on Runbox’s own servers in Norway. Your side of the conversation — the messages you type — is never logged or retained.

In summary

Boxy is a well-configured customer service assistant with a fixed brief. It has no memory, no curiosity, and no agenda. It is a practical tool for answering Runbox support questions — not a system that watches, learns from, or retains anything about the people who use it.

We take pride in our human support team and we are always here to help with general questions or account-specific issues. Boxy’s purpose is to provide quick responses to queries so customers can get help fast, day or night. You can always contact Support in the usual way by emailing us at support@runbox.com or through the Runbox Support website.

You can read more detail in the privacy documentation that accompanies Boxy.

— The Runbox Team
