How Are Named Entity Recognition Models (NER) Used in Data Anonymization for LLMs?

Intro

One of the biggest issues for companies these days is figuring out how to protect their sensitive data from ending up in LLMs. In fact, many companies that work with sensitive data have gone so far as to ban LLMs such as ChatGPT, Claude and others from their corporate networks. The fear is that an employee will inadvertently send sensitive data to an LLM as part of a natural language prompt. Once that prompt is sent to the LLM, it's nearly impossible to remove the sensitive data (even if the model provider says that they won't use it in their training data). However, outright banning LLMs means that employees can't take advantage of the power of LLMs to do their jobs more efficiently and effectively. Of course, companies can run their own models, but that's difficult and only tenable for a very small subset of companies.

So the question then is - how can companies allow their employees to use LLMs while protecting sensitive data? A possible solution is Named Entity Recognition (NER) models.

Let's jump in.

What are Named Entity Recognition (NER) models?

NER is a subtask of Natural Language Processing (NLP) that aims to identify and classify entities in text. Here are some examples of entities:

  • Person names (e.g., John Doe)
  • Locations (e.g., New York City)
  • Organizations (e.g., Google)
  • Dates (e.g., January 1st, 2024)
  • Monetary values (e.g., $500)
  • Identifiers (e.g., SSN, license number)

NER models are machine learning models that implement NER and can detect and classify these entities in structured and unstructured text. Most NER models come pre-trained to recognize a standard set of entity types, and most frameworks let you fine-tune them on annotated examples to recognize new entity types. In the context of LLMs, this means being able to detect and classify text in a prompt that may or may not be sensitive. Let's look at an example below.
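
To make the output shape concrete, here's a toy sketch. A real NER model (e.g., in spaCy or a Hugging Face token classifier) learns entity patterns from annotated data; this hypothetical version just uses a hard-coded lookup table, but it returns the same kind of result a trained model would: labeled entity spans with character offsets.

```python
import re

# Toy lookup table standing in for a trained NER model. A real model
# generalizes to unseen names; this fixed list is only for illustration.
GAZETTEER = {
    "John Doe": "PERSON",
    "New York City": "LOCATION",
    "Google": "ORGANIZATION",
}

def tag_entities(text: str) -> list[tuple[str, str, int, int]]:
    """Return (entity_text, label, start, end) spans found in text."""
    spans = []
    for phrase, label in GAZETTEER.items():
        for match in re.finditer(re.escape(phrase), text):
            spans.append((phrase, label, match.start(), match.end()))
    # Sort by start offset so spans read left to right.
    return sorted(spans, key=lambda span: span[2])

print(tag_entities("John Doe moved to New York City to work at Google."))
```

The (text, label, start, end) shape matters: the character offsets are what let downstream code surgically mask or replace each entity in the original string.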

How are NER Models Developed?

Developing an NER model can be a pretty complex task, but let's break it down into the main steps.

  1. Data Collection and Annotation: The first and most critical step is collecting a dataset relevant to the types of entities you want to identify. If the NER model is being developed for medical text, the dataset should contain medical documents. Once the data is collected, it must be annotated. Annotation is the process of marking up the text with the correct entity labels (e.g., tagging "John Doe" as a person). This annotated data acts as the training set that the model learns from.
  2. Feature Engineering: Traditionally, NER models were trained on manually crafted features built from static embeddings such as GloVe and Word2Vec. Modern NER models use deep learning to generate features automatically from contextual embeddings such as BERT and RoBERTa.
  3. Model Selection: There are different types of models used for NER, from classical Conditional Random Fields (CRFs) to deep learning models such as Recurrent Neural Networks (RNNs) and Transformer-based models such as BERT.
  4. Training: The next step is to train the model on the annotated dataset.
  5. Evaluation and Fine-Tuning: After training, the model's performance is evaluated using metrics such as precision, recall, and F1-score. Based on these evaluations, the model is fine-tuned by adjusting hyperparameters or tweaking the architecture to optimize performance.
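
The evaluation step can be sketched in a few lines. This is a simplified, entity-level scoring function (real evaluations often also handle partial span overlaps); it treats an exact match on both entity text and label as a true positive:

```python
def ner_metrics(predicted: set, actual: set) -> dict:
    """Entity-level precision, recall, and F1.

    Each set contains (entity_text, label) pairs; a pair counts as a
    true positive only if both the text and the label match exactly.
    """
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical model output vs. gold annotations: one false positive.
predicted = {("Angela", "PERSON"), ("Metformin", "MEDICINE"), ("Boston", "LOCATION")}
actual = {("Angela", "PERSON"), ("Metformin", "MEDICINE")}
print(ner_metrics(predicted, actual))  # precision = 2/3, recall = 1.0
```

Precision penalizes over-flagging (labeling non-sensitive text as sensitive), while recall penalizes missed entities; for anonymization use cases, recall is usually the metric you can least afford to sacrifice.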

How do NER Models protect sensitive data?

Let's say that you're building an app that transcribes doctor's visits into text and uses OpenAI for summarization. A lot of sensitive data (PII and PHI) is spoken during a doctor's visit, and before you send that data to OpenAI to summarize, you want to anonymize it. In this case, an NER model is a great way to detect and classify that sensitive data.

For example, take the sentence: "Angela, you're 25 years old, right? I'm going to prescribe you Metformin to help with your high blood pressure."

An NER model would identify:

  • Person: Angela
  • Age: 25
  • Medicine: Metformin
  • Diagnosis: High blood pressure

In this sentence, the NER model has identified the Person and Age entities, which are PII (Personally Identifiable Information), and the Medicine and Diagnosis entities, which are PHI (Protected Health Information). This gives developers the ability to handle these entities before the text is sent to the LLM: they can mask them, redact them, or replace them with realistic synthetic values.
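
The masking step can be sketched as follows. This is a minimal, hypothetical example that assumes the NER model has already produced (entity_text, label) pairs; it uses naive string replacement, whereas a production system would replace by the model's character offsets to avoid clobbering overlapping or repeated substrings.

```python
def mask_entities(text: str, entities: list[tuple[str, str]]) -> str:
    """Replace each detected entity with a placeholder like <PERSON>.

    `entities` is a list of (entity_text, label) pairs, e.g. the
    output of an NER model run over the transcript.
    """
    for entity_text, label in entities:
        text = text.replace(entity_text, f"<{label}>")
    return text

transcript = ("Angela, you're 25 years old, right? I'm going to prescribe "
              "you Metformin to help with your high blood pressure.")
detected = [("Angela", "PERSON"), ("25", "AGE"),
            ("Metformin", "MEDICINE"), ("high blood pressure", "DIAGNOSIS")]

print(mask_entities(transcript, detected))
# <PERSON>, you're <AGE> years old, right? I'm going to prescribe
# you <MEDICINE> to help with your <DIAGNOSIS>.
```

The masked transcript keeps enough structure for the LLM to summarize the visit, while the raw PII and PHI never leave your network.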

Using NER Models in the Real World

Generally NER models are used in free-form text since that's where the most ambiguity about sensitive data arises. Here are some examples of where we've seen NER models being used in the real world:

  1. Healthcare/Healthtech - the example we gave above is a real example from a customer that we work with. They transcribe notes and then summarize them using an LLM. Before the data goes to the LLM for summarization, they strip out any sensitive information.
  2. Chatbots - customer-facing chatbots are very popular these days, and many companies want to make sure that customers don't accidentally leak their own sensitive information to chatbots backed by LLMs. In these situations, they use NER models to detect and redact PII before it gets sent to the LLM.
  3. Legal - as more law firms adopt LLMs for casework, protecting sensitive data is crucial. Using NER models to detect legal-specific entities can help reduce the risk of leaking sensitive information.

These are just a few examples, but there are many other use cases where free-form text is sent to an LLM for summarization, transcription, parsing or some other operation. In many of those cases, if sensitive data is at play, it may make sense to use an NER model to identify, classify and redact that data.

Wrapping up

If you're working with sensitive data in free-form text, you should consider Named Entity Recognition (NER) models as a way to detect, classify and anonymize that text before sending it to an LLM. This is especially true in agentic workloads, where different agents pass prompts and data between each other. As LLMs become even more intertwined with our infrastructure and workflows, it's critical to protect users' sensitive data.

