
PII anonymization: Securing customer privacy with DialpadGPT

Dialpad's Ai Team

Building and training Dialpad's Ai


Tags: Dialpad, Artificial Intelligence


In a world dominated by Large Language Models (LLMs), there are growing concerns about the potential risks associated with handling sensitive data. At the forefront of these concerns, particularly for companies like Dialpad that work with sensitive customer data, is personally identifiable information (PII).

In this blog post, we’ll cover the challenges that come with training an LLM to process PII while safeguarding user privacy—and the processes we follow here as we continue building out DialpadGPT, a business conversation-focused LLM.

The challenge: Excluding PII while training an LLM to process, well, PII

LLMs are black-box systems, and there's no getting around that: what you put into them can come out in unpredictable ways. That makes it important to be cautious about the data used to train or fine-tune these models.

That principle applies directly to PII. If we don't want the model to generate information that could identify customers, we must avoid including any PII in the training data.

However, this poses a challenge: While we aim to exclude PII during training, we still need the model to effectively process PII during real-world usage.

The solution: In-house PII anonymization

At Dialpad, we've tackled this challenge by training our own LLM, DialpadGPT, to offer generative AI capabilities alongside Dialpad's existing AI features.

To safeguard customer privacy, we've employed an in-house PII anonymization solution that automatically removes PII from the training data. We've rigorously evaluated this solution to ensure that no customer PII is inadvertently mixed into DialpadGPT's training.

How we did it

Step 1: Identifying PII

The Dialpad Ai team identifies more than 40 different types of PII, including names, addresses, and identifying numbers. We use a combination of pattern-based and machine learning approaches to detect and classify these PII instances.

Pattern-based recognition targets numeric data (like phone numbers or credit card details) and structured data (like email addresses, including formats unique to spoken conversation, such as addresses spelled out in words).
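
To make this concrete, here's a minimal Python sketch of what pattern-based detection can look like. The patterns below are illustrative stand-ins covering just three PII types, not Dialpad's production rules:

```python
import re

# Illustrative patterns only -- not Dialpad's production rules.
PATTERNS = {
    "PHONE_NUMBER": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # Written email addresses, plus a spoken form ("jane dot doe at dialpad dot com").
    "EMAIL_ADDRESS": re.compile(
        r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"
        r"|\b\w+(?: dot \w+)* at \w+(?: dot \w+)+\b",
        re.IGNORECASE,
    ),
}

def find_pattern_pii(text):
    """Return (label, start, end) spans for every pattern match in the text."""
    return [
        (label, m.start(), m.end())
        for label, pattern in PATTERNS.items()
        for m in pattern.finditer(text)
    ]

print(find_pattern_pii("Call me back at 555-123-4567 or email jane dot doe at dialpad dot com"))
```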

Machine learning model-based recognition targets names, where classification depends more heavily on context. For example, the word “Stanley” could be a person, a company, or a place name, and each carries a different level of PII risk, so classification based on learned context is required.
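
Dialpad trains its own models on business-conversation data for this step, but the idea can be sketched with an off-the-shelf named entity recognizer such as spaCy's English model, used here purely for illustration:

```python
import spacy

# Off-the-shelf English NER model, used only to illustrate context-dependent
# classification; Dialpad's production models are trained in house.
nlp = spacy.load("en_core_web_sm")

for sentence in [
    "Stanley will join the call in five minutes.",
    "We ordered the parts from Stanley Black & Decker.",
]:
    doc = nlp(sentence)
    # The same surface form, "Stanley", can come back with different entity
    # labels (e.g. PERSON vs. ORG) depending on the surrounding context.
    print(sentence, "->", [(ent.text, ent.label_) for ent in doc.ents])
```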

Our approach to PII detection during LLM training is deliberately aggressive, emphasizing over-detection to avoid any risk of missing real PII. This gives us confidence that no PII is seen during model training, and therefore that none can be unintentionally included in model output.

Step 2: Anonymizing PII

When different PII instances are identified in transcripts, we replace them with generic tokens. For example:

“This is Ben calling from Dialpad” would be replaced with “This is [PERSON_NAME] calling from [ORGANIZATION_NAME].”

Additionally, when multiple PII instances appear in a single conversation (like multiple names being mentioned), we give each masked piece of information a unique numeric identifier to maintain contextual information. For example:

“This is Ben calling from Dialpad, may I speak to Eddie?” would be replaced with “This is [PERSON_NAME_1] calling from [ORGANIZATION_NAME], may I speak to [PERSON_NAME_2]?”
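
Under the hood, this step amounts to a span-replacement pass over the transcript. Here's a simplified sketch (the span offsets and token format are illustrative; for brevity it numbers every token, whereas the examples above only number a type when it appears more than once):

```python
from collections import defaultdict

def anonymize(text, spans):
    """Replace detected PII spans with numbered placeholder tokens.

    `spans` holds (label, start, end) tuples, e.g. from a detection step.
    Repeated mentions of the same value reuse the same number, so the
    conversation keeps its structure after masking.
    """
    counters = defaultdict(int)  # next index to hand out, per label
    tokens = {}                  # (label, surface form) -> placeholder token
    # Assign numbers in reading order so the first mention gets index 1.
    for label, start, end in sorted(spans, key=lambda s: s[1]):
        key = (label, text[start:end])
        if key not in tokens:
            counters[label] += 1
            tokens[key] = f"[{label}_{counters[label]}]"
    # Replace from the end of the string so earlier offsets stay valid.
    for label, start, end in sorted(spans, key=lambda s: s[1], reverse=True):
        text = text[:start] + tokens[(label, text[start:end])] + text[end:]
    return text

transcript = "This is Ben calling from Dialpad, may I speak to Eddie?"
spans = [("PERSON_NAME", 8, 11), ("ORGANIZATION_NAME", 25, 32), ("PERSON_NAME", 49, 54)]
print(anonymize(transcript, spans))
# This is [PERSON_NAME_1] calling from [ORGANIZATION_NAME_1], may I speak to [PERSON_NAME_2]?
```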

Step 3: Evaluating PII anonymization

To assess the effectiveness of our anonymization method, we've conducted both quantitative and qualitative analyses.

Residual Privacy Risk

We introduced a concept called "Residual Privacy Risk": we annotate anonymized transcripts to identify any PII that was missed. A Residual Privacy Risk score is then assigned based on the type of missed PII, using risk levels we define together with the Dialpad legal team. For example, an organization name poses lower risk than a person's full name.

The Residual Privacy Risk score for each conversation is calculated by accumulating its unique instances of missed PII. Anonymization succeeds when the mean plus standard deviation of residual risk across the manually annotated data falls below a defined threshold: the least identifiable information one could use to reveal the identity of a person.

[Figure: an example Residual Privacy Risk calculation]
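
To make the arithmetic concrete, here's a minimal sketch of the check under assumed weights and threshold (the real risk levels and threshold are set with Dialpad's legal team and aren't published here):

```python
from statistics import mean, stdev

# Illustrative risk weights per type of missed PII -- not Dialpad's actual values.
RISK_WEIGHTS = {"PERSON_NAME": 1.0, "PHONE_NUMBER": 0.8, "ORGANIZATION_NAME": 0.2}
THRESHOLD = 1.0  # illustrative threshold for "least identifiable information"

def residual_risk(missed_pii):
    """Score one conversation by accumulating its unique missed-PII instances."""
    unique = set(missed_pii)  # (label, surface form) pairs found by annotators
    return sum(RISK_WEIGHTS.get(label, 1.0) for label, _ in unique)

def anonymization_passes(annotated_conversations):
    """Pass when mean + standard deviation of residual risk is below the threshold."""
    scores = [residual_risk(conv) for conv in annotated_conversations]
    return mean(scores) + stdev(scores) < THRESHOLD

annotated = [
    [],                                                               # nothing missed
    [("ORGANIZATION_NAME", "Acme Corp")],                             # low-risk miss
    [("ORGANIZATION_NAME", "Acme Corp"), ("PHONE_NUMBER", "555-0100")],
]
print(anonymization_passes(annotated))  # True under these illustrative values
```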

Once our anonymization process passes the Residual Privacy Risk check, we proceed to red teaming.

Red teaming

The term “red team” arose within a military context: A red team is a group that pretends to be an adversary, creating attacks on a given organization as a form of preparation for real attacks. In the context of LLMs, this means looking for ways through which a model can be made to produce harmful output, including (but not limited to) PII leakage.

Before data enters DialpadGPT for training, we subject it to a meticulous red teaming process:

  1. Obtain sample outputs.

  2. Run these outputs through our anonymization pipeline.

  3. Red team members search for any remaining information that could identify individuals.

  4. Adjustments are made to the anonymization pipeline.

  5. Output from the pipeline is red teamed again, and the process is repeated until anonymization is successful.

“Success” means that the red team isn’t able to identify an individual based on the unmasked remaining information. Once this process is complete, the anonymized data is fed into the DialpadGPT model for training and fine-tuning.

Future iterations

While anonymizing data with generic tokens like [PERSON_NAME] is effective, we are continuously striving to enhance DialpadGPT's ability to handle real data containing names, addresses, numbers, and other PII—all without jeopardizing customer privacy.

Our next step involves "tricking" the model into learning PII patterns without using actual customer data. We've created a database of believably human names and are experimenting with replacing real names with fake ones from that database. We're also exploring techniques to scramble numeric data so that it looks realistic but is no longer real PII.
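
As a rough sketch of what that substitution could look like (the name pool and scrambling rule below are illustrative placeholders, not the actual database or method):

```python
import random

# Tiny illustrative pool of believable fake names; the real database is far larger.
FAKE_NAMES = ["Avery Collins", "Jordan Blake", "Priya Nair", "Marcus Webb"]

def substitute_name(real_name, rng):
    """Swap a detected real name for a believable fake one."""
    return rng.choice(FAKE_NAMES)

def scramble_number(number, rng):
    """Replace every digit with a random one while keeping the original format."""
    return "".join(str(rng.randint(0, 9)) if ch.isdigit() else ch for ch in number)

rng = random.Random(7)  # seeded so the example is reproducible
print(substitute_name("Ben", rng))
print(scramble_number("555-123-4567", rng))
```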

These experiments will let us try fine-tuning DialpadGPT with fake data and see whether performance improves on tasks that require real names, such as summaries and action items.

See how it works

Book a personal walkthrough of Dialpad Ai with our team, or take a self-guided interactive tour of the app first!