
How Can We Protect Our Personal Identifiable Information (PII)?
The rapid growth of artificial intelligence, including large language models (LLMs) and their associated chatbots, presents a new challenge for data privacy. Will our collected personal information become part of a model's training data, or be put to some other nefarious use? Will our postings, messages, and emails be shared with law enforcement organizations without our knowledge of how they are being used? Can chatbots connect the diverse threads of our online lives and share them with anyone willing to pay, or worse, with a bad actor who exposes your intimate information for anyone to see?
What are the risks that our data might be bought, sold, and then used in AI systems?
AI systems carry many of the same, or similar, privacy risks we have faced during the past two or three decades of Internet commercialization and unabated data collection. The main difference is the scale at which AI systems ingest, retain, share, and sell data.
It is practically impossible for people using online products and services to escape systematic digital surveillance across most aspects of digital life. AI may well make matters even worse until there are legal and regulatory controls in place to protect personal data.
There is also the risk of others using your data and AI tools for antisocial purposes. For example, generative AI tools trained to "scrape" the Internet can capture personal information, such as PII, about the people whose data is collected.
This type of data helps bad actors run spear-phishing campaigns for identity theft or fraud, or use AI voice cloning to impersonate people for fraudulent purposes.
It is not always the "bad actors." Predictive AI systems are being used to screen candidates and help employers decide whom to interview for open jobs. A famous example is Amazon's hiring-screening tool, which was scrapped after it showed bias against women.
(Insight – Amazon scraps secret AI recruiting tool that showed bias against women | Reuters)
Another example is using facial recognition to identify and apprehend people suspected of committing a crime. It seems like facial recognition should work fine because it will catch the bad guys. Instead, researchers found bias inherent in the data used to train facial recognition systems (Unmasking the bias in facial recognition algorithms | MIT Sloan).
With AI innovation comes a pressing concern: AI privacy
As AI systems process vast amounts of personal information, the line between utility and intrusion becomes increasingly blurred. Companies using AI business tools, or developing their own, must be careful to balance protecting sensitive information with maximizing the AI's capabilities.

Do not become complacent with the idea that companies are taking your data and that it is too late to do anything
Ten to twenty years ago, most people shopping online were not too worried about what companies captured about them, because the convenience seemed worth it. However, when browsing online, my data should not be collected unless I make an affirmative choice, such as signing up for a service or creating an account. And even then, my data should not be considered public unless I have agreed to share it.

As a general approach to data privacy protection, provide only the minimum data requested, and limit what companies and organizations can gather from you or your organization.
Data minimization is a fundamental principle in data privacy and protection. It means collecting only the bare minimum of personally identifiable information (PII) needed, and retaining it for only a short time. Think of it as packing for a trip: not a bulky suitcase filled with unnecessary items, but a careful selection of only what is essential for the specific journey. In practice, this means:

- Not collecting what is not needed in the first place.
- Deleting what is no longer needed after a set retention period.
- Substituting sensitive information elements (names, emails, phone numbers, Social Security numbers) with tokens.
- Data redaction techniques, such as working with partial data fields where applicable. For example: only the last four (4) digits of a credit card number, or a masked phone number.
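The masking and tokenization ideas above can be sketched in a few lines of Python. This is a minimal illustration: the function names and `tok_` token format are invented for the example, and a real tokenization service would store the value-to-token mapping in a secure vault rather than deriving it from a hash.

```python
import hashlib

def mask_card(card_number: str) -> str:
    """Keep only the last four digits of a card number (data redaction)."""
    digits = card_number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def tokenize(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive value with an opaque token (illustrative only).

    A real tokenization service keeps the mapping in a secure vault;
    here we simply derive a stable token from a salted hash.
    """
    return "tok_" + hashlib.sha256((salt + value).encode()).hexdigest()[:12]
```

The key property is that downstream systems can still join and deduplicate records on the token without ever seeing the underlying PII.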
How is Digital Privacy Protected?
Both organizations and individuals can do their part to protect digital data privacy. For organizations, that starts with having the right security systems in place, hiring the right experts to manage them, and following data privacy laws. Here are some other general data protection strategies to help enhance your data privacy:
- Limit Sharing Your Private Data – Share your private data only with organizations that have a strong data security statement. While it may be boring, it is worth reading that statement to understand how your data is being handled.
- Encryption – Most legitimate websites use transport encryption, historically called "secure sockets layer" (SSL) and now TLS, which encrypts data as it is sent to and from a website. This keeps attackers from accessing that private data. Look for the padlock icon in the URL bar, and the "s" in "https://", to make sure you are conducting secure, encrypted transactions online.
- Strong Passwords – Password strength is a measure of how effectively a password resists guessing or brute-force attacks. A strong password is important for the protection of any account.
- Multi-Factor Authentication – Multi-factor authentication (MFA; two-factor authentication, or 2FA, along with similar terms) is an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more pieces of evidence (or factors) to an authentication mechanism.
- Detection of Threats – AI supports several forms of threat detection:
  - Anomaly Detection: identifying unusual patterns that may indicate a security breach.
  - Phishing Detection: spotting phishing attempts by analyzing emails and web pages.
  - Malware Detection: using machine learning models to recognize malware signatures.
  - Network Security: monitoring network traffic in real time to detect and mitigate threats.
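The Strong Passwords point above can be made concrete with a small scoring sketch. The rules here (length of at least 12 characters plus mixed case, digits, and punctuation) are common baseline heuristics, not an official standard:

```python
import string

def password_strength(password: str) -> int:
    """Score a password from 0 to 4 using simple heuristics.

    One point each for: length >= 12, mixed upper/lower case,
    at least one digit, at least one punctuation character.
    """
    score = 0
    if len(password) >= 12:
        score += 1
    if (any(c in string.ascii_lowercase for c in password)
            and any(c in string.ascii_uppercase for c in password)):
        score += 1
    if any(c.isdigit() for c in password):
        score += 1
    if any(c in string.punctuation for c in password):
        score += 1
    return score
```

A checker like this can nudge users toward longer, more varied passwords, though length generally matters more than complexity alone.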
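The multi-factor authentication point above often takes the form of a time-based one-time password (TOTP) generated by an authenticator app. A minimal sketch of the TOTP algorithm from RFC 6238, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    # 8-byte big-endian counter: number of time steps since the epoch
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and a timestamp of 59 seconds, this produces the published 8-digit test value 94287082, which is how you can verify an implementation against the standard.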
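The anomaly detection item in the threat-detection list above can be illustrated with a toy z-score filter. Real systems use far richer features and models, but the core idea is the same: flag observations that deviate sharply from the norm.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return the values whose z-score exceeds the threshold.

    A toy illustration of anomaly detection: values far from the
    mean, measured in standard deviations, are flagged as unusual.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

Applied to, say, login counts or bytes transferred per hour, a spike like a single huge transfer stands out from the baseline and can trigger an alert for human review.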
AI has been a hot topic of discussion for many years. Combining AI and cybersecurity, AI systems can be trained to enable automatic threat alerts and prevention, identify new strains of malware, and protect sensitive data.
By Dave Broucek, Trusted Advisor and Cybersecurity