Every single aspect of your life is available on-line. Every purchase you’ve ever made. The prescriptions you had filled. Every text message you’ve sent. Your phone calls. Any picture you took with your phone. Every single thing you’ve liked, retweeted, or commented on across Facebook, Instagram, and Twitter. Your browsing history. It’s all out there in the cloud. Don’t believe me? If you use Google Maps, go have a look at your timeline. Every mile you’ve ever driven, walked, or traveled is there. Every single store or destination you visited. How many minutes and hours you spent driving or walking. It’s a bit disconcerting the first time you see it if you didn’t realize all that info was being harvested.
Your devices are listening to you 24×7, vacuuming up everything that’s said. How do you think those recommendations on YouTube or ads in your Instagram feed get there? All that data is being collected and mined for information about you. Why? Right now, it’s mostly so companies can market and sell to you. That information is collated and sold to anyone who’s willing to pay for it. Increasingly however, that information is being used to authenticate who you are.
I mentioned it previously: Dynamic Knowledge-Based Authentication. Companies buy up all this personal information about you and then use it to generate authentication questions on the fly. It’s presumably more secure than the previous method, Static Knowledge-Based Authentication. The static version was the canned questions you’d set up in advance and later be asked: What was the name of your first pet? What city were you born in? The static version has become too easy to hack, so dynamic questions are now generated from the massive databases of information collected about you.
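To make that concrete, here’s a toy sketch of how a dynamic KBA question might be generated. Everything in it — the data-broker record, the field names, the decoy addresses — is invented for illustration; real systems pull from far larger databases.

```python
import random

# Hypothetical data-broker record for one person (invented for this sketch).
profile = {
    "addresses": ["114 Elm St", "2200 Oak Ave", "87 Pine Rd"],
    "vehicles": ["Honda Accord", "Ford F-150"],
    "lenders": ["First National", "Coastal Credit Union"],
}

def dynamic_kba_question(profile):
    """Build a multiple-choice question from purchased records,
    mixing the true answer in with plausible decoys."""
    true_address = profile["addresses"][0]
    decoys = ["901 Maple Dr", "45 Birch Ln", "310 Cedar Ct"]
    choices = decoys + [true_address]
    random.shuffle(choices)
    return {
        "question": "Which of these addresses have you lived at?",
        "choices": choices,
        "answer": true_address,
    }

q = dynamic_kba_question(profile)
print(q["question"])
for choice in q["choices"]:
    print(" -", choice)
```

The point isn’t the code — it’s that the “secret” being tested was never a secret you chose. It was harvested, and anyone who buys the same database can answer the same questions.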
As we start to add AI to this huge collection of data gathered about you… systems are soon going to be able to start making judgments about you. They’ll be constantly creating and updating a profile of you. And that, my friends, is the beginning of the end. Here are a few scenarios I can think of off the top of my head:
- You go on to OpenTable to make a dinner reservation for tonight. Hmm, not a single restaurant has a table available except for a few one-star, lower end places. That’s odd for a Tuesday night. Not really – your profile indicates you infrequently eat out, most of your clothing purchases are from Costco, and you rarely buy alcohol. Odds are you won’t order drinks, may share a plate, and probably aren’t a big tipper. The algorithm will hold on to that reservation for someone with a better profile.
- You’re trying to find a new job and haven’t received any interviews, despite applying to at least 50 different job postings. You went to a good school, have a killer resume, and have been a loyal employee for many years. What’s wrong? Well, your profile indicates you might be a problem employee. You travel a lot and seem to be a big shopper – often during work hours. You comment quite a bit on social media and appear to be vocal about your opinions. Based upon your shopping habits, you buy a fair amount of alcohol and there are quite a few pictures of you drinking with friends. You’re not a good risk, despite a solid work history.
- You have a USPSA shooting match coming up next month, so you go on-line to buy some bulk ammo for practice. For some reason the sale won’t go through. You contact your credit card company, only to find out they’ve cancelled your card for violating their terms of service. You apply to other credit card services, but every single one declines you. You’ve always paid your balance in full every month. What happened? Your profile indicated that you attempted to buy more than what is considered a “safe” amount of ammo. You posted an anti-BLM meme on Facebook at one point, which puts you in a white nationalist category. That, combined with support you’ve expressed on-line for various right-wing politicians and causes, makes you a risk.
- You suddenly receive a notice that your auto insurance is dropping you for violating their ESG (environmental, social, and governance) terms of service. As you shop for new insurance, all the rates you’re quoted are at least five times what you were paying before. Why? Your profile shows that your car is more than ten years old and doesn’t meet MPG requirements. You drive more than 15k miles per year, and your route data shows that most of your driving time is on high-accident routes. Your consumer profile indicates that you may not be performing all the recommended service and maintenance on the vehicle, which increases emissions, reduces performance, and increases the chances of an accident. You’re a poor risk.
There are a billion other scenarios you could come up with where an AI-generated profile of you might impact the outcome. Does any of this seem outlandish, or like tin-foil-hat conspiracy? I don’t think so. I think we’re on the very cusp of this being reality (if it’s not already). As this trove of personal data is increasingly shared in massive databases, and as AI becomes more prevalent… your social credit score is going to dictate your future quality of life.
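All four scenarios above boil down to the same mechanism: a profile gets collapsed into a single score, and the score silently gates the service. Here’s a toy sketch of that mechanism — the feature names, weights, and threshold are all invented for illustration, not taken from any real system:

```python
# Toy illustration of profile-based gating. The features, weights,
# and threshold are all invented for this sketch.
WEIGHTS = {
    "dines_out_per_month": 0.3,
    "controversial_posts": -0.5,
    "miles_driven_per_year": -0.00002,
    "on_time_payments_pct": 0.4,
}

def social_score(profile):
    """Collapse a profile dict into one number, the way a
    gatekeeping algorithm might."""
    return sum(WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS)

def gate(profile, threshold=0.2):
    """Approve or decline a request based solely on the score."""
    return "approved" if social_score(profile) >= threshold else "declined"

applicant = {
    "dines_out_per_month": 1,
    "controversial_posts": 4,
    "miles_driven_per_year": 16000,
    "on_time_payments_pct": 1.0,   # always pays the balance in full
}
print(gate(applicant))  # the perfect payment history is outweighed by the rest
```

Notice what the sketch makes obvious: the applicant never sees the weights, never knows which behavior cost them, and has no appeal — exactly the situation in the credit-card scenario above.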
So, what can you do about it? At this point, not much probably. I think it’s going to happen regardless. Especially since all of it will be put in place “for your own good”. Virtually all of us have been sheep – oblivious to what the technology was doing. I don’t see that changing anytime soon.
If I were a parent of young kids, I’d be thinking about creating and maintaining multiple identities for them. One that’s used for any casual on-line activity (the web, social media, your phone) and one that’s protected. Anything you can do to enable them to enter adulthood with a clean, neutral social profile. Educate them that everything they do, say, purchase, or interact with will be evaluated and potentially used against them at some point in the future.
We are no longer a free people. If you want to interact with society, have credit, make purchases, rent a car, or get a job – your profile better conform to whatever is deemed to be acceptable.
Hmmm… this has the makings of a good movie screenplay.