ChatGPT will now predict your age based on how you interact with it
It seems #openai is now doing age estimation too, based on this email I received.
After Google, which built the “Age signals API” into Android phones, OpenAI will now “predict your age based on how you interact with our services”.
No, I don’t want #chatgpt to analyze my age, thank you very much (I always used the private chat mode anyway, whatever impact that actually has on the data they use), so I will now switch to an alternative that doesn’t do that: https://chat.mistral.ai/
Don’t misunderstand me: protecting children is a good idea, but if it means being analyzed by an #AI and having your experience change based on that analysis, I’m strongly against it.
Although it’s convenient, if it starts analyzing my behaviour as well, then I guess it will be a good time to start thinking a bit more on my own and relying on AI only as a last-resort solution…
Their article: https://help.openai.com/en/articles/12652064-age-prediction-in-chatgpt


The whole idea of software services where the output is not a function of the input, but rather a function of the input plus all the data the service has been able to harvest about you, has always been awful. It was awful when Google started doing it years ago, and it’s awful when LLM frontends do it now. You should be able to know that what you are seeing is what others would see, and have some assurance that you aren’t being manipulated on a personal level.
Absolutely. Couldn’t agree more.
I just run everything possible locally, which helps a lot. Nearly everything my friends do in “the cloud” I do on my own computer, even stuff like spreadsheets they want to do in the cloud. It flummoxes me how willing they are to share everything with big tech.
If you have a mid-range or better GPU, you can even run a local LLM. I have used one for language translation: I cannot speak German, but I was talking to a German speaker who did not speak English about a hobby, and we could talk to each other despite not sharing a language. That’s practically sci-fi to me! I did it with a sandboxed LLM that was disallowed any network access.
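For anyone curious, here is a minimal sketch of the kind of setup I mean, using the llama-cpp-python library. The model path, prompt wording, and example sentence are placeholders of mine, not the exact setup I used; any locally downloaded instruction-tuned GGUF model would do. The network isolation itself happens outside the script (container, VM, or firewall rule with no network access).

```python
# Minimal local-translation sketch using llama-cpp-python
# (assumed installed via `pip install llama-cpp-python`).
# The model file path below is a placeholder: point it at any
# instruction-tuned GGUF model you have downloaded locally.
# Network isolation is NOT done here; run the script inside a
# sandbox that simply has no network access.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/your-local-model.gguf",  # hypothetical path
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

def translate(text: str, target_language: str = "German") -> str:
    """Ask the local model to translate `text` into `target_language`."""
    result = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": f"You are a translator. Translate the user's message "
                        f"into {target_language}. Reply with the translation only."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # keep the output close to a literal translation
    )
    return result["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(translate("Which glue do you use for balsa wood models?"))
```

Nothing in this ever leaves the machine, which is the whole point.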
Even so, I am skeptical about most of what I see people use LLMs for. I am afraid of what they will allow bad actors to do. I am afraid of even worse corruption of the information space. I doubt the horse will re-enter the barn tho.
It’s shocking how much this ‘personalized’ stuff is normalized.