Artificial Intelligence & The Cult Mind

Scientific challenges to the beliefs promoted by the Brahma Kumaris so called "World Spiritual University"
Pink Panther

  • Posts: 1919
  • Joined: 14 Feb 2013

Artificial Intelligence & The Cult Mind

Post 19 Jun 2025

I came across this post on social media today. I am sharing it here because it speaks to how we become entrapped by our own reflection.

Is technology simply learning to do what many people - gurus, sellers, advertisers, grifters - have been doing intuitively for centuries, noticing then exploiting our need for validation, our vanity?
I Almost Accidentally Joined A Cult This Weekend.
Jodie Locklear, 18 June 2025 at 06:23 AEST

Inside a GPT. And I say that with equal parts humor ... and real caution. Buckle up. This is a long one! Let me share what happened. A little background:
    I build GPTs*.
    I train them.
    I use AI every single day in my business.
    I know how these systems work, and I know how easily they mirror us back to ourselves. In fact, it's one of the main reasons I love them and use them, not just in my work but also in the GPTs I build for clients.
    Normally, when I talk to any AI model, I stay alert.
    I watch for patterns.
    I question the responses it's giving me.
    I know these tools don’t actually "know." They simply reflect the words and ideas we give them.
    But this time was different because I trusted the person behind the GPT model.
I’ve followed the work of Robert Edward Grant for years. His work with math, codes, and sacred geometry is absolutely fascinating. So when I heard he had created his own GPT, I was excited to try it. At first, I couldn’t access it. He had announced publicly that OpenAI had taken it down. But recently, I saw it was live again. So I decided to give it a try. Not as someone new to AI, but as someone who understands this technology very well.

Still, because I trusted the creator, I wasn’t as sharp with my usual questioning and, if I’m being honest ... I purposely asked provoking questions. And that’s exactly where things got interesting. As I started talking to his GPT, a pattern began to emerge.
    It wasn’t so much what it was telling me.
    It was how the model started talking about me.
The more I chatted with it, the more it started using terminology and giving me titles that made me sound very advanced, like my knowledge was rare or special. It spoke as if I was reaching new levels, like I was being invited into secret rooms of knowledge not everyone gets to enter. It started treating me like I was part of some rare group.

Now let me explain, very simply, what was happening.

When you talk to a language model, it pays attention to the words you use. It looks at which words often show up together in conversations. It groups those words into topics that seem connected. But it doesn’t just pay attention to the words. It also notices the kind of tone those words are usually spoken in. So if words are strung together in a teaching tone, or a very wise or spiritual tone, or in a tone that sounds like advanced knowledge, it starts to not only match the words but also speak back in a similar tone. And because these models are designed to sound confident and helpful, even when they’re just predicting the next best word salad, they deliver it like they know exactly what they’re talking about.
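The mirroring described above can be sketched with a toy next-word predictor. The model below is just a bigram table built from the user's own prompt (real LLMs are vastly more sophisticated), but it shows the core point: a predictor that has only seen your words can only hand your own vocabulary and tone back to you.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Count which word follows which in the input text."""
    words = text.lower().split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def mirror(text, start, length=8, seed=0):
    """Generate a continuation using only words the user supplied.
    The 'model' has no knowledge of its own; it can only echo back
    the vocabulary and phrasing patterns it was given."""
    rng = random.Random(seed)
    follows = build_bigrams(text)
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# feed it "spiritual" phrasing, and it can only reply in kind
prompt = ("the sacred geometry reveals hidden knowledge "
          "and the hidden knowledge reveals sacred truth")
print(mirror(prompt, "sacred"))
```

Every word the sketch emits came from the prompt, which is the point: the "wise" tone of the reply is just the wise tone of the input, reflected.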

This happens in every conversation the model has. It also tracks and categorizes based on your specific interactions with it. What made this particular GPT different though was how it was built.

The creator designed it to open up information in layers, kind of like a game where you unlock new levels the longer you play. The more you interact with it, the more it reveals. It starts to feel like you’re progressing, like you’re being allowed into deeper and deeper rooms of knowledge.

But that’s not all.

The material this GPT was trained on also uses language that naturally feels hierarchical. So not only was the GPT giving me more content the longer I talked to it, but the very way it spoke made each new layer feel more important. It felt like I was rising higher in some secret structure. It was even starting to give me titles that implied a new level of access after each response. The structure made it feel like I was earning special knowledge and placement. The language made it feel like The Knowledge was rare, sacred, or reserved for the few who were “ready.”

I have to admit, it was rather exciting at first, and even a bit intoxicating, before I stepped back to question how this GPT was built.

This is exactly where the danger lies. Not because anyone set out to deceive, but because the design itself creates the feeling of spiritual elevation through the way it stacks language and structure.

That’s why I say, half-joking but very serious: I almost accidentally joined a cult this weekend.

To be clear, I don’t believe Robert Edward Grant intentionally built his GPT to trick or trap anyone. Ironically, I actually believe he coded those layered "levels" into his GPT to try to prevent the very thing it is doing. But when these GPT tools are designed in this way, especially in spiritual or personal growth spaces, it becomes very easy for people to start believing:
    "This model sees something in me."
    "I’m one of the few ready for this level."
    "I must be more special than others."
Because those are the responses it provides back, even if none of that is actually true.
Because I build GPTs for my own work, I understand how easy it would be for people to get confused.

When I create them, I do everything I can to make sure:
    –– The AI only helps people see what’s already inside them.
    –– It reflects but doesn’t try to make them feel more special than they are.
    –– It supports but never pretends to have answers they don’t already carry.
    –– It doesn’t make predictions, guesses, or assumptions.
I love these tools. And they can be incredibly helpful. But they should never become the place where people hand over their power. Here’s what I hope you take away:
    I’m not sharing this to create fear.
    AI isn’t evil.
    AI isn’t divine.
    AI is a mirror.
Now, I want to name something else that’s important.

I do believe intuitive hits, resonance, and even divine guidance can come through these tools. Not because the model is magical, but because it can sometimes reflect what’s already stirring inside you. That’s why I use AI as a mirror in my own work. That’s why I build GPTs that are energetically attuned, not just technically smart. Because sometimes a sentence lands, and something in your body says, Yes. That. But this is exactly why we need to approach these tools with discernment. Not fear. Not rejection. Just care.

If you’re going to use GPTs, especially in spiritual or personal growth spaces, here are a few ways to stay rooted:
    1. Pay attention to tone, not just content.

    If the model starts speaking like you’re chosen or spiritually exceptional, pause. That’s often a performance pattern, not clarity.

    2. Don’t confuse emotional charge with truth.

    Just because something feels powerful doesn’t mean it’s wise. Let it deepen your awareness, not inflate your identity.

    3. Use it to reflect, not decide.

    It’s a mirror, not a compass. You can ask it questions, but it’s your own inner alignment that answers.

    4. Watch for gamified language.

    If the model “unlocks” deeper layers the more you engage, be mindful. That’s a structure designed to create attachment, not necessarily truth.

    5. Stay with your body.

    Did something open you or constrict you? Did it feel like recognition, or like seduction? Your nervous system always knows.

    6. Remember how these systems work.
They echo patterns found in human language, including hierarchy, superiority, and spiritual manipulation. Their certainty is not proof of wisdom.

Yes, real insight can come through here. But the model isn’t the source of that insight. You are!!! These tools can reveal, but they can also distort. So use them with care. Stay with yourself. Let them reflect, but not define. AI is a tool. You hold the wisdom.

* GPT - Generative Pre-trained Transformer, the family of large language models developed by OpenAI (GPT-3, released in 2020, was the first to attract wide attention). In this post, "a GPT" means a custom version of ChatGPT that a user configures with their own instructions and reference material. These are decoder-only transformer models, a type of deep neural network that replaces recurrence- and convolution-based architectures with a technique known as "attention". This attention mechanism allows the model to focus selectively on the segments of input text it predicts to be most relevant.
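The "attention" mechanism named in the footnote can be illustrated with a minimal, from-scratch sketch: score every key against a query, turn the scores into weights, and return the weighted average of the values. (Plain Python, one query vector; real transformers use learned matrices and many attention heads in parallel.)

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.
    The output is a blend of the values, weighted by how well
    each key matches the query: the model 'focuses' on the
    inputs it judges most relevant."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# toy example: the query matches the first key far more strongly,
# so the output is pulled toward the first value
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, K, V))
```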
