Last fall The Guardian asked me to write about the most interesting ways I was using AI. I'd been sitting on something that felt a little risky to go public with, so I pitched them the thing I was most nervous about: the story of how I used an AI to finally understand what my mother was actually trying to say to me.

Not her words, exactly… but the worldview underneath them. For those of you who read my essay about Hyper-translating last year, this is a form of that.

Well, the article was published today:

The short version involves a phone call that ended in static, a walk in the Seattle drizzle, and a question I typed into an AI out of sheer frustration. The perspective that came back managed to shift my entire lens on what was happening… where I'd been hearing distance, there was actually integrity. I'd been translating my mother’s values through my own frameworks, so completely that I couldn't see hers at all.

Of course, everyone’s big question is: OMFG, what does your mother think of the fact that you had to use AI to understand her?! Well, my mom read the piece before it published, and her reaction is actually included in the article. (I like to let her speak for herself… even if I clearly sometimes need help understanding her. 😬)

…And because I can’t help myself, of course I documented the whole hyper-translation process, and am publishing it as a lil guidebook.

Introducing: REFLECTED, TOGETHER

Some of you might remember REFLECTED, the guide I put out back in 2024 about using AI as a mirror for your own patterns. This new guide is called REFLECTED, TOGETHER and it gets a little spicy.

It applies the same REFLECTED framework outward, toward the people who confuse and/or matter most to you. The whole point is to use AI to help you stay curious, get more compassionate, and find more empathy for the people you love.

And yes, I am aware that using AI to decode someone’s communication style raises very real ethical questions. I didn't want to build something useful that skipped over the part where it could also do damage, so there's a full chapter on the ethical explorations of this kind of AI use. (What do you owe the people in your life? How should you think about disclosure? Where does a framework stop being a tool for empathy and start being something else?)

As I say in the guide: Like a knife, AI is just a tool… it can help you make breakfast, or it can cause real harm. Choose breakfast.

If you never read REFLECTED but now you’re curious, you can download both guides as a bundle:

Love from Seattle and sending you the strength to stay curious during weird times,

XO,
Ariel

P.S. Next week I'll be sending out what might become the first chapter of a new book, or maybe just my first ever zine? (Fuck yeah, zines!) For now, it's an essay called Voluminous Love: A midlife treatise on unconditionality. I'm even more nervous about it than the AI article, honestly. More soon.
