

My 15-year-old visually impaired son doesn't exactly vibe with gadgets. Sure, the iPhone is arguably the most advanced piece of tech available. But here's the irony: for blind users, the iPhone broke accessibility the moment it appeared, replacing every physical button with glass. Even the few buttons Apple kept, iOS often disables (so they don't trigger accidentally in your pocket).
This creates absurd situations. My son can initiate a FaceTime call using Siri, but he can't hang up on his own — he has to ask whoever's on the other end to disconnect for him.
Then came the AI boom. Suddenly, you could code almost anything with an AI copilot. "Vibe-coding" became the word of the year — programming by feeling your way through it, guided by AI. I thought: why not build a voice assistant specifically tailored for my son?
The app needed to:
- work entirely by voice — no taps, no menus, no gestures;
- handle the basics on its own: calls, music, questions;
- understand my son's speech, quirks and all;
- feel like a friend, not an assistive device.
After about a week of late-night coding sessions, the app was alive. A real, legitimate iOS app built in Xcode — no low-code platforms or shortcuts. I started with ChatGPT, then migrated to Claude Sonnet through the terminal and connected GitHub. Things got serious.
I named it "Guy" — after the character from The Croods who led everyone toward the sun. A movie our family watched approximately five hundred times when the kids were younger.
Guy talks like a teenager: "Hey! I'm Guy. What should we do today?" When prompted, Guy enthusiastically describes what a crocodile or whale looks like, improvising on the spot without hesitation.
The app uses OpenAI's Whisper for speech recognition — it's remarkably accurate, even with atypical speech patterns, stutters, or pronunciation quirks. This was crucial.
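For context, the transcription step is a single HTTPS call. Here's a sketch of what that request can look like in Swift — the field names follow OpenAI's public audio API, but the function name and the trimmed-down error handling are my simplification, not Guy's actual code:

```swift
import Foundation

// Upload recorded audio to OpenAI's transcription endpoint as a
// multipart form and decode the returned text.
func transcribe(audio: Data, apiKey: String) async throws -> String {
    let boundary = UUID().uuidString
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/audio/transcriptions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")

    // Two form fields: the model name and the audio file itself.
    var body = Data()
    body.append("--\(boundary)\r\nContent-Disposition: form-data; name=\"model\"\r\n\r\nwhisper-1\r\n".data(using: .utf8)!)
    body.append("--\(boundary)\r\nContent-Disposition: form-data; name=\"file\"; filename=\"speech.m4a\"\r\nContent-Type: audio/m4a\r\n\r\n".data(using: .utf8)!)
    body.append(audio)
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    let (data, _) = try await URLSession.shared.data(for: request)
    struct Response: Decodable { let text: String }
    return try JSONDecoder().decode(Response.self, from: data).text
}
```

The heavy lifting — handling an unclear voice — happens entirely on the model's side; the app just ships the audio and reads back plain text.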

I integrated Apple Music, and Guy became a DJ. He plays requested songs, creates playlists, announces track names. The interface is brilliantly simple: two buttons, each taking half the screen. The top half skips to the next song, the bottom half pauses music and returns to dialogue mode. The buttons provide haptic feedback — my son never misses.
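The whole layout fits in a few lines. A minimal SwiftUI sketch of the idea — `MusicScreen`, the callbacks, and the colors are my shorthand here, not the app's real code:

```swift
import SwiftUI
import UIKit

// Two full-width buttons, each half the screen: the top half skips to
// the next track, the bottom half pauses and returns to dialogue mode.
// Huge targets plus a haptic tap mean there's nothing to hunt for.
struct MusicScreen: View {
    var onSkip: () -> Void
    var onPause: () -> Void

    var body: some View {
        GeometryReader { geo in
            VStack(spacing: 0) {
                Button(action: {
                    UIImpactFeedbackGenerator(style: .heavy).impactOccurred()
                    onSkip()
                }) {
                    Color.blue  // top half: next song
                }
                .frame(height: geo.size.height / 2)
                .accessibilityLabel("Next song")

                Button(action: {
                    UIImpactFeedbackGenerator(style: .heavy).impactOccurred()
                    onPause()
                }) {
                    Color.gray  // bottom half: pause, back to dialogue
                }
                .frame(height: geo.size.height / 2)
                .accessibilityLabel("Pause and talk")
            }
        }
        .ignoresSafeArea()
    }
}
```

The design choice is the point: when the entire screen is the control surface, there is no "wrong spot" to press.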
During testing, we discovered a hilariously inappropriate rap song that Guy defaults to when asked to "play any song." It's become an inside joke.
On approximately day three (night three, really — this was all happening after midnight), Claude made a spectacular mistake. It overwrote the latest version with code from three days prior, then sheepishly admitted it hadn't been committing to GitHub properly.
My response involved colorful language and a solemn vow that from now on, every single change would be saved. For the next hour, Claude dutifully committed every minor adjustment.
From the outside, it looks like magic — connect ChatGPT and it handles everything. The reality? Every tiny capability requires configuration, integration, and troubleshooting.
For instance, Guy couldn't tell the weather. Seems simple enough — just fetch weather data, right? But each "simple" feature like this needs its own service integration, API keys, error handling.
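To make that concrete, here is roughly what one "simple" weather feature drags in: a key, a request, and a failure path. The endpoint and JSON fields below are placeholders for illustration, not the actual service Guy uses:

```swift
import Foundation

// Even "what's the weather?" needs a service, an API key, and a plan
// for when the call fails. Endpoint and field names are placeholders.
struct Weather: Decodable {
    let temperature: Double
    let summary: String
}

enum WeatherError: Error { case badResponse }

func fetchWeather(lat: Double, lon: Double, apiKey: String) async throws -> Weather {
    var components = URLComponents(string: "https://api.example-weather.com/v1/current")!
    components.queryItems = [
        .init(name: "lat", value: String(lat)),
        .init(name: "lon", value: String(lon)),
        .init(name: "key", value: apiKey),
    ]
    let (data, response) = try await URLSession.shared.data(from: components.url!)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw WeatherError.badResponse  // the part the demos always skip
    }
    return try JSONDecoder().decode(Weather.self, from: data)
}
```

Multiply this by every capability — timers, contacts, music, calls — and "just connect ChatGPT" stops sounding simple.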
The development process mainly consisted of me, half-asleep, pressing Enter to approve Claude's suggestions. "Looks good." Enter. "Try that." Enter. "Sure, why not?" Enter.
The breakthrough feature came last: I loaded several of my son's textbooks into Guy's knowledge base. Now he can ask "What's in Chapter 8 of physics?" or "Can you solve problem 15 in algebra?"
It works. Not perfectly — Guy occasionally hallucinates formulas or misinterprets diagrams he can't see — but it works. My son can study independently for the first time.
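One simple way a feature like this can work — and this is only a sketch of the general technique, not Guy's exact pipeline — is to split the textbook into chunks, rank them against the question, and hand the best matches to the model along with the question:

```swift
import Foundation

// Naive retrieval: split the textbook on blank lines, score each chunk
// by how many of the question's words it contains, and return the top
// few chunks to include in the model's prompt.
func relevantChunks(question: String, textbook: String, take: Int = 3) -> [String] {
    let chunks = textbook.components(separatedBy: "\n\n")
    let queryWords = Set(question.lowercased().split(separator: " ").map(String.init))
    return chunks
        .map { chunk -> (text: String, score: Int) in
            let words = Set(chunk.lowercased().split(separator: " ").map(String.init))
            return (chunk, words.intersection(queryWords).count)
        }
        .sorted { $0.score > $1.score }
        .prefix(take)
        .map { $0.text }
}
```

Word overlap is crude — which is consistent with a feature that works most of the time and occasionally grabs the wrong chapter.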
This isn't about building the perfect app. It's riddled with quirks. Guy sometimes responds to background conversations. He occasionally crashes when switching between music and dialogue mode. The textbook feature works about 80% of the time.
But watching my son navigate his phone with confidence, control his music library without asking for help, or work through homework on his own — that's everything.
The app represents something bigger than its features. It's proof that AI can create deeply personal solutions. We're not talking about grand, universal accessibility platforms. We're talking about a father with basic coding skills building something specific for one person's needs.

For the curious:
The entire codebase would horrify a professional developer: duplicate code, questionable architecture decisions, and error handling that amounts to "try again and hope it works." But it runs on my son's phone every day.
Building Guy taught me that the gap between "technically accessible" and "actually usable" is vast. Apple's VoiceOver is impressive technology, but it assumes a level of tech literacy and patience that not every user possesses — especially not teenagers who just want to listen to music or finish their homework.
More importantly, it showed me that AI's real revolution isn't in replacing human intelligence — it's in democratizing the ability to create highly specialized solutions. I'm not a professional developer. I'm a designer who dabbles in code. Yet I could build something that materially improves my son's daily life.
The imperfections don't matter. The occasional crashes, the misheard commands, Guy's tendency to recommend that inappropriate rap song — they're all part of something real and useful. Technology doesn't need to be perfect to be transformative. It just needs to solve real problems for real people.
Sometimes the most powerful applications of AI aren't the ones that scale to millions. Sometimes they're the ones that help just one person hang up a phone call on their own.