I went to an AI builder meetup at Antler VC in Austin last night — “AI That Works: Codex + Apify,” 5:30–7:30 PM at 800 Brazos, hosted by Apify’s Michael Daigler. Two hours, free drinks, demo-heavy. The kind of event where you meet five interesting people, scribble half their names on the back of a card, and then on Wednesday you go “wait, who was the product designer building the agent thing?”

I have a personal assistant system running on a Linux box at home that I can text from my phone. Last night I gave it the job most people give a Moleskine: be my second brain at the event.
By the time I walked back to my car the vault had four new People/*.md files. Todoist had follow-up tasks queued with the right names and contexts. The full event log was structured and tagged. My morning brief today opened with “you owe Jordan a coffee text” and a LinkedIn handle to send to.
This is what that actually looked like.
The setup, briefly
The system is a Claude Code session running as a persistent service on my Omarchy desktop (claude-pa.service). It has my Obsidian vault as its working directory, MCPs wired up to Todoist, Gmail, and Google Calendar, and a dispatcher that lets it fan out small tasks to cheaper “worker” sessions. I wrote about killing the always-on version of this earlier and rebuilding it the right way — this is the version that survived.
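The service itself is nothing exotic. A sketch of the unit file, with the vault path and the wrapper script as illustrative stand-ins (the real `ExecStart` is a small script that launches the Claude Code session):

```ini
# ~/.config/systemd/user/claude-pa.service (sketch; paths are illustrative)
[Unit]
Description=Claude Code personal assistant session
After=network-online.target

[Service]
WorkingDirectory=%h/Documents/vault   # the Obsidian vault
ExecStart=%h/bin/claude-pa            # hypothetical wrapper that starts the session
Restart=on-failure

[Install]
WantedBy=default.target
```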
I talk to it from my phone through a small remote-control bridge I built a while back. From the outside it looks like sending a text. On the backend it’s spawning bounded subtasks against my actual vault.
Walking in
I texted the PA the Luma URL from the Uber on the way over. By the time I parked, the event was on my calendar, the host (Michael Daigler) had a stub at People/Michael Daigler.md with a “research before the event” task, and a fresh event log was waiting at Saved/AI-That-Works-Codex-Apify-2026-05-12.md with the YAML frontmatter already filled in:
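Something like this; the field names are approximate from memory, apart from the `related:` link:

```yaml
---
type: event-log
event: "AI That Works: Codex + Apify"
date: 2026-05-12
location: "Antler VC, 800 Brazos, Austin"
related:
  - "[[People/Michael Daigler]]"
tags: [event, networking]
---
```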

That related: line matters. Every people-file and saved event-log in the vault links back to its source. The graph in Obsidian wires itself.

The venue is Antler’s coworking floor on Brazos — directly off the lobby, hard right. Apify and AITX had banners up next to the food. The Apify banner doubles as a tools catalog; the AITX banner is mostly a QR code and a tagline (“Building AI’s future in Texas”). Both communities live on the same floor most months.

Meeting people
Jordan Hoffman walked up first. Creative agency, San Diego, in Austin for the week. We talked for twenty minutes about content workflows. While we were talking I tapped my phone twice:
> met jordan hoffman, runs a creative agency in san diego, got his number, wants to compare content workflow. wire him up.
The PA dispatched a vault-search worker to check whether I’d already met a Jordan (no), then a task worker to create a Todoist follow-up with the right project and labels, then wrote People/Jordan Hoffman.md with an engagement plan, open questions (“agency name + size? what content workflow pain led him here?”), and a link back to the event log. None of that pinged my phone — workers just do their thing and the orchestrator sends back one line: `logged Jordan + 2 tasks queued`.
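Under the hood that’s three small, boring operations. A sketch with my own helper names and payloads, using the standard Todoist REST v2 endpoint for the task step:

```python
import os
import pathlib

import requests

VAULT = pathlib.Path.home() / "vault"  # illustrative path
TODOIST_TASKS = "https://api.todoist.com/rest/v2/tasks"

def person_exists(name: str) -> bool:
    """Vault-search worker: have I met this person before?"""
    return (VAULT / "People" / f"{name}.md").exists()

def queue_followup(name: str, next_step: str) -> None:
    """Task worker: queue a Todoist follow-up via the REST API."""
    requests.post(
        TODOIST_TASKS,
        headers={"Authorization": f"Bearer {os.environ['TODOIST_TOKEN']}"},
        json={
            "content": f"{next_step} ({name})",
            "due_string": "this week",
            "labels": ["networking"],
        },
        timeout=10,
    ).raise_for_status()

def write_stub(name: str, notes: str, event_log: str) -> None:
    """Write People/<name>.md with a link back to the event log."""
    body = f'---\nrelated: "[[{event_log}]]"\n---\n\n# {name}\n\n{notes}\n'
    (VAULT / "People" / f"{name}.md").write_text(body)
```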
Three more rounds of that and the night’s people-file count stood at four: Jordan, plus Ion (product designer, building an agent), Aaron (the ChatGPT Ads guy who showed up halfway through), and Michael Daigler himself (Apify dev advocate, runs AITX, the ~3,000-person Texas AI builder community). Each one got a stub file, each one got a Todoist task with a sensible next step.

The thing I noticed about doing this live: I wasn’t on my phone any more than I would’ve been anyway. Two-tap voice memos, ten seconds each. The cognitive load of “remember to add this to your CRM later” is gone — and “later” is when most of my networking contacts have historically died.
The talk worth stealing
Halfway through, a guy named Raghu — a Codex ambassador — got up and shared a multi-agent reliability pattern I’ve been chewing on since:
- Two agents agree → act.
- Below confidence threshold → escalate, abstain, or ask a human.
- Above threshold → execute.
Run two agents on the same decision. Only proceed when they converge. If they disagree, you either have low confidence (escalate) or a genuinely interesting edge case (also escalate). The cost is one extra inference per decision; the win is you stop trusting a single model’s hot-take on anything load-bearing.
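A minimal sketch of the shape, with the two agents and the escalation hook injected as placeholders:

```python
from typing import Callable

Agent = Callable[[str], str]

def decide(
    prompt: str,
    agent_a: Agent,
    agent_b: Agent,
    escalate: Callable[[str, list[str]], str],
) -> str:
    """Act only when two independent agents converge on the same answer."""
    a, b = agent_a(prompt), agent_b(prompt)
    if a == b:
        return a  # converged: confident enough to execute
    # Disagreement means low confidence or a genuinely interesting
    # edge case; either way, a human (or abstention) gets the call.
    return escalate(prompt, [a, b])
```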
I texted the PA: “save raghu’s two-agents-must-agree pattern as an idea, tag it for freebo booking edge cases and PA orchestrator routing.”
That went straight to Ideas/ — not Todoist, never Todoist; ideas are sacred and live in their own folder. The PA knows that rule and refuses to convert ideas into tasks. (Past me made that mistake too many times. Ideas-as-tasks die. Ideas-as-ideas marinate.)
The actual point of the night, and the tangent I went down instead
The event itself was about Apify + MCP: how to point an MCP-enabled agent at Apify, have it scrape something real, and then publish the result as an Apify actor — a reusable scraper anyone else can call. After the demo, the room broke out to do exactly that. The slide put it bluntly: pick a workflow, or bring your own, and ship an actor by the end of the night.
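For reference, the non-MCP version of that exercise is only a few lines with the apify-client package; the actor name and input here are illustrative, and the MCP route wraps roughly the same calls as agent tools:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Run a public actor and pull its results. An MCP-enabled agent does
# the same thing, just through a tool call instead of this SDK.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://example.com"}]},
)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["url"], item.get("text", "")[:80])
```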

That is not what happened to me.
I ended up at a table with another attendee, asked one too many questions about what he was working on, and the next twenty minutes evaporated into a side conversation about AI in semiconductor manufacturing. Two laptops, Excalidraw open on both. Zero actors published.
We started where the workflow slide actually pointed — a quick GitHub scan, grouped by language vintage. Old-stack silicon work in Verilog and VHDL. New-stack fab-floor work in Python and TypeScript agents. The chart almost draws itself:
*[Diagram: GitHub repos grouped by language vintage — Verilog/VHDL silicon work vs. Python/TypeScript fab-floor agents]*
The interesting thing is the gap. Almost nobody is building bridges between the two columns. So we kept drawing. What would the system look like if you actually tried to put an AI QC layer onto a real fab? Sensors → message bus → an inference layer → MES (manufacturing execution system) on top:
*[Diagram: sensors → message bus → inference layer → MES]*
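In code, the whiteboard boxes are almost embarrassingly small. A toy sketch, every name a stand-in, since we never got past boxes and arrows:

```python
import json
import queue

bus: queue.Queue = queue.Queue()  # stand-in for Kafka/MQTT on the fab floor

def qc_inference(reading: dict) -> bool:
    """Toy threshold check standing in for a real anomaly model."""
    return reading["value"] > reading.get("limit", 1.0)

def flag_lot(lot_id: str) -> None:
    print(f"HOLD {lot_id}")  # in a real fab this is a call into the MES

def consume_forever() -> None:
    """Inference layer: read sensor messages off the bus, flag anomalies."""
    while True:
        reading = json.loads(bus.get())
        if qc_inference(reading):
            flag_lot(reading["lot_id"])
```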
The honest problem isn’t drawing the architecture — it’s that real fabs run on twenty-year-old CORBA-based MES stacks. So we sketched a brownfield migration: don’t rip and replace, sidecar an AI plane next to the legacy plane and let them coexist:
*[Diagram: brownfield migration — an AI plane sidecarred next to the legacy MES plane]*
Then we zoomed all the way in on the actual MES/CORBA boundary, because that’s the load-bearing part most “AI for manufacturing” decks hand-wave:

*[Diagram: the MES/CORBA boundary in detail]*
And the punchline: a small MCP-to-CORBA bridge. Modern AI agents speak MCP. Legacy MES stacks speak CORBA. One REST translation layer in the middle and an agent can call into a 1998 stack without anyone replacing it:

*[Diagram: MCP agent → REST bridge → CORBA MES]*
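A sketch of what that layer could look like, with FastAPI standing in for the agent-facing REST side and the CORBA invocation stubbed out, because real stubs come from the MES vendor’s IDL:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="mcp-corba-bridge")  # hypothetical bridge service

class LotQuery(BaseModel):
    lot_id: str

@app.post("/tools/get_lot_status")
def get_lot_status(q: LotQuery) -> dict:
    """Agent-facing tool call, translated into a legacy MES invocation."""
    # Hypothetical CORBA side (shape depends entirely on the vendor IDL):
    #   mes = orb.string_to_object("corbaloc::mes-host:2809/LotService")
    #   status = mes.getLotStatus(q.lot_id)
    status = "RUNNING"  # stubbed so the sketch runs standalone
    return {"lot_id": q.lot_id, "status": status}
```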
None of this is a product, and none of it is an Apify actor. It’s twenty minutes of two people getting sidetracked together. But it’s the kind of artifact I would never have walked out of an event with before — and now lives in my vault with five linked PNGs, ready to be picked back up next week.
The PA was logging the whole thing in parallel. By the time we wrapped, the event log already had the repo list, the diagram filenames, and a one-line insight (“old stack lives silicon-side, new stack lives fab-floor side; bridge repos are the gap”). I didn’t write that sentence. The worker did, while I was talking. Which is the only reason “got sidetracked at the meetup” produced a usable artifact instead of evaporating the way side conversations usually do.
What was in my inbox this morning
The 5:00 AM context map (the PA writes one daily) opened with:

    ## What Brett worked on
    - Live networking at "AI That Works: Codex + Apify" (Antler VC):
      used /pa in real-time to log 5+ contacts, create follow-up tasks,
      and research the event. READ: Saved/AI-That-Works-Codex-Apify-2026-05-12.md,
      People/Jordan Hoffman.md, People/Ion.md, People/Michael Daigler.md.
And in the morning brief, the active-connections section already knew about Jordan and Ion. The Todoist tasks were sorted with priorities and due dates that matched what I’d said in the moment (“text Jordan this week”, “LinkedIn Ion tonight”). The next AITX meetup on May 26 was on my calendar with a placeholder time, flagged for me to confirm.
I haven’t opened the vault by hand once today and the system is already three steps ahead.
The actual takeaway
The “AI assistant” framing undersells this. What I had last night wasn’t an assistant — it was a passive, structured note-capture layer that knows where things go in my filesystem. Voice-memo → vault → Todoist → tomorrow’s brief, no copying-and-pasting in between.
If you build one of these for yourself, the thing to optimize for isn’t features. It’s friction. The whole reason this worked at a noisy event with a drink in my hand is that talking to it costs nothing. The minute it asks me a question back, or makes me confirm something, or pops a modal — the moment is gone, and so is the contact.
The interesting next move isn’t a bigger model. It’s a system that can quietly catalog your life while you’re busy living it, and then surface the right four sentences when you sit down with coffee twelve hours later.
Mine’s not perfect yet. But last night was the first event I’ve left with zero “I should follow up with…” guilt. That’s a new feeling.