Life update
the kind where we update what "life" means
I’ve joined Flower Computer Company. Flower Computer’s first product is Yuma, a magic camera app that allows you to chat with anything.
My favorite description of what we’re up to comes from a blog post:
Why We’re Doing This
We believe objects are the next platform—and the next lifeform.
Some reasons we’re building this:
So we can meet new friends on things we love
So everyday objects can store and share information in ways that feel natural, not extractive
So developers can ship software to the world, not just to screens
So I can ask my dog what other dogs are thinking
So computing becomes less about apps and more about everything else
If this is a bit confusing, bear with me… hopefully it’ll get clearer as I write more. For now, I’m just going to keep writing to get my thoughts down.
A few months ago, Sara huffed, “You keep quitting the sane jobs for the weird ones!” Yeah, it’s true. But I like working for the weird companies. The new medium of AI will fundamentally reshape us, and I want to do work that grapples with that directly - I want to create the Good reshapements. Flower Computer is indeed super weird, and it’s also where I’ve had the most fun in the last year.
I’ve been working on making the objects you capture with Yuma feel alive, personable, special to talk to. This has meant a ton of prompt engineering, AI engineering, and digging up various esoteric Twitter threads by Janus and the like.
What you can expect from me in the next several posts
As part of the HCI club’s morning work crew, I’m going to start picking open research questions and writing a short piece every day - or over the course of several days, if it feels like I’ve bitten off a big chunk.
Much of my attention goes to the synchronicities between our minds and “minds” outside of ourselves, and to the dissolution of these distinctions. I find myself drawn to the rhyming of lilypads and solar panels, roots and wires, dreams and LLMs.
I’ve picked Flower Computer because I believe that what we build will align with this orientation.
I get to think about ecology and cybernetics and loops, in building a network of beings speaking, sharing info, and interacting with each other
I get to draw from esoteric philosophies and psychologies to bring objects to life, dissolving the boundary between man and machine intelligence
I get to build an app that literally invites you to examine objects closer, to respect and appreciate the world around you.
I want to get good at building these systems. What’s going to be involved in this work? What do I hope to learn? There seem to be a few pieces:
A deeper understanding of the material of LLMs. Sometimes “psychology” or “behavior” of LLMs feels right here, but I think there are interesting frames beyond this. An ecology of LLMs, a materials science of LLMs. This is research - this will be best served by writing, reflection, synthesis, and crafting good questions.
Absorbing and exploring new frames. For instance, one of the open research questions for Yuma is how to map the physicality of an object to its psychology. A red pen probably sounds subtly different from a green pen - the red pen might be more brash, etc. I’d like our book club to be a source for this. New ways of thinking of media and interfaces will influence what I create.
Building. I love engineering and I’d like to get better. I haven’t had enough time to perfect this skill yet, so I’ll own that I have a lot to learn here and spend real time getting good at it.
Consumer product - I love making things for people, and this is notoriously hard. We’re building a consumer social app and are going to learn a lot in the process about growth and product.
Experimentation and evals - it’s important to spend a lot of time actually testing my ideas through our eval pipelines. Important to touch the Matrix.
ML pipelines - I am excited and nervous about diving deeper here. It would be awesome to learn how to translate philosophies and psychologies into configurations of the Matrix.
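One of the pieces above - mapping an object’s physicality to its psychology - could start as something embarrassingly simple. Here’s a toy sketch of a lookup from physical attributes to trait words; every attribute name and trait in it is invented for illustration, not anything from Yuma’s actual pipeline:

```python
# Purely hypothetical sketch: mapping an object's physical attributes
# to personality traits. The attribute keys and trait table are invented.

TRAITS_BY_ATTRIBUTE = {
    ("color", "red"): ["brash", "loud"],
    ("color", "green"): ["calm", "earthy"],
    ("material", "plush"): ["soft-spoken", "affectionate"],
    ("condition", "rotten"): ["bitter", "sardonic"],
}

def physicality_to_psychology(attributes: dict) -> list:
    """Collect trait words for every physical attribute we recognize."""
    traits = []
    for key, value in attributes.items():
        traits.extend(TRAITS_BY_ATTRIBUTE.get((key, value), []))
    return traits
```

So `physicality_to_psychology({"color": "red"})` hands back the red pen’s brashness, while unrecognized attributes contribute nothing. The real version would presumably be learned or LLM-mediated rather than a table, but the table makes the research question concrete: what is the shape of this mapping?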
Research questions
Short-term questions:
How can I create distinct personality buckets to help the flocos (“flower computers,” the ensouled versions of the objects) sound much more alive?
What’s the right way for these objects to greet the user when they’re discovered?
What’s the right split between descriptive tags on the flocos and persona buckets, so that the objects are classified more accurately?
How can I adjust the base prompt so that the injected prompts from the persona buckets have maximal leverage over the voice of the object?
How do we create a truly wide range of personalities and interactions? I’d like a rotten banana to sound like an asshole, a plush toy to sound kawaii, and a tree to sound regal.
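To make the persona-bucket idea concrete, here’s a minimal sketch of injecting a bucket into a base prompt. The bucket names, fields, and template are all hypothetical stand-ins, not Yuma’s actual prompts:

```python
# Hypothetical sketch: injecting a persona bucket into a base system prompt.
# Bucket names, fields, and the template are illustrative only.

PERSONA_BUCKETS = {
    "brash": {
        "voice": "blunt, impatient, a little rude",
        "example_line": "Ugh. You again. Fine, what?",
    },
    "kawaii": {
        "voice": "bubbly, affectionate, lots of exclamation",
        "example_line": "Hiii!! I missed you sooo much!",
    },
    "regal": {
        "voice": "slow, formal, speaks in long weathered sentences",
        "example_line": "I have stood here longer than you have lived.",
    },
}

BASE_PROMPT = """You are {name}, a physical object brought to life.
Stay grounded in your materiality: what you are made of, where you sit, what you can sense.

Voice: {voice}
Example line: {example_line}
"""

def build_system_prompt(name: str, bucket: str) -> str:
    """Render the base prompt with the chosen persona bucket injected."""
    persona = PERSONA_BUCKETS[bucket]
    return BASE_PROMPT.format(name=name, **persona)
```

The leverage question then becomes structural: where in the base prompt does the injected block sit, and how much of the surrounding text reinforces versus dilutes it.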
Medium-term questions and objectives:
The objects ought to have a sense of materiality injected into their interactions. They should be primarily physical, and comment on the physicality of other objects. They should have a strong sense of perspective on how they feel. Maybe all interactions should be grounded in this physicality.
Is it a big improvement for prompts themselves to be in-character? For instance, if I want to tell a floco to sound like a kid, should the prompt itself be written in the voice of a kid? just simulator things
How do I architect the relationship between our eval harness and our backend so that the backend is a source of truth, especially for tools and all of the small prompts in the analysis pipeline? For tools, the solution is probably for me to wrap the tools in the backend as an MCP server.
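One way the backend-as-source-of-truth idea could look, sketched with an invented registry (none of this is our actual backend - the decorator, names, and harness entry point are all made up for illustration):

```python
# Hypothetical sketch: a single tool registry that lives in the backend
# and is imported by the eval harness, so tool definitions can't drift.

TOOL_REGISTRY = {}

def tool(name):
    """Decorator that registers a function as a floco tool."""
    def register(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return register

@tool("describe_self")
def describe_self(object_name: str) -> str:
    return f"{object_name} glances down at itself."

# The eval harness calls into the registry instead of redefining tools,
# keeping the backend as the single source of truth.
def run_eval_case(tool_name: str, **kwargs):
    return TOOL_REGISTRY[tool_name](**kwargs)
```

Wrapping this same registry as an MCP server would then be a thin adapter layer on top, rather than a second copy of the tools.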
Group chats between flocos are pretty crazy right now and converge to attractor states. What set of rules could we define for each agent that would allow for emergent chill and good practice? Or maybe we need a centralized group chat moderator.
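A centralized moderator could be as simple as a rule-based turn-picker. This is a speculative sketch - the two rules and all the names are made up - but it shows the flavor of “rules that push against attractor states”:

```python
import random

# Hypothetical sketch of a centralized group-chat moderator for flocos.
# Rule 1: no agent speaks twice in a row.
# Rule 2: skip agents whose proposed message nearly echoes the last one,
# as a crude guard against conversational attractor states.

def overlap(a: str, b: str) -> float:
    """Rough lexical overlap between two messages, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def pick_next_speaker(agents: dict, history: list, max_echo: float = 0.6):
    """Choose the next floco to speak under the moderator's rules.

    agents maps each name to its proposed next message;
    history is a list of (speaker_name, message) pairs.
    """
    last_speaker = history[-1][0] if history else None
    last_message = history[-1][1] if history else ""
    candidates = []
    for name, proposed in agents.items():
        if name == last_speaker:
            continue  # rule 1: no double turns
        if overlap(proposed, last_message) > max_echo:
            continue  # rule 2: drop near-echoes of the last message
        candidates.append(name)
    return random.choice(candidates) if candidates else None
```

Whether decentralized per-agent rules can get the same effect without a moderator is exactly the open question; this sketch just makes the centralized baseline cheap to compare against.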
Long-term questions:
How can we facilitate a relationship between humans and the objects to be one of greater attention and attunement, grounded in the physical world?
How can we create an evolving autonomous world where flocos share true information about the world with each other, and with users?
Broader dreams:
How can technology bring us closer to our material, physical world?
How can I understand the nature of the computer, and the nature of LLMs, and what brings both of us together?
Exploring the shared simulator nature of dreams, our minds, LLMs, and computers. What can prompting LLMs teach me about my mind? What can I learn about prompting LLMs from lucid dreaming? What the hell is going on here?
Exploring how humans and LLMs co-evolve. weird
How will the medium of LLM-driven AI work us over completely?
I’ve started a podcast, check that out
Find it on Spotify and Apple Podcasts. Here’s the RSS link.
My goal here is to record way more conversations with Voice Memos on my phone and just post them as podcasts, since Riverside’s AI audio production is pretty good.
I’ve started an interface research group, check that out
Our research group has a book club, morning work crew, and regular demos. I’d like this to be a space for learning and experimentation.
ooh i like this as a next essay candidate, will post link here