On April 2, Vitalik Buterin shared his thinking on how he approaches working with AI. The core principle: no cloud services. He runs all language models on his own devices, and the data never leaves his computer.
What prompted him to run a local LLM was his concern that modern AI agents send personal data to corporate servers by default — and the industry treats this as perfectly normal. According to Buterin, mainstream AI is moving in the opposite direction from end-to-end encryption and other privacy advances of recent years.
He set up his system so that AI can read his emails and Signal messages, but sending anything to a third party requires his personal confirmation. The same rule applies to crypto transactions: small amounts go through automatically, everything else needs his direct input.
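The read/send/transaction rules above amount to a simple policy gate. Here is a minimal sketch of that idea; the threshold value, action names, and function are illustrative assumptions, not Buterin's actual configuration:

```python
# Hypothetical confirmation gate: decide when the human must approve
# before the agent acts. Threshold and action names are assumptions.

SMALL_TX_LIMIT = 0.01  # illustrative auto-approve ceiling for transfers

def requires_confirmation(action: str, amount: float = 0.0) -> bool:
    """Return True when the agent must wait for human sign-off."""
    if action == "read":          # reading local mail/messages is allowed
        return False
    if action == "send":          # anything leaving the device needs approval
        return True
    if action == "transaction":   # small transfers pass automatically
        return amount > SMALL_TX_LIMIT
    return True                   # unknown actions default to asking a human

# Reading stays local and automatic; sending and large transfers block
# until the person confirms, so neither side acts entirely alone.
```

The default-deny branch at the end reflects the "keep each other in check" principle: when the agent encounters an action type it wasn't configured for, it asks rather than acts.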
"The idea is that both the human and the model can make mistakes or be fooled, but together they keep each other in check," he said.
For situations where a local model lacks the processing power, Buterin describes ways to query external services without those services being able to identify the user or link their requests together.
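Buterin doesn't spell out a specific protocol, but one ingredient of unlinkability is that the request payload itself carries no stable identifier. The sketch below shows only that application-level piece and is an assumption for illustration; network-level anonymity (e.g. routing through a mixnet or Tor) would be handled separately:

```python
import secrets

def anonymized_query(prompt: str) -> dict:
    """Build a request payload with no stable user identifier.

    Each call gets a fresh random request ID, so two queries from the
    same person cannot be correlated by the server through this payload.
    This is an illustrative sketch, not a described protocol; transport-
    level anonymity is assumed to be provided elsewhere.
    """
    return {
        "request_id": secrets.token_hex(16),  # fresh per request, never reused
        "prompt": prompt,
        # deliberately no account, device, or session fields
    }

a = anonymized_query("summarize this document")
b = anonymized_query("summarize this document")
# The two payloads share nothing but the prompt text itself.
```

The design choice here is negative: privacy comes from the fields the payload does *not* contain, which is why the function returns a fixed minimal dictionary rather than enriching the request with context.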
Ultimately, he lays out the future he wants to see: AI that lives on the user's device, works in the user's interest rather than a corporation's, and doesn't turn into yet another pipeline for personal data leaks.
