Codexes Factory: algorithmic tools to create, operate, distribute, and market entire publishing imprints. This week I am launching my first imprint, Xynapse Traces, with 66 books in the Korean pilsa (筆寫) style. Later in October comes Nimble Ultra, devoted to the history and practice of intelligence and espionage. Last week I built a giant collection of 575 imprints that are a shadow superset of the ~540 imprints operated by the Big Five publishing houses (Penguin Random House, the largest, has ~300). Teeny weeny tip of the iceberg at NimbleBooks.com.
AI Safety Event: DOGE is analyzing the entire Code of Federal Regulations this month with the goal of reducing it by 50% by Jan 20, 2026. [WaPo 7/26/25].
This is an actual AI governance crisis that is happening today.
Mitigation: Enable OSS analysis of CFR by all parties using same data with different goals.
As a proof of concept, I ran altDOGE v0.2 against 485 Community Living Administration regulations using DOGE-like prompts and identified likely areas for cuts. It appears to work as intended.
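The pass above can be sketched in a few lines. This is a minimal illustration, not the actual altDOGE v0.2 code: the prompt wording, the `recommend` heuristic standing in for a real LLM call, and the sample citations are all assumptions.

```python
"""Sketch of an altDOGE-style pass over a batch of regulations.

Illustrative only: a real run would send each prompt to a model
(e.g. a local model served by Ollama) instead of the keyword
heuristic used here to keep the sketch self-contained.
"""
from dataclasses import dataclass

# Assumed prompt shape, loosely in the spirit of "DOGE-like prompts".
PROMPT_TEMPLATE = (
    "You are reviewing a federal regulation for possible elimination.\n"
    "Regulation {cite}: {text}\n"
    "Answer CUT or KEEP with a one-sentence rationale."
)

@dataclass
class Recommendation:
    cite: str
    action: str      # "CUT" or "KEEP"
    rationale: str

def recommend(cite: str, text: str) -> Recommendation:
    """Stand-in for the LLM call; a trivial keyword heuristic."""
    prompt = PROMPT_TEMPLATE.format(cite=cite, text=text)  # what a model would see
    if "reporting requirement" in text.lower():
        return Recommendation(cite, "CUT", "Duplicative reporting burden.")
    return Recommendation(cite, "KEEP", "No obvious redundancy found.")

def run_pass(regs: dict[str, str]) -> list[Recommendation]:
    """Apply the same prompt to every regulation in the batch."""
    return [recommend(cite, text) for cite, text in sorted(regs.items())]

if __name__ == "__main__":
    sample = {  # invented citations, for illustration
        "45 CFR 1321.1": "Establishes a reporting requirement for state agencies.",
        "45 CFR 1321.3": "Defines terms used in this part.",
    }
    for rec in run_pass(sample):
        print(rec.cite, rec.action, "-", rec.rationale)
```

The point of the sketch is that the whole pipeline is just data plus prompts, which is why any party can rerun it with different goals.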
To continue this mitigation effort, the following is needed:
-- Policy participants with agency-specific expertise to carry out real validation.
In theory, Privacy Pass should help: content can be subpoenaed, but authorship cannot be traced. That protection is still thin, though (and Ollama does not support it anyway).
Per the Washington Post, DOGE is running LLM analysis against the Code of Federal Regulations (5B tokens) *this month* (August) with the goal of generating revision texts that will eliminate 50% of US regulations. This is a governance cataclysm happening right now. My proposal: many independent teams should analyze the same data and publish their recommendations.
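With many teams analyzing the same data, their cut/keep calls can be compared regulation by regulation. A minimal sketch of that comparison follows; the team names, votes, and the majority-vote threshold are invented for illustration.

```python
"""Sketch: tally cut/keep calls from multiple independent teams.

All data here is invented; the point is that divergent analyses of
the same regulations can be reduced to a per-regulation vote count.
"""
from collections import Counter

def consensus(votes_by_team: dict[str, dict[str, str]]) -> dict[str, Counter]:
    """Count CUT/KEEP votes per regulation across all teams."""
    tally: dict[str, Counter] = {}
    for team, votes in votes_by_team.items():
        for cite, action in votes.items():
            tally.setdefault(cite, Counter())[action] += 1
    return tally

def majority_cuts(votes_by_team: dict[str, dict[str, str]]) -> list[str]:
    """Regulations where more than half the teams voted CUT."""
    tally = consensus(votes_by_team)
    n = len(votes_by_team)
    return sorted(c for c, counts in tally.items() if counts["CUT"] > n / 2)

if __name__ == "__main__":
    votes = {  # invented example votes
        "team_a": {"45 CFR 1321.1": "CUT", "45 CFR 1321.3": "KEEP"},
        "team_b": {"45 CFR 1321.1": "CUT", "45 CFR 1321.3": "CUT"},
        "team_c": {"45 CFR 1321.1": "KEEP", "45 CFR 1321.3": "KEEP"},
    }
    print(majority_cuts(votes))  # -> ['45 CFR 1321.1']
```

Disagreements surface immediately: a regulation flagged by DOGE-like prompts but not by agency-expert teams is exactly where validation effort should go.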