Add code link, and fix broken br tag in a post

sepia 2025-07-30 19:42:53 -05:00
parent be87c90b47
commit 232855e952
2 changed files with 25 additions and 6 deletions


@@ -37,7 +37,7 @@
 </ul>
 <br />
 <p>
-This website was made with htmx and rust. I will not post the code unless
-you bother me about it, because I am lazy.
+This website was made with Elysia and HTMX. You can
+<a href="https://git.sepiatones.xyz/sepia/sepiatones_xyz">read the code</a>.
 </p>
 </article>


@@ -6,12 +6,31 @@
 <h1>Things AI Agents Should be Able to Do</h1>
 <h2>Write Their Own Helper Scripts</h2>
 <p>
-If I ask my agent to play chess with me (or a game that doesn't exist), it should write a basic chess engine, unprompted, in a sandboxed environment, which it can then use as a tool the next time it plays chess. Nobody has implemented this yet because it requires a lot of agency from the agent and would have a high error rate. I think it can be done effectively by limiting the agent to a single tech stack and a railroaded workflow for creating its own tools. Like a human software engineer, an AI agent should seek to automate tedious tasks.
+If I ask my agent to play chess with me (or a game that doesn't exist), it
+should write a basic chess engine, unprompted, in a sandboxed environment,
+which it can then use as a tool the next time it plays chess. Nobody has
+implemented this yet because it requires a lot of agency from the agent and
+would have a high error rate. I think it can be done effectively by limiting
+the agent to a single tech stack and a railroaded workflow for creating its
+own tools. Like a human software engineer, an AI agent should seek to
+automate tedious tasks.
 </p>
 <h2>Forget</h2>
 <p>
-Humans still have better memories than LLMs, because we forget. Agents can be fitted with a RAG pipeline to quickly memorize trivia, but if you have your agent remembering everything it does all day, the database gets cluttered with information that doesn't matter, and recall brings up a lot of garbage. This effect would be exacerbated if you fed an agent a firehose of information from your camera-glasses, which is exactly how much information your own monkey brain takes in.
-</br>
-Human memory should be used as a model: when information is repeated, we remember it more strongly, and if it's not, we forget. Additionally, humans spend time "alone with our thoughts," reflecting on our memories and creating new super-memories that are much denser with useful information (for example, you might remember "My friend James likes strawberry ice cream," a reflection that lets you throw away 200 individual memories of him ordering strawberry ice cream).
+Humans still have better memories than LLMs, because we forget. Agents can
+be fitted with a RAG pipeline to quickly memorize trivia, but if you have
+your agent remembering everything it does all day, the database gets
+cluttered with information that doesn't matter, and recall brings up a lot
+of garbage. This effect would be exacerbated if you fed an agent a firehose
+of information from your camera-glasses, which is exactly how much
+information your own monkey brain takes in.
+<br />
+Human memory should be used as a model: when information is repeated, we
+remember it more strongly, and if it's not, we forget. Additionally, humans
+spend time "alone with our thoughts," reflecting on our memories and
+creating new super-memories that are much denser with useful information
+(for example, you might remember "My friend James likes strawberry ice
+cream," a reflection that lets you throw away 200 individual memories of him
+ordering strawberry ice cream).
 </p>
 </article>
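
To make the "Write Their Own Helper Scripts" idea above concrete, here is a minimal sketch of what a railroaded tool-creation workflow could look like, assuming the Bun + TypeScript stack this site already uses. Every name in it (TOOL_DIR, saveTool, listTools, runTool) is hypothetical, not an existing API.

// Hypothetical sketch of a railroaded workflow for an agent that writes
// its own helper scripts. Assumes a Bun + TypeScript stack.
import { mkdir, writeFile, readdir } from "node:fs/promises";
import { spawnSync } from "node:child_process";

const TOOL_DIR = "./agent-tools"; // the sandboxed directory the agent may write to

// Persist a helper script the agent has written so later sessions can reuse it.
async function saveTool(name: string, source: string): Promise<void> {
  await mkdir(TOOL_DIR, { recursive: true });
  await writeFile(`${TOOL_DIR}/${name}.ts`, source);
}

// List saved tools so the agent checks for an existing helper before writing one.
async function listTools(): Promise<string[]> {
  const files = await readdir(TOOL_DIR).catch(() => [] as string[]);
  return files.map((f) => f.replace(/\.ts$/, ""));
}

// Run a saved tool as a subprocess; a single fixed runtime keeps this predictable.
function runTool(name: string, args: string[] = []): string {
  const result = spawnSync("bun", [`${TOOL_DIR}/${name}.ts`, ...args], {
    encoding: "utf8",
  });
  if (result.status !== 0) throw new Error(result.stderr); // surface failures to the agent loop
  return result.stdout;
}

The agent would call listTools() before starting a chess game, write the engine once with saveTool() if it is missing, and invoke it through runTool() on every later game.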
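
And for the "Forget" section, a minimal sketch of repetition-weighted decay plus reflection, again in TypeScript. MemoryStore, its one-week half-life, and the 0.25 recall threshold are all assumptions of mine, not a real library.

// Hypothetical sketch: repetition strengthens memories, silence decays them,
// and reflection consolidates many weak memories into one dense one.
interface Memory {
  text: string;
  strength: number; // grows each time the same information is repeated
  lastSeen: number; // timestamp of the most recent repetition
}

const HALF_LIFE_MS = 7 * 24 * 3600 * 1000; // assumed: strength halves after a week of silence

class MemoryStore {
  private memories = new Map<string, Memory>();

  // Repetition strengthens: storing the same text again bumps its strength
  // instead of adding a duplicate row.
  remember(text: string, now = Date.now()): void {
    const m = this.memories.get(text);
    if (m) {
      m.strength += 1;
      m.lastSeen = now;
    } else {
      this.memories.set(text, { text, strength: 1, lastSeen: now });
    }
  }

  // Exponential decay since the last repetition models forgetting.
  private decayedStrength(m: Memory, now: number): number {
    return m.strength * Math.pow(0.5, (now - m.lastSeen) / HALF_LIFE_MS);
  }

  // "Reflection": throw away many related memories and keep one
  // denser super-memory in their place.
  consolidate(related: string[], summary: string, now = Date.now()): void {
    let total = 0;
    for (const text of related) {
      total += this.memories.get(text)?.strength ?? 0;
      this.memories.delete(text);
    }
    this.memories.set(summary, { text: summary, strength: Math.max(total, 1), lastSeen: now });
  }

  // Recall silently drops anything that has decayed below usefulness.
  recall(threshold = 0.25, now = Date.now()): string[] {
    return [...this.memories.values()]
      .filter((m) => this.decayedStrength(m, now) >= threshold)
      .map((m) => m.text);
  }
}

Under this model, 200 "James ordered strawberry ice cream" entries collapse via consolidate() into one strong "My friend James likes strawberry ice cream" memory, and anything never repeated quietly decays out of recall().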