Compare commits
No commits in common. "232855e95213d86d955aefd009f71f32846e01e1" and "73d2e50be7e5b64962dd3f3eba3f53154eb443c3" have entirely different histories.
232855e952 ... 73d2e50be7
@@ -37,7 +37,7 @@
 </ul>
 <br />
 <p>
-This website was made with Elysia and HTMX. You can
-<a href="https://git.sepiatones.xyz/sepia/sepiatones_xyz">read the code</a>.
+This website was made with htmx and rust. I will not post the code unless
+you bother me about it, because I am lazy.
 </p>
 </article>
@@ -0,0 +1,8 @@
+<head>
+<meta name="title" content="Other Post" />
+<meta name="date" content="2024-09-24T00:00:00Z" />
+</head>
+<article>
+<h1>Welcome to another post!</h1>
+<p>another one!</p>
+</article>
@@ -0,0 +1,4 @@
+<article>
+<h1>Test</h1>
+<p>This is just a test!</p>
+</article>
@@ -1,36 +0,0 @@
-<head>
-<meta name="title" content="Things AI Agents Should Be Able to Do" />
-<meta name="date" content="2025-07-30T00:00:00Z" />
-</head>
-<article>
-<h1>Things AI Agents Should be Able to Do</h1>
-<h2>Write Their Own Helper Scripts</h2>
-<p>
-If I ask my agent to play chess with me (or a game that doesn't exist), it
-should write a basic chess engine, unprompted, in a sandboxed environment,
-which it can then use as a tool the next time it plays chess. Nobody has
-implemented this yet because it requires a lot of agency from the agent, and
-would have a bad error rate. I think it can be done effectively by limiting
-the agent to a single tech stack and a railroaded workflow for creating its
-own tools. Like a human software engineer, an AI agent should seek to
-automate tedious tasks.
-</p>
-<h2>Forget</h2>
-<p>
-Humans still have better memories than LLMs, because we forget. Agents can
-be fitted with a RAG to quickly memorize trivia, but if you have your agent
-remembering everything it does all day, the database gets cluttered with
-information that doesn't matter, and recall brings up a lot of garbage. This
-effect would be exacerbated if you fed an agent a firehose of information
-from your camera-glasses, which is exactly how much information your own
-monkey brain takes in.
-<br />
-Human memory should be used as a model: when information is repeated, we
-remember it more strongly, and if it's not, we forget. Additionally, humans
-spend time "alone with our thoughts" reflecting on our memories and creating
-new super-memories that are much denser with useful information (for
-example, you might remember "My friend James likes strawberry ice cream," a
-reflection that allows you to throw away 200 instances of memories of him
-ordering strawberry ice cream).
-</p>
-</article>
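
The deleted post's "Write Their Own Helper Scripts" section describes a railroaded workflow: generate a script, test it in a sandbox, and only then register it as a reusable tool. A minimal sketch of such a pipeline, assuming Python as the single tech stack; every name here is hypothetical and nothing below comes from the deleted post itself:

import subprocess
import tempfile
from pathlib import Path

TOOLS_DIR = Path("tools")  # hypothetical registry of agent-written helpers

def test_in_sandbox(source: str, timeout: float = 5.0) -> bool:
    """Run candidate tool code in a throwaway subprocess and report success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def register_tool(name: str, source: str) -> bool:
    """Persist a tested script so the agent can call it next time."""
    if not test_in_sandbox(source):
        return False  # a bad error rate is expected; failed tools are discarded
    TOOLS_DIR.mkdir(exist_ok=True)
    (TOOLS_DIR / f"{name}.py").write_text(source)
    return True

A plain subprocess with a timeout only bounds runtime; real isolation of untrusted agent-written code would need a container or similar sandbox.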
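The "Forget" section likewise sketches a concrete mechanism: repetition strengthens a memory, disuse decays it, and reflection consolidates many instances into one denser summary. A minimal sketch of that model, again with hypothetical names and assumed tuning constants:

import time
from dataclasses import dataclass, field

DECAY_PER_SECOND = 1e-5   # assumed tuning constant
FORGET_THRESHOLD = 0.1    # assumed cutoff for dropping weak memories

@dataclass
class Memory:
    text: str
    strength: float = 1.0
    last_seen: float = field(default_factory=time.time)

    def reinforce(self) -> None:
        """Repetition strengthens a memory, as the post argues."""
        self.strength += 1.0
        self.last_seen = time.time()

    def current_strength(self) -> float:
        """Strength decays over the time since the memory was last touched."""
        age = time.time() - self.last_seen
        return self.strength - DECAY_PER_SECOND * age

def forget(memories: list[Memory]) -> list[Memory]:
    """Drop memories that have decayed below the threshold."""
    return [m for m in memories if m.current_strength() > FORGET_THRESHOLD]

def consolidate(instances: list[Memory], summary: str) -> Memory:
    """Replace repeated memories with one denser 'super-memory', e.g. 200
    ice-cream orders become 'James likes strawberry ice cream'."""
    total = sum(m.current_strength() for m in instances)
    return Memory(text=summary, strength=total)

Consolidation here preserves the combined strength of the instances it replaces, so a frequently repeated fact survives forgetting even after its raw observations are thrown away.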