Coding Agents Have Hands But No Eyes

Source: DEV Community
Sebastian Raschka just published a clean taxonomy of coding agent components. Six categories: live repo context, prompt caching, structured tools, context reduction, memory, and resumption. It's solid engineering work. But read it carefully and you'll notice something: every component serves task completion. Not a single one serves perception.

The Hidden Assumption

Most agent frameworks start here: given a goal, decompose it into steps, execute. This is goal-driven architecture. You tell the agent to fix a bug, write a test, refactor a function. It doesn't need to perceive its environment, because you are its eyes. This works great for coding agents. The problem is when people assume this is what all agents look like.

What If the Agent Looks Before It Leaps?

Imagine a different starting point: the agent wakes up, scans its environment, and then decides what to do. No task was given. It asks: what changed? What needs attention? What's interesting? This is perception-driven architecture. The di
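The contrast can be sketched as two tiny loops. Everything below is hypothetical (the Event type, scan, decide, and execute are stand-ins, not any real framework's API); it's a minimal sketch of the two architectures, not an implementation of either:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A hypothetical observation from the agent's environment."""
    kind: str      # e.g. "test_failed", "file_changed"
    detail: str
    priority: int  # higher = more urgent

def run_goal_driven(steps, execute):
    """Goal-driven: a human supplies the plan; the agent just executes it."""
    return [execute(step) for step in steps]

def run_perception_driven(scan, decide, execute):
    """Perception-driven: the agent looks first, then chooses its own work."""
    events = scan()              # what changed? what needs attention?
    if not events:
        return []                # nothing interesting: stay idle
    chosen = decide(events)      # pick what actually matters
    return [execute(ev) for ev in chosen]

# --- toy usage with stub environment functions ---
def scan():
    return [Event("test_failed", "test_auth.py", priority=2),
            Event("file_changed", "README.md", priority=0)]

def decide(events):
    # attend only to high-priority observations, most urgent first
    return sorted((e for e in events if e.priority > 0),
                  key=lambda e: -e.priority)

def execute(item):
    detail = item.detail if isinstance(item, Event) else item
    return f"handled {detail}"

print(run_goal_driven(["fix bug", "write test"], execute))
print(run_perception_driven(scan, decide, execute))
```

Note where the work originates: in the first loop the step list comes from outside, while in the second the agent's own scan/decide pass produces it, and an empty scan means the agent does nothing at all.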