Why Developers Fail When Building First-Generation Agents: Common Pitfalls and Cryptographically Grounded Design Solutions
DOI: https://doi.org/10.32996/fcsai.2024.3.1.9

Keywords: Autonomous Agents, Zero Trust, Cryptography, Access Control, Verifiable Computation, LLM Security, Agent Alignment

Abstract
The development of autonomous software agents built on large language models (LLMs) and integrated with external tools marks a significant paradigm shift in computation. However, first-generation agent deployments often exhibit critical failure modes that undermine reliability and security. Grounded in established cryptographic and access control theory, this article analyzes five primary development pitfalls: reasoning failures, runaway loops, missing context, transient state loss, and faulty planning logic. We argue that these failures stem from an inadequate foundational separation between the agent's generative capability (the LLM) and its operational integrity (state, authorization, and execution). We propose a theoretical Verifiable Context-Aware Access Control (VCAAC) model that grounds trust not only in the agent's identity but also in verifiable proof of its state and of the computational path taken to reach a decision, using Zero-Knowledge Proofs (ZKPs) and distributed capabilities to mitigate the security risks inherent in autonomous execution. The discussion includes a realistic threat model and highlights scalability limitations; because the model is theoretical, expected benefits are discussed qualitatively rather than supported by performance measurements.
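To make the VCAAC idea concrete, the following is a minimal, hypothetical sketch in Python of a state-bound capability check. All names (Capability, issue, authorize, the issuer key) are illustrative rather than part of the model as published, and a plain hash commitment verified by the authorizer stands in for the ZKP of state described above; a real deployment would substitute an actual proving system and distributed capability issuance.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass

SECRET = b"issuer-signing-key"  # hypothetical authorizer key (illustrative only)

def state_commitment(state: dict) -> str:
    """Deterministic hash commitment to the agent's state.
    A stand-in for the ZKP-backed state proof described in the abstract."""
    canonical = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

@dataclass(frozen=True)
class Capability:
    """Unforgeable token granting one action, bound to a state commitment."""
    action: str
    commitment: str
    tag: str  # HMAC over (action, commitment), issued by the authorizer

def issue(action: str, state: dict) -> Capability:
    """Authorizer mints a capability tied to the agent's current state."""
    c = state_commitment(state)
    tag = hmac.new(SECRET, f"{action}|{c}".encode(), hashlib.sha256).hexdigest()
    return Capability(action, c, tag)

def authorize(cap: Capability, action: str, state: dict) -> bool:
    """Permit execution only if the token is authentic AND the agent's
    current state still matches the state it was issued against."""
    expected = hmac.new(SECRET, f"{cap.action}|{cap.commitment}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cap.tag, expected)
            and cap.action == action
            and cap.commitment == state_commitment(state))

state = {"goal": "summarize", "step": 3}
cap = issue("tool:web_search", state)
assert authorize(cap, "tool:web_search", state)        # state unchanged: allowed
state["step"] = 99                                     # transient state has drifted
assert not authorize(cap, "tool:web_search", state)    # stale token: denied
```

The design point the sketch illustrates is that a capability replayed after the agent's state drifts (the transient state loss pitfall) no longer authorizes execution, so trust attaches to the verified state rather than to the agent's identity alone.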
License
Copyright (c) 2024. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
