Human-AI Collaboration in Identity Security: When Should AI Decide?
DOI: https://doi.org/10.32996/jcsts.2025.7.7.17

Keywords: Decision authority frameworks, Human-AI collaboration, Identity security, Algorithmic fairness, Contextual risk assessment

Abstract
The integration of artificial intelligence into identity and access management presents both transformative opportunities and significant challenges for contemporary security frameworks. This article examines the critical question of decision authority allocation in AI-augmented security environments: determining when automated systems should independently make access determinations versus when human expertise remains essential. Through analysis of implementation case studies across financial services and healthcare sectors, the research identifies patterns of successful collaboration between algorithmic and human components of security ecosystems. The investigation reveals that optimal security outcomes emerge from thoughtfully designed frameworks that dynamically assign decision authority based on contextual risk factors, rather than static delegation models. Ethical dimensions receive particular attention, with privacy considerations, algorithmic fairness, and accountability mechanisms identified as critical success factors beyond technical implementation details. The article concludes with evidence-based recommendations for organizations implementing collaborative security models.
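The abstract's central contrast, dynamic context-driven authority assignment versus static delegation, can be sketched as a simple routing policy that escalates access decisions to humans as contextual risk rises. Everything below (the `AccessRequest` fields, risk weights, and thresholds) is an illustrative assumption, not the framework described in the article:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Hypothetical contextual factors for one access determination."""
    user_id: str
    resource_sensitivity: float  # 0.0 (public) .. 1.0 (highly sensitive)
    anomaly_score: float         # 0.0 (typical behavior) .. 1.0 (highly anomalous)
    model_confidence: float      # AI classifier confidence, 0.0 .. 1.0

def decision_authority(req: AccessRequest,
                       auto_threshold: float = 0.3,
                       escalate_threshold: float = 0.8) -> str:
    """Dynamically assign decision authority from contextual risk.

    Low-risk, high-confidence requests are decided autonomously by the AI;
    high-risk requests require a human; everything in between gets an AI
    decision subject to human review. Weights and cutoffs are illustrative.
    """
    risk = 0.5 * req.resource_sensitivity + 0.5 * req.anomaly_score
    if risk >= escalate_threshold:
        return "human_required"
    if risk < auto_threshold and req.model_confidence > 0.9:
        return "ai_autonomous"
    return "ai_with_human_review"
```

A static delegation model would hard-code the authority per resource class; the sketch instead recomputes it per request, which is the property the abstract attributes to successful implementations.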