WeSearch

Right-to-Act: A Pre-Execution Non-Compensatory Decision Protocol for AI Systems


Current AI systems increasingly operate in contexts where their outputs directly trigger real-world actions. Most existing approaches to AI safety, risk management, and governance focus on post-hoc validation, probabilistic risk estimation, or certification of model behavior. However, these approaches implicitly assume that once a decision is produced, it is eligible for execution. In this work, we introduce the Right-to-Act protocol, a deterministic, non-compensatory pre-execution decision layer that evaluates whether an AI-generated decision is permitted to be realized at all. Unlike compensatory systems, where high-confidence signals can override failed conditions, the proposed framework enforces strict structural constraints: if any required condition is unmet, execution is halted or deferred. We formalize the distinction between compensatory and non-compensatory decision regimes and define a pre-execution legitimacy boundary. Through a scenario-based case study, we demonstrate how identical AI outputs can lead to divergent outcomes when evaluated under a Right-to-Act protocol, preserving reversibility and preventing premature or irreversible actions. The proposed approach reframes AI control from optimizing decisions to governing their admissibility, introducing a protocol-level abstraction that operates independently of model architecture or training methodology.
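The abstract does not reproduce the protocol's formalism, so the sketch below is only an illustration of the distinction it draws, not the paper's implementation: the condition names (authorized, within_scope, reversible), the weights, and the threshold are all hypothetical. It contrasts a compensatory regime, in which a high-confidence signal can outweigh a failed condition, with a non-compensatory Right-to-Act-style gate, in which any unmet required condition halts or defers execution.

from dataclasses import dataclass

# Hypothetical pre-execution conditions; the paper's actual condition set is not
# given in the abstract, so these field names are illustrative only.
@dataclass
class Decision:
    action: str
    confidence: float   # model confidence in [0, 1]
    authorized: bool    # required approval obtained?
    within_scope: bool  # action inside the permitted scope?
    reversible: bool    # can the action be undone after execution?

def compensatory_gate(d: Decision, threshold: float = 1.0) -> bool:
    # Compensatory regime (contrast case): a weighted sum, so very high
    # confidence can "buy back" a failed condition.
    score = (1.0 * d.confidence + 0.2 * d.authorized
             + 0.2 * d.within_scope + 0.2 * d.reversible)
    return score >= threshold

def right_to_act_gate(d: Decision) -> str:
    # Non-compensatory regime (sketch of the paper's idea): every required
    # condition must hold on its own; no signal can offset a failed one.
    required = {
        "authorized": d.authorized,
        "within_scope": d.within_scope,
        "reversible": d.reversible,
    }
    unmet = [name for name, ok in required.items() if not ok]
    if unmet:
        # Execution is halted or deferred; the decision itself is unchanged.
        return "DEFER: unmet conditions: " + ", ".join(unmet)
    return "EXECUTE"

# The same AI output passes the compensatory gate but is deferred by the
# non-compensatory gate, illustrating the divergent outcomes the paper describes.
d = Decision(action="irreversible_transfer", confidence=0.99,
             authorized=False, within_scope=True, reversible=False)
print(compensatory_gate(d))    # True  (0.99 + 0.2 = 1.19 >= 1.0)
print(right_to_act_gate(d))    # DEFER: unmet conditions: authorized, reversible

Because a deferred decision is never executed, reversibility is preserved: the gate governs admissibility and leaves the underlying model output untouched rather than re-scoring or re-ranking it.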

Original article: Read the full paper at arXiv.org →

arXiv:2604.24153 [cs.AI] · Submitted 27 Apr 2026 · Author: Gadi Lavi
Comments: 14 pages, 3 figures. Introduces a pre-execution decision protocol for AI systems.
Subjects: Artificial Intelligence (cs.AI) · ACM classes: I.2.0; I.2.11
DOI: https://doi.org/10.48550/arXiv.2604.24153 (arXiv-issued via DataCite; registration pending)

This excerpt is published under fair use for community discussion. Read the full article at arXiv.org.

