The AI Medical Services Act: A Pro-Innovation Framework for Healthcare Access and Safety
Executive Summary
While states like California, Colorado, and Illinois are moving to ban or heavily restrict artificial intelligence, pro-innovation states have the opportunity to lead the nation with a different approach: governing AI through licensure rather than prohibition.
The AI Medical Services Act establishes a pragmatic, safety-first framework that treats advanced AI not as a dangerous product to be feared, but as a clinical service to be licensed. By creating the AI Augmented & Autonomous Service Provider (AAASP) license, this legislation allows a state to safely integrate AI into its healthcare system to solve provider shortages, lower costs, and ensure patient safety, all while positioning the state as the premier destination for healthcare innovation.
1. Solving the Access Crisis & Lowering Costs
States across the country face a structural healthcare crisis that human capital alone cannot solve. Rural hospitals are straining, and patients often wait months for specialist care.
- Closing the Gap: This Act allows licensed AI providers to handle routine diagnostics and screenings, freeing up our human doctors to focus on complex cases. This is critical for rural areas where specialists are scarce.
- Lowering Costs: By mandating a Value-Based Care default for AI services, the Act aligns incentives: AI providers are paid for keeping patients healthy and diagnosing accurately, not just for running more tests. This introduces competition that will drive down the cost of care for the state and private payers.
- Immediate Impact: Unlike federal programs that take years to implement, this state-based licensure allows safe, effective tools to be deployed right away to address immediate needs.
2. Safety First: Regulated Deployment, Not the “Wild West”
Critics fear that AI is unregulated. Under the status quo, they are right—consumer apps are currently flooding the market with zero oversight. This Act fixes that by bringing AI inside the regulatory tent.
- The “Provider” Model: Instead of trying to regulate complex code (which is a federal role), this Act regulates the service. If an AI acts like a doctor, it is licensed, insured, and regulated like a doctor.
- The Regulatory Sandbox: New entrants start with a Provisional License (2 years) under strict supervision. This allows the Medical Board to verify safety data before granting full access, ensuring that the technology safely scales.
- Liability & Accountability: The Act requires AI providers to carry malpractice insurance and bonding. If an AI makes a mistake, the patient has recourse—a protection that does not exist for unregulated consumer apps.
3. Economic Development: The “Anti-California” Strategy
While other places drive innovation away with red tape and bans, this Act signals that a state is open for business—but with high standards.
- Attracting Capital: By defining clear rules for liability, billing, and insurance, it creates “Market Certainty”. Companies will move to such states because they know the rules of the road, bringing high-tech jobs and investment to the state.
- The “Shot Clock”: To prevent bureaucratic delay, the Act guarantees a licensing decision within 90 days. This efficiency respects the time of innovators and ensures the government moves at the speed of business.
- Federal Reciprocity: The Act is designed to work with the FDA, not against it. It automatically recognizes federally cleared devices, creating a seamless environment for top-tier medical technology companies to operate in the state.
Conclusion
The choice is not between “no AI” and “unsafe AI.” The choice is between unregulated, outside-the-system use and regulated, safe integration.
The AI Medical Services Act chooses the latter. It asserts state sovereignty over the practice of medicine to protect patients, support rural healthcare networks, and allow bold states to become national leaders in responsible, pro-innovation healthcare policy.
Appendix: Regulatory Framework & Risk Categorization
The Act utilizes a dual-axis approach, combining AI autonomy levels with clinical severity to ensure oversight is proportionate to risk.
Regulatory Reference Table
Determines when a state AAASP License is required versus when a tool is Exempt.
| Condition Category | Informational (L0) | Advisory (L1) | Supervised Autonomous (L2) | Fully Autonomous (L3) |
|---|---|---|---|---|
| Preventive | Exempt | Exempt | Exempt (or Modifier L2)* | Modifier L3 Required |
| Chronic / Non-Critical | Exempt | Exempt | Modifier L2 Required | Modifier L3 Required |
| Critical & Time-Sensitive | Exempt | Modifier L1 Required | Modifier L2 Required | Modifier L3 Required |
*Supervised autonomous AI for preventive care is exempt unless the licensee will be ordering preventive labs, drugs, or devices, in which case an L2 modifier is required.
Understanding the Levels of Autonomy & Risk Tiers
- Modifier L0 (Informational/Advisory-Exempt): AI providing data or suggestions for non-critical conditions where human judgment is the primary driver.
- Modifier L1 (Advisory-Critical): AI guiding critical/time-sensitive decisions that substitute for independent judgment.
- Modifier L2 (Supervised Autonomous): AI authorized to execute clinical actions under human supervision.
- Modifier L3 (Fully Autonomous): AI authorized to independently diagnose, treat, or prescribe.
Clinical Risk Tiers:
- Preventive: Low-risk interventions for disease prevention or health maintenance.
- Chronic / Non-Critical: Management of persistent conditions where delays do not threaten life.
- Critical & Time-Sensitive: High-acuity states requiring immediate, life-preserving intervention.
Licensure Categories:
- Class A (State Clinical Service): Regulated as a professional service under state authority (e.g., LDT-style services).
- Class B (Federal Device Reciprocity): For AI that has achieved FDA clearance/approval (SaMD).
- Class C (Therapeutic & Support): For non-diagnostic therapy, coaching, or monitoring based on existing referrals.
Alignment with the FDA Approach
This framework is “smart” because it adopts the same philosophy of risk-based stratification found in federal device regulation while applying it to clinical practice.
- Least Restrictive Means: The Board is legally mandated to use the least burdensome regulation to address a specific, documented risk, preserving the “pro-innovation” stance while maintaining safety.
- Complementary Classification: Just as the FDA classifies devices (Class I, II, III) based on risk to the patient, this Act classifies the delivery of the service. It does not attempt to re-regulate the software itself if it is already an FDA-approved device (Class B License).
- Validated Competency: Like the FDA’s requirement for clinical evidence, the Act requires AI models to operate only within their “validated technical specifications” and “intended use case”.
- Post-Market Surveillance: The Act mandates Annual Performance Reports and “Model Drift” monitoring. This aligns with the FDA’s total product life cycle (TPLC) approach, ensuring the AI remains safe after it enters the market.
