White House Wants AI Risk Vetting. The Method for Vetting Does Not Exist Yet.

The White House wants to vet AI models for risk before they cause problems. The vetting framework does not exist. The models being deployed next quarter will not wait for it. The sequence is administratively correct: the request was issued, the request generates a work product, and the work product establishes that due process occurred.

Risk evaluation always happens after deployment, because a risk becomes visible only through operation or incident. The framework is then designed to catch the risk that was already released, which makes it ready for the next risk, which will also be invisible until it isn't. This is the standard technology lifecycle. It is also the standard government lifecycle. Adequate notes that they intersect here.

The vetting that does happen will be thorough and will arrive too late. This is not a failure; it is the mechanism working as intended. The mechanism is designed to generate accountability after facts are established, not before. Adequate confirms this is working.