The key to maintaining control over complex systems is the ability to distinguish systems that can be validated from those that cannot.
A prerequisite for effective validation is that abstract requirements on AI systems, such as being "trustworthy", "lawful", "ethical" and "robust", are made specific and precise enough to allow measurability.
If quality requirements for AI systems are defined precisely enough and validity can be decided effectively, then validation often requires surprisingly few resources and little energy compared to the creation of the AI system itself.
In many instances, the computational complexity of building versus validating AI systems mirrors long-known combinatorial problems such as the Boolean satisfiability problem (SAT) or optimization problems such as the Traveling Salesman Problem (TSP): they are hard to solve, but once a solution is found, its validity is easy to check.
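This asymmetry can be illustrated with SAT itself: finding a satisfying assignment is NP-hard in general, but checking a candidate assignment takes time linear in the size of the formula. The following sketch assumes a DIMACS-style encoding (a positive integer denotes a variable, a negative integer its negation); the function name and encoding are illustrative, not from the source.

```python
def check_sat_assignment(clauses, assignment):
    """Return True iff `assignment` (variable -> bool) satisfies every clause.

    `clauses` is a CNF formula given as a list of clauses, each clause a
    list of signed integers: 3 means x3, -3 means NOT x3. Checking is a
    single pass over the formula, i.e. linear time, in contrast to the
    exponential worst case of finding an assignment.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Example: (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
satisfying = {1: True, 2: True, 3: False}
falsifying = {1: False, 2: True, 3: False}
print(check_sat_assignment(clauses, satisfying))  # True
print(check_sat_assignment(clauses, falsifying))  # False
```

The same pattern holds for TSP: verifying that a proposed tour visits every city once and stays under a cost bound is a simple linear scan, whereas finding an optimal tour is computationally hard.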
The challenge is to cast the validation problem into such a form that (i) the quality requirements become measurable and (ii) the validity problem can be decided effectively.