Before I specify a single component or touch an existing framework, I ask the same fundamental questions (every time, for every system):
"What is the fundamental purpose of the system?"
(Not what the code dictates, but what the underlying concepts are – what is the system "meant to be"?)
"How does it know anything? What is the structure of its relationship to the world it is trying to understand?"
(How does it structure its own knowledge, and by extension – its acquisition of further knowledge?)
"What is the ultimate goal of the system? What does success look like at the most fundamental level?"
(What is the system actually trying to achieve, and how would we know if it has indeed achieved this?)
Most AI systems have implicit answers to these questions buried somewhere within them. The difference is whether those answers were built from the ground up or inherited from existing ways of thinking and prior assumptions. Either case can yield right answers, wrong answers, or a mix of both (i.e., internal inconsistencies)…
Inherited answers produce inherited constraints. A system built on borrowed assumptions will likely drift, because the decisions that shaped it cannot be directly examined, scrutinised, defended, or rejected in the context of my fundamental questions.