JOSEPH ADRIAN WALKER

Cognitive Systems
Architect

I design the architecture.

Your teams build from it.

The Problem With How AI Systems Get Built

Before I specify a single component or touch an existing framework, I ask the same fundamental questions (every time, for every system):

"What is the fundamental purpose of the system?"

(Not what the code dictates, but what the underlying concepts are: what is the system "meant to be"?)

"How does it know anything? What is the structure of its relationship to the world it is trying to understand?"

(How does it structure its own knowledge, and by extension – its acquisition of further knowledge?)

"What is the ultimate goal of the system? What does success look like at the most fundamental level?"

(What is the system actually trying to achieve, and how would we know if it has indeed achieved this?)

Most AI systems have implicit answers to these questions buried somewhere within them. The difference lies in whether those answers were built from the ground up or inherited from existing ways of thinking and prior assumptions. Either case can yield right answers, wrong answers, or a mix of both (i.e., internal inconsistencies).

Inherited answers produce inherited constraints. A system built on borrowed assumptions will likely drift because the decisions that shaped it cannot be directly examined, scrutinised, defended or rejected in the context of my fundamental questions.

My Approach to Building AI Systems

How I Think

Once honest answers to the fundamental questions are defined, they can be translated into conceptual allegories describing how the system is meant to work. Allegory serves as a structural diagnostic tool: a method of reasoning that mirrors the internal logic of any problem. If a problem can be solved allegorically, it can be implemented in the architecture.

This is what Richard Feynman meant when he refused to accept inherited explanations: One does not ask how existing systems handle a problem. One asks what the problem actually is, and then derives solutions.

The field of AI development often has this precisely backwards. Projects generally start with whatever formal mathematical tools and existing architectural frameworks are available, then attempt to work backwards from the outputs to answer the fundamental questions. My approach starts with a fundamental understanding of what the system "should be" (its governing principle), how it "implements this being" (its qualitative architectural structure), and what its "objectives" are (its outputs). The mathematics is not the starting point; it arises naturally as a consequence of a solid foundation.

This is also why my approach is not tied to any one domain. My WORLDSEED system is just a detailed case study. FinTech platforms, EdTech architectures, legal inference engines: each will have different answers to the same fundamental questions, and each will demand a different architectural framework. What transfers across every domain is not the answer; it is the discipline of asking.

What This Produces

As a case study of this allegorical approach, I present my own system, WORLDSEED, a cognitive architecture. I recognised that calculus, the branch of mathematics describing accumulated change over time and the rate at which that change takes place, could address fundamental problems prevalent in modern AI systems: forgetfulness, lack of system "self-knowledge", and shallowness of meaning.

Accumulated memories are not simply a collection of phrases and keywords. A cognitive system would instead need to represent a holistic "integral curve" (from calculus) of cumulative memories, with each memory an infinitesimally small but still uniquely discrete point on a continuum, and the area under the curve representing the cumulative experience of the system. Such memories would also need to be truly meaningful, to solve the shallowness problem, and their meanings would need to be contextualised rather than bound to specific time-points, which would solve the self-knowledge problem. This further allegory was borrowed from how neurobiology understands higher-order cognition. At that point I went even further back, to first principles from philosophy, and asked the rather abstract question: what is meaning?
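The integral-curve allegory can be illustrated with a minimal discrete sketch: each memory contributes a weighted point, and a running (Riemann-style) sum plays the role of the area under the curve. The names `Memory` and `salience` are illustrative assumptions for this sketch, not part of WORLDSEED itself.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str      # what was experienced (illustrative)
    salience: float   # weight of that experience (illustrative)

def cumulative_experience(memories):
    """Discrete analogue of the area under the memory curve:
    a running sum of salience across the sequence of memories."""
    total, curve = 0.0, []
    for m in memories:
        total += m.salience
        curve.append(total)
    return curve

trace = [Memory("greeting", 0.2), Memory("lesson", 0.9), Memory("recap", 0.4)]
print(cumulative_experience(trace))
```

Each point on the returned curve is a discrete memory; the final value stands in for the system's accumulated experience to date.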

If all of this could be implemented cohesively, it would resolve both the shallowness problem and the forgetfulness problem, because time would dissolve as a constraint. My next mathematical allegory was therefore geometry, which deals with measuring distances between objects, whether abstract or real-world. If the answers I proposed could be modelled as differential equations from the calculus allegory, I could build upon that and extend it into the geometric allegory: pure geometry would then allow me to cluster memories according to their meanings. Time becomes inconsequential, not as an engineering feature, but as a direct consequence of how the architecture was derived.
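The geometric allegory can be sketched as clustering by distance in a meaning-space. The toy two-dimensional "meaning vectors" and the threshold below are invented for illustration (real systems would use learned embeddings); the point is that grouping depends only on geometric proximity, never on when a memory occurred.

```python
import math

# Hand-made stand-ins for learned meaning vectors (illustrative values).
memories = {
    "dog barked":       (0.9, 0.1),
    "cat meowed":       (0.8, 0.2),
    "invoice was paid": (0.1, 0.9),
    "receipt arrived":  (0.2, 0.8),
}

def cluster_by_meaning(vectors, threshold=0.5):
    """Greedy geometric clustering: a memory joins the first cluster
    whose centroid lies within `threshold`. Temporal order is
    irrelevant; only distance in meaning-space matters."""
    clusters = []  # list of (centroid, member_keys)
    for key, vec in vectors.items():
        for i, (centroid, members) in enumerate(clusters):
            if math.dist(vec, centroid) < threshold:
                members.append(key)
                n = len(members)
                # incrementally update the cluster centroid
                centroid = tuple((c * (n - 1) + v) / n for c, v in zip(centroid, vec))
                clusters[i] = (centroid, members)
                break
        else:
            clusters.append((vec, [key]))
    return [members for _, members in clusters]

print(cluster_by_meaning(memories))
# e.g. groups the two animal memories together and the two finance memories together
```

Reordering the input memories in time yields the same semantic groupings, which is the sense in which time "dissolves as a constraint" in this sketch.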

The mathematics followed from philosophy and critical thinking, not the other way around.

Thus, all three challenges were resolved: Shallowness, forgetfulness, and self-knowledge. Without touching a line of code.

Three Documents

The methodology is documented in three pieces. They are written to be understood by anyone who needs to make a decision — not by anyone who wants to be impressed.

Three documents. No pitch. No deck. The work speaks directly.

Let's Get to Work

I am a published philosopher-scientist with a background spanning emergence theory, systems biology, and computational semiotics. My development methodology is grounded in a formal triad — ontological, epistemological, teleological — applied before any architectural decision is made.

I work with teams building serious AI systems: the ones where getting the architecture wrong costs months, not sprints. I am not an implementation resource. I am the prior step that makes implementation coherent.

If you are building something that requires the architecture to be right — not just working — I want to hear about it.

Joseph Adrian Walker

Cognitive Systems Architect  ·  Magus Computational Technologies

Peer-reviewed published philosopher & scientist  ·  BSc Hons (Biochemistry), PGCE  ·  Founder: Magus Computational Technologies