James Maconochie

    The Architecture of Intelligence and Reality

    From evolutionary computation to human consciousness, exploring how systems construct models of reality and pursue wisdom.

    Beyond FLOPS: The Evolutionary Processing Unit

    Toddlers outperform AI at world modeling because evolution equipped them with 4 billion years of optimized survival instincts and accelerated learning.

    Read the Paper →

    Attention Is All We Have

    AI's attention breakthrough reveals a universal truth: intelligence is about filtering signal from noise to make wise decisions that enhance our lives and fulfillment.

    Read the Paper →

    The Attention Crisis: Language, Meaning, and AHI

    When infinite language collides with finite attention, shared reality fragments. This paper proposes Augmented Human Intelligence as the architectural solution to preserve human judgment in an age of machine-generated content.

    Read the Paper →

    Beyond Scale: A Modular Architecture for Adaptive AI

    The end of the scaling era: towards biologically inspired AI architectures that enable human-AI collaboration exceeding what either can achieve alone.

    Read the Paper →

    The Architecture of Language

    How language, humanity's first invention and the force that made us who we are today, has reached a tipping point that could reverse our fortunes.

    Read the Paper →

    The Mastery of Life Framework

    Are you living deliberately or by default? A framework for cutting through endless options to find what actually matters.

    Read the Paper →

    Augmented Human Intelligence: The Path Forward

    The next evolutionary leap: human-AI synergy creating a virtuous cycle where each elevates the other toward wisdom neither could achieve alone.

    The Intellectual Foundation

    This curated collection reveals a continuum: from evolutionary computation that shaped biological intelligence, to architectural principles that can guide artificial intelligence, to practical wisdom for human decision-making. Each book contributed essential insights to this unified framework exploring intelligence across domains.

    The chronological arrangement shows how thinking evolved from technical foundations to philosophical implications, mirroring the journey from computational principles to human meaning.

    My Intellectual Journey

    Beyond FLOPS: The Evolutionary Processing Unit

    This paper challenges the prevailing AI paradigm by reframing Artificial General Intelligence through evolutionary principles. The Evolutionary Processing Unit (EPU) represents 4 billion years of computational optimization, whose output is the modular, plastic architecture I call the Biological Processing Unit (BPU).

    Core Insight: Brute-force scaling is fundamentally misaligned with the architectural principles evolution derived for reasoning, adaptation, and understanding. Future breakthroughs require understanding evolution's output rather than replicating its computational effort.

    How This Fits: Establishes the evolutionary foundation that informs all subsequent papers, showing that intelligence emerged through architectural innovation, not raw computation.

    📄 Read Full Abstract

    Abstract: The prevailing paradigm in artificial intelligence research suggests that Artificial General Intelligence (AGI) is achievable primarily through the scaling of computational resources, model parameters, and training data. This paper challenges that view by reframing the AGI challenge in terms of evolutionary principles. We present a thought experiment that contrasts the cumulative computational effort of the evolutionary process, as represented by the Evolutionary Processing Unit (EPU), with the capabilities of modern supercomputing. The analysis suggests that brute-force scaling is not only inefficient but fundamentally misaligned with the architectural principles that evolution derived. We argue that future breakthroughs will stem from a deeper understanding of the EPU's output: the modular, plastic, and causally grounded architecture of the Biological Processing Unit (BPU), in this case, the human brain, which evolved to navigate the very challenges of reasoning, adaptation, and understanding that current AI systems lack.

    This whitepaper integrates foundational ideas from Beyond Scale: Towards Biologically Inspired Modular Architectures for Adaptive AI, The Mastery of Life, and Attention Is All We Have, establishing a cohesive framework for developing intelligent systems inspired by four billion years of evolutionary optimization.

    Download Full Paper

    Attention Is All We Have

    The 2017 Transformer revealed attention as AI's breakthrough mechanism. This paper argues that attention is actually a universal resource allocation strategy, optimized by evolution over 4 billion years, that operates across artificial systems, biological intelligence, and human consciousness.

    Core Thesis: Attention is both the mechanism through which consciousness operates and the moral foundation of intelligence. It solves the fundamental problem of prioritizing limited resources in service of goals, whether computational or cognitive.

    How This Fits: Provides the unifying mechanism that connects evolutionary principles (EPU) to AI architecture (Beyond Scale) and human application (MOL), showing how selective focus enables intelligence across domains.

    📄 Read Full Abstract

    Abstract: In 2017, researchers at Google introduced the Transformer model in a paper titled Attention Is All You Need, demonstrating that artificial intelligence performance depends less on architectural complexity and more on selective focus, or the ability to prioritize what matters and disregard what does not. This paper argues that the same principle governs both human intelligence and fulfillment, and that attention, the selective allocation of cognitive and emotional resources, is the shared architectural foundation underlying both human and machine intelligence.

    Drawing on the Evolutionary Processing Unit (EPU) framework and its product, the Biological Processing Unit (BPU), we demonstrate that attention is not merely a computational mechanism, but a fundamental resource allocation strategy that evolution has optimized over the past four billion years. Building on the Mastery of Life framework and Beyond Scale architecture, this paper proposes that attention is both the mechanism through which consciousness operates and the moral foundation of intelligence. We distinguish between computational attention (in AI systems) and cognitive attention (in human experience), while identifying their shared principle: both solve the problem of prioritizing limited resources in service of goals. Where the Transformer revolutionized computation, the deliberate direction of human attention by understanding the BPU and applying the Mastery of Life (MOL) framework may yet transform our understanding of what it means to live wisely.
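    The abstract's central claim, that attention is a resource allocation strategy, can be seen in miniature. Below is a minimal sketch of the scaled dot-product attention at the heart of the Transformer (illustrative code, not from the paper): a softmax turns raw relevance scores into a fixed budget of focus that sums to one, so attending more to one item necessarily means attending less to the others.

    ```python
    import numpy as np

    def attention(queries, keys, values):
        """Scaled dot-product attention: weight each value by how relevant
        its key is to the query, allocating a fixed budget of focus."""
        d_k = queries.shape[-1]
        scores = queries @ keys.T / np.sqrt(d_k)           # relevance of each key to each query
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax: weights sum to 1
        return weights @ values, weights

    # One query attending over three items: most focus goes to the closest key.
    q = np.array([[1.0, 0.0]])
    k = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
    v = np.array([[1.0], [2.0], [3.0]])
    out, w = attention(q, k, v)
    ```

    The softmax makes the trade-off explicit: the weights are a zero-sum budget, which is exactly the scarcity the paper identifies in both machine and human attention.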

    Download Full Paper

    The Attention Crisis: Language, Meaning, and the Architecture of Augmented Human Intelligence

    For the first time in history, humanity produces more language than we can process: 15-70 trillion tokens daily. This paper argues we're entering a global attention crisis where infinite language overwhelms finite human attention, destabilizing the intersubjective reality that enables cooperation, democracy, and shared meaning.

    Core Insight: The solution isn't Artificial General Intelligence as an oracle, but Augmented Human Intelligence—systems designed to strengthen human judgment rather than replace it. AHI treats attention as the scarcest cognitive resource, helping us allocate it wisely and preserve our capacity for reflection.

    How This Fits: Applies attention theory to our current societal crisis, showing how modular AI architectures (Beyond Scale) can be designed specifically to augment human cognition rather than overwhelm it, creating the foundation for practical AHI systems.

    📄 Read Full Abstract

    Abstract: Humanity's most significant evolutionary advantage has always been language: the ability to create shared meaning, coordinate at scale, imagine futures that do not yet exist, and cooperate in ways no other species can. But for the first time in history, the rate at which language is produced has exceeded the rate at which humans can meaningfully process it. We now generate an estimated 15-70 trillion tokens of text per day. Large language models accelerate this further by producing new content at negligible cost and at speeds that dwarf anything humans can match. The result is not simply information overload. It is the erosion of our intersubjective reality: the shared layer of beliefs, norms, meaning, and trust upon which all societies depend.

    This paper argues that we are entering a global attention crisis: a mismatch between the infinite production of language and the finite capacity of human attention. This mismatch destabilizes the foundation of shared knowledge that democratic institutions require, corrodes trust, accelerates the spread of manufactured realities, and overwhelms the neurological systems that support deliberate thought. Historical precedents, from the Malleus Maleficarum to the Rohingya genocide, reveal a recurrent pattern. When new language technologies outpace societal adaptation, the resulting distortion of shared reality enables harm at scale.

    Yet there is a path forward. Instead of pursuing Artificial General Intelligence as an oracle of truth, we argue for Augmented Human Intelligence (AHI): systems designed not to replace human judgment but to strengthen it. AHI treats attention as the scarcest and most valuable cognitive resource. It provides context, identifies manipulation, surfaces what matters, and widens rather than narrows our informational horizons. In doing so, AHI supports the most fragile yet crucial component of human cognition: the prefrontal cortex's capacity for reflection, restraint, and wise action.

    Download Full Paper

    Beyond Scale: A Modular Architecture for Adaptive AI

    This paper proposes an alternative AI architecture inspired by evolutionary neuroscience: modular systems with specialized components coordinated by dynamic executive function, designed for continuous adaptation rather than periodic retraining.

    Architectural Innovation: Drawing on EPU principles, we propose four core capabilities: modular orchestration, causal reasoning, continuous plasticity, and resource-constrained attention allocation, thereby creating systems that enhance rather than replicate human intelligence.

    How This Fits: Applies evolutionary insights (EPU) and attention mechanisms to create practical AI architectures that enable human-AI collaboration rather than replacement.

    📄 Read Full Abstract

    Abstract: Current approaches to artificial general intelligence (AGI) focus primarily on scaling large language models (LLMs) through increased parameters, training data, and computational resources. However, this paradigm faces fundamental limitations: energy consumption required for training grows exponentially, training cycles remain static, and systems lack the adaptive plasticity that characterizes natural intelligence. This paper proposes an alternative architecture inspired by evolutionary neuroscience: a modular AI system with specialized components coordinated by a dynamic executive function, all designed for continuous adaptation rather than periodic retraining.

    Drawing on the Evolutionary Processing Unit (EPU) framework, which demonstrates that evolution achieved intelligence through architectural innovation rather than raw computational scale, we argue that the path to AGI, or perhaps more achievable, Augmented Human Intelligence (AHI), requires fundamentally different approaches that mirror the distributed, plastic architecture of the Biological Processing Unit (BPU). We propose four core principles: modular orchestration, causal reasoning, continuous plasticity, and resource-constrained attention allocation. Drawing on cognitive science, neurobiology, and decision theory, we present a conceptual framework and phased development roadmap for building AI systems that enhance rather than merely replicate human intelligence. The key contributions of this architecture are its dynamic executive orchestration, multi-level continuous plasticity, and built-in mechanisms for bias correction and value alignment, offering a more efficient and robust path beyond pure scaling.
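    Two of the four proposed principles, modular orchestration and resource-constrained attention allocation, can be sketched together. The following toy code is a hypothetical illustration (module names, scoring, and the budget are invented, not from the paper): an executive ranks specialized modules by relevance to a task and activates only as many as a fixed attention budget allows.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Module:
        """A specialized component that advertises its relevance to a task."""
        name: str
        relevance: Callable[[str], float]
        handle: Callable[[str], str]

    def orchestrate(task: str, modules: List[Module], budget: int = 2) -> List[str]:
        """Executive function: rank modules by relevance, run only the top `budget`."""
        ranked = sorted(modules, key=lambda m: m.relevance(task), reverse=True)
        return [m.handle(task) for m in ranked[:budget]]

    # Toy modules keyed on keywords; names are illustrative only.
    modules = [
        Module("vision",   lambda t: 1.0 if "image" in t else 0.0,
               lambda t: "vision: analyzed image"),
        Module("language", lambda t: 1.0 if "text" in t else 0.1,
               lambda t: "language: parsed text"),
        Module("planner",  lambda t: 0.5,
               lambda t: "planner: drafted plan"),
    ]
    results = orchestrate("summarize this text", modules, budget=2)
    ```

    The budget parameter is the resource constraint: irrelevant modules stay dormant, which is the efficiency argument the abstract makes against monolithic scaling.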

    Download Full Paper

    The Architecture of Language: When Language Outruns Reality

    Language is humanity's first invention, the user interface for intersubjective reality. It enabled cooperation beyond kinship, created shared worlds of meaning, and made us who we are today. But we never had shared truth. We had shared constraints on narrative production.

    Core Insight: Truth isn't dead; truth is outnumbered. The constraints that once throttled language production have vanished: bandwidth was limited, bottlenecks filtered signal from noise, locality enforced accountability, and friction imposed costs on deception. Now everyone broadcasts, algorithms optimize for engagement over coherence, and LLMs add infinite, frictionless linguistic output.

    How This Fits: This paper synthesizes the attention crisis with deeper evolutionary principles, examining language itself as an adaptive interface subject to environmental change. When the environment shifts faster than the interface can adapt, the system fails, not through any single breakdown, but through overwhelming volume.

    📄 Read Full Abstract

    Abstract: Language is humanity's foundational technology, the interface through which we construct social reality and enable large-scale cooperation. For millennia, this interface was stabilized by four external constraints: throughput, bottlenecks, locality, and friction. This paper argues that the digital age has systematically removed these constraints, culminating in large language models that eliminate the final barrier: the cognitive cost of language generation. The result is a systemic serviceability failure: an infinite-language world overwhelms finite human attention, fracturing intersubjective reality, driving cognitive overload, intensifying the futile demand for authenticity, and widening the gap between knowledge and wisdom.

    Rather than proposing a nostalgic reinstatement of external controls, we frame wisdom as constraint-awareness, the cultivated ability to recognize the limits of our linguistic interface and voluntarily adopt practices that restore functionality. The path forward lies not in building smarter oracles, but in nurturing wiser humans capable of stewarding attention and rebuilding the architecture of shared meaning in a post-constraint world.

    Download Full Paper

    The Mastery of Life Framework

    Modern life offers boundless opportunity but limited clarity. This paper introduces a practical framework for identifying what truly matters, tracking progress with intention, and adapting to life's evolving needs through awareness, attention, and adaptation.

    Practical Application: By organizing life into core domains and personalized metrics, MOL helps individuals move from reactivity toward deliberate living, applying attention theory and modular principles from AI to personal development.

    How This Fits: Demonstrates how evolutionary principles discovered by the Biological Processing Unit can be deliberately applied to enhance human fulfillment, completing the loop from computation to consciousness.

    📄 Read Full Abstract

    Abstract: Modern life offers boundless opportunity but limited clarity. We chase success, balance, and happiness without a coherent framework for understanding what truly matters or how these forces interact with one another. This paper introduces the Mastery of Life (MOL) framework, a practical and theoretically grounded model for identifying what truly matters, tracking progress with intention, and adapting to life's evolving needs.

    Drawing on behavioral science, decision theory, and systems thinking, the framework proposes that mastery is not about control, but rather about awareness, attention, and adaptation. By organizing life into seven core domains derived from established well-being research and measuring 8-12 personalized metrics, MOL helps individuals move from reactivity toward deliberate living, a shift from unconscious motion to conscious direction.

    This work contributes to the emerging field of computational well-being frameworks by applying attention theory and modular architectural principles from artificial intelligence to personal development, creating a bridge between cognitive science and practical life management. The framework integrates with a broader research program exploring attention as the fundamental resource allocation mechanism in both biological and artificial intelligence, demonstrating how evolutionary principles discovered by the Biological Processing Unit (BPU) can be deliberately applied to enhance human flourishing.
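    The mechanics of "organizing life into seven core domains and measuring 8-12 personalized metrics" can be sketched in a few lines. This is a hypothetical illustration of the idea, not the MOL implementation; the domain names and scores below are invented, and the paper's actual domains come from well-being research.

    ```python
    from typing import Dict, List

    def domains_needing_attention(scores: Dict[str, float],
                                  threshold: float = 0.5) -> List[str]:
        """Return the life domains scoring below threshold, weakest first,
        so deliberate attention goes where it matters most."""
        lagging = [d for d, s in scores.items() if s < threshold]
        return sorted(lagging, key=lambda d: scores[d])

    # Illustrative domain scores on a 0-1 scale (names invented for the sketch).
    scores = {
        "health": 0.8, "relationships": 0.4, "work": 0.7,
        "learning": 0.6, "finances": 0.3, "play": 0.55, "meaning": 0.65,
    }
    focus = domains_needing_attention(scores)
    ```

    Here `focus` surfaces the two weakest domains first, mirroring the framework's shift from reactive drift to a deliberate allocation of attention.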

    Download Full Paper

    Augmented Human Intelligence: The Synthesis

    This framework culminates in a vision for human-AI collaboration that transcends artificial general intelligence. By leveraging evolutionary principles, attention mechanisms, and modular architectures, we can create systems where human and artificial intelligence elevate each other.

    The Path Forward: Rather than pursuing AGI through pure scaling, we should strive for Augmented Human Intelligence, creating collaborative systems that exceed what either human or artificial intelligence can achieve alone through mutual enhancement.

    The Integration: This represents the practical synthesis of evolutionary computation, architectural innovation, and human wisdom, showing how intelligence principles apply across biological and artificial domains.

    Explore the Full Integration
    Discuss Collaboration
    © 2026 James Maconochie. Powered by Jekyll & Minimal Mistakes.