Counterintuitive AI's Solution to Generative AI's 'Twin Traps' Problem

Artificial Intelligence

Counterintuitive AI addresses generative AI's 'Twin Traps': issues with non-reproducible floating-point math and memoryless models. The company proposes a new approach using deterministic mathematics, an Artificial Reasoning Unit (ARU), and a full reasoning stack to build transparent, auditable, and energy-efficient AI systems.

Despite rapid advancements in generative AI technology over recent years, fundamental problems within its underlying architecture continue to pose significant limitations. Counterintuitive AI, a company dedicated to reinventing the AI reasoning stack, identifies these issues as the "Twin Traps" of current large language model (LLM) technology.

Gerard Rego, founder of Counterintuitive AI, brings a wealth of experience from both industry and academia, having held leadership roles at Nokia, GM India, and MSC Software, and served as a fellow at Stanford University, The Wharton School of Business, and Cambridge University.

Rego explains that the first of these "Twin Traps" stems from modern LLMs' reliance on floating-point arithmetic. While optimized for performance, this mathematical foundation inherently lacks reproducibility. Because most fractions cannot be represented exactly in binary, every operation rounds to the nearest representable number, introducing rounding drift; and because floating-point addition is not associative, the order in which hardware combines partial results also changes the answer. Together, these effects often lead to the same computation yielding different results across runs or machines.
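The non-reproducibility Rego describes is easy to demonstrate in a few lines of Python: floating-point addition is not associative, so merely regrouping the same three numbers changes the rounded result.

```python
# Floating-point addition is not associative: regrouping the same
# operands produces different rounded results. This is why parallel
# hardware that sums values in different orders can give different answers.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)  # False
```

At scale, with billions of such operations distributed across many GPUs, tiny discrepancies like this accumulate into the run-to-run variance the article describes.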

"Imagine you have 2 to the power of 16 digits," Rego illustrates. "Every time you run the machine, you're going to pick up one of the possibilities in that number. So let's say this time it picks up the 14th digit and answers you. You are going to say, 'this is a little different from the previous answer.' Yes, because it's probabilistic math; the number might be similar, but it's not reproducible."

The second critical issue is the memoryless nature of current AI models, which operate on a principle called Markovian Mimicry. This approach draws conclusions based solely on the current state, neglecting past history – for instance, predicting the next word in a sentence based only on the preceding word. Consequently, these models predict tokens without retaining the reasoning process that led to their output.
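A toy first-order Markov chain makes this memorylessness concrete: each next word is chosen by looking at the current word alone, with no access to how the sentence got there. The vocabulary and transition table below are invented purely for illustration.

```python
import random

# Toy first-order Markov "next word" model. The key property:
# next_word() consults only the current word -- no earlier history
# is available to the prediction, which is the memorylessness Rego
# describes.
transitions = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["sat"],
    "sat": ["down"],
}

def next_word(current, rng):
    # Only `current` matters; the path taken to reach it is discarded.
    return rng.choice(transitions[current])

rng = random.Random(0)
sentence = ["the"]
while sentence[-1] in transitions:
    sentence.append(next_word(sentence[-1], rng))

print(" ".join(sentence))
```

Whether the chain passed through "cat" or "dog" is invisible at the "sat" step: both histories collapse into the same state, and the model carries no record of the route it took.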

Both of these architectural flaws contribute to the substantial energy consumption of AI systems and the GPUs that power them, raising significant environmental concerns.

These "Twin Traps" also create several inherent bottlenecks:

  • Physics Ceiling: Miniaturizing chips further cannot stabilize inherently unstable mathematics.
  • Compute Ceiling: Adding more processing units exacerbates inconsistency rather than enhancing performance.
  • Energy and Capital Ceiling: Significant power and financial resources are wasted on correcting computational noise.

Rego recalls foreseeing these challenges during his fellowship at Cambridge in 2019-2020. "I was sitting there and talking to a bunch of folks and saying, 'hey, this AI thing is going to collapse on its head in about five to six years,' and that's because they're going to hit a floating-point wall and energy wall," he stated.

He elaborates that contemporary AI technology is built upon concepts developed between the 1970s and 1990s, with little truly groundbreaking innovation in the last three decades. This realization drives Counterintuitive AI to fundamentally rethink and rebuild AI from the ground up. Rego believes that the next major leap in AI will emerge from reimagining how machines think rather than from simply scaling compute power, an approach that is increasingly wasteful of energy and capital.

Counterintuitive AI's new approach is guided by four core principles:

  • A reasoning-first architecture that allows AI to justify its decisions.
  • Systems capable of measuring the energy cost of every decision.
  • Auditable logic for every reasoning step.
  • A human-in-the-loop design where AI augments human capabilities rather than replacing them.

The company intends to measure progress not by traditional benchmarks, but by the consistent reproducibility of reasoning, the safety of systems when uncertain, and their overall energy efficiency.

"We decided to build a non-floating point approach, which we call deterministic mathematics," Rego explained. "Let's write software that is not memoryless. So it's actually inheriting the lineage of your thought process. Every time you interact, it understands the cause and effect, not just the fundamental question of grammar."
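The article does not specify how Counterintuitive AI's "deterministic mathematics" works internally, but exact rational arithmetic gives a flavor of what order-independent, bit-for-bit reproducible computation can look like. The sketch below is purely illustrative, not the company's implementation.

```python
from fractions import Fraction

# Illustrative only: exact rational arithmetic is one way to get
# results that are identical regardless of summation order or machine.
# This is NOT necessarily how Counterintuitive AI's "deterministic
# mathematics" is built.
floats = [0.1] * 10
exact = [Fraction(1, 10)] * 10

float_sum = sum(floats)   # accumulates rounding drift
exact_sum = sum(exact)    # exact in any order

print(float_sum == 1.0)   # False
print(exact_sum == 1)     # True
```

Ten copies of 0.1 summed in floating point miss 1.0 by a rounding error, while the exact rationals land on 1 every time, on every machine, in every order.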

Counterintuitive AI recently announced its work on a new type of reasoning chip, the Artificial Reasoning Unit (ARU), designed to execute causal logic, maintain memory lineage, and enable verifiable deduction. The company positions the ARU as the catalyst for the "post-floating point GPU era of computing."

Furthermore, Counterintuitive AI plans to develop a comprehensive reasoning stack to complement the ARU. This stack, they believe, will empower anyone to create systems capable of reasoning with traceable logic, remembering past decisions, and reproducing truth at scale, all while maintaining robust safety margins.
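One way to picture "traceable logic" and "remembering past decisions" is a reasoning trace in which every step records the premises it depends on, so any conclusion can be walked back to its full causal history. The following is a hypothetical sketch with invented names, not Counterintuitive AI's actual stack.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an auditable reasoning trace: each step keeps
# the ids of the earlier steps it was derived from, so every conclusion
# carries its lineage. All names here are illustrative assumptions.
@dataclass
class Step:
    id: int
    claim: str
    premises: list  # ids of earlier steps this step depends on

@dataclass
class Trace:
    steps: list = field(default_factory=list)

    def add(self, claim, premises=()):
        step = Step(len(self.steps), claim, list(premises))
        self.steps.append(step)
        return step.id

    def lineage(self, step_id):
        # Walk back through premises to recover the full causal history.
        seen, stack = set(), [step_id]
        while stack:
            sid = stack.pop()
            if sid not in seen:
                seen.add(sid)
                stack.extend(self.steps[sid].premises)
        return sorted(seen)

trace = Trace()
a = trace.add("sensor reads 42")
b = trace.add("threshold is 40")
c = trace.add("reading exceeds threshold", premises=[a, b])

print(trace.lineage(c))  # [0, 1, 2]
```

Under a design like this, auditing a decision means replaying its lineage rather than re-running an opaque model, which is the kind of reproducible, inspectable reasoning the article attributes to the planned stack.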

This novel stack promises to make the reasoning behind AI outputs more transparent and publicly available, a stark contrast to the current landscape where much of the knowledge about how generative AI systems function is confined to a select few companies and labs.

"Scientific progress accelerates when ideas are transparent and tools are accessible," the company asserts. "We will create interfaces for experimentation and build a community around deterministic reasoning—spanning hardware, logic, and theory. Our work stands on the shoulders of scientific tradition: when intelligence becomes reproducible, knowledge compounds faster."