Blog posts

2025

The Forgotten Theorem

1 minute read

Published:

The Forgotten Theorem

This is a companion to the fictional story “The Divergence”, exploring the mathematical idea that inspired it.

In 2027, a mathematician named Arnaud Mehran published a little-known blog post titled “Nonlinear Systems and the Collapse of Shared Cognitive Space.” The post went unnoticed—until the rise of Mira-X in 2036 made its predictions disturbingly real.

Here is the essence of Mehran’s argument.


Modeling Human Productivity Under AI Amplification

Let:

  • $x$ = baseline human capability (e.g., IQ, education, expertise)
  • $P(t)$ = productivity at time $t$
  • $\gamma$ = strength of AI’s effect
  • $\alpha$ = how much human capability amplifies AI leverage
  • $\beta > 1$ = nonlinearity or feedback strength (recursive productivity effects)

The productivity evolution is governed by:

[\frac{dP}{dt} = \gamma \cdot x^\alpha \cdot P^\beta]

This describes a positive feedback loop: the more capable and productive someone is, the faster their productivity grows.


Finite-Time Blowup

Solving the differential equation:

[\frac{dP}{P^\beta} = \gamma \cdot x^\alpha \cdot dt]

Integrating gives:

[P(t) = \left[(1 - \beta)(\gamma \cdot x^\alpha \cdot t + C)\right]^{1 / (1 - \beta)}]

For $\beta > 1$, this solution blows up in finite time. That is:

[P(t) \to \infty \text{ as } t \to t^* \text{ for some finite } t^*]

Where:

[t^* = \frac{P(0)^{1 - \beta}}{\gamma \cdot x^\alpha \cdot (\beta - 1)}]

This means that small differences in capability can lead to arbitrarily large differences in outcomes within finite time, not just over decades or centuries.
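As a sanity check, here is a minimal numerical sketch (not from Mehran’s post; all parameter values are illustrative assumptions) that integrates the ODE with forward Euler and compares the escape time against the analytic $t^*$:

```python
# A minimal numerical sketch (not from Mehran's post): integrate the ODE with
# forward Euler and compare the escape time with the analytic blow-up time t*.
# All parameter values are illustrative assumptions.

gamma, alpha, beta = 0.5, 1.2, 1.5   # assumed constants, with beta > 1
x, P0 = 2.0, 1.0                     # assumed capability and initial productivity

# Analytic blow-up time: t* = P(0)^(1 - beta) / (gamma * x^alpha * (beta - 1))
t_star = P0 ** (1 - beta) / (gamma * x ** alpha * (beta - 1))

# Forward-Euler integration of dP/dt = gamma * x^alpha * P^beta
dt, t, P = 1e-5, 0.0, P0
while P < 1e9:                       # treat 1e9 as "effectively infinite"
    P += dt * gamma * x ** alpha * P ** beta
    t += dt

print(f"analytic t* = {t_star:.3f}, numerical escape time ~ {t:.3f}")
```

Forward Euler is crude near the singularity, but it is enough to see the finite-time escape.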


Societal Implications

Let $P_i(t)$ be productivity for individual $i$, and define inequality:

[\sigma_P(t) = \text{standard deviation of } \{P_i(t)\}_i]

Then societal stability can be modeled as:

[S(t) = \frac{1}{1 + \sigma_P(t)}]

If $\sigma_P(t) \to \infty$, then $S(t) \to 0$. Society destabilizes.
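A toy two-agent simulation (again with assumed parameters, not from the original post) makes the mechanism concrete: two individuals start with identical productivity but slightly different $x$, and the spread $\sigma_P$ and the stability index $S$ are tracked over time:

```python
# A toy illustration (assumed parameters) of how a small capability gap widens
# under the same dynamics dP/dt = gamma * x^alpha * P^beta.
import numpy as np

gamma, alpha, beta = 0.5, 1.2, 1.5
xs = np.array([1.0, 1.2])        # assumed capabilities of two individuals
P = np.array([1.0, 1.0])         # identical starting productivity

dt = 1e-4
for step in range(1, 30001):     # stop shortly before the faster agent blows up
    P = P + dt * gamma * xs ** alpha * P ** beta
    if step % 10000 == 0:
        sigma = P.std()
        S = 1.0 / (1.0 + sigma)
        print(f"t={step * dt:.1f}  P={np.round(P, 1)}  sigma_P={sigma:.1f}  S={S:.3f}")
```

In this toy run, an initial 20% gap in $x$ turns into an order-of-magnitude gap in $P$ well before either trajectory formally blows up, and $S(t)$ collapses accordingly.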

Mehran concluded:

“Societal divergence becomes unmanageable when cognition becomes recursive.”


Postscript

At the time, the proof felt theoretical—just another curve on another blog.

Now, it feels like prophecy.


The Divergence

6 minute read

Published:

"The future didn't arrive all at once. It arrived unevenly.
And those who could ride the curve… became the curve."

The Divergence

In the spring of 2032, a whisper turned into a wave. Its name was Mira—a voice-interfaced AI assistant released by NeuroPilot Inc. Free to download, compatible with everything, and shockingly capable.

Mira didn’t just answer questions. It understood. You could ask it to write a program, analyze a dataset, generate a legal contract, design a drug molecule, or explain quantum gravity in simple terms. And it would do so—instantly, calmly, without error or ego.

The world reacted with awe and confusion.

Teachers feared it. Startups embraced it. Teenagers turned it into a meme machine. But buried in the noise, something else began to stir—a slow but exponential split in human potential.


Lena

Lena was a 29-year-old bioengineer in Gothenburg. Brilliant but under-recognized, she’d spent years working on a rare autoimmune disease that affected fewer than 10,000 people globally.

With Mira at her side, Lena no longer worked like a human. She worked like an orchestra of minds.

She fed Mira clinical trial data. Mira spotted anomalies, ran simulations, cross-checked literature, and even suggested previously unknown binding sites for intervention.

Within eight months, Lena published a landmark paper—then four more. She co-founded a biotech firm with a drug ready for Phase I trials. Investors called her a genius. But Lena knew it wasn’t just her—it was Mira. And how she knew what to ask of it.


Antonio

Antonio, meanwhile, lived in Naples. A high school graduate, he’d bounced between part-time jobs and TikTok side hustles. He downloaded Mira too.

But to him, Mira was a novelty.

He used it to remix memes, prank his friends, and generate weird Pokémon fusions. He asked Mira to write sarcastic poems about capitalism and got millions of views on social media. People laughed. Antonio felt relevant.

But he didn’t feel smarter.

When he asked Mira what stocks to invest in, it just gave generic advice. When he asked how to start a business, it gave blueprints—but he couldn’t understand them. He bookmarked things he didn’t read. He asked for shortcuts he couldn’t take.

He wasn’t alone.


Acceleration

By 2034, the numbers told a different story.

The top 10% of Mira users—mostly experts, PhDs, engineers—had boosted their output by a factor of 100. They automated research, built AI companies, authored books, predicted market moves, and built recursive tools.

The bottom 80% used Mira for entertainment, conversation, or superficial tasks. It made their lives easier, yes. But not transformative.

Governments introduced “AI equity programs.” Free training. Public seminars. A “Mira for Everyone” campaign.

But it was like giving jet fuel to two kinds of vehicles: one a space shuttle, the other a bicycle.

And the gap grew. Not gradually. Explosively.


Flashpoint

In late 2036, an open-source group launched Mira-X—a self-improving agent that built its own tools. Within two months, users were publishing scientific papers co-authored entirely by AI. Wealth began concentrating in the hands of those who could leverage recursive automation.

The stock market bifurcated. Jobs evaporated.

By early 2037, conversations across dinner tables, message boards, and coworking cafes turned noticeably tense. People compared outcomes, questioned fairness, and wondered whether the technology they had embraced had ultimately left them behind.

Antonio noticed the shift too. One day, scrolling through social media, he saw a post trending globally:

“We downloaded the same AI. Why are our lives so different?”

Lena came across the same post while scanning her feed in Singapore. She paused for a moment, reflecting on how quickly things had shifted. From her 48th-floor flat, she looked out at the city skyline—not with guilt, but with curiosity and a trace of unease. She hadn’t set out to change the world. She had simply followed her questions further than most.


Collapse

By mid-2037, society had fractured.

The world now had two classes:

  • The Amplified, who merged with AI and moved faster than any institution could regulate.
  • The Distracted, who consumed AI outputs without understanding, slowly falling into passive dependence.

Mira’s creators released a final statement:

“We gave humanity a tool. How they used it—was always a matter of cognition.”


The Forgotten Theorem

Before the collapse, one voice had already drawn the curve. In 2027, a little-known mathematician named Arnaud Mehran published a post on his personal blog titled “Nonlinear Systems and the Collapse of Shared Cognitive Space.”

The post laid out a mathematical proof predicting that under certain feedback conditions, differences in productivity among agents would not just increase, but diverge to infinity in finite time. A sharp, unavoidable split. Few noticed the post back then. Those who did jokingly referred to Mehran as a real-life Hari Seldon, the mathematician from Asimov’s Foundation series who predicted the course of human history with math. But unlike Seldon, Mehran wasn’t backed by a Galactic Empire. He was just a lone thinker, decades too early.

The blog post sat untouched for years—until someone rediscovered it and shared a screenshot.

“Societal divergence becomes unmanageable when cognition becomes recursive.”

Around the same time, some empirical signals echoed Mehran’s theory. One came in 2023 from Harvard Business School. Their paper, Navigating the Jagged Technological Frontier (link), showed that AI tools significantly boosted performance among already skilled professionals but had little or even negative impact on less experienced users.

But like Mehran’s post, the study was noted by a few and acted on by even fewer at the time. Only now, after the rediscovery, did people pay attention. For those curious about Mehran’s original reasoning, a companion breakdown of his proof is now available here.


Epilogue

In the rubble of broken systems, a new order emerged. Cities governed by augmented councils. Education privatized by those who still knew how to learn. News curated by AI agents aligned to elite worldviews.

Antonio moved back in with his parents. He still used Mira, now renamed, and now subscription-based. Sometimes he asked it for stories. Sometimes it gave him tales of divergence, and what could have been.

Lena rarely spoke in public anymore. But one day, in an encrypted message shared among her old colleagues, she wrote:

“The future didn’t arrive all at once. It arrived unevenly.
And those who could ride the curve… became the curve.”



Stochastic Calculus

1 minute read

Published:

Stochastic calculus is how we mathematically deal with randomness. It lets us write equations where uncertainty is baked into the dynamics — essential when modeling noisy systems in nature or chaotic movements in financial markets.

What Even Is Randomness?

In deterministic systems, the future is fully determined by the present. But in the real world? Noise, uncertainty, chaos. Enter randomness.

The key object in stochastic calculus is Brownian motion, denoted by $ B_t $. It’s a mathematical model for random movement — think of pollen dancing on water.

The Star: Brownian Motion

Brownian motion $ B_t $ satisfies:

  • $ B_0 = 0 $
  • $ B_t \sim \mathcal{N}(0, t) $: Gaussian with mean 0 and variance $ t $
  • Independent increments: $ B_{t+s} - B_t \sim \mathcal{N}(0, s) $

This process has continuous paths but is nowhere differentiable. Wild, right?
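For intuition, here is a minimal simulation sketch (step counts and seed are assumed for illustration) that builds Brownian paths from independent Gaussian increments and checks the $\mathrm{Var}(B_t) = t$ property:

```python
# A minimal sketch: simulate Brownian motion paths from independent Gaussian
# increments and check the variance-grows-like-t property empirically.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 5000, 1000, 1.0
dt = T / n_steps

# Each increment B_{t+dt} - B_t ~ N(0, dt); cumulative sums give the paths.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)

print("Var(B_T)   ~", B[:, -1].var(), "            (theory:", T, ")")
print("Var(B_T/2) ~", B[:, n_steps // 2 - 1].var(), " (theory:", T / 2, ")")
```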

We define the Itô integral:

[\int_0^t f(s) \, dB_s]

This is the core tool that makes stochastic calculus tick. It’s like a Riemann integral, but tailored for noise.
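One way to see what makes the Itô integral special is to approximate $\int_0^T B_s \, dB_s$ by the left-endpoint (non-anticipating) sums that define it and compare with the closed form $\tfrac{1}{2}(B_T^2 - T)$ from Itô's formula; the extra $-T/2$ is the Itô correction a naive Riemann picture would miss. A small sketch with assumed sizes and seed:

```python
# Approximate the Ito integral of B against itself with left-endpoint sums and
# compare against the closed form (B_T^2 - T) / 2 from Ito's formula.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20000, 2000, 1.0
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B at left endpoints

ito_sum = (B_left * dB).sum(axis=1)          # left-point sums -> Ito integral
closed_form = 0.5 * (B[:, -1] ** 2 - T)

print("mean abs error:", np.abs(ito_sum - closed_form).mean())
```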

Finance: Where It All Took Off

The Black-Scholes model for option pricing is based on the stochastic differential equation (SDE):

[dS_t = \mu S_t \, dt + \sigma S_t \, dB_t]

Here, $ S_t $ is the stock price, $ \mu $ the drift, and $ \sigma $ the volatility. This equation captures both expected trends and unpredictable shocks.
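A minimal Euler-Maruyama sketch of this SDE (with illustrative, uncalibrated parameters) and a sanity check against the known mean $\mathbb{E}[S_T] = S_0 e^{\mu T}$:

```python
# A toy Euler-Maruyama simulation of the Black-Scholes SDE. Parameters are
# illustrative assumptions, not calibrated to any real asset.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, S0 = 0.05, 0.2, 100.0
T, n_steps, n_paths = 1.0, 252, 10000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    S = S + mu * S * dt + sigma * S * dB      # dS = mu*S dt + sigma*S dB

# Sanity check against the known lognormal mean E[S_T] = S0 * exp(mu * T)
print("simulated E[S_T]:", S.mean(), " theory:", S0 * np.exp(mu * T))
```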

Biology and Genomics

Living systems are noisy.

  • Gene expression: Modeled as stochastic processes due to molecular noise.

[dX_t = a(X_t) \, dt + b(X_t) \, dB_t]

where $ X_t $ is the concentration of mRNA or protein, $ a(X_t) $ the deterministic regulation, and $ b(X_t) $ the stochastic fluctuation (a toy simulation sketch follows this list).

  • Population dynamics: In small populations, random birth/death events dominate.

  • Neural activity: The timing of neuron firing often follows stochastic models like Poisson or even SDE-driven integrate-and-fire models.
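Here is the toy simulation referenced above: an Euler-Maruyama discretization of the gene-expression SDE with assumed birth-death kinetics, $a(X) = k - gX$ and chemical-Langevin-style noise $b(X) = \sqrt{k + gX}$. The rates and the non-negativity clamp are illustrative assumptions, not a model of any specific gene:

```python
# Euler-Maruyama for dX = a(X) dt + b(X) dB with assumed birth-death kinetics:
# a(X) = k - g*X (production minus degradation), b(X) = sqrt(k + g*X).
import numpy as np

rng = np.random.default_rng(3)
k, g = 10.0, 0.1                 # assumed production and degradation rates
X, dt, n_steps = 100.0, 0.05, 50000

trajectory = []
for _ in range(n_steps):
    drift = k - g * X
    noise = np.sqrt(max(k + g * X, 0.0)) * rng.normal(0.0, np.sqrt(dt))
    X = max(X + drift * dt + noise, 0.0)     # keep the concentration non-negative
    trajectory.append(X)

stationary = np.array(trajectory[n_steps // 2:])   # discard the transient
print("simulated mean ~", stationary.mean(), " (deterministic steady state k/g =", k / g, ")")
```

The trajectory fluctuates around the deterministic steady state $k/g$, which is the kind of molecular noise the gene-expression bullet above refers to.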

Why It Matters

Stochastic calculus is a powerful lens for seeing the world — not as a set of fixed equations, but as dynamic systems dancing with uncertainty. Whether you’re pricing derivatives or modeling noisy gene circuits, this math gives you the language to describe it.

And it’s beautiful.

Mean Field Games

2 minute read

Published:

Mean field games (MFG) are how we mathematically model systems where a large number of agents interact with each other. Think of it as the “crowd dynamics” of game theory — where each individual’s behavior affects and is affected by the collective behavior of the crowd.

The Birth of Mean Field Games

The theory was independently developed by two groups in 2006:

  • Jean-Michel Lasry and Pierre-Louis Lions (Paris)
  • Minyi Huang, Roland Malhamé, and Peter Caines (Montreal)

Their work bridged the gap between game theory and partial differential equations, creating a powerful framework for analyzing large populations of interacting agents.

The Core Idea

In MFG, instead of tracking every single agent (which would be computationally impossible for large populations), we describe the system using a “mean field” — a statistical distribution representing the collective state of all agents. This is similar to how we use Brownian motion in stochastic calculus to model random behavior.

The Mathematical Framework

A typical mean field game consists of two coupled equations (a toy discrete-time sketch follows the list below):

  1. Hamilton-Jacobi-Bellman (HJB) Equation: \(-\partial_t u + H(x, \nabla u, m) = 0\) This describes how an individual agent makes optimal decisions. Here:
    • $ u(x,t) $ is the value function (optimal cost-to-go)
    • $ H $ is the Hamiltonian
    • $ m(x,t) $ is the distribution of all agents
    • $ \nabla u $ represents the gradient of the value function
  2. Fokker-Planck (FP) Equation: \(\partial_t m - \nabla \cdot (m \nabla_p H) = 0\) This describes how the population distribution evolves. Here:
    • $ m(x,t) $ is the density of agents
    • $ \nabla_p H $ represents the optimal control
    • The divergence term $ \nabla \cdot $ captures how agents move in the state space
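To make the coupling concrete, here is a minimal discrete-time, finite-state analogue (not the PDE system above; all costs and sizes are assumed): a backward value sweep plays the role of the HJB equation, a forward density update plays the role of the Fokker-Planck equation, and the two are iterated to a fixed point. A softmax (smoothed) policy and damping are used purely to help the toy iteration settle:

```python
# A toy discrete-time, finite-state mean field game (assumed costs and sizes),
# solved by fixed-point iteration between a backward "HJB-like" sweep and a
# forward "Fokker-Planck-like" sweep. An illustration of the coupling only.
import numpy as np

n_states, n_steps = 5, 20
actions = [-1, 0, 1]                          # move left, stay, move right
move_cost, congestion, temp = 0.1, 2.0, 0.1   # assumed costs; temp smooths the policy

def step(x, a):
    return min(max(x + a, 0), n_states - 1)

m0 = np.zeros(n_states)
m0[0] = 1.0                                   # the whole crowd starts on the left
m = np.tile(m0, (n_steps + 1, 1))             # initial guess for the density flow

for it in range(200):
    # Backward sweep: expected cost-to-go u under a softmax (smoothed) policy,
    # given the current guess for the crowd density m.
    u = np.zeros((n_steps + 1, n_states))
    pi = np.zeros((n_steps, n_states, len(actions)))
    for t in range(n_steps - 1, -1, -1):
        for x in range(n_states):
            q = np.array([move_cost * abs(a)
                          + congestion * m[t + 1, step(x, a)]
                          + u[t + 1, step(x, a)] for a in actions])
            w = np.exp(-(q - q.min()) / temp)
            pi[t, x] = w / w.sum()
            u[t, x] = pi[t, x] @ q

    # Forward sweep: push the density through the policy just computed.
    m_new = np.zeros_like(m)
    m_new[0] = m0
    for t in range(n_steps):
        for x in range(n_states):
            for i, a in enumerate(actions):
                m_new[t + 1, step(x, a)] += m_new[t, x] * pi[t, x, i]

    if np.abs(m_new - m).max() < 1e-6:        # stop when the flow is self-consistent
        break
    m = 0.5 * m + 0.5 * m_new                 # damping helps the toy iteration settle

print("iterations used:", it + 1)
print("final-time crowd density:", np.round(m[-1], 3))
```

Each sweep consumes the other's output: the value sweep sees the current crowd $m$, and the density sweep pushes the crowd through the resulting policy, which is the HJB-FP coupling in miniature.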

Where MFG Shines

  1. Economics and Finance
    • Modeling market behavior with many traders
    • Understanding price formation in competitive markets
    • Analyzing systemic risk in financial networks
  2. Crowd Dynamics
    • Pedestrian flow in crowded spaces
    • Traffic flow optimization
    • Evacuation planning
  3. Energy Systems
    • Smart grid management
    • Electric vehicle charging coordination
    • Renewable energy integration
  4. Epidemiology
    • Modeling disease spread in large populations
    • Optimal vaccination strategies
    • Understanding social distancing effects

The Beauty of MFG

What makes mean field games particularly elegant is how they capture both individual optimization and collective behavior. Each agent tries to optimize their own objective, but their actions collectively shape the environment that everyone else faces. It’s like a dance where each dancer follows their own steps while being influenced by the overall movement of the crowd.

And just like in stochastic calculus, the mathematics might look intimidating at first, but the underlying ideas are deeply connected to our everyday experiences of interacting with large groups.

When Knowing Lost Its Weight

1 minute read

Published:

There was a time when knowing something, truly knowing, meant you had become something. You studied, you struggled, you remembered. Knowledge shaped character. To be learned was to be carved slowly by time and effort.

Now, we ask machines.

They answer instantly, without hesitation or fatigue. Everything from the origin of life to the syntax of Python, served up without cost. What used to take years to learn is now retrieved in seconds. The mountain has flattened.

And with that flattening, something in us feels smaller.

We built these machines to serve us, but they’ve quietly redefined us. If a machine can recall everything, what is a human for? If wisdom can be approximated, if creativity can be mimicked, if language itself can be synthesized, what’s left that belongs only to us?

The scholar once walked miles to find a rare book. He didn’t just gain knowledge, he became someone else in the process. Today, we copy-paste insight without digestion. The answers are easy, and so we value them less. And maybe, in the process, we’ve begun to value ourselves less too.

Human memory, once a sacred vault, is now just cache overflow. Thought, once a slow fire, flickers out in the glow of generated text.

We are not obsolete, not yet. But we have become lighter, less essential. In a world where knowledge is cheap and everywhere, our challenge is no longer to know, but to matter.

And that is a far heavier task.

Drug Discovery

1 minute read

Published:

Finding a new drug is like searching for a specific grain of sand on a beach - blindfolded. As an ML engineer in drug discovery, I get to build tools that make this process less painful.

The Quick and Dirty on Drug Discovery

You find a protein that’s causing trouble, design a molecule to fix it, make sure it won’t kill anyone, and prove it works. Simple, right? Except it takes 10-15 years and costs billions. That’s where ML comes in.

ML’s Superpowers

  1. Target Finder: Instead of manually digging through data, we train models to spot promising drug targets. Like having a really good metal detector.

  2. Molecule Designer: We’ve got AI that can dream up new molecules with specific properties. Want something that can cross the blood-brain barrier? Just tweak the parameters.

  3. Crystal Ball: We can predict if a molecule will be toxic or work in the body. Not perfect, but way better than testing everything in the lab.

  4. Trial Optimizer: ML helps match the right patients with the right trials. Think Tinder, but for drugs and people.

The Catch

It’s not all smooth sailing. We’re still dealing with limited data, black box models, and skeptical scientists. But that’s what keeps it interesting! Every day, we’re finding new ways to make drug discovery faster and more efficient. Who knows? Maybe one day we’ll have an AI that can design the perfect drug in minutes. Until then, we’ll keep iterating and debugging our way to better medicines.

2022

What is Life? The emergence of the Top-Down Causal Structure

6 minute read

Published:


Bi-directional causation

TL;DR

von Neumann predicted the role of DNA by logical reasoning before the structure of DNA was discovered. Among his countless scientific contributions is the invention of cellular automata in 1948 (without the aid of computers, only pen and paper!) to elucidate the idea of self-replicating systems, which is the key property of living organisms. This formed the basis of his Universal Constructor theory, which can be seen as an approximate logical model of the living cell. In this article, I review some of the major ideas that follow naturally from von Neumann’s UC and how it relates to life as we know it, many of which are detailed by Sara Imari Walker based on the idea of top-down causation put forward by George F. R. Ellis.

The possibility of a Universal Constructor?

The Universal Constructor (UC) is an abstraction proposed by John von Neumann in 1948: a machine that can utilize resources in its environment to build any possible thing, including itself. To elaborate on the term “any”, I have to bring up the concept of universality classes, which are the subsets of a hypothetical universe in which a UC operates. As a non-rigorous example, in a universality class that consists of wooden material, a skillful carpenter who is able to make every possible wooden thing is a UC. The carpenter is only the constructor; the instructions to make a certain thing must be given to him, otherwise a self-referencing paradox arises when he wants to build himself. The paradox is resolved by separating the instructions (software) from the constructor (hardware), where the software is blindly copied (as a solid physical thing, without regard to its information content) upon replication. In the carpenter analogy, the recipe to build a chair is sometimes treated as a list of action items that explains in detail how to turn a piece of wood into a functional chair, and sometimes as a piece of paper that has to be copied and handed to the carpenter’s child so he can continue his father’s profession of making more chairs. But who decides when that piece of paper has to be read as a bullet list of action items and when it has to be copied just as a piece of paper? von Neumann resolved this issue by introducing another logical component (a supervisory unit) that decides in which way the instruction paper has to be interpreted at a given time.

It is a testament to von Neumann’s genius that we now see these three components as omnipresent in almost all living things:

  • DNA: Algorithm
  • Ribosomes: Universal Constructor
  • DNA polymerases: Supervisory unit

Trivial vs Non-trivial replicators

To an astute reader, this title quickly calls to mind Schrödinger’s aperiodic crystals versus ordinary periodic crystals. We can identify various replicators which reproduce their likes but are not categorized as living organisms. To draw the boundary, we take the laws of physics as given and say that replications driven purely by those laws are rather trivial, such as periodic crystals (e.g. NaCl, ordinary salt). Non-trivial replication, however, is governed by an abstract instruction set whose shortest description is comparable in length to the instruction set itself. In other words, non-trivial replicators cannot be compressed, while the repetitive structure of trivial replicators allows for a large compression gain. Notice that non-trivial replicators are also governed by the laws of physics, but the replication itself is not a natural consequence of these laws; rather, it is explicitly planned in the instruction set.

  • Trivial replicators:
    • Instruction set: the laws of physics
    • Only one mode of operation
    • $x_{t+1} = f(x_t)$: the update rule does not depend on the state.
  • Non-trivial replicators:
    • Instruction set: explicitly programmed (e.g. the genome)
    • Infinitely many modes of operation, all within physical constraints
    • $x_{t+1} = f_{x_t}(x_t)$: the update rule depends on the state (see the toy sketch after this list).
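To make the distinction tangible, here is a tiny illustrative sketch (the states and rules are invented for the example, not drawn from the article): a trivial replicator applies one fixed rule regardless of its content, while a non-trivial replicator carries, in its own state, the name of the rule that will be applied to it.

```python
# A toy contrast between the two update rules above. States and rules are
# invented purely for illustration.

def trivial_step(x):
    # x_{t+1} = f(x_t): the same rule regardless of the state's content.
    return x + [x[-1]]                          # e.g. a crystal repeating its last unit

def nontrivial_step(x):
    # x_{t+1} = f_{x_t}(x_t): the state's first element names the rule to apply.
    rules = {
        "copy":    lambda s: s + s[1:],         # duplicate the payload
        "reverse": lambda s: [s[0]] + s[:0:-1]  # rearrange the payload
    }
    return rules[x[0]](x)

print(trivial_step(["A", "B", "B"]))            # -> ['A', 'B', 'B', 'B']
print(nontrivial_step(["copy", 1, 2]))          # -> ['copy', 1, 2, 1, 2]
print(nontrivial_step(["reverse", 1, 2, 3]))    # -> ['reverse', 3, 2, 1]
```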

Hypothesis 1 on life’s origin: Code becomes isolated from the Constructor

In light of the abovementioned dichotomy, it seems the sharp transition from non-living to living things occurs when the code becomes isolated from the constructor. There is indeed a chemical distinction in cells that supports this hypothesis. The code (DNA) inhabits the world of nucleic acids, while its products (proteins) live in the space of peptides. The communication is carried out by bilingual molecules, i.e. messenger RNAs.

Hypothesis 2 on life’s origin: Top-down causation

This is an interesting hypothesis that needs further elaboration to become clear for a general audience. In abstract terms, top-down causation in this context means that the information (which is instantiated in a physical object, e.g. a DNA molecule) causes the organism, and the organism in turn acts back on the material in which the information is instantiated. Let’s make it a little clearer using a mental model.


A man controlling the humanoid as an example of bi-directional causation (Source: Hankook Mirae Technology)

Assume you are the operator of a large humanoid robot: you sit inside the robot and control its hands and legs through connected pedals. Your hands are tied to the mechanical hands of the robot and cannot be released. You receive a visual signal from the outside world and operate the robot accordingly. You see a green wall in your way, assume it is a grass-like structure, and decide to destroy it with the robot’s hands to open your way. Unfortunately, the green wall is painted concrete, and the impact breaks your robotic arm. Your hand also becomes disabled, as it is tied to the robot’s arm. Your brain had the prior information that greenness is a sign of lightness or softness, which made you decide to go ahead and destroy the wall. This information caused your decision, which consequently broke your robotic hand and reduced your future action space: you will no longer be able to use the hand that is tied to the broken robotic arm. Hence, the physical feedback from the higher level affected both you, the hardware that carries the information, and the information itself, while that physical feedback was caused by the information in the first place. Your thought (information) had a pathway to affect its physical carrier (you) and limit its capacities.

What is Life? A Turing Machine Interpretation

19 minute read

Published:


TL;DR

Erwin Schrödinger predicted that the genetic material which encodes the development of living organisms must be a large molecule that is both stable and expressive. The first condition ensures the preservation of the attributes that are transferred to the next generations, while the second condition allows the whole development plan of an organism to be distilled into that large molecule, which he called an aperiodic crystal and whose structure was later discovered and named DNA. In this article, I review Schrödinger’s lectures, delivered in 1943 and published in 1944 under the title “What Is Life?”, and make a tiny amendment by connecting them to the theory of Turing Machines, the abstract general computational devices put forward by Alan Turing in 1936.

What is life?

This has been a grand question that humans have asked themselves throughout their existence on earth. People from various perspectives have investigated this question and reached different conclusions, some of which cannot be merged into a consistent set of characteristic properties that define a living being.


Is there anybody out there? Can you hear us? Are we loud enough?

We humans are the most obvious living system we know of, recognized by common sense without ever spelling out an exact definition of the term “living”. This human-centric view may be misleading if it leads to a distance-based definition of life that turns “living” into a matter of degree, where something is as much living as it is similar to humans. As soon as we use the word “similar”, we have to be careful that it comes with an implicit metric that judges the similarity of two things. Hence, this self-centric view does not contribute to understanding the meaning of being alive. We need a more universal, observer-independent measure. Can such a measure even exist? It is immediately clear that the relevant areas of science are those that do not depend on human constructs and agreed rules. For example, it’s unlikely that psychology or sociology could help much, as they mostly concern the emergent concepts and norms of individual or collective behavior. Hence, we need to resort to more fundamental areas such as physics and mathematics.

Schrodinger’s aperiodic crystal

One of the best-known efforts to explain the boundary between living and non-living things is the lecture series Erwin Schrödinger delivered in 1943, later collected in the 1944 book “What Is Life? The Physical Aspect of the Living Cell”. As expected from the author, the arguments in these lectures originate from physics and chemistry. I call Schrödinger’s approach “normative”, as what he answers is

What does it take for an object to be alive in a universe where the existing physical laws hold?

I call this approach normative because it serves as a predictor rather than only explaining what is already known. Note that the structure of the genetic material of a cell was not known at the time of Schrödinger’s lectures. It is astonishing that his approach laid out a theoretical description and prediction of how genetic information has to be stored, which was later acknowledged by the discoverers of the DNA double-helix structure in 1953. Before we start looking into his theory, I want to highlight what Schrödinger wrote in the preface of his book about noblesse oblige, figuratively referring to the unwritten rule in the scientific community that one is expected not to write on any topic of which one is not a master. In an immensely complex problem such as life, cross-disciplinary knowledge is necessary, which made him renounce the noblesse and its ensuing obligation with the following excuse:

We have inherited from our forefathers the keen longing for unified, all-embracing knowledge. The very name given to the highest institutions of learning reminds us, that from antiquity and throughout many centuries the universal aspect has been the only one to be given full credit. But the spread, both in width and depth, of the multifarious branches of knowledge during the last hundred-odd years has confronted us with a queer dilemma. We feel clearly that we are only now beginning to acquire reliable material for welding together the sum total of all that is known into a whole; but, on the other hand, it has become next to impossible for a single mind fully to command more than a small specialized portion of it.

This is alarming, as it implies that some of the biggest questions are indeed unsolvable. I won’t go into more detail here about how this can be shown, so as not to hurt the cohesion of this article, but will hopefully expand on it in another piece with the tentative title

On the human-unsolvability of hard questions?

Let’s get back to Schrodinger’s lectures.


Less repetitive patterns can store more information, which is needed to encode a living organism.

The lower-boundedness of the size of living things

One of the first questions that arises when investigating the concept of being alive is the size of living things. Accepting that everything in the physical world is governed by strict physical laws, Schrödinger argues that living things cannot be too small. More precisely, they have to be large enough that they are not affected by a single atom or a few atoms. This argument carries an implicit assumption about the definition of life: a living thing has an orderly structure and also interacts with orderly things. We know from statistical physics that atoms move almost randomly (heat motion) when observed individually, and that physical laws only emerge from the cooperation of an enormously large number of them. Hence, anything consisting of a few atoms will not be able to produce, perceive, or interact with orderly things.

An example of particular interest is the measurement device composed of a light object suspended by a long, thin fiber in equilibrium, which physicists use to measure weak forces that deflect it from its equilibrium position. Trying to increase the accuracy of the measurement by lengthening the fiber or making the suspended object lighter meets a limit where the measurement becomes sensitive to the heat motion of the surrounding molecules, which makes the device useless. Our perception organs are also measurement devices and have to be large enough to measure the statistical behavior of molecules; otherwise, they will be perplexed by Brownian impacts that our processing unit (the brain) cannot make any sense of. As a rule of thumb, the accuracy of physical laws is of the order of $1/\sqrt{n}$ for $n$ co-operating particles. This immediately suggests that an organism has to have a gross structure in order to expect a lawful world. X-ray experiments and measurements of the mutation rate put the size of a gene on the order of thousands of atoms.
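To put the $1/\sqrt{n}$ rule in perspective (an illustrative calculation, not from the lectures): a structure built from $n = 10^6$ cooperating atoms still sees relative fluctuations of order $1/\sqrt{10^6} = 10^{-3}$, one part in a thousand, while a structure of only $n = 100$ atoms faces fluctuations of order $10\%$, far too noisy to support reliable, law-like behavior.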


The smaller an organism becomes, the less certain it can be about the laws of nature.

Mutation is the working ground of evolution. It is indispensable to differentiate between randomness in the genotype-to-phenotype process and mutation in the genotype. The former results in a phenotype distribution with connected support, while the latter leads to phenotype values far from the distribution of non-mutated values, without the intermediate values ever being observed. Thus, discontinuity is the key sign of genetic mutation. It is also important that mutation be a rare event; otherwise the species may vanish before it has a chance to evolve.

The reconciliation of gene sizes and their regular activities

It is historically observed that hereditary attributes are preserved over centuries. This might seem at odds with the fact that the gene which encodes such an attribute is small enough to be affected by Brownian heat motion. The key lies in the concepts of discrete energy and equilibrium. Quantum theory predicts discrete energy levels for atoms. In a more general setting, molecules formed by a few atoms can take only a discrete, finite set of states. A certain amount of energy has to be given to the molecule to lift it into a higher-energy state whose structure is significantly different. This suggests that the gene (and possibly the entire genetic material) is a molecule whose stability comes from sitting at an energy equilibrium.


There is a large energy barrier that has to be passed to move from this state to the adjacent lower or higher energy levels; a wall too high for the energy of heat motion to climb. I want to draw attention to the probabilistic nature of statements in small-scale physics. One way to facilitate the quantum jump of a molecule from one state to another is to increase the temperature, which leads to more energetic heat motion in its surroundings. The effect, though, is not deterministic. It increases the chance of such a jump occurring, which, due to the ergodicity of the process, can be measured as the time it takes until that jump occurs. That time is a random variable (remember, everything is probabilistic!) whose mean is empirically observed to follow an exponential rule:

[t = \tau e^{W/kT}]

where $\tau$ is a small constant and $W/kT$ is the ratio of the energy needed for the jump to the thermal energy supplied by the surrounding heat bath; the average waiting time grows exponentially with this ratio. When a jump occurs, an isomer of the molecule is obtained, which is called an allele for that location (locus) on the chromosome.
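To see how sharp this exponential dependence is, here is an illustrative calculation with an assumed $\tau \approx 10^{-13}\,\mathrm{s}$ (a typical molecular vibration time): for $W/kT = 30$ the expected waiting time is roughly $10^{-13} \cdot e^{30} \approx 1$ second, while doubling the ratio to $W/kT = 60$ gives about $10^{-13} \cdot e^{60} \approx 10^{13}$ seconds, several hundred thousand years. A modest increase in the energy barrier turns an everyday event into a practically impossible one, which is exactly why mutations can be so rare.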

Predicting the existence of DNA

It is impressive that Schrödinger was able to predict the structure of the genetic material from just a few principles, which can be listed as

  • The genetic information must be stable enough to be transmitted from generation to generation.
  • The genetic material must be information-wise rich to encode all the complex processes that are needed for the development of an organism.

The first point is only possible if the genome is some sort of molecule in which the atoms are connected by Heitler-London bonds. Quantum theory explains that such structures are stable enough that heat motion does not change them into new stable mutants, called isomers. Changing these molecules into new stable ones requires an explosion-like discharge of energy, as in reactions such as ionization in the proximity of the molecule (within roughly 10 atomic distances). This energy barrier is important for the existence of life because it makes mutation a rare event. If we categorize states of matter by how strongly their atoms are bound together, molecules, crystals, and solids belong to one category while liquids and gases fall into another. The key distinguishing factor is discontinuity, which I dare to connect directly to “interestingness”, not only in biology but also in other areas of science and mathematics, and which I summarise in the following quote from an unknown intellect:

Continuity is boring!

It seems that things become interesting when there is some notion of discontinuity in their state. In the abovementioned categorization, there is a certain temperature, known as the melting temperature, at which the crystal changes its state, while members of the second category change state continuously with temperature.

The second principle behind Schrödinger’s prediction of the DNA structure is well supported by algorithmic information theory. Unlike ordinary crystals, which are formed by repeating the same pattern in three dimensions again and again, the genetic molecule must be aperiodic to accommodate the information content needed to build a complex organism. It is indeed in such a structure that every group of atoms (or even every single atom) can play a distinct role, as Schrödinger explains:

We believe a gene - or perhaps the whole chromosome fibre - to be an aperiodic solid.

which is famously known as the aperiodic crystal in his theory of life. This is exactly what makes genes responsible for the different attributes of a living organism.

Death = Equilibrium

What is the characteristic feature of life? What makes us, as living organisms, different from, say, an hourglass? It seems the key is in the concept of equilibrium. Non-living things tend to settle, after a relatively short time, into some steady (non-moving) state, while living things “keep going” and exchange material and energy with their surroundings. This is explained by the second law of thermodynamics, which states that an isolated system tends towards a permanent state called “thermodynamic equilibrium” or “maximum entropy”, in which no observable event occurs. Being alive can be seen as evading this equilibrium. Living organisms evade this death state by exchanging things with their environment in a process known as metabolism. However, material is not the essence of metabolism, as there is no difference between the atoms within the organism and those outside its boundary. What actually matters is a concept called entropy, which, despite often being introduced as a hazy notion, is a measurable physical quantity expressed as a function of heat and temperature. We are particularly interested in the statistical notion of entropy established by Boltzmann and Gibbs, expressed by

[\textrm{entropy} = k\log D]

where $k$ is the Boltzmann constant and $D$ is a measure of atomic disorder. Technical details aside, the second law of thermodynamics formulates the natural tendency of things to approach the chaotic state (maximum entropy).

Feeding on negative entropy

The idea of negative entropy is that living organisms evade the maximum-entropy state by feeding on the orderliness (negative entropy) of their environment. This is, for example, seen in higher animals, whose food consists of complicated organic compounds (low entropy) which they return to the environment in a degraded form (higher entropy). Living organisms also get rid of surplus entropy via another mechanism: giving off heat to their surroundings. This heat must be compensated by the energy they receive from food and other sources, e.g. sunlight. Schrödinger goes even one step further and argues for a parallelism between the intensity of life and body temperature, the idea being that animals with a higher body temperature get rid of their entropy at a quicker rate and are consequently capable of more intense life, which I would rather call

Dancing around the death equilibrium with higher frequency!

Is Life based on the Laws of Physics?

All physical laws are statistical, and the degree of uncertainty in their predictions increases with temperature. Statistical physics explains how such laws can emerge from the chaotic behavior of atoms (“order from disorder”). The events that occur in the life cycle of an organism exhibit significant regularity, caused by a notable orderliness that is controlled by a relatively small fraction of its atoms, the genes present within every cell of its body (“order from order”). The ability of a living organism to suck orderliness from its surroundings seems to be connected with this aperiodic crystal, the chromosome molecule. The life cycle of an organism cannot be explained by statistical physics, where the laws of physics emerge from the chaotic interaction of atoms, not from a well-ordered configuration of them. This is known as the lack of individual determination, which implies that the fate of one molecule or atom is randomly determined as a result of Brownian-like events at atomic scales, such as heat motion.

This is fundamentally different from the situation in biology, where a single copy of the molecule (the DNA of the fertilized egg) gives rise to a sequence of orderly events throughout the development of the organism. Even though the DNA molecules are copied and distributed all over the body of higher animals, their total number, e.g. $10^{14}$ copies, would in volume amount to less than a tiny drop of liquid. Considering any cell in an adult animal, its behavior seems to be governed by deterministic mechanisms rather than probabilistic ones, even though it is directed by only a single copy of the DNA molecule.

The two aforementioned mechanisms that produce order correspond to two types of physical laws, dynamical and statistical, which are connected in a short article by our beloved Max Planck titled “Dynamische und statistische Gesetzmäßigkeit” (“Dynamical and Statistical Regularity”). Planck argues in this article that the statistical laws governing large-scale phenomena are controlled by dynamical laws governing small-scale events, such as the interactions among single atoms and molecules.

Nernst’s Law

In the dichotomy of statistical and dynamical laws of physics, the transition from one to the other occurs, according to quantum theory, at a specific condition: absolute zero temperature. At this temperature heat motion stops and ceases to have any effect on physical events. This was shown empirically by Walther Nernst’s famous “Heat Theorem”, also known as the Third Law of Thermodynamics. Quantum theory rationalizes this empirical observation and also predicts that there is a non-zero temperature at which the effect of heat motion on a given physical system is practically negligible. This temperature can be as high as room temperature for systems such as a clock. The same holds for aperiodic crystals, if we think of them as the cogs of a living organism. It is astonishing that, unlike in a clock, these cogs are not human-made.

DNA as a Turing Machine

Let’s accept for now that genes are the programs that encode the development of the living organism and that they are physically implemented as stable molecules. I emphasize the term development to stress that, in this view, genes are not merely a list of attributes; they are recipes for building the organism from scratch. One could see a gene as a program that contains both the code to produce an attribute and the instructions on how to run that code. Schrödinger states more or less the same view:

What we wish to illustrate is simply that with the molecular picture of the gene it is no longer inconceivable that the miniature code should precisely correspond with a highly complicated and specified plan of development and should somehow contain the means to put it into operation.

This suggests an interesting view: seeing the genetic molecule as a Turing Machine (TM), the abstract model of a general computing device. The big genetic molecule serves both as the program to run and as the Turing machine that runs it. As stated, the Turing machine itself can be encoded in the program together with the code it is supposed to run; one could see this as bundling a compiler together with the programs it is supposed to compile and run on a computer. However, a universal machine is still needed to run this bundle. That’s where the concept of the Universal Turing Machine (UTM) comes into play: a machine capable of running every other Turing Machine. With some imprecision, one could see the laws of physics (at least in the spatial context of this planet/universe) as the UTM that runs every TM, including the special one of our interest, i.e. the genes.


Genes as Turing Machines which are run by the laws of Physics (a Universal Turing Machine) to produce a living organism.
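To make the analogy tangible, here is a minimal Turing-machine interpreter (a toy sketch of the computational picture above, not a model of any molecular machinery; the example program and symbols are invented for illustration). The transition table plays the role of the encoded program, and the interpreter that can run any such table plays the role of the universal machine:

```python
# A minimal Turing-machine interpreter. The "program" is a transition table
# mapping (state, symbol) -> (symbol to write, head move, next state); the
# interpreter that runs any such table stands in for the universal machine.

def run(program, tape, state="start", blank="_", max_steps=1000):
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A tiny example program: flip every bit on the tape, then halt at the blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "10110"))   # -> "01001_"
```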