Bletchley: The Jungle Child Problem
Bletchley contains two interlocking research threads: Project HYDE (field-theoretic generative modeling) and the Darwinian neuroevolution framework. Both reject the assumption that intelligence requires massive training data.
Project HYDE: Field-Theoretic Generative Modeling
Current generative models (Flow Matching, Diffusion, Hamiltonian Neural Networks) all operate on a particle-based paradigm — tracking individual trajectories via ODEs integrated over 1D time. HYDE abandons trajectory-based ODEs for PDEs over spacetime.
Instead of tracking paths, the framework models fields φ(x^μ) over a unified 4D spacetime continuum, where x^μ = (ct, x, y, z). The generative process extremizes a four-dimensional action functional, so generation follows from a global variational principle whose stationarity conditions are Euler-Lagrange field equations (PDEs).
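To fix notation, the standard field-theoretic form of this principle looks as follows (textbook classical field theory; the Lagrangian density is exactly the object HYDE would learn, and its specific form is left open here):

```latex
% Generic 4D action over a scalar field and its stationarity (Euler-Lagrange) condition.
% \mathcal{L} is deliberately unspecified: H1 proposes to parameterize it.
S[\phi] \;=\; \int \mathcal{L}\bigl(\phi,\, \partial_\mu \phi\bigr)\,\mathrm{d}^4x,
\qquad
\frac{\partial \mathcal{L}}{\partial \phi}
\;-\; \partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)} \;=\; 0.
```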
To our knowledge, no prior work parameterizes a Lagrangian density over fields to derive generative dynamics; this is genuinely novel territory.
Critical Hypotheses
| ID | Proposition | Status |
|---|---|---|
| H1 | Field-theoretic Lagrangian densities can parameterize generative processes over spacetime | Novel |
| H2 | A probabilistic interpretation generalizing the Onsager-Machlup functional (shown below for 1D paths) to 4D fields exists | Open |
| H4 | Boundary conditions on spacetime hypersurfaces define noise-to-data generation | Open |
| H9 | PDE-based generation is tractable via spatial parallelization | Open |
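For reference on H2, the object to be generalized: in one common convention, for a diffusion dX_t = b(X_t) dt + dW_t with unit-strength noise, the Onsager-Machlup functional assigns a path x(t) on [0, T] the action

```latex
% Classical (1D-time) Onsager-Machlup functional, stated in one common convention
% for unit-strength noise. H2 asks for an analogue in which paths x(t) are replaced
% by fields \phi(x^\mu) over spacetime.
S_{\mathrm{OM}}[x] \;=\; \int_0^T \Bigl[\, \tfrac{1}{2}\,\bigl\lVert \dot{x}(t) - b\bigl(x(t)\bigr) \bigr\rVert^2
\;+\; \tfrac{1}{2}\,\nabla\!\cdot b\bigl(x(t)\bigr) \Bigr]\,\mathrm{d}t,
```

so that more probable paths (in the tube sense) have lower S_OM; the open question is what the 4D field analogue and its functional measure look like.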
Validation Roadmap
- Phase 1 — Analytical Euler-Lagrange derivation for 1D/2D toy scalar fields; no neural networks (a symbolic sketch follows this list).
- Phase 2 — Extend the Onsager-Machlup functional to continuous fields; define the corresponding functional measures.
- Phase 3 — Parameterize the Lagrangian density with a neural network; solve the resulting PDEs with a standard solver.
- Phase 4 — Neural PDE solvers, non-conservative source terms for conditioning, and benchmarks against Flow Matching.
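As an illustration of the Phase 1 step (an assumed Klein-Gordon toy example, not the project's actual derivation), a computer algebra system can already produce the Euler-Lagrange PDE for a simple 1+1D scalar field:

```python
# Toy Phase-1-style check: symbolically derive the Euler-Lagrange field equation
# for a free scalar field in 1+1 dimensions. Illustrative only; the Lagrangian
# below is an assumed Klein-Gordon example, not HYDE's learned density.
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, m = sp.symbols("t x m")
phi = sp.Function("phi")(t, x)

# L = (phi_t^2 - phi_x^2 - m^2 * phi^2) / 2
L = (phi.diff(t) ** 2 - phi.diff(x) ** 2 - m**2 * phi**2) / 2

# euler_equations applies the field Euler-Lagrange operator to L.
print(euler_equations(L, [phi], [t, x]))
# -> [Eq(-m**2*phi(t, x) - Derivative(phi(t, x), (t, 2)) + Derivative(phi(t, x), (x, 2)), 0)]
#    i.e. the Klein-Gordon equation phi_tt - phi_xx + m^2*phi = 0.
```

The same symbolic route extends to 2D toy fields by adding a second spatial symbol; no neural network is involved at this stage.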
Darwinian Neuroevolution
The philosophical inverse of standard deep learning: evolve architectures instead of training on data.
The Jungle Child Thought Experiment
A human child dropped into a jungle with zero prior training survives through intrinsic motivation, causal reasoning, and one-shot learning. Current AI in the same scenario requires millions of training examples and fails on anything out of distribution. The gap is fundamental, not a scaling problem.
Three Pillars
- Always-On Dense Neuromorphic Core — Liquid Neural Networks on thermodynamic hardware. The answer is measured, not computed — the system settles into its lowest energy state.
- Darwinian Jungle Learning — Spawn thousands of LNN agents in procedurally generated environments. Agents that die are discarded; the survivors are bred via model merging (a sketch follows this list).
- Evolutionary Priors — Evolution discovers innate architectural biases that enable open-world learning. A newborn has face detection, depth perception, and physics intuitions that come from the architecture itself, not from a learned latent space.
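A minimal sketch of the select-and-merge loop in the second pillar, assuming PyTorch, with placeholder networks standing in for LNN agents, a placeholder fitness function standing in for survival in a generated environment, and parameter averaging as the merge operator (all illustrative assumptions, not the project's actual stack):

```python
# Minimal select-and-merge evolutionary loop. Networks, fitness, and merge rule are
# illustrative stand-ins; real agents would be Liquid Neural Networks evaluated in
# procedurally generated environments.
import copy
import random

import torch
import torch.nn as nn

def make_agent() -> nn.Module:
    # Stand-in for an LNN policy.
    return nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))

def fitness(agent: nn.Module) -> float:
    # Stand-in for survival time in a generated environment.
    return random.random()

def merge(parent_a: nn.Module, parent_b: nn.Module) -> nn.Module:
    # Model merging by parameter averaging (one simple merge operator).
    child = copy.deepcopy(parent_a)
    with torch.no_grad():
        for p_c, p_a, p_b in zip(child.parameters(),
                                  parent_a.parameters(), parent_b.parameters()):
            p_c.copy_((p_a + p_b) / 2)
    return child

population = [make_agent() for _ in range(64)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 4]            # discard agents that "die"
    offspring = [merge(*random.sample(survivors, 2))  # breed survivors via merging
                 for _ in range(len(population) - len(survivors))]
    population = survivors + offspring
```

Parameter averaging is only one possible merge operator; the credit-assignment question noted under Status is precisely about which selection and merging signals make this loop produce useful priors.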
Downstream Applications
If HYDE validates mathematically, it integrates into:
- P.R.I.S.M. — Conservation laws and periodic boundary conditions for crystal structure generation
- Aerospace controls — Deterministic action minimization for plasma stabilization and propulsion
Status
Preliminary theoretical stage. Phase 1 (analytical toy problems) is the immediate focus; a neural network implementation would be premature until the probabilistic bridge (H2) is made rigorous. Bletchley is further constrained by hardware availability (neuromorphic chips), unsolved credit assignment for evolutionary selection, and the lack of agreed-upon extrapolation benchmarks.