Latent Neural ODEs with Aperiodic φ-Scaling for Chaotic Dynamical Systems
Author: Paul E. Harris IV · Independent Researcher, Mashantucket Pequot Nation
Status: Codebase locked, mathematically patched, and optimized for Hopper GPUs (v23.0.1). Pending full 10-seed empirical benchmark run.
SynechismCore is a Latent Neural ODE architecture that uses golden-ratio ($\varphi$) aperiodic time scaling to model chaotic dynamical systems. The central question:
When do continuous latent dynamics provide a measurable advantage over discrete sequence models — and when do they not?
We report results honestly. The architecture's confirmed wins (KS-PDE, Lorenz bifurcation) are documented alongside its failures (Robotics actuator loss). The v23.0 architecture introduces three surgical components to directly address the failure modes identified in v22.
The v23 architecture is fully implemented and patched for mathematical correctness and hardware efficiency.
- `IrrationalShutter`: Replaces adaptive `dopri5` stepping with forced `rk4` steps exactly on the $\varphi$-Weyl sequence grid. Goal: Eliminate resonance and phase-locking on high-chaos fractal attractors (e.g., Lorenz $\rho \ge 45$).
- `ElasticManifold`: Couples a learned regime-shift event detector to the attractor radius constraint, allowing the topological boundary to dynamically "breathe" (expand via a GELU network). Goal: Recover the $0.52\times$ loss on discontinuous underdamped physics (Robotics).
- `LaminarBypass`: A local-curvature heuristic that routes smooth dynamics through a computationally cheap, $\Delta t$-aware linear projection, reserving heavy ODE integration only for turbulent bursts. Goal: Drastically reduce wall-clock time.
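As a minimal illustration of the grid `IrrationalShutter` steps on, the $\varphi$-Weyl sequence $\{k\varphi\} \bmod 1$ can be sorted into a strictly increasing set of integration times. This is a sketch only; the function name `weyl_grid` is hypothetical and not the repository's actual API:

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def weyl_grid(n_steps, t_max=1.0):
    """Aperiodic time grid from the phi-Weyl sequence frac(k * phi).

    Because phi is irrational, the fractional parts never repeat, so
    sorting them yields a strictly increasing, low-discrepancy set of
    integration times that cannot phase-lock with any periodic orbit.
    A fixed-step rk4 integrator would be forced onto exactly these
    times, in place of adaptive dopri5 step selection.
    """
    fracs = (np.arange(1, n_steps + 1) * PHI) % 1.0  # frac(k * phi)
    return np.sort(fracs) * t_max

grid = weyl_grid(16, t_max=2.0)
```

In a `torchdiffeq`-style pipeline, these times would be passed as the evaluation grid to a fixed-step solver with `method='rk4'`.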
Confirmed v22 Results (Produced on Kaggle free-tier P100/T4):
- ✅ KS-PDE ($\nu$: 1.0→0.5): $1.43\times$ MAE win over Transformer across 5 seeds.
- ✅ Lorenz-63 Coherence: 19,940 continuous prediction steps before divergence ($15.8\times$ vs. baselines).
- ✅ $\varphi$-Significance: Aperiodic sampling achieves $p = 0.0000$ over uniform grids.
- ❌ Robotics ($\gamma$: 0.5→0.05): ODE loses badly ($0.52\times$) to Transformer due to the fixed attractor sphere.
- ⚠️ Weather L96 / Finance: Statistically tied or marginally significant.
Pending v23 Benchmark (Awaiting H100 execution):
- ⏳ Claim 4 Ablation: Does $\varphi$ specifically beat $\sqrt{2}$ and $e$?
- ⏳ Robotics Recovery: Does `ElasticManifold` turn the $0.52\times$ loss into a win?
- ⏳ High-Chaos Lorenz: Does `IrrationalShutter` prevent topological collapse at $\rho = 50$?
- ⏳ KS-PDE 10-Seed Validation: Does the $1.43\times$ headline hold under rigorous 10-seed Mann-Whitney U testing?
(Note: Empirical JSON results will be pushed to /results/v23/ upon run completion.)
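For intuition on the $\varphi$ vs. $\sqrt{2}$ vs. $e$ ablation, one rough diagnostic is the largest gap left by the sorted sequence $\{k\alpha\} \bmod 1$. This is a hypothetical side-illustration, not a script from this repository:

```python
import numpy as np

def max_gap(alpha, n=1000):
    """Largest circular gap between sorted points of frac(k * alpha), k=1..n."""
    pts = np.sort((np.arange(1, n + 1) * alpha) % 1.0)
    gaps = np.diff(np.concatenate([pts, pts[:1] + 1.0]))  # include wrap-around gap
    return gaps.max()

for name, a in [("phi", (1 + 5 ** 0.5) / 2), ("sqrt2", 2 ** 0.5), ("e", np.e)]:
    print(f"{name:6s} max gap at n=1000: {max_gap(a):.6f}")
```

By the three-distance theorem the gaps take at most three distinct lengths for any irrational $\alpha$; $\varphi$'s all-ones continued fraction keeps those lengths balanced uniformly in $n$, which is the property the ablation probes against $\sqrt{2}$ and $e$.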
```
SynechismCore/
├── src/
│   ├── models.py           # v22 core: SynechismV20, all baselines
│   ├── v23_components.py   # ElasticManifold, IrrationalShutter, LaminarBypass
│   ├── hyperagent.py       # Event detector + discrete jump correction
│   ├── data.py             # 5 synthetic dynamical system generators
│   ├── train.py            # Training loop + evaluation module
│   ├── stats.py            # Corrected Mann-Whitney U significance testing
│   ├── chaotic_metrics.py  # VPT, sMAPE, nRMSE, fractal dimension
│   └── hyevo.py            # Evolutionary hyperparameter search
│
├── launch_h100.py          # H100-optimized full suite launcher (TF32 + torch.compile)
├── run_v23_benchmark.py    # v23 vs v22 head-to-head + coherence testing
├── run_phi_ablation.py     # φ vs √2 vs e ablation
├── run_experiments.py      # Legacy v22 5-experiment benchmark
└── results/                # JSON outputs go here
```
- Install dependencies:

  ```
  pip install torch torchdiffeq scipy pandas matplotlib
  ```

  (PyTorch 2.0+ is required for `torch.compile` optimizations.)
- Sanity check (~5 min). Verify the pipeline works before renting heavy compute:

  ```
  python launch_h100.py --quick
  ```

- Run the full v23 benchmark suite. This executes the $\varphi$-ablation, v23 component tests, coherence tests, and KS-PDE multi-seed validation. Optimized for NVIDIA Hopper (H100) or Ampere (A100) GPUs.

  ```
  # Start a persistent screen session
  screen -S synechism

  # Launch the full 10-seed suite
  python launch_h100.py --seeds 0 1 2 3 4 5 6 7 8 9
  ```

🔬 Significance Testing Methodology

All multi-model comparisons use the non-parametric Mann-Whitney U test over per-sample error distributions to preserve the i.i.d. assumption:
```python
# Average error over the Time and Dimension axes FIRST, one value per sample
ode_errors = np.abs(ode_preds - true_values).mean(axis=(1, 2))  # shape: (N,)
baseline_errors = np.abs(baseline_preds - true_values).mean(axis=(1, 2))
stat, p = scipy.stats.mannwhitneyu(ode_errors, baseline_errors, alternative='less')
```

This prevents the artificial inflation of significance that occurs when correlated per-timestep errors are counted as independent samples.
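To see why collapsing the time and dimension axes first matters, here is a self-contained illustration on synthetic errors (hypothetical data; the shapes and error scales are made up for demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, T, D = 32, 100, 3  # trajectories, timesteps, state dimensions

# Synthetic absolute errors: the "ODE" model is ~10% better on average
ode_err = rng.exponential(scale=0.9, size=(N, T, D))
base_err = rng.exponential(scale=1.0, size=(N, T, D))

# Correct: one value per independent trajectory -> N observations per model
_, p_per_sample = stats.mannwhitneyu(
    ode_err.mean(axis=(1, 2)), base_err.mean(axis=(1, 2)), alternative="less"
)

# Inflated: flattening treats N*T*D points as independent observations
_, p_flattened = stats.mannwhitneyu(
    ode_err.ravel(), base_err.ravel(), alternative="less"
)

print(f"per-sample p = {p_per_sample:.4f}, flattened p = {p_flattened:.2e}")
```

Even with i.i.d. synthetic errors, flattening multiplies the apparent sample size by $T \cdot D$ and drives $p$ toward zero; on real trajectories the per-timestep errors are strongly autocorrelated, so the inflation is worse.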
📜 Citation
```bibtex
@misc{harris2026synechismcore,
  title  = {SynechismCore: Latent Neural ODEs with Aperiodic phi-Scaling for Chaotic Dynamical Systems},
  author = {Harris IV, Paul E.},
  year   = {2026},
  note   = {Preprint. v23 benchmark pending.},
  url    = {https://github.com/pz33y/SynechismCore}
}
```

License: MIT