The Prigogine paradigm

Artificial systems are moving past rote pattern-matching toward learning that remains consistent across time and context. The central question shifts from “What can a model predict?” to “What keeps a model coherent as conditions change?” Ilya Prigogine’s work on self-organization offers a durable frame for that shift, and it fits European priorities: dignity, transparency, energy realism, and pluralism.

Prigogine studied dissipative structures: open systems that keep their form through throughput. A living cell, a storm front, a city district: each preserves identity through flow, not stasis. Far from equilibrium, such systems generate macroscopic order from microscopic activity. Translated into computing, this suggests a move away from stockpiling labels and freezing models for long intervals. The target becomes concept-centered systems that keep revising their internal relations as new signals arrive. Call these large concept models (LCMs): systems that form, test, and update relationships such as causes, constraints, analogies, and mechanisms, rather than merely counting co-occurrences.
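As a rough illustration of what “form, test, and update relationships” could mean in code, here is a minimal sketch of a concept graph with typed, confidence-weighted relations. All names and the blending rule are assumptions chosen for illustration, not an established LCM design.

```python
# A minimal sketch of the kind of structure an LCM might maintain: a graph of
# concepts whose typed relations (causes, constraints, analogies, mechanisms)
# carry confidence weights that are revised as new signals arrive.
from dataclasses import dataclass, field


@dataclass
class Relation:
    kind: str          # e.g. "causes", "constrains", "analogous_to"
    target: str        # name of the target concept
    confidence: float  # belief in the relation, revised over time


@dataclass
class Concept:
    name: str
    relations: list[Relation] = field(default_factory=list)


class ConceptGraph:
    def __init__(self):
        self.concepts: dict[str, Concept] = {}

    def assert_relation(self, source: str, kind: str, target: str, confidence: float):
        """Form a new relation, or revise an existing one."""
        concept = self.concepts.setdefault(source, Concept(source))
        for rel in concept.relations:
            if rel.kind == kind and rel.target == target:
                # Blend old belief with new evidence instead of overwriting.
                rel.confidence = 0.8 * rel.confidence + 0.2 * confidence
                return
        concept.relations.append(Relation(kind, target, confidence))
```

The blending step is one way to keep revision open-ended: new evidence shifts, rather than replaces, existing belief.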

A useful illustration comes from classic self-organization examples often discussed in the same tradition: ants moving grains in a seemingly haphazard way, circling and scattering sand across many small heaps. For a long stretch, nothing looks “structured.” Then a threshold is crossed: once heaps reach comparable height, the ants begin linking them together, and a coherent pattern appears at the colony level. Micro-level randomness, macro-level order: a phase change triggered by a simple local condition.

Neurobiology offers a parallel threshold mechanism. A neuron integrates many small inputs, most of which subside with no obvious event. Yet when membrane voltage reaches a critical level, the cell fires an action potential: an all-or-none transition that reorganizes the system’s state, propagates forward, and changes what becomes learnable for downstream circuits. In both cases, a threshold rule turns diffuse fluctuations into stable structure.

These threshold dynamics matter for AI design. Present systems often behave like stochastic parrots: fluent, high-variance, and easily nudged off-track by small perturbations. If we encode threshold-driven self-organization into learning loops (activation, consolidation, revision, pruning), AI can become closer to a living process: feedback-hungry, frugal with data and energy, explainable through recorded internal transitions, and open to revision when evidence shifts.
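To make the loop concrete, here is a minimal sketch of such a threshold rule in Python, assuming a leaky-integrator accumulator per concept; the class name, decay rate, and threshold are illustrative choices, not a prescribed design. Sub-threshold evidence lingers and fades, while evidence that arrives faster than it decays triggers an all-or-none consolidation that is written to a log of internal transitions.

```python
# A minimal sketch of threshold-driven consolidation: evidence accumulates
# per concept with decay; crossing the threshold fires an all-or-none
# transition that is recorded, so internal state changes stay auditable.
class ThresholdConsolidator:
    def __init__(self, threshold: float = 1.0, decay: float = 0.95):
        self.threshold = threshold
        self.decay = decay
        self.levels: dict[str, float] = {}  # accumulated evidence per concept
        self.log: list[str] = []            # recorded internal transitions

    def observe(self, concept: str, evidence: float) -> bool:
        """Integrate one observation; consolidate on threshold crossing."""
        level = self.levels.get(concept, 0.0) * self.decay + evidence
        if level >= self.threshold:
            self.log.append(f"consolidated '{concept}' at level {level:.2f}")
            self.levels[concept] = 0.0      # all-or-none: fire, then reset
            return True
        self.levels[concept] = level        # sub-threshold: evidence lingers, decays
        return False
```

The log is what makes the process explainable: every structural change corresponds to a recorded crossing rather than an opaque drift in weights, and isolated noise fades before it ever reaches the threshold.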

What LCMs address

Current pipelines lean on enormous scraped corpora and mass annotation. The cost is not just financial. It includes repetitive labor that is frequently outsourced, opaque provenance that blocks audit, and training regimes that demand large energy budgets and specialized compute. Generalization often proves brittle: models perform well on curated benchmarks, then fail on edge cases, low-resource languages, domain shifts, or subtle context changes. Black-box internals make it hard to contest outcomes, trace failures, or improve safety. LCMs respond by learning structure rather than surface regularities. The goal is fewer labels, higher reuse of knowledge, and reasoning traces that can be checked.

What changes in practice

Concept-centered systems can infer stable relations from partially labeled streams, weak supervision, and expert taxonomies. In health settings, that points to medical ontologies plus limited clinical data supporting reasoning about symptoms, comorbidity, and treatment pathways, rather than chasing millions of labeled scans.

Dissipative structures persist through continuous exchange. The computing analogue is incremental updating: revise concept graphs, confidence weights, and causal hypotheses as new observations arrive. Translation, crisis response, and safety monitoring benefit from this continuity: knowledge persists, then adapts.

Self-organization implies hypothesis competition and replacement, in the spirit of the hypothetico-deductive model (A. de Groot). LCMs can keep internal “theories” that face ongoing tests. When causal structure shifts (markets, ecosystems, epidemics), models can retire old assumptions and show why; a sketch of this mechanism follows below.

In many European contexts, raw data cannot travel. Concepts can. Local nodes learn from local signals, share abstractions, and converge through protocols that exchange maps rather than records. That reduces privacy exposure and bandwidth demand, and it supports digital sovereignty.

Neural components handle perception and pattern extraction. Symbolic components handle rules, constraints, obligations, and logical consistency. Binding the two yields sample-efficient learning and better auditability in domains like robotics, policy support, and regulated services.
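As a rough sketch of the hypothesis competition described above, the following fragment keeps a pool of weighted “theories,” rewards those that predict incoming observations, and retires those whose weight falls below a floor, recording why. The names (TheoryPool, Hypothesis), the scoring rule, and the thresholds are illustrative assumptions, not a defined algorithm.

```python
# A minimal sketch of hypothesis competition under incremental updating:
# each hypothesis is scored against the latest observation, and persistent
# failure drives its weight below a floor, at which point it is retired
# with a recorded reason the system can later show.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    claim: str
    weight: float = 1.0


class TheoryPool:
    def __init__(self, floor: float = 0.05):
        self.floor = floor
        self.active: list[Hypothesis] = []
        self.retired: list[tuple[str, str]] = []  # (claim, reason)

    def update(self, scores: dict[str, float]):
        """scores maps each claim to how well it predicted the latest observation (0..1)."""
        survivors = []
        for h in self.active:
            # Factor ranges from 0.5 (bad prediction) to 1.5 (good prediction);
            # an unscored claim gets a neutral 1.0 and keeps its weight.
            h.weight *= 0.5 + scores.get(h.claim, 0.5)
            if h.weight < self.floor:
                self.retired.append((h.claim, f"weight fell to {h.weight:.3f}"))
            else:
                survivors.append(h)
        self.active = survivors
```

A hypothesis is never deleted silently: retirement leaves a reason behind, which is the “show why” property the paragraph asks for.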

Europe’s legal scaffolding around data minimization, accountability, and explainability maps cleanly to concept-centered learning, hybrid architectures, and decentralized training. Energy constraints and climate targets make endless giant training runs a poor fit; incremental learning and tighter architectures offer practical relief. Europe’s linguistic diversity becomes an asset when models adapt locally without erasing differences: raw data stays local, concepts travel.
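One way to picture “raw data stays local, concepts travel” is the following sketch, in which each node reduces its records to (source, relation, target) confidence summaries and only those summaries cross the wire. The summary format and the confidence-averaging merge rule are illustrative assumptions, not a defined protocol.

```python
# A minimal sketch of exchanging maps rather than records: raw data never
# leaves a node; only concept-level summaries are shared and merged.
def summarize_local(records: list[dict]) -> dict[tuple[str, str, str], float]:
    """Reduce raw local records to (source, relation, target) -> confidence."""
    counts: dict[tuple[str, str, str], int] = {}
    for r in records:
        key = (r["source"], r["relation"], r["target"])
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values()) or 1
    return {key: n / total for key, n in counts.items()}


def merge(summaries: list[dict]) -> dict:
    """Combine concept maps from several nodes by averaging confidences."""
    merged: dict[tuple[str, str, str], float] = {}
    for s in summaries:
        for key, conf in s.items():
            merged[key] = merged.get(key, 0.0) + conf / len(summaries)
    return merged
```

Because only relation-level abstractions travel, bandwidth scales with the size of the concept map rather than the size of the data, and no individual record is ever exposed.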

Public procurement can shift markets by asking for documented concept maps, energy disclosures, fair-work attestations for any human data work, and on-prem deployment options. Open science can turn taxonomies, benchmarks, and safety playbooks into standards that lower barriers for startups and widen scrutiny.

Prigogine showed that order grows out of flow. The ant heap that suddenly “clicks” into a higher-order pattern, and the neuron that fires once voltage crosses a threshold, point to the same design lesson: intelligence is not a frozen object. It is a regulated process of accumulation, transition, and revision. If AI incorporates such dynamics, it can move past stochastic mimicry toward systems that seek feedback, conserve resources, expose their internal state changes, and update when reality contradicts them.

DZD (2024, Almere-Haven)