rl: opt-in EMA-based low-update trigger for greedy selection (experimental) #577

Draft
kimjune01 wants to merge 4 commits into SoarGroup:development from kimjune01:rl-convergence-gate

Conversation

kimjune01 commented on Mar 25, 2026

What this PR does

Adds an opt-in mechanism to track exponential moving averages of per-production |ΔQ| during RL updates, and to force greedy selection on a slot when the EMA of every RL rule on that slot falls below a configured threshold.
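
For orientation, the EMA is assumed here to take the usual exponential-smoothing form (the description above does not spell out the exact recurrence):

$$\mathrm{EMA}_t = \lambda\,\mathrm{EMA}_{t-1} + (1-\lambda)\,\lvert \Delta Q_t \rvert$$

where λ is the chunk-gate-ema-decay parameter introduced below, |ΔQ_t| is the magnitude of the TD update applied to the production on that update, and the EMA is initialized to 1.0 when a rule is created.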

Three new parameters, all off by default:

  • chunk-gate (on/off, default off) — enables the mechanism
  • chunk-gate-threshold (default 0.01) — EMA threshold below which a rule is treated as low-update. The value is an exposed default, not a recommended universal setting.
  • chunk-gate-ema-decay (default 0.95) — smoothing factor for the EMA update

Default configuration leaves existing code paths inactive. chunk-gate=on is required for the new logic to have any effect.
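
For experimentation, the new parameters would be set through the rl command before running the agent. The snippet below assumes Soar's usual `rl --set <parameter> <value>` syntax and uses illustrative values rather than output copied from this branch:

```
rl --set learning on
rl --set chunk-gate on
rl --set chunk-gate-threshold 0.05
rl --set chunk-gate-ema-decay 0.9
```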

Intent

The mechanism lets users experiment with low recent TD-update magnitude as an opt-in trigger for greedy selection, so that chunking has a deterministic result to compile over. Low EMA of |ΔQ| does not imply policy optimality or full convergence. It is a local heuristic about recent update magnitude.

The override currently intercepts the exploration policy step in run_preference_semantics() and forces greedy choice when a slot's RL rules are all below threshold. It does not audit interactions with non-RL preferences, tie handling, or the broader preference semantics, and may affect learned policy behavior in ways this PR has not measured.
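
As a standalone illustration of that override (not the kernel diff itself; Soar's preference semantics are far more involved, and every name below is made up for the sketch), the selection step reduces to choosing the highest-valued candidate when the gate is active and falling back to a stochastic policy otherwise:

```cpp
// Standalone sketch: greedy override vs. ordinary epsilon-greedy selection.
// This mirrors the intent of the decide.cpp change; it is not the actual
// kernel code, and "Candidate" / "gate_active" are illustrative names.
#include <algorithm>
#include <iostream>
#include <random>
#include <string>
#include <vector>

struct Candidate {
    std::string name;
    double value;   // summed numeric-indifferent preference value
};

// Pick the candidate with the highest value (deterministic).
const Candidate& select_greedy(const std::vector<Candidate>& cands) {
    return *std::max_element(cands.begin(), cands.end(),
        [](const Candidate& a, const Candidate& b) { return a.value < b.value; });
}

// Ordinary exploration: with probability epsilon pick uniformly at random,
// otherwise pick greedily (a stand-in for Soar's exploration policies).
const Candidate& select_epsilon_greedy(const std::vector<Candidate>& cands,
                                       double epsilon, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    if (coin(rng) < epsilon) {
        std::uniform_int_distribution<size_t> pick(0, cands.size() - 1);
        return cands[pick(rng)];
    }
    return select_greedy(cands);
}

int main() {
    std::mt19937 rng(42);
    std::vector<Candidate> slot = {{"left", 0.92}, {"right", 0.31}};

    // gate_active stands in for: chunk-gate is on AND every RL rule on the
    // slot has an EMA of |delta_Q| below chunk-gate-threshold.
    bool gate_active = true;

    const Candidate& chosen = gate_active
        ? select_greedy(slot)                       // deterministic, chunkable
        : select_epsilon_greedy(slot, 0.1, rng);    // stochastic exploration

    std::cout << "selected operator: " << chosen.name << "\n";
}
```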

Open for discussion; not a merge proposal.

Changes

  • production.h: Add rl_ema_delta_q field to production struct
  • production.cpp, rete.cpp, reinforcement_learning.cpp: Initialize new field to 1.0 at all creation sites
  • reinforcement_learning.h: Declare 3 new params + rl_slot_converged()
  • reinforcement_learning.cpp: Update EMA in rl_perform_update(), add param init, implement rl_slot_converged()
  • decide.cpp: In run_preference_semantics(), check EMA state before exploration policy; if all RL rules on the slot are below threshold, select greedily
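
To make the bookkeeping concrete, here is a minimal self-contained sketch of the per-production EMA update and the all-rules-below-threshold check. It assumes the conventional exponential-smoothing recurrence and borrows the names rl_ema_delta_q and rl_slot_converged() from the list above, but it is not the patch itself:

```cpp
// Standalone sketch of the per-production EMA of |delta_Q| and the
// slot-level gate check. Field and function names follow the PR's
// description; the real implementations live in reinforcement_learning.cpp.
#include <cmath>
#include <iostream>
#include <vector>

struct RLRule {
    const char* name;
    double rl_ema_delta_q;   // EMA of |delta_Q|, initialized to 1.0 at creation (per the PR)
};

// Called from the RL update: fold the latest |delta_Q| into the rule's EMA.
// Assumed form: ema = decay * ema + (1 - decay) * |delta_q|.
void update_ema(RLRule& rule, double delta_q, double ema_decay) {
    rule.rl_ema_delta_q = ema_decay * rule.rl_ema_delta_q
                        + (1.0 - ema_decay) * std::fabs(delta_q);
}

// The gate: true only if every RL rule on the slot has a low recent
// update magnitude (this says nothing about global convergence).
bool rl_slot_converged(const std::vector<RLRule>& rules_on_slot, double threshold) {
    for (const RLRule& r : rules_on_slot) {
        if (r.rl_ema_delta_q >= threshold) { return false; }
    }
    return !rules_on_slot.empty();
}

int main() {
    std::vector<RLRule> slot = {{"rl*left", 1.0}, {"rl*right", 1.0}};
    const double decay = 0.95;
    const double threshold = 0.01;

    // Simulate TD updates whose magnitude shrinks as the values settle.
    for (int step = 1; step <= 200; ++step) {
        double delta_q = 1.0 / step;   // toy sequence of steadily shrinking updates
        for (RLRule& r : slot) { update_ema(r, delta_q, decay); }
        if (rl_slot_converged(slot, threshold)) {
            std::cout << "gate opens at step " << step << "\n";
            break;
        }
    }
}
```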

Tests

All pass locally (M4 Pro, macOS, 16s clean build):

  • testRLConvergenceGate — agent with chunk-gate ON, 50 decisions, completes successfully
  • testRLConvergenceGateOff — same agent with chunk-gate OFF, regression test
  • testRLConvergenceGateParams — verifies new params accepted by command parser
  • Existing RL/chunking tests (Chunk_RL_Proposal, RL_Variablization, testPreferenceSemantics, testLearn) all pass

Background reference

Gate chunking on RL convergence by tracking an exponential moving
average (EMA) of |delta_Q| per production rule.  When all RL rules
contributing numeric-indifferent preferences to a slot have converged
(EMA below threshold), the decision is made greedily instead of
stochastically. This makes the decision deterministic, which enables
chunking to compile the converged policy into a production rule.

New parameters (all under rl):
  chunk-gate           on/off (default off) — enable convergence gating
  chunk-gate-threshold double  (default 0.01) — EMA below this = converged
  chunk-gate-ema-decay double  (default 0.95) — EMA smoothing factor

When chunk-gate is off, behavior is identical to the existing codebase.

Motivation: Laird (2022) §4 identifies the RL–chunking composition gap
as a known limitation. RL uses stochastic exploration while chunking
requires deterministic results, so the two cannot compose. The planned
fix is to gate chunking on RL convergence. This patch implements that
gate.

Reference: "Introduction to the Soar Cognitive Architecture"
(Laird, 2022, arXiv:2205.03854), §4, p.10.

Three FullTests covering the chunk-gate feature:

1. testRLConvergenceGate: agent with RL learning and chunk-gate ON
   (fast EMA decay=0.5, threshold=0.1). Two operators with consistent
   reward. Verifies 50 decisions complete without crash/hang.

2. testRLConvergenceGateOff: same agent with chunk-gate OFF.
   Regression test — identical decision count confirms no behavior
   change when feature is disabled.

3. testRLConvergenceGateParams: verifies the three new parameters
   (chunk-gate, chunk-gate-threshold, chunk-gate-ema-decay) are
   accepted by the command parser with valid values.

All three tests pass. Existing RL/chunking tests (Chunk_RL_Proposal,
All three tests pass. Existing RL/chunking tests (Chunk_RL_Proposal,
RL_Variablization, testPreferenceSemantics, testLearn) also pass,
confirming zero regression.

The two test agents were nearly identical — only the chunk-gate
params differed. Now there's one shared agent file; the C++ tests
set rl params via ExecuteCommandLine before sourcing.
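
A sketch of that setup, using the SML client API for illustration (the repository's FullTests run through their own harness, and the agent path below is a placeholder):

```cpp
// Sketch of setting the chunk-gate params before sourcing the shared agent.
// The values match the test description (decay 0.5, threshold 0.1); the
// source path is a placeholder, not the real file name in the repository.
#include <iostream>
#include "sml_Client.h"

int main() {
    sml::Kernel* kernel = sml::Kernel::CreateKernelInNewThread();
    sml::Agent* agent = kernel->CreateAgent("rl-gate-test");

    // Configure RL and the gate before loading the shared agent file,
    // so one .soar file serves both the gate-on and gate-off tests.
    agent->ExecuteCommandLine("rl --set learning on");
    agent->ExecuteCommandLine("rl --set chunk-gate on");
    agent->ExecuteCommandLine("rl --set chunk-gate-threshold 0.1");
    agent->ExecuteCommandLine("rl --set chunk-gate-ema-decay 0.5");

    agent->ExecuteCommandLine("source SoarUnitTests/rl-convergence-gate.soar"); // placeholder path
    agent->RunSelf(50);   // 50 decision cycles, matching the test description

    std::cout << agent->ExecuteCommandLine("stats") << "\n";

    kernel->Shutdown();
    delete kernel;
}
```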
scijones self-assigned this on Mar 25, 2026
kimjune01 changed the title from "Gate chunking on RL convergence (EMA of |delta_Q|)" to "rl: opt-in EMA convergence tracking with greedy override (experimental)" on Apr 9, 2026
kimjune01 changed the title from "rl: opt-in EMA convergence tracking with greedy override (experimental)" to "rl: opt-in EMA-based low-update trigger for greedy selection (experimental)" on Apr 9, 2026