hmm
¶
Hidden Markov Model for player form state inference.
Implements:

- Forward algorithm (online filtering)
- Forward-Backward (offline smoothing)
- Viterbi decoding (most likely state sequence)
- Dynamic transition matrix perturbation (news signal injection)
- Baum-Welch parameter learning (EM)
- One-step-ahead prediction with uncertainty
HMMInference
¶
HMMInference(
transition_matrix: Optional[ndarray] = None,
emission_params: Optional[dict] = None,
initial_dist: Optional[ndarray] = None,
)
Hidden Markov Model for discrete player form states.
Supports dynamic transition matrix perturbation so that external signals (news, injuries) can shift state probabilities mid-sequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `transition_matrix` | transition_matrix[i,j] = P(S_{t+1}=j \| S_t=i). Rows must sum to 1. TYPE: `Optional[ndarray]` |
| `emission_params` | {state_index: (mean, std)} for Gaussian emissions. TYPE: `Optional[dict]` |
| `initial_dist` | Prior over initial state. TYPE: `Optional[ndarray]` |
Source code in fplx/inference/hmm.py
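A minimal construction sketch. The import path, the three-state labelling, and the concrete numbers are illustrative assumptions, not values taken from the library:

```python
import numpy as np
from fplx.inference.hmm import HMMInference  # assumed import path

# Hypothetical three-state form model: 0 = Injured, 1 = Average, 2 = In-form.
transition_matrix = np.array([
    [0.70, 0.25, 0.05],   # from Injured
    [0.05, 0.75, 0.20],   # from Average
    [0.05, 0.30, 0.65],   # from In-form
])
emission_params = {0: (0.5, 1.0), 1: (3.0, 2.0), 2: (7.0, 3.0)}  # {state: (mean, std)} of points
initial_dist = np.array([0.10, 0.70, 0.20])

hmm = HMMInference(
    transition_matrix=transition_matrix,
    emission_params=emission_params,
    initial_dist=initial_dist,
)
```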
inject_news_perturbation
¶
Perturb transition matrix at a specific timestep based on news.
For each source state, the transition probability toward boosted target states is multiplied by the boost factor (scaled by confidence), then the row is renormalized.
| PARAMETER | DESCRIPTION |
|---|---|
| `timestep` | The gameweek at which the perturbation applies. |
| `state_boost` | {target_state: multiplicative_boost}. E.g., {0: 10.0} means "10x more likely to transition to Injured." |
| `confidence` | Scales the perturbation. 0 = no effect, 1 = full effect. |
Source code in fplx/inference/hmm.py
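The update rule described above can be sketched in plain NumPy. How `confidence` enters the multiplier is an assumption here (a linear blend between 1 and the boost); the actual method may scale it differently:

```python
import numpy as np

def perturbed_matrix(A, state_boost, confidence):
    """Boost selected target columns, then renormalize each row (sketch)."""
    A = A.copy()
    for target, boost in state_boost.items():
        # confidence = 0 leaves the factor at 1.0; confidence = 1 applies the full boost
        A[:, target] *= 1.0 + confidence * (boost - 1.0)
    return A / A.sum(axis=1, keepdims=True)
```

In the actual API the same effect is registered for a single gameweek, e.g. `hmm.inject_news_perturbation(timestep=12, state_boost={0: 10.0}, confidence=0.8)`, and the dynamic-transition methods below then presumably use the perturbed matrix at that timestep.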
clear_perturbations
¶
forward
¶
Forward algorithm with dynamic transition matrices.
| PARAMETER | DESCRIPTION |
|---|---|
| `observations` | Observed points sequence. |

| RETURNS | DESCRIPTION |
|---|---|
| `forward_messages` | Normalized forward messages. forward_messages[t] = P(S_t \| y_1:t) |
| `scale` | Per-timestep normalization constants. |
Source code in fplx/inference/hmm.py
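A self-contained sketch of the scaled forward recursion the table describes, using Gaussian emissions; the real method presumably also swaps in any perturbed matrices registered for specific timesteps:

```python
import numpy as np
from scipy.stats import norm

def forward_filter(observations, A, emission_params, pi):
    """Scaled forward pass: returns (forward_messages, scale) as documented above."""
    n_states, T = A.shape[0], len(observations)
    alphas, scale = np.zeros((T, n_states)), np.zeros(T)
    for t, y in enumerate(observations):
        # Gaussian emission likelihood of y under each state
        lik = np.array([norm.pdf(y, loc=emission_params[s][0], scale=emission_params[s][1])
                        for s in range(n_states)])
        prior = pi if t == 0 else alphas[t - 1] @ A
        unnorm = prior * lik
        scale[t] = unnorm.sum()                  # per-timestep normalization constant
        alphas[t] = unnorm / scale[t]            # P(S_t | y_1:t)
    return alphas, scale
```

The sequence log-likelihood is recoverable as `np.log(scale).sum()`, which is what makes the scale vector worth returning.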
forward_backward
¶
Compute smoothed posteriors P(S_t | y_1:num_timesteps).
| PARAMETER | DESCRIPTION |
|---|---|
| `observations` | Observed points sequence. |

| RETURNS | DESCRIPTION |
|---|---|
| `smoothed_posteriors` | smoothed_posteriors[t, s] = P(S_t=s \| y_1:num_timesteps) |
Source code in fplx/inference/hmm.py
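Continuing the sketch above (same imports, and reusing `forward_filter`), the smoothing pass combines the forward messages with a scaled backward recursion:

```python
def smooth(observations, A, emission_params, pi):
    """gamma[t, s] = P(S_t = s | y_1:T), the smoothed posterior (sketch)."""
    alphas, scale = forward_filter(observations, A, emission_params, pi)
    n_states, T = A.shape[0], len(observations)
    betas = np.ones((T, n_states))
    for t in range(T - 2, -1, -1):
        y = observations[t + 1]
        lik = np.array([norm.pdf(y, loc=emission_params[s][0], scale=emission_params[s][1])
                        for s in range(n_states)])
        betas[t] = A @ (lik * betas[t + 1]) / scale[t + 1]   # reuse the forward scaling
    gamma = alphas * betas
    return gamma / gamma.sum(axis=1, keepdims=True)
```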
viterbi
¶
Most likely state sequence via Viterbi decoding.
| PARAMETER | DESCRIPTION |
|---|---|
| `observations` | Observed points sequence. |

| RETURNS | DESCRIPTION |
|---|---|
| `best_path` | Most likely state sequence. |
Source code in fplx/inference/hmm.py
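A log-space sketch of the decoding step (same imports as the forward sketch); the returned `best_path` holds one state index per timestep:

```python
def viterbi_decode(observations, A, emission_params, pi):
    """Most likely state sequence, computed in log space for numerical stability."""
    n_states, T = A.shape[0], len(observations)
    log_A = np.log(A)

    def log_lik(y):
        return np.array([norm.logpdf(y, loc=emission_params[s][0], scale=emission_params[s][1])
                         for s in range(n_states)])

    delta = np.log(pi) + log_lik(observations[0])
    backptr = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_A           # cand[i, j]: best score ending in i, then i -> j
        backptr[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + log_lik(observations[t])

    best_path = np.zeros(T, dtype=int)
    best_path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):               # trace back through the stored argmax pointers
        best_path[t - 1] = backptr[t, best_path[t]]
    return best_path
```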
predict_next
¶
Predict next timestep's points distribution.
Runs forward algorithm, then propagates one step ahead via the transition matrix.
| PARAMETER | DESCRIPTION |
|---|---|
| `observations` | Observed points sequence. |

| RETURNS | DESCRIPTION |
|---|---|
| `expected_points` | E[Y_{num_timesteps+1} \| y_1:num_timesteps] |
| `variance` | Var[Y_{num_timesteps+1} \| y_1:num_timesteps] (from law of total variance) |
| `next_state_dist` | P(S_{num_timesteps+1} \| y_1:num_timesteps) |
Source code in fplx/inference/hmm.py
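The returned moments follow from mixing the Gaussian emissions under the one-step-ahead state distribution; a sketch of that arithmetic (not necessarily the exact implementation):

```python
def predictive_moments(last_forward_message, A, emission_params):
    """One-step-ahead mean and variance via the law of total variance (sketch)."""
    next_state_dist = last_forward_message @ A                   # P(S_{T+1} | y_1:T)
    means = np.array([emission_params[s][0] for s in range(len(next_state_dist))])
    stds = np.array([emission_params[s][1] for s in range(len(next_state_dist))])
    expected_points = next_state_dist @ means                    # E[Y_{T+1} | y_1:T]
    # Var[Y] = E[Var[Y | S]] + Var[E[Y | S]]
    variance = next_state_dist @ stds**2 + next_state_dist @ (means - expected_points)**2
    return expected_points, variance, next_state_dist
```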
fit
¶
Learn transition matrix and emission parameters via Baum-Welch EM.
| PARAMETER | DESCRIPTION |
|---|---|
| `observations` | Training sequence. |
| `n_iter` | Maximum EM iterations. |
| `tol` | Convergence tolerance on log-likelihood. |
| `verbose` | Print progress. |

| RETURNS | DESCRIPTION |
|---|---|
| `self` | The fitted model. |
Source code in fplx/inference/hmm.py
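A hedged end-to-end usage sketch: the observation values are made up, and it assumes the constructor supplies defaults when no parameters are passed and that `predict_next` returns its three documented values as a tuple in the order listed above:

```python
import numpy as np
from fplx.inference.hmm import HMMInference  # assumed import path

points = np.array([2, 1, 6, 8, 2, 0, 3, 9, 7, 2, 5, 6], dtype=float)  # hypothetical gameweek points

hmm = HMMInference()                          # assumed: sensible defaults when args are omitted
hmm.fit(points, n_iter=50, tol=1e-4, verbose=True)

# fit returns self, so the learned parameters are now used by the inference methods
expected_points, variance, next_state_dist = hmm.predict_next(points)
states = hmm.viterbi(points)                  # most likely form-state path over the season so far
```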