
AI Factory Glossary

3,145 technical terms and definitions


strategic sourcing, supply chain & logistics

Strategic sourcing aligns procurement with business objectives, optimizing cost, quality, and risk.

strategy adaptation, ai agents

Strategy adaptation modifies approaches based on success feedback.

streaming generation, llm optimization

Streaming generation outputs tokens incrementally as they are generated, improving perceived latency.

streaming llm, llm architecture

Process infinite sequences.

stress migration modeling, reliability

Model thermal stress effects.

stress-strain calibration, metrology

Relate Raman shift to stress.

strided attention, transformer

Attend to every k-th position.

structural time series, time series models, state space models, unobserved components, trend analysis, seasonality, forecasting

# Structural Time Series Models (STS)

Structural time series (STS) models, also called **state space models** or **unobserved components models**, decompose a time series into interpretable components, each representing a distinct source of variation.

## 1. Core Components

A structural time series model decomposes an observed series $y_t$ into additive components:

$$ y_t = \mu_t + \gamma_t + \psi_t + X_t\beta + \varepsilon_t $$

Where:

- $\mu_t$ — Trend component
- $\gamma_t$ — Seasonal component
- $\psi_t$ — Cyclical component
- $X_t\beta$ — Regression/explanatory effects
- $\varepsilon_t$ — Irregular (white noise) component

## 2. Component Specifications

### 2.1 Trend Component

The trend ($\mu_t$) captures the underlying level and growth pattern of the series.

#### Local Level Model (Random Walk)

$$ \mu_t = \mu_{t-1} + \eta_t, \quad \eta_t \sim N(0, \sigma_\eta^2) $$

- Level evolves as a random walk
- No slope/growth rate component
- Suitable for series without systematic growth

#### Local Linear Trend Model

$$ \begin{aligned} \mu_t &= \mu_{t-1} + \nu_{t-1} + \eta_t, \quad \eta_t \sim N(0, \sigma_\eta^2) \\ \nu_t &= \nu_{t-1} + \zeta_t, \quad \zeta_t \sim N(0, \sigma_\zeta^2) \end{aligned} $$

- $\mu_t$ — Stochastic level
- $\nu_t$ — Stochastic slope (growth rate)
- Both level and slope evolve over time
- When $\sigma_\zeta^2 = 0$: slope is fixed (deterministic growth)
- When $\sigma_\eta^2 = 0$: smooth trend (integrated random walk)

#### Smooth Trend (Integrated Random Walk)

$$ \begin{aligned} \mu_t &= \mu_{t-1} + \nu_{t-1} \\ \nu_t &= \nu_{t-1} + \zeta_t, \quad \zeta_t \sim N(0, \sigma_\zeta^2) \end{aligned} $$

- Level changes are smooth (no level disturbance)
- Only the slope receives stochastic shocks

#### Deterministic Trend

$$ \mu_t = \alpha + \beta t $$

- Fixed intercept $\alpha$ and slope $\beta$
- No stochastic evolution

### 2.2 Seasonal Component

The seasonal component ($\gamma_t$) captures recurring patterns at fixed intervals.

#### Dummy Variable Form

$$ \gamma_t = -\sum_{j=1}^{s-1} \gamma_{t-j} + \omega_t, \quad \omega_t \sim N(0, \sigma_\omega^2) $$

- $s$ — Number of seasons (e.g., $s=12$ for monthly data)
- Seasonal effects sum to zero over a complete cycle
- When $\sigma_\omega^2 = 0$: deterministic (fixed) seasonality

#### Trigonometric/Fourier Form

$$ \gamma_t = \sum_{j=1}^{[s/2]} \gamma_{j,t} $$

Each harmonic $j$ follows:

$$ \begin{bmatrix} \gamma_{j,t} \\ \gamma_{j,t}^* \end{bmatrix} = \begin{bmatrix} \cos \lambda_j & \sin \lambda_j \\ -\sin \lambda_j & \cos \lambda_j \end{bmatrix} \begin{bmatrix} \gamma_{j,t-1} \\ \gamma_{j,t-1}^* \end{bmatrix} + \begin{bmatrix} \omega_{j,t} \\ \omega_{j,t}^* \end{bmatrix} $$

Where:

- $\lambda_j = \frac{2\pi j}{s}$ — Frequency of harmonic $j$
- $\omega_{j,t}, \omega_{j,t}^* \sim N(0, \sigma_\omega^2)$
- Allows different variances for different harmonics
- More parsimonious when few harmonics are needed
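The trend and seasonal specifications above translate directly into a simulation. The following is a minimal Python/NumPy sketch that generates a series from a local linear trend plus a dummy-form seasonal component; the variance values and initial conditions are arbitrary illustrations, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 120, 12                                  # e.g., 10 years of monthly data

# Illustrative (not estimated) variance hyperparameters
sigma_eta, sigma_zeta, sigma_omega, sigma_eps = 0.5, 0.05, 0.3, 1.0

# Local linear trend: mu_t = mu_{t-1} + nu_{t-1} + eta_t,  nu_t = nu_{t-1} + zeta_t
mu, nu = np.zeros(n), np.zeros(n)
mu[0], nu[0] = 10.0, 0.2                        # arbitrary starting level and slope
for t in range(1, n):
    mu[t] = mu[t - 1] + nu[t - 1] + rng.normal(0, sigma_eta)
    nu[t] = nu[t - 1] + rng.normal(0, sigma_zeta)

# Dummy-form seasonal: gamma_t = -(gamma_{t-1} + ... + gamma_{t-s+1}) + omega_t
gamma = np.zeros(n)
gamma[:s - 1] = rng.normal(0, 1.0, s - 1)       # arbitrary initial seasonal pattern
gamma[:s - 1] -= gamma[:s - 1].mean()           # roughly sum-to-zero start
for t in range(s - 1, n):
    gamma[t] = -gamma[t - s + 1:t].sum() + rng.normal(0, sigma_omega)

# Observation: y_t = mu_t + gamma_t + eps_t
y = mu + gamma + rng.normal(0, sigma_eps, n)
```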
### 2.3 Cyclical Component

The cyclical component ($\psi_t$) captures medium-term fluctuations not tied to fixed calendar periods.

$$ \begin{bmatrix} \psi_t \\ \psi_t^* \end{bmatrix} = \rho \begin{bmatrix} \cos \lambda_c & \sin \lambda_c \\ -\sin \lambda_c & \cos \lambda_c \end{bmatrix} \begin{bmatrix} \psi_{t-1} \\ \psi_{t-1}^* \end{bmatrix} + \begin{bmatrix} \kappa_t \\ \kappa_t^* \end{bmatrix} $$

Where:

- $\lambda_c \in (0, \pi)$ — Cycle frequency
- $\rho \in (0, 1)$ — Damping factor (ensures stationarity)
- $\kappa_t, \kappa_t^* \sim N(0, \sigma_\kappa^2)$
- Period of cycle: $\frac{2\pi}{\lambda_c}$ time units

### 2.4 Regression Component

The regression component ($X_t\beta$) incorporates explanatory variables:

$$ \text{Regression effect} = \sum_{k=1}^{K} \beta_k x_{k,t} $$

Common applications:

- **Intervention effects**: Step functions, pulse dummies, ramp effects
- **Calendar effects**: Trading days, holidays, leap years
- **Explanatory variables**: Economic indicators, weather, etc.

#### Time-Varying Coefficients (Optional)

$$ \beta_t = \beta_{t-1} + \xi_t, \quad \xi_t \sim N(0, \sigma_\xi^2) $$

### 2.5 Irregular Component

The irregular component ($\varepsilon_t$) is white noise:

$$ \varepsilon_t \sim N(0, \sigma_\varepsilon^2) $$

- White noise (serially uncorrelated)
- Captures measurement error and short-term fluctuations
- Also called "observation noise"

## 3. State Space Representation

### 3.1 General Form

Any structural time series model can be written in state space form:

**Observation Equation:**

$$ y_t = Z_t \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H_t) $$

**State Equation:**

$$ \alpha_{t+1} = T_t \alpha_t + R_t \eta_t, \quad \eta_t \sim N(0, Q_t) $$

Where:

- $y_t$ — Observed data (scalar or vector)
- $\alpha_t$ — State vector (unobserved components)
- $Z_t$ — Observation matrix (links states to observations)
- $T_t$ — Transition matrix (governs state evolution)
- $R_t$ — Selection matrix
- $H_t$ — Observation noise variance
- $Q_t$ — State noise covariance matrix

### 3.2 Example: Local Linear Trend + Seasonal

State vector:

$$ \alpha_t = \begin{bmatrix} \mu_t \\ \nu_t \\ \gamma_t \\ \gamma_{t-1} \\ \vdots \\ \gamma_{t-s+2} \end{bmatrix} $$

## 4. Estimation via Kalman Filter

### 4.1 Kalman Filter Recursions

**Prediction Step:**

$$ \begin{aligned} \alpha_{t|t-1} &= T_t \alpha_{t-1|t-1} \\ P_{t|t-1} &= T_t P_{t-1|t-1} T_t' + R_t Q_t R_t' \end{aligned} $$

**Update Step:**

$$ \begin{aligned} v_t &= y_t - Z_t \alpha_{t|t-1} \quad \text{(prediction error)} \\ F_t &= Z_t P_{t|t-1} Z_t' + H_t \quad \text{(prediction error variance)} \\ K_t &= P_{t|t-1} Z_t' F_t^{-1} \quad \text{(Kalman gain)} \\ \alpha_{t|t} &= \alpha_{t|t-1} + K_t v_t \\ P_{t|t} &= (I - K_t Z_t) P_{t|t-1} \end{aligned} $$

Where:

- $\alpha_{t|t-1}$ — Predicted state (prior)
- $\alpha_{t|t}$ — Filtered state (posterior)
- $P_{t|t-1}$ — Predicted state covariance
- $P_{t|t}$ — Filtered state covariance

### 4.2 Kalman Smoother

Refines estimates using the full sample (backward pass):

$$ \begin{aligned} \alpha_{t|n} &= \alpha_{t|t} + P_{t|t} T_{t+1}' P_{t+1|t}^{-1} (\alpha_{t+1|n} - \alpha_{t+1|t}) \\ P_{t|n} &= P_{t|t} + P_{t|t} T_{t+1}' P_{t+1|t}^{-1} (P_{t+1|n} - P_{t+1|t}) P_{t+1|t}^{-1} T_{t+1} P_{t|t} \end{aligned} $$

Where $n$ is the total number of observations.
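The filter recursions in Section 4.1 map directly onto code. Below is a minimal Python/NumPy sketch for a univariate series with time-invariant system matrices; the example model and variance values are assumptions for illustration, not a production implementation.

```python
import numpy as np

def kalman_filter(y, Z, T, R, H, Q, a0, P0):
    """Kalman filter for y_t = Z a_t + eps_t,  a_{t+1} = T a_t + R eta_t (Section 4.1 notation)."""
    n, m = len(y), len(a0)
    a_filt = np.zeros((n, m))
    P_filt = np.zeros((n, m, m))
    v = np.zeros(n)                              # prediction errors v_t
    F = np.zeros(n)                              # prediction error variances F_t

    a, P = np.asarray(a0, float), np.asarray(P0, float)   # prior for t = 0
    for t in range(n):
        # Update step
        v[t] = y[t] - Z @ a                      # v_t = y_t - Z a_{t|t-1}
        F[t] = Z @ P @ Z + H                     # F_t = Z P Z' + H
        K = P @ Z / F[t]                         # Kalman gain
        a_filt[t] = a + K * v[t]                 # a_{t|t}
        P_filt[t] = P - np.outer(K, Z @ P)       # P_{t|t} = (I - K Z) P
        # Prediction step for t + 1
        a = T @ a_filt[t]
        P = T @ P_filt[t] @ T.T + R @ Q @ R.T
    return a_filt, P_filt, v, F

# Usage sketch: local level model (random walk signal observed with noise)
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1.0, 200)
Z = np.array([1.0]); T = np.array([[1.0]]); R = np.array([[1.0]])
a_filt, P_filt, v, F = kalman_filter(y, Z, T, R, H=1.0, Q=np.array([[0.1]]),
                                     a0=np.zeros(1), P0=np.eye(1) * 1e6)
```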
## 5. Hyperparameter Estimation

### 5.1 Maximum Likelihood

The log-likelihood is computed via the prediction error decomposition:

$$ \log L(\theta) = -\frac{n}{2} \log(2\pi) - \frac{1}{2} \sum_{t=1}^{n} \left( \log |F_t| + v_t' F_t^{-1} v_t \right) $$

Where:

- $\theta$ — Vector of hyperparameters (variance terms)
- $v_t$ — Prediction errors from the Kalman filter
- $F_t$ — Prediction error variances

Optimization methods:

- Quasi-Newton (BFGS, L-BFGS)
- EM algorithm
- Scoring algorithms
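As a concrete illustration of the prediction error decomposition, here is a minimal sketch of maximum likelihood estimation for a local level model in Python (NumPy + SciPy). The simulated data, starting values, and optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(log_theta, y):
    """Prediction error decomposition for the local level model.

    theta = (sigma_eps^2, sigma_eta^2), passed on the log scale so the
    optimizer works on an unconstrained space.
    """
    sigma_eps2, sigma_eta2 = np.exp(log_theta)
    a, P = 0.0, 1e7                         # diffuse-ish initialization
    loglik = 0.0
    for yt in y:
        v = yt - a                          # prediction error v_t
        F = P + sigma_eps2                  # prediction error variance F_t
        loglik += -0.5 * (np.log(2 * np.pi) + np.log(F) + v**2 / F)
        K = P / F                           # Kalman gain
        a = a + K * v                       # filtered level
        P = P * (1 - K) + sigma_eta2        # next-period predicted variance
    return -loglik

# Simulated example data: random walk level observed with noise
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.5, 300))
y = level + rng.normal(0, 1.0, 300)

res = minimize(negative_log_likelihood, x0=np.log([1.0, 1.0]), args=(y,),
               method="L-BFGS-B")
sigma_eps2_hat, sigma_eta2_hat = np.exp(res.x)
print(sigma_eps2_hat, sigma_eta2_hat)       # true simulation values were 1.0 and 0.25
```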
### 5.2 Bayesian Estimation

$$ p(\theta | y_{1:n}) \propto p(y_{1:n} | \theta) \cdot p(\theta) $$

Common approaches:

- **MCMC**: Gibbs sampling, Hamiltonian Monte Carlo
- **Variational inference**: Faster approximation
- **Integrated nested Laplace approximation (INLA)**

Common priors:

- Inverse-gamma for variance parameters
- Half-Cauchy or half-normal for scale parameters

## 6. Model Selection and Diagnostics

### 6.1 Information Criteria

$$ \begin{aligned} \text{AIC} &= -2 \log L + 2k \\ \text{BIC} &= -2 \log L + k \log n \\ \text{AICc} &= \text{AIC} + \frac{2k(k+1)}{n-k-1} \end{aligned} $$

Where $k$ is the number of hyperparameters.

### 6.2 Diagnostic Checks

Standardized prediction errors should be:

- **Zero mean**: $E[v_t / \sqrt{F_t}] = 0$
- **Unit variance**: $\text{Var}[v_t / \sqrt{F_t}] = 1$
- **Serially uncorrelated**: Check with the Ljung-Box test
- **Normally distributed**: Check with the Jarque-Bera test

### 6.3 Auxiliary Residuals

- **Observation residuals**: Detect outliers
- **State residuals**: Detect structural breaks

$$ \begin{aligned} e_t &= \frac{y_t - Z_t \alpha_{t|n}}{\sqrt{\text{Var}(y_t - Z_t \alpha_{t|n})}} \\ r_t &= \frac{\eta_t}{\sqrt{\text{Var}(\eta_t)}} \end{aligned} $$

## 7. Comparison with Other Approaches

| Approach | Philosophy | Strengths | Limitations |
|:---------|:-----------|:----------|:------------|
| **ARIMA** | Reduced-form; models stationary transformations | Parsimonious, well-understood | Components not interpretable |
| **Exponential Smoothing** | Weighted averages with decay | Simple, effective | Less flexible seasonality |
| **Structural TS** | Explicit component decomposition | Interpretable, handles missing data | More parameters |
| **Prophet** | Additive trend + seasonality + holidays | User-friendly | Less rigorous uncertainty |
| **Deep Learning** | Learn patterns from data | Powerful with big data | Black box, data hungry |

## 8. Additional Topics

### 8.1 Handling Missing Data

The Kalman filter naturally handles missing observations:

- When $y_t$ is missing, skip the update step
- The prediction step proceeds normally
- The smoother propagates information through gaps

### 8.2 Multivariate Extensions

For a vector $y_t \in \mathbb{R}^p$:

$$ y_t = Z_t \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H_t) $$

Applications:

- Common trends across multiple series
- Factor models
- Dynamic factor analysis

### 8.3 Non-Gaussian Extensions

- **Student-t errors**: Heavy tails, robust to outliers
- **Mixture models**: Regime switching
- **Non-linear state space**: Extended Kalman filter, particle filters

## 9. Software Implementations

### R Packages

```r
# KFAS - Kalman Filter and Smoother
library(KFAS)
model <- SSModel(y ~ SSMtrend(2, Q = list(NA, NA)) +
                   SSMseasonal(12, Q = NA), H = NA)
fit <- fitSSM(model, inits = rep(0, 4))

# bsts - Bayesian Structural Time Series
library(bsts)
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons = 12)
model <- bsts(y, state.specification = ss, niter = 1000)

# dlm - Dynamic Linear Models
library(dlm)
build <- function(theta) {
  dlmModPoly(2, dV = exp(theta[1]), dW = exp(theta[2:3])) +
    dlmModSeas(12, dV = 0, dW = exp(theta[4]))
}
fit <- dlmMLE(y, parm = rep(0, 4), build = build)
```

### Python

```python
# statsmodels
from statsmodels.tsa.statespace.structural import UnobservedComponents

model = UnobservedComponents(
    y,
    level='local linear trend',
    seasonal=12,
    stochastic_seasonal=True
)
results = model.fit()

# TensorFlow Probability
import tensorflow_probability as tfp

trend = tfp.sts.LocalLinearTrend(observed_time_series=y)
seasonal = tfp.sts.Seasonal(num_seasons=12, observed_time_series=y)
model = tfp.sts.Sum([trend, seasonal], observed_time_series=y)
```

## 10. Summary

Structural time series models provide:

- **Interpretability**: Each component has clear economic/statistical meaning
- **Flexibility**: Add/remove components based on domain knowledge
- **Robustness**: Natural handling of missing data and irregular spacing
- **Uncertainty quantification**: Full probability distributions for components and forecasts
- **Intervention analysis**: Easy incorporation of known breaks and policy changes

The state space framework unifies estimation, filtering, smoothing, and forecasting within a coherent probabilistic structure, making structural time series models a powerful tool for understanding and predicting temporal phenomena.

structured attention patterns, transformer

Predefined sparsity patterns.

structured output, llm optimization

Structured output constrains generation to follow specified formats like JSON or schemas.

structured pruning, model optimization

Structured pruning removes entire channels, layers, or blocks, enabling hardware-efficient acceleration.

structured pruning, model optimization

Remove entire channels, layers, or attention heads.

student teacher, smaller model, kd

The student is a smaller model that learns from a larger teacher. The teacher provides a richer training signal than hard labels.

style loss, generative models

Match style statistics.

style mixing, generative models

Combine styles from different images.

style mixing, multimodal ai

Style mixing combines latent codes at different scales, creating hybrid generations.

style reference, generative models

Match style of reference image.

style transfer diffusion, multimodal ai

Style transfer in diffusion models adapts content to reference styles through conditioning.

style transfer, generative models

Apply artistic style from one image to content of another.

style-based generation, generative models

Generate content in specific artistic styles.

stylegan architecture, generative models

Style-based generator.

stylegan-xl, generative models

Large-scale StyleGAN.

stylegan, generative models

High-quality GAN with style control.

stylegan3, multimodal ai

StyleGAN3 achieves alias-free generation through translation and rotation equivariance.

subgoal, ai agents

Subgoals are intermediate objectives required to achieve overall goals.

subject-driven generation, multimodal ai

Subject-driven generation creates images featuring specific subjects from reference images.

subsampling, training techniques

Subsampling randomly selects training examples, reducing privacy cost.

subspace alignment, domain adaptation

Align domain subspaces.

summary generation as pre-training, nlp

Generate summaries during pre-training.

super-resolution ai, computer vision

Upscale images to higher resolution using deep learning.

supermasks, model optimization

Binary masks that work without training.

supernet training, neural architecture

Train network containing all candidate architectures.

supernet training, neural architecture search

Supernet training creates a weight-sharing over-parameterized network encompassing all candidate architectures for efficient performance estimation.

superposition hypothesis, explainable ai

Networks pack more features than dimensions.

supplier audit, supply chain & logistics

Supplier audits systematically evaluate vendor facilities, processes, and quality systems, ensuring compliance with requirements.

supplier consolidation, supply chain & logistics

Supplier consolidation reduces vendor count, leveraging volume for better terms and simplified management.

supplier development, supply chain & logistics

Supplier development programs improve vendor capabilities through training, audits, and collaborative improvement.

supplier performance, supply chain & logistics

Supplier performance metrics track quality, delivery reliability, and responsiveness for vendor management.

supplier scorecard, supply chain & logistics

Supplier scorecards track vendor performance across metrics like quality, delivery, and cost, providing feedback for continuous improvement.

supply chain for chiplets, business

Ecosystem for chiplet-based design.

supply chain integration, supply chain & logistics

Supply chain integration connects information systems across partners enabling seamless data flow.

supply chain logistics, operations

Manage material flow.

supply chain risk, supply chain & logistics

Supply chain risk in semiconductor manufacturing includes material shortages, supplier failures, geopolitical disruptions, and lead time variability.

supply chain visibility, supply chain & logistics

Supply chain visibility provides real-time tracking of materials and components throughout semiconductor manufacturing.

supply chain, dependency, security

ML supply chain risks: malicious models, poisoned datasets, vulnerable dependencies. Verify sources.

supply chain, industry

Network of suppliers providing materials, equipment, and services.

surface code, quantum ai

Leading quantum error correction code.

sustain phase, quality & reliability

Sustain phase maintains improvements, preventing reversion to old methods.

sustain, manufacturing operations

Sustain maintains discipline, continuing improvement practices over the long term.

sustainability initiatives, facility

Programs to reduce energy, water, and chemical usage.