
AI Factory Glossary

169 technical terms and definitions


listwise ranking,machine learning

Optimize entire ranked list.

litellm,proxy,unified

LiteLLM provides a unified, drop-in interface to many LLM providers.

lithography modeling, optical lithography, photolithography, fourier optics, opc, smo, resolution

# Semiconductor Manufacturing Process: Lithography Mathematical Modeling

## 1. Introduction

Lithography is the critical patterning step in semiconductor manufacturing that transfers circuit designs onto silicon wafers. It is essentially the "printing press" of chip making and determines the minimum feature sizes achievable.

### 1.1 Basic Process Flow

1. Coat wafer with photoresist
2. Expose photoresist to light through a mask/reticle
3. Develop the photoresist (remove exposed or unexposed regions)
4. Etch or deposit through the patterned resist
5. Strip the remaining resist

### 1.2 Types of Lithography

- **Optical lithography:** DUV at 193 nm, EUV at 13.5 nm
- **Electron beam lithography:** Direct-write, maskless
- **Nanoimprint lithography:** Mechanical pattern transfer
- **X-ray lithography:** Short wavelength exposure

## 2. Optical Image Formation

The foundation of lithography modeling is **partially coherent imaging theory**, formalized through the Hopkins integral.

### 2.1 Hopkins Integral

The intensity distribution at the image plane is given by:

$$
I(x,y) = \iiint\!\!\!\int TCC(f_1,g_1;f_2,g_2) \cdot \tilde{M}(f_1,g_1) \cdot \tilde{M}^*(f_2,g_2) \cdot e^{2\pi i[(f_1-f_2)x + (g_1-g_2)y]} \, df_1\,dg_1\,df_2\,dg_2
$$

Where:
- $I(x,y)$ — Intensity at image plane coordinates $(x,y)$
- $\tilde{M}(f,g)$ — Fourier transform of the mask transmission function
- $TCC$ — Transmission Cross Coefficient

### 2.2 Transmission Cross Coefficient (TCC)

The TCC encodes both the illumination source and lens pupil:

$$
TCC(f_1,g_1;f_2,g_2) = \iint S(f,g) \cdot P(f+f_1,g+g_1) \cdot P^*(f+f_2,g+g_2) \, df\,dg
$$

Where:
- $S(f,g)$ — Source intensity distribution
- $P(f,g)$ — Pupil function (encodes aberrations, NA cutoff)
- $P^*$ — Complex conjugate of the pupil function

### 2.3 Sum of Coherent Systems (SOCS)

To accelerate computation, the TCC is decomposed using eigendecomposition:

$$
TCC(f_1,g_1;f_2,g_2) = \sum_{k=1}^{N} \lambda_k \cdot \phi_k(f_1,g_1) \cdot \phi_k^*(f_2,g_2)
$$

The image becomes a weighted sum of coherent images:

$$
I(x,y) = \sum_{k=1}^{N} \lambda_k \left| \mathcal{F}^{-1}\{\phi_k \cdot \tilde{M}\} \right|^2
$$

### 2.4 Coherence Factor

The partial coherence factor $\sigma$ is defined as:

$$
\sigma = \frac{NA_{source}}{NA_{lens}}
$$

- $\sigma = 0$ — Fully coherent illumination
- $\sigma = 1$ — Matched illumination
- $\sigma > 1$ — Overfilled illumination

## 3. Resolution Limits and Scaling Laws

### 3.1 Rayleigh Criterion

The minimum resolvable feature size:

$$
R = k_1 \frac{\lambda}{NA}
$$

Where:
- $R$ — Minimum resolvable feature
- $k_1$ — Process factor (theoretical limit $\approx 0.25$, practical $\approx 0.3\text{--}0.4$)
- $\lambda$ — Wavelength of light
- $NA$ — Numerical aperture $= n \sin\theta$

### 3.2 Depth of Focus

$$
DOF = k_2 \frac{\lambda}{NA^2}
$$

Where:
- $DOF$ — Depth of focus
- $k_2$ — Process-dependent constant

### 3.3 Technology Comparison

| Technology | $\lambda$ (nm) | NA | Min. Feature | DOF |
|:-----------|:---------------|:-----|:-------------|:----|
| DUV ArF | 193 | 1.35 | ~38 nm | ~100 nm |
| EUV | 13.5 | 0.33 | ~13 nm | ~120 nm |
| High-NA EUV | 13.5 | 0.55 | ~8 nm | ~45 nm |

### 3.4 Resolution Enhancement Techniques (RETs)

Key techniques to reduce effective $k_1$:

- **Off-Axis Illumination (OAI):** Dipole, quadrupole, annular
- **Phase-Shift Masks (PSM):** Alternating, attenuated
- **Optical Proximity Correction (OPC):** Bias, serifs, sub-resolution assist features (SRAFs)
- **Multiple Patterning:** LELE, SADP, SAQP
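
To make the SOCS image formula of Section 2.3 concrete before moving on to mask modeling, here is a minimal numpy sketch. The TCC eigenpairs (`kernels`, `weights`) are placeholders — a single idealized circular pupil — rather than the output of a real TCC eigendecomposition.

```python
import numpy as np

def socs_aerial_image(mask, kernels, weights):
    """Aerial image via the SOCS decomposition:
    I(x, y) = sum_k lambda_k * |IFFT{ phi_k * M~ }|^2,
    where M~ is the mask spectrum and (lambda_k, phi_k) are TCC eigenpairs."""
    mask_spectrum = np.fft.fft2(mask)            # M~(f, g)
    image = np.zeros(mask.shape)
    for lam, phi in zip(weights, kernels):       # phi given in the frequency domain
        field = np.fft.ifft2(phi * mask_spectrum)
        image += lam * np.abs(field) ** 2        # incoherent sum of coherent images
    return image

# Placeholder TCC eigenpairs: a real flow would obtain them by
# eigendecomposition of the Hopkins TCC for a given source and pupil.
n = 256
mask = np.zeros((n, n)); mask[:, 96:160] = 1.0           # simple line/space mask
freq = np.fft.fftfreq(n)
fx, fy = np.meshgrid(freq, freq)
pupil = (fx**2 + fy**2 <= 0.25**2).astype(float)         # idealized low-pass pupil
aerial = socs_aerial_image(mask, kernels=[pupil], weights=[1.0])
print(aerial.shape, aerial.max())
```
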
## 4. Rigorous Electromagnetic Mask Modeling

### 4.1 Thin Mask Approximation (Kirchhoff)

For features much larger than wavelength:

$$
E_{mask}(x,y) = t(x,y) \cdot E_{incident}
$$

Where $t(x,y)$ is the complex transmission function.

### 4.2 Maxwell's Equations

For sub-wavelength features, we must solve Maxwell's equations rigorously:

$$
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
$$

$$
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
$$

### 4.3 RCWA (Rigorous Coupled-Wave Analysis)

For periodic structures with grating period $d$, fields are expanded in Floquet modes:

$$
E(x,z) = \sum_{n=-N}^{N} A_n(z) \cdot e^{i k_{xn} x}
$$

Where the wavevector components are:

$$
k_{xn} = k_0 \sin\theta_0 + \frac{2\pi n}{d}
$$

This yields a matrix eigenvalue problem:

$$
\frac{d^2}{dz^2}\mathbf{A} = \mathbf{K}^2 \mathbf{A}
$$

Where $\mathbf{K}$ couples different diffraction orders through the dielectric tensor.

### 4.4 FDTD (Finite-Difference Time-Domain)

Discretizing Maxwell's equations on a Yee grid:

$$
\frac{\partial H_y}{\partial t} = \frac{1}{\mu}\left(\frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x}\right)
$$

$$
\frac{\partial E_x}{\partial t} = \frac{1}{\epsilon}\left(\frac{\partial H_y}{\partial z} - J_x\right)
$$

### 4.5 EUV Mask 3D Effects

Shadowing from absorber thickness $h$ at angle $\theta$:

$$
\Delta x = h \tan\theta
$$

For EUV at 6° chief ray angle:

$$
\Delta x \approx 0.105 \cdot h
$$

## 5. Photoresist Modeling

### 5.1 Dill ABC Model (Exposure)

The photoactive compound (PAC) concentration evolves as:

$$
\frac{\partial M(z,t)}{\partial t} = -I(z,t) \cdot M(z,t) \cdot C
$$

Light absorption follows Beer-Lambert law:

$$
\frac{dI}{dz} = -\alpha(M) \cdot I
$$

$$
\alpha(M) = A \cdot M + B
$$

Where:
- $A$ — Bleachable absorption coefficient
- $B$ — Non-bleachable absorption coefficient
- $C$ — Exposure rate constant (quantum efficiency)
- $M$ — Normalized PAC concentration

### 5.2 Post-Exposure Bake (PEB) — Reaction-Diffusion

For chemically amplified resists (CARs):

$$
\frac{\partial h}{\partial t} = D \nabla^2 h + k \cdot h \cdot M_{blocking}
$$

Where:
- $h$ — Acid concentration
- $D$ — Diffusion coefficient
- $k$ — Reaction rate constant
- $M_{blocking}$ — Blocking group concentration

The blocking group deprotection:

$$
\frac{\partial M_{blocking}}{\partial t} = -k_{amp} \cdot h \cdot M_{blocking}
$$

### 5.3 Mack Development Rate Model

$$
r(m) = r_{max} \cdot \frac{(a+1)(1-m)^n}{a + (1-m)^n} + r_{min}
$$

Where:
- $r$ — Development rate
- $m$ — Normalized PAC concentration remaining
- $n$ — Contrast (dissolution selectivity)
- $a$ — Inhibition depth
- $r_{max}$ — Maximum development rate (fully exposed)
- $r_{min}$ — Minimum development rate (unexposed)

### 5.4 Enhanced Mack Model

Including surface inhibition:

$$
r(m,z) = r_{max} \cdot \frac{(a+1)(1-m)^n}{a + (1-m)^n} \cdot \left(1 - e^{-z/l}\right) + r_{min}
$$

Where $l$ is the surface inhibition depth.

## 6. Optical Proximity Correction (OPC)

### 6.1 Forward Problem

Given mask $M$, compute the printed wafer image:

$$
I = F(M)
$$

Where $F$ represents the complete optical and resist model.
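
The development-rate part of the forward model $F$ is often the Mack model of Sections 5.3–5.4; a minimal sketch follows, with illustrative parameter values that are not calibrated to any real resist.

```python
import numpy as np

def mack_rate(m, r_max, r_min, n, a):
    """Mack development rate r(m) from Section 5.3.
    m is the normalized remaining PAC concentration (0 = fully exposed)."""
    return r_max * ((a + 1.0) * (1.0 - m) ** n) / (a + (1.0 - m) ** n) + r_min

def enhanced_mack_rate(m, z, r_max, r_min, n, a, l):
    """Enhanced Mack model (Section 5.4) with surface inhibition depth l."""
    return (r_max * ((a + 1.0) * (1.0 - m) ** n) / (a + (1.0 - m) ** n)
            * (1.0 - np.exp(-z / l)) + r_min)

# Illustrative parameter values only (not calibrated to any real resist).
m = np.linspace(0.0, 1.0, 5)
print(mack_rate(m, r_max=100.0, r_min=0.1, n=5.0, a=10.0))
```
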
### 6.2 Inverse Problem

Given target pattern $T$, find mask $M$ such that:

$$
F(M) \approx T
$$

### 6.3 Edge Placement Error (EPE)

$$
EPE_i = x_{printed,i} - x_{target,i}
$$

### 6.4 OPC Optimization Formulation

Minimize the cost function:

$$
\mathcal{L}(M) = \sum_{i=1}^{N} w_i \cdot EPE_i^2 + \lambda \cdot R(M)
$$

Where:
- $w_i$ — Weight for evaluation point $i$
- $R(M)$ — Regularization term for mask manufacturability
- $\lambda$ — Regularization strength

### 6.5 Gradient-Based OPC

Using gradient descent:

$$
M_{n+1} = M_n - \eta \frac{\partial \mathcal{L}}{\partial M}
$$

The gradient requires computing:

$$
\frac{\partial \mathcal{L}}{\partial M} = \sum_i 2 w_i \cdot EPE_i \cdot \frac{\partial EPE_i}{\partial M} + \lambda \frac{\partial R}{\partial M}
$$

### 6.6 Adjoint Method for Gradient Computation

The sensitivity $\frac{\partial I}{\partial M}$ is computed efficiently using the adjoint formulation:

$$
\frac{\partial \mathcal{L}}{\partial M} = \text{Re}\left\{ \tilde{M}^* \cdot \mathcal{F}\left\{ \sum_k \lambda_k \phi_k^* \cdot \mathcal{F}^{-1}\left\{ \phi_k \cdot \frac{\partial \mathcal{L}}{\partial I} \right\} \right\} \right\}
$$

This avoids computing individual sensitivities for each mask pixel.

### 6.7 Mask Manufacturability Constraints

Common regularization terms:

- **Minimum feature size:** $R_1(M) = \sum \max(0, w_{min} - w_i)^2$
- **Minimum space:** $R_2(M) = \sum \max(0, s_{min} - s_i)^2$
- **Edge curvature:** $R_3(M) = \int |\kappa(s)|^2 ds$
- **Shot count:** $R_4(M) = N_{vertices}$

## 7. Source-Mask Optimization (SMO)

### 7.1 Joint Optimization Formulation

$$
\min_{S,M} \sum_{\text{patterns}} \|I(S,M) - T\|^2 + \lambda_S R_S(S) + \lambda_M R_M(M)
$$

Where:
- $S$ — Source intensity distribution
- $M$ — Mask transmission function
- $T$ — Target pattern
- $R_S(S)$ — Source manufacturability regularization
- $R_M(M)$ — Mask manufacturability regularization

### 7.2 Source Parameterization

Pixelated source with constraints:

$$
S(f,g) = \sum_{i,j} s_{ij} \cdot \text{rect}\left(\frac{f - f_i}{\Delta f}\right) \cdot \text{rect}\left(\frac{g - g_j}{\Delta g}\right)
$$

Subject to:

$$
0 \leq s_{ij} \leq 1 \quad \forall i,j
$$

$$
\sum_{i,j} s_{ij} = S_{total}
$$

### 7.3 Alternating Optimization

**Algorithm:**

1. Initialize $S_0$, $M_0$
2. For iteration $n = 1, 2, \ldots$:
   - Fix $S_n$, optimize $M_{n+1} = \arg\min_M \mathcal{L}(S_n, M)$
   - Fix $M_{n+1}$, optimize $S_{n+1} = \arg\min_S \mathcal{L}(S, M_{n+1})$
3. Repeat until convergence

### 7.4 Gradient Computation for SMO

Source gradient:

$$
\frac{\partial I}{\partial S}(x,y) = \left| \mathcal{F}^{-1}\{P \cdot \tilde{M}\}(x,y) \right|^2
$$

Mask gradient uses the adjoint method as in OPC.
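
To illustrate the gradient-based OPC loop of Section 6.5, here is a small pixel-based sketch. The forward model is replaced by a Gaussian low-pass surrogate (an assumption made only to keep the example self-contained), for which the adjoint equals the blur itself; a real flow would use the SOCS model and the adjoint expression of Section 6.6.

```python
import numpy as np

def blur(mask, sigma=2.0):
    """Stand-in forward model F(M): a Gaussian low-pass filter used here as a
    cheap surrogate for the full optical + resist model (assumption)."""
    n = mask.shape[0]
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(mask) * h))

def opc_gradient_descent(target, steps=200, eta=0.5):
    """Pixel-based OPC sketch: minimize ||F(M) - T||^2 by gradient descent.
    Because this surrogate F is linear and self-adjoint, dL/dM = 2 F(F(M) - T)."""
    mask = target.copy()
    for _ in range(steps):
        residual = blur(mask) - target
        grad = 2.0 * blur(residual)                    # adjoint of the blur is the blur itself
        mask = np.clip(mask - eta * grad, 0.0, 1.0)    # keep transmission in [0, 1]
    return mask

n = 128
target = np.zeros((n, n)); target[48:80, 48:80] = 1.0  # square target pattern
mask = opc_gradient_descent(target)
print(np.mean((blur(mask) - target) ** 2))             # residual EPE-like error
```
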
## 8. Stochastic Effects and EUV

### 8.1 Photon Shot Noise

Photon counts follow a Poisson distribution:

$$
P(n) = \frac{\bar{n}^n e^{-\bar{n}}}{n!}
$$

For EUV at 13.5 nm, photon energy is:

$$
E_{photon} = \frac{hc}{\lambda} = \frac{1240 \text{ eV} \cdot \text{nm}}{13.5 \text{ nm}} \approx 92 \text{ eV}
$$

Mean photons per pixel:

$$
\bar{n} = \frac{\text{Dose} \cdot A_{pixel}}{E_{photon}}
$$

### 8.2 Relative Shot Noise

$$
\frac{\sigma_n}{\bar{n}} = \frac{1}{\sqrt{\bar{n}}}
$$

For 30 mJ/cm² dose and 10 nm pixel:

$$
\bar{n} \approx 200 \text{ photons} \implies \sigma/\bar{n} \approx 7\%
$$

### 8.3 Line Edge Roughness (LER)

Characterized by power spectral density:

$$
PSD(f) = \frac{LER^2 \cdot \xi}{1 + (2\pi f \xi)^{2(1+H)}}
$$

Where:
- $LER$ — RMS line edge roughness (3σ value)
- $\xi$ — Correlation length
- $H$ — Hurst exponent (0 < H < 1)
- $f$ — Spatial frequency

### 8.4 LER Decomposition

$$
LER^2 = LWR^2/2 + \sigma_{placement}^2
$$

Where:
- $LWR$ — Line width roughness
- $\sigma_{placement}$ — Line placement error

### 8.5 Stochastic Defectivity

Probability of printing failure (e.g., missing contact):

$$
P_{fail} = 1 - \prod_{i} \left(1 - P_{fail,i}\right)
$$

For a chip with $10^{10}$ contacts at 99.9999999999% yield per contact (a per-contact failure probability of $10^{-12}$):

$$
P_{chip,fail} \approx 1\%
$$

### 8.6 Monte Carlo Simulation Steps

1. **Photon absorption:** Generate random events $\sim \text{Poisson}(\bar{n})$
2. **Acid generation:** Each photon generates acid at random location
3. **Diffusion:** Brownian motion during PEB: $\langle r^2 \rangle = 6Dt$
4. **Deprotection:** Local reaction based on acid concentration
5. **Development:** Cellular automata or level-set method

## 9. Multiple Patterning Mathematics

### 9.1 Graph Coloring Formulation

When pitch $< \lambda/(2NA)$, single-exposure patterning fails.

**Graph construction:**
- Nodes $V$ = features (polygons)
- Edges $E$ = spacing conflicts (features too close for one mask)
- Colors $C$ = different masks

### 9.2 k-Colorability Problem

Find assignment $c: V \rightarrow \{1, 2, \ldots, k\}$ such that:

$$
c(u) \neq c(v) \quad \forall (u,v) \in E
$$

This is **NP-complete** for $k \geq 3$.

### 9.3 Integer Linear Programming (ILP) Formulation

Binary variables: $x_{v,c} \in \{0,1\}$ (node $v$ assigned color $c$)

**Objective:**

$$
\min \sum_{(u,v) \in E} \sum_c x_{u,c} \cdot x_{v,c} \cdot w_{uv}
$$

**Constraints:**

$$
\sum_{c=1}^{k} x_{v,c} = 1 \quad \forall v \in V
$$

$$
x_{u,c} + x_{v,c} \leq 1 \quad \forall (u,v) \in E, \forall c
$$

In practice either the conflict products in the objective are linearized with auxiliary variables, or the pairwise constraints are relaxed and conflicts are only penalized in the objective; enforcing both as written leaves nothing to minimize.

### 9.4 Self-Aligned Multiple Patterning (SADP)

Spacer pitch after $n$ iterations:

$$
p_n = \frac{p_0}{2^n}
$$

Where $p_0$ is the initial (lithographic) pitch.

## 10. Process Control Mathematics

### 10.1 Overlay Control

Polynomial model across the wafer:

$$
OVL_x(x,y) = a_0 + a_1 x + a_2 y + a_3 xy + a_4 x^2 + a_5 y^2 + \ldots
$$

**Physical interpretation:**

| Coefficient | Physical Effect |
|:------------|:----------------|
| $a_0$ | Translation |
| $a_1$, $a_2$ | Scale (magnification) |
| $a_3$ | Rotation |
| $a_4$, $a_5$ | Non-orthogonality |

### 10.2 Overlay Correction

Least squares fitting:

$$
\mathbf{a} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}
$$

Where $\mathbf{X}$ is the design matrix and $\mathbf{y}$ is measured overlay.
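
A minimal sketch of the overlay least-squares fit of Sections 10.1–10.2, using synthetic measurement sites and coefficients (illustrative values only):

```python
import numpy as np

# Fit OVL_x(x, y) = a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2
# by ordinary least squares, a = (X^T X)^{-1} X^T y  (Section 10.2).
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)          # measurement sites (normalized)
true_a = np.array([2.0, 0.5, -0.3, 0.1, 0.05, -0.02])          # synthetic coefficients (nm)
X = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])  # design matrix
ovl = X @ true_a + rng.normal(0.0, 0.1, 50)                    # "measured" overlay + noise

a_hat, *_ = np.linalg.lstsq(X, ovl, rcond=None)                # least-squares solution
print(np.round(a_hat, 3))                                      # recovered correction terms
```
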
### 10.3 Run-to-Run Control — EWMA

Exponentially Weighted Moving Average:

$$
\hat{y}_{n+1} = \lambda y_n + (1-\lambda)\hat{y}_n
$$

Where:
- $\hat{y}_{n+1}$ — Predicted output
- $y_n$ — Measured output at step $n$
- $\lambda$ — Smoothing factor $(0 < \lambda < 1)$

### 10.4 CDU Variance Decomposition

$$
\sigma^2_{total} = \sigma^2_{local} + \sigma^2_{field} + \sigma^2_{wafer} + \sigma^2_{lot}
$$

**Sources:**
- **Local:** Shot noise, LER, resist
- **Field:** Lens aberrations, mask
- **Wafer:** Focus/dose uniformity
- **Lot:** Tool-to-tool variation

### 10.5 Process Capability Index

$$
C_{pk} = \min\left(\frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma}\right)
$$

Where:
- $USL$, $LSL$ — Upper/lower specification limits
- $\mu$ — Process mean
- $\sigma$ — Process standard deviation

## 11. Machine Learning Integration

### 11.1 Applications Overview

| Application | Method | Purpose |
|:------------|:-------|:--------|
| Hotspot detection | CNNs | Predict yield-limiting patterns |
| OPC acceleration | Neural surrogates | Replace expensive physics sims |
| Metrology | Regression models | Virtual measurements |
| Defect classification | Image classifiers | Automated inspection |
| Etch prediction | Physics-informed NN | Predict etch profiles |

### 11.2 Neural Network Surrogate Model

A neural network approximates the forward model:

$$
\hat{I}(x,y) = f_{NN}(\text{mask}, \text{source}, \text{focus}, \text{dose}; \theta)
$$

Training objective:

$$
\theta^* = \arg\min_\theta \sum_{i=1}^{N} \|f_{NN}(M_i; \theta) - I_i^{rigorous}\|^2
$$

### 11.3 Hotspot Detection with CNNs

Binary classification:

$$
P(\text{hotspot} | \text{pattern}) = \sigma(\mathbf{W} \cdot \mathbf{features} + b)
$$

Where $\sigma$ is the sigmoid function and features are extracted by convolutional layers.

### 11.4 Inverse Lithography with Deep Learning

Generator network $G$ maps target to mask:

$$
\hat{M} = G(T; \theta_G)
$$

Training with physics-based loss:

$$
\mathcal{L} = \|F(G(T)) - T\|^2 + \lambda \cdot R(G(T))
$$

## 12. Mathematical Disciplines

| Mathematical Domain | Application in Lithography |
|:--------------------|:---------------------------|
| **Fourier Optics** | Image formation, aberrations, frequency analysis |
| **Electromagnetic Theory** | RCWA, FDTD, rigorous mask simulation |
| **Partial Differential Equations** | Resist diffusion, development, reaction kinetics |
| **Optimization Theory** | OPC, SMO, inverse problems, gradient descent |
| **Probability & Statistics** | Shot noise, LER, SPC, process control |
| **Linear Algebra** | Matrix methods, eigendecomposition, least squares |
| **Graph Theory** | Multiple patterning decomposition, routing |
| **Numerical Methods** | FEM, finite differences, Monte Carlo |
| **Machine Learning** | Surrogate models, pattern recognition, CNNs |
| **Signal Processing** | Image analysis, metrology, filtering |

## Key Equations Quick Reference

### Imaging

$$
I(x,y) = \sum_{k} \lambda_k \left| \mathcal{F}^{-1}\{\phi_k \cdot \tilde{M}\} \right|^2
$$

### Resolution

$$
R = k_1 \frac{\lambda}{NA}
$$

### Depth of Focus

$$
DOF = k_2 \frac{\lambda}{NA^2}
$$

### Development Rate

$$
r(m) = r_{max} \cdot \frac{(a+1)(1-m)^n}{a + (1-m)^n} + r_{min}
$$

### LER Power Spectrum

$$
PSD(f) = \frac{LER^2 \cdot \xi}{1 + (2\pi f \xi)^{2(1+H)}}
$$

### OPC Cost Function

$$
\mathcal{L}(M) = \sum_{i} w_i \cdot EPE_i^2 + \lambda \cdot R(M)
$$
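
Finally, a minimal sketch of the run-to-run EWMA update (Section 10.3) and the process capability index (Section 10.5), with illustrative numbers only:

```python
def ewma_predict(measurements, lam=0.3, y0=0.0):
    """Run-to-run EWMA predictor (Section 10.3):
    y_hat_{n+1} = lam * y_n + (1 - lam) * y_hat_n."""
    y_hat = y0
    for y in measurements:
        y_hat = lam * y + (1.0 - lam) * y_hat
    return y_hat

def cpk(mean, sigma, lsl, usl):
    """Process capability index C_pk (Section 10.5)."""
    return min((usl - mean) / (3.0 * sigma), (mean - lsl) / (3.0 * sigma))

# Illustrative numbers only (arbitrary units).
print(ewma_predict([1.0, 1.2, 0.9, 1.1]))
print(cpk(mean=20.0, sigma=0.5, lsl=18.0, usl=22.0))
```
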

llama 2,foundation model

Improved version of Llama with better safety and performance.

llama,foundation model

Meta's open-source foundation language model family.

llamaindex, ai agents

LlamaIndex enables data-augmented LLM applications with retrieval and indexing.

llamaindex,framework

Data framework for LLM applications focused on ingestion and retrieval.

llamaindex,rag,data

LlamaIndex specializes in RAG and data connectors, indexing a wide range of data sources.

llava (large language and vision assistant),llava,large language and vision assistant,multimodal ai

Visual instruction tuning for LLMs.

llm agents, llm, ai agents

LLM agents use language models to autonomously pursue goals through iterative planning and tool use.

llm as judge,auto eval,gpt4

LLM-as-judge uses a strong model to evaluate the outputs of weaker models. It scales better than human evaluation and correlates reasonably well with human judgments.

llm-as-judge,evaluation

Use strong LLM to evaluate other model outputs.

llm, language model, large language model, transformer, neural network, deep learning, nlp, artificial intelligence, machine learning

# LLM Mathematics Modeling

## 1. Mathematical Foundations of LLMs

### 1.1 Transformer Architecture Mathematics

The transformer architecture (Vaswani et al., 2017) consists of these core mathematical operations:

#### Self-Attention Mechanism

The scaled dot-product attention is defined as:

$$
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V
$$

**Variable Definitions:**
- $Q$ — Query matrix: $Q = XW^Q$ where $W^Q \in \mathbb{R}^{d_{model} \times d_k}$
- $K$ — Key matrix: $K = XW^K$ where $W^K \in \mathbb{R}^{d_{model} \times d_k}$
- $V$ — Value matrix: $V = XW^V$ where $W^V \in \mathbb{R}^{d_{model} \times d_v}$
- $d_k$ — Dimension of key vectors (scaling factor prevents gradient vanishing)
- $\sqrt{d_k}$ — Scaling factor to normalize dot products

#### Multi-Head Attention

$$
\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, ..., \text{head}_h)W^O
$$

Where each head is computed as:

$$
\text{head}_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
$$

**Parameters:**
- $h$ — Number of attention heads (typically 8, 12, 32, or more)
- $W^O \in \mathbb{R}^{hd_v \times d_{model}}$ — Output projection matrix

#### Feed-Forward Networks (FFN)

Position-wise feed-forward network applied to each position:

$$
\text{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2
$$

Or with GELU activation (more common in modern LLMs):

$$
\text{FFN}(x) = \text{GELU}(xW_1 + b_1)W_2 + b_2
$$

Where GELU is defined as:

$$
\text{GELU}(x) = x \cdot \Phi(x) = x \cdot \frac{1}{2}\left[1 + \text{erf}\left(\frac{x}{\sqrt{2}}\right)\right]
$$

#### Layer Normalization

$$
\text{LayerNorm}(x) = \gamma \cdot \frac{x - \mu}{\sigma + \epsilon} + \beta
$$

**Where:**
- $\mu = \frac{1}{d}\sum_{i=1}^{d} x_i$ — Mean across features
- $\sigma = \sqrt{\frac{1}{d}\sum_{i=1}^{d}(x_i - \mu)^2}$ — Standard deviation
- $\gamma, \beta$ — Learnable scale and shift parameters
- $\epsilon$ — Small constant for numerical stability (typically $10^{-5}$)

### 1.2 Statistical Language Modeling

#### Autoregressive Probability Model

LLMs estimate the conditional probability distribution:

$$
P(w_t | w_1, w_2, ..., w_{t-1}; \theta)
$$

The joint probability of a sequence factorizes as:

$$
P(w_1, w_2, ..., w_T) = \prod_{t=1}^{T} P(w_t | w_{<t})
$$

### 5.2 Learning Theory

**Generalization bound:**

$$
P\left[\left|R(h) - \hat{R}(h)\right| > \epsilon\right] \leq \delta
$$

**Sample complexity:**

$$
m \geq \frac{1}{\epsilon^2}\left(d \cdot \log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)
$$

Where $d$ is the VC dimension or related complexity measure.

#### Neural Tangent Kernel (NTK)

In the infinite-width limit:

$$
f(x; \theta_t) - f(x; \theta_0) = \Theta(x, X)(Y - f(X; \theta_0))(1 - e^{-\eta t})
$$

Where $\Theta$ is the NTK:

$$
\Theta(x, x') = \mathbb{E}_{\theta \sim \text{init}}\left[\nabla_\theta f(x; \theta)^T \nabla_\theta f(x'; \theta)\right]
$$

### 5.3 Interpretability Mathematics

#### Superposition Hypothesis

Features represented in superposition:

$$
W = \sum_{i=1}^{m} f_i d_i^T, \quad m > n
$$

Where $m$ features are encoded in $n$-dimensional space.

#### Activation Analysis

Neuron activation patterns:

$$
a_i(x) = \sigma\left(\sum_j w_{ij} x_j + b_i\right)
$$

**Probing classifiers:**

$$
P(y | h) = \text{softmax}(W_p h + b_p)
$$

## Key Equations Reference

| Concept | Equation |
|---------|----------|
| Self-Attention | $\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$ |
| Cross-Entropy Loss | $-\sum_t \log P(w_t \mid w_{<t})$ |
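
A minimal numpy sketch of the scaled dot-product attention defined in Section 1.1, with a causal mask matching the autoregressive factorization of Section 1.2; the matrix sizes and random weights are illustrative only.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=True):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Section 1.1)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if causal:                                   # autoregressive masking: no attending to future tokens
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)
    return softmax(scores) @ V

# Tiny example: 4 tokens, model width 8, a single head with d_k = d_v = 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)   # (4, 8)
```
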

lmql (language model query language),lmql,language model query language,framework

Query language for constraining LLM generation.

load balancing (moe),load balancing,moe,model architecture

Ensure experts are used roughly equally to avoid underutilization.

load balancing agents, ai agents

Load balancing distributes work evenly, preventing bottlenecks in agent systems.

load balancing loss, llm architecture

Load balancing loss encourages uniform expert utilization.

load shedding, llm optimization

Load shedding rejects requests when system capacity is exceeded, preventing overload.

local level model, time series models

Local level model is simplest structural time series with stochastically evolving level component.

local sgd, distributed training

Perform multiple local updates before synchronizing.

local trend model, time series models

Local trend model includes both stochastically evolving level and slope components for trending series.

local-global attention,llm architecture

Combine local sliding window with sparse global attention.

locally typical, llm optimization

Locally typical sampling selects tokens whose information content is close to the model's conditional entropy given the context.

lock-in thermography, failure analysis advanced

Lock-in thermography applies periodic electrical stimulation and phase-sensitive thermal imaging to detect weak heat sources from intermittent defects.

lock-in thermography,failure analysis

Thermal imaging of defects.

lof temporal, lof, time series models

Local Outlier Factor adapted for time series detects anomalies by comparing local densities in feature space.

lof time series, lof, time series models

Local Outlier Factor for time series detects anomalies by comparing densities in windowed feature space.

log quantization, model optimization

Logarithmic quantization represents values in the log domain, improving dynamic range.

log-gaussian cox, time series models

Log-Gaussian Cox processes use Gaussian random fields to model spatially or temporally varying intensity functions.

logarithmic quantization,model optimization

Use logarithmic scale for quantization.

logic programming with llms,ai architecture

Use LLMs to interact with logic systems.

logistics optimization, supply chain & logistics

Logistics optimization determines efficient transportation routing, warehousing, and distribution strategies.

logit bias, llm optimization

Logit bias adjusts token probabilities to encourage or discourage specific outputs.
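
A minimal sketch of the idea — adding a bias to selected logits before the softmax; the token ids and bias values here are arbitrary:

```python
import numpy as np

def apply_logit_bias(logits, bias):
    """Add a per-token bias to raw logits before softmax; positive values make
    a token more likely, strongly negative values effectively ban it."""
    biased = logits.copy()
    for token_id, b in bias.items():
        biased[token_id] += b
    return biased

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
probs = softmax(apply_logit_bias(logits, {2: 5.0, 3: -100.0}))  # boost token 2, ban token 3
print(probs.round(3))
```
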

logit lens, explainable ai

Decode intermediate layer activations through the output head to see what the model predicts at each layer.

long context models, llm architecture

Models handling 100K+ tokens.

long convolution, llm architecture

Long convolutions model extended dependencies through large kernel sizes.

long method detection, code ai

Identify overly long methods.

long prompt handling, generative models

Handle prompts that exceed the context limit.

long-tail rec, recommendation systems

Long-tail recommendation focuses on effectively suggesting less popular items with few interactions.

long-term memory, ai agents

Long-term memory stores experiences and knowledge for retrieval in future tasks.

long-term temporal modeling, video understanding

Capture dependencies across many frames.

longformer attention, llm architecture

Combination of local and global attention.

longformer attention, llm optimization

Longformer combines local sliding window with global attention for efficient long context.

longformer,foundation model

Model with local+global attention for long documents.

lookahead decoding, llm optimization

Lookahead decoding generates multiple future tokens simultaneously when possible.

loop optimization, model optimization

Loop optimization reorders and transforms loops, maximizing parallelism and data locality.

loop unrolling, model optimization

Loop unrolling replicates loop bodies, reducing branching overhead and enabling instruction-level parallelism.

lora diffusion,dreambooth,customize

LoRA and DreamBooth customize diffusion models by training on a few images for personalized generation.

lora fine-tuning, multimodal ai

Low-Rank Adaptation fine-tunes diffusion models efficiently by learning low-rank weight updates.
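
A minimal numpy sketch of the low-rank update idea: the frozen weight $W$ is combined with a trainable update $BA$ of rank $r$. Shapes and scaling follow the common LoRA convention; the sizes here are illustrative only.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """LoRA: keep the frozen weight W and learn a low-rank update B @ A,
    so the effective weight is W + (alpha / r) * B @ A."""
    return x @ (W + (alpha / r) * (B @ A)).T

d_out, d_in, r = 64, 32, 4                      # illustrative sizes; rank r << d
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))              # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                        # trainable up-projection, zero-init so the update starts at 0
x = rng.normal(size=(8, d_in))
print(lora_forward(x, W, A, B, alpha=8.0, r=r).shape)   # (8, 64)
```
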

lora for diffusion, generative models

Efficient fine-tuning with low-rank adaptation.