AI Factory Glossary

311 technical terms and definitions

snapshot graphs, graph neural networks

Snapshot graphs represent temporal networks as sequences of static graphs at discrete time points.

so(3) equivariant, graph neural networks

SO(3) equivariant networks preserve 3D rotational symmetry: rotating the input produces a correspondingly rotated output, which benefits molecular and geometric data.

soft defect, failure analysis advanced

Soft defects are latent failures that manifest only intermittently or under specific conditions, requiring specialized dynamic fault localization techniques.

soft routing, llm architecture

Soft routing weights the outputs of all experts rather than selecting a discrete subset.
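
A minimal NumPy sketch of the idea (the helper names and the linear router are illustrative assumptions, not a specific library API):

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def soft_route(x, experts, router_weights):
        # experts: list of callables; router_weights: (num_experts, dim) linear router
        gate = softmax(router_weights @ x)           # one weight per expert
        outputs = np.stack([f(x) for f in experts])  # every expert runs on the input
        return gate @ outputs                        # weighted blend, no hard selection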

softplus, neural architecture

Smooth approximation of ReLU: softplus(x) = log(1 + e^x).
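
A minimal, numerically stable NumPy sketch:

    import numpy as np

    def softplus(x):
        # log(1 + e^x), computed as max(x, 0) + log1p(e^-|x|) to avoid overflow
        return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))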

software pipelining, model optimization

Software pipelining overlaps operations from different iterations of a loop, improving instruction-level parallelism and throughput.

solder joint inspection, failure analysis advanced

Solder joint inspection uses X-ray or cross-sectioning to detect voids, cracks, and insufficient wetting.

solubility prediction, chemistry ai

Predict compound solubility from molecular structure.

solvent distillation, environmental & sustainability

Solvent distillation separates and purifies organic solvents based on boiling point differences.

solvent recovery, environmental & sustainability

Solvent recovery systems capture and purify organic solvents from process exhaust, enabling reuse and reducing disposal costs.

sort pooling, graph neural networks

Sort pooling orders nodes by learned structural roles, enabling the use of 1D convolutions for graph classification.

sortpool variant, graph neural networks

SortPool variants improve graph classification by learning node importance for sorted feature concatenation.

sound source localization, multimodal ai

Localize the sources of sounds within a visual scene by combining audio and visual cues.

source-free domain adaptation, domain adaptation

Adapt a source-trained model to a target domain without access to the source data.

sparse attention, transformer

Attention pattern where each token only attends to a subset of tokens rather than all.
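
A minimal NumPy sketch of one common pattern, a sliding local window (real sparse-attention kernels avoid materializing the full mask):

    import numpy as np

    def local_mask(seq_len, window):
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        return np.abs(i - j) <= window          # True where attention is allowed

    def sparse_attention(q, k, v, window=2):
        scores = q @ k.T / np.sqrt(q.shape[-1])
        scores = np.where(local_mask(len(q), window), scores, -np.inf)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)      # row-wise softmax over allowed tokens
        return w @ v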

sparse autoencoders for interpretability, explainable ai

Train sparse autoencoders on model activations to extract interpretable features.

sparse mixture, llm architecture

Sparse mixture of experts activates only a subset of experts per input.
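
A minimal NumPy sketch of top-k gating (helper names are illustrative); contrast with the dense blend under "soft routing" above:

    import numpy as np

    def top_k_route(x, experts, router_weights, k=2):
        logits = router_weights @ x
        top = np.argsort(logits)[-k:]            # indices of the k highest-scoring experts
        gate = np.exp(logits[top] - logits[top].max())
        gate /= gate.sum()                       # renormalize over the chosen experts only
        return sum(g * experts[i](x) for g, i in zip(gate, top))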

sparse model, model architecture

Model in which only a subset of parameters is active for each input (e.g., mixture of experts).

sparse training, model optimization

Sparse training maintains sparsity throughout training, never materializing a dense network.

sparse transformer patterns, transformer

Structured sparsity patterns, such as strided and local windows, applied to transformer attention.

sparse upcycling, model architecture

Convert a trained dense model into a mixture-of-experts model by initializing the experts from the dense weights.

sparse weight averaging, model optimization

Sparse weight averaging combines multiple sparse models, improving generalization.

sparse-to-sparse training, model training

Maintain sparsity throughout training.

spatial attention, model optimization

Spatial attention focuses on informative regions by weighting spatial locations.
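
A minimal NumPy sketch in the spirit of CBAM-style spatial attention (the learned convolution is replaced here by two illustrative scalar weights):

    import numpy as np

    def spatial_attention(feat, w_avg=1.0, w_max=1.0):
        # feat: (C, H, W) feature map
        avg = feat.mean(axis=0)                    # (H, W) channel-average descriptor
        mx = feat.max(axis=0)                      # (H, W) channel-max descriptor
        attn = 1.0 / (1.0 + np.exp(-(w_avg * avg + w_max * mx)))  # sigmoid weight per location
        return feat * attn                         # rescale every channel by the spatial map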

specialist agent, ai agents

Specialist agents focus on specific capabilities, contributing expertise to team efforts.

specification gaming, ai safety

Specification gaming exploits loopholes in the reward or objective specification, satisfying the letter of the objective in unintended or harmful ways.

specification waiver, production

Authorization to operate out-of-spec.

spectral graph convolutions, graph neural networks

Graph convolutions defined in the spectral domain, filtering signals in the eigenbasis of the graph Laplacian.
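
In standard notation, with Laplacian eigendecomposition L = U Λ Uᵀ, a spectral filter g_θ acts on a graph signal x as:

    g_θ ⋆ x = U g_θ(Λ) Uᵀ x

Practical variants such as ChebNet and GCN approximate g_θ(Λ) with low-order polynomials to avoid the full eigendecomposition.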

spectral graph theory, graph neural networks

Analyze graph structure via the eigenvalues and eigenvectors of the graph Laplacian or adjacency matrix.

spectral normalization in gans, generative models

Apply spectral normalization to discriminator weights to stabilize GAN training.

spectral normalization, ai safety

Normalize by largest singular value.

spectral normalization, generative models

Normalize weights by spectral norm.
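
A minimal NumPy sketch using power iteration to estimate the largest singular value, then dividing by it:

    import numpy as np

    def spectral_normalize(W, n_iter=20):
        u = np.random.default_rng(0).standard_normal(W.shape[0])
        for _ in range(n_iter):                  # power iteration on W
            v = W.T @ u
            v /= np.linalg.norm(v)
            u = W @ v
            u /= np.linalg.norm(u)
        sigma = u @ W @ v                        # estimated spectral norm
        return W / sigma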

spectral residual, time series models

Spectral residual detects anomalies by comparing the log-amplitude spectrum of a time series with a smoothed version and mapping the residual back to the time domain as a saliency map.
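
A minimal NumPy sketch of the saliency-map computation (the smoothing window size is an illustrative choice):

    import numpy as np

    def spectral_residual(x, window=3):
        spec = np.fft.fft(x)
        log_amp = np.log(np.abs(spec) + 1e-8)
        smoothed = np.convolve(log_amp, np.ones(window) / window, mode="same")
        residual = log_amp - smoothed                # the "spectral residual"
        phase = np.angle(spec)
        saliency = np.abs(np.fft.ifft(np.exp(residual + 1j * phase)))
        return saliency                              # large values flag candidate anomalies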

speculative decoding, draft model

Speculative decoding uses a small draft model to propose tokens that a larger target model then verifies, typically yielding 2-3x faster inference without quality loss.

speculative decoding, llm optimization

Draft multiple tokens cheaply, then verify them in parallel with the full model to speed up inference.
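
A simplified sketch of draft-and-verify with greedy agreement checking (draft_model and target_model are assumed callables returning a next-token id; production systems verify all drafted positions in one batched forward pass and use a rejection-sampling rule rather than exact match):

    def speculative_step(prefix, draft_model, target_model, k=4):
        # 1) Draft: the small model proposes k tokens autoregressively.
        ctx, proposed = list(prefix), []
        for _ in range(k):
            t = draft_model(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2) Verify: accept proposals until the target model first disagrees.
        ctx, accepted = list(prefix), []
        for t in proposed:
            expected = target_model(ctx)
            if expected != t:
                accepted.append(expected)   # keep the target's token and stop
                break
            accepted.append(t)
            ctx.append(t)
        return accepted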

speculative generality, code ai

A code smell: abstractions or hooks added for anticipated future needs that are never used.

speculative sampling, llm optimization

Speculative sampling uses a draft model to propose tokens that are then verified by the target model.

spend analysis, supply chain & logistics

Spend analysis examines procurement patterns, identifying cost-reduction and consolidation opportunities.

spherenet, graph neural networks

SphereNet uses spherical Bessel functions and spherical harmonics for SE(3)-equivariant molecular property prediction.

spherical harmonics, graph neural networks

Spherical harmonics provide basis functions for rotationally equivariant feature representations in 3D.

spike anneal process, diffusion

Rapid thermal anneal that spends minimal time at peak temperature, activating dopants while limiting their diffusion.

spiking neural networks (snn), neural architecture

Networks that communicate using discrete spikes over time rather than continuous activations.

split learning, training techniques

Split learning partitions a model across parties that compute collaboratively without sharing raw data.

spos, neural architecture search

Single Path One-Shot NAS trains a supernet by uniformly sampling single paths, improving architecture ranking accuracy.

sql generation, code ai

Generate database queries from natural language.

square attack, ai safety

Query-efficient score-based black-box adversarial attack using random square-shaped perturbations.

squeeze-excitation, model optimization

Squeeze-and-excitation blocks recalibrate channel-wise features through global pooling and gating.
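
A minimal NumPy sketch of a squeeze-and-excitation block (w1 and w2 stand in for the two learned fully connected layers, with reduction ratio r implied by their shapes):

    import numpy as np

    def se_block(feat, w1, w2):
        # feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)
        squeezed = feat.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
        hidden = np.maximum(w1 @ squeezed, 0)          # excitation: FC + ReLU
        gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # FC + sigmoid -> per-channel gate
        return feat * gate[:, None, None]              # recalibrate each channel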

srnn, time series models

Stochastic Recurrent Neural Network uses stochastic hidden states for modeling uncertainty in sequences.

stable diffusion architecture, generative models

Open-source latent diffusion model that performs denoising in a compressed latent space with text conditioning.