PDN, Signal & Power Integrity
Power Distribution Network delivers supply voltage from the source to on-chip circuits, managing impedance to minimize voltage drop and noise.
Process Decision Program Charts anticipate potential problems and plan contingencies.
Peak current considerations in electromigration account for transient current spikes during switching.
Maximum temperature during reflow.
Probabilistic Embeddings for Actor-Critic RL meta-learns task inference and policy jointly for efficient adaptation.
Pearson correlation measures linear association between continuous variables.
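A minimal NumPy sketch of the Pearson formula (covariance divided by the product of standard deviations); the data below is illustrative.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Example: strongly linearly related data gives r close to 1.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + np.array([0.1, -0.2, 0.0, 0.1, -0.1])
print(pearson_r(x, y))  # ~0.999
```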
Use plasma to enable deposition at lower temperatures.
Direct GPU-to-GPU transfer.
Family of methods to adapt models by training only a small fraction of parameters.
Hugging Face PEFT library for parameter-efficient fine-tuning; supports LoRA, prefix tuning, and related adapter methods.
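A hedged usage sketch assuming the Hugging Face transformers and peft packages; the base model, rank, and target modules below are illustrative choices, not a prescribed recipe.

```python
# Parameter-efficient fine-tuning with Hugging Face PEFT (LoRA adapters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora = LoraConfig(
    r=8,                         # low-rank update dimension
    lora_alpha=16,               # scaling factor for the adapter
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2 attention projection (illustrative)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```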
Model for random mismatch.
Thin membrane protecting reticle from particles.
Frame holding pellicle over mask.
Pruned Exact Linear Time algorithm efficiently detects multiple change points in univariate time series.
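A sketch of PELT via the ruptures package (one common implementation); the synthetic signal and penalty value are illustrative.

```python
# Change point detection with PELT, assuming the `ruptures` package is installed.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Piecewise-constant signal with change points at indices 100 and 200.
signal = np.concatenate([rng.normal(0, 1, 100),
                         rng.normal(5, 1, 100),
                         rng.normal(1, 1, 100)])

algo = rpt.Pelt(model="l2").fit(signal)
breakpoints = algo.predict(pen=10)  # penalty trades off number of change points
print(breakpoints)  # e.g. [100, 200, 300]; the last index marks the series end
```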
Different scales for each channel.
Train NeRF for single scene.
Single scale for entire tensor.
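A small NumPy sketch contrasting the per-channel and per-tensor entries above: one int8 scale per output channel versus a single scale for the whole weight tensor; shapes and the 127 quantization bound are illustrative.

```python
import numpy as np

def quant_scales(weights):
    """Illustrative int8 scale computation: one scale for the whole tensor
    (per-tensor) vs. one scale per output channel (per-channel)."""
    qmax = 127.0
    per_tensor = np.abs(weights).max() / qmax
    # Rows (axis 0) index output channels; reduce over the input dimension
    # to get one scale per channel.
    per_channel = np.abs(weights).max(axis=1) / qmax
    return per_tensor, per_channel

w = np.random.randn(4, 16) * np.array([[0.1], [1.0], [5.0], [0.01]])
pt, pc = quant_scales(w)
print(pt)  # single scale, dominated by the largest-magnitude channel
print(pc)  # one scale per channel preserves the small-magnitude channels
```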
Generalized Perceiver for various input/output modalities.
Cross-attention to latent array for processing arbitrary inputs.
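A simplified NumPy sketch of the Perceiver idea: a fixed-size latent array cross-attends to an arbitrarily long input, so cost grows linearly with input length; learned projections and multi-head details are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def latent_cross_attention(latents, inputs):
    """Perceiver-style cross-attention: a small latent array queries a long
    input array, giving O(N_latent * N_input) cost instead of O(N_input^2)."""
    d = latents.shape[-1]
    scores = latents @ inputs.T / np.sqrt(d)   # (N_latent, N_input)
    return softmax(scores, axis=-1) @ inputs   # (N_latent, d)

latents = np.random.randn(32, 64)     # fixed-size learned latent array
inputs = np.random.randn(10_000, 64)  # long input sequence (pixels, audio, ...)
print(latent_cross_attention(latents, inputs).shape)  # (32, 64)
```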
Time when given fraction fails.
Use percentiles instead of sigma.
Compress to perceptually meaningful space.
Loss based on feature similarity.
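A sketch of a perceptual loss under the assumption that torchvision's pretrained VGG16 serves as the feature extractor; the layer cutoff and weights argument are illustrative.

```python
# Perceptual (feature-space) loss sketch using frozen VGG16 features.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()  # up to ~relu3_3
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    """MSE between deep feature maps rather than raw pixels."""
    # In practice inputs should be ImageNet-normalized first.
    return F.mse_loss(features(pred), features(target))

pred = torch.rand(1, 3, 224, 224)
target = torch.rand(1, 3, 224, 224)
print(perceptual_loss(pred, target).item())
```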
Human-aligned quality measures.
Performance prediction estimates architecture accuracy from features or partial training, reducing search costs.
Analyze code to find bottlenecks.
Estimate future technology performance.
Verify process performance.
Performance rate compares actual output to maximum possible at ideal cycle time.
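A tiny worked example of the performance-rate calculation; the function name and figures are illustrative.

```python
def performance_rate(ideal_cycle_time_s, units_produced, run_time_s):
    """Performance component of OEE: actual output relative to the maximum
    output achievable at the ideal cycle time."""
    max_possible = run_time_s / ideal_cycle_time_s
    return units_produced / max_possible

# Example: 400 units in an 8-hour run with a 60 s ideal cycle time.
print(performance_rate(60.0, 400, 8 * 3600))  # 400 / 480 = ~0.833
```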
Actual vs designed throughput.
AI suggests performance optimizations. Profile and improve.
Fast attention using kernel methods.
Performer approximates attention using random feature maps for linear complexity.
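A simplified NumPy sketch in the spirit of Performer's random-feature (FAVOR+) attention: the softmax kernel is approximated with positive random features, giving linear cost in sequence length. This is an illustration, not the reference implementation.

```python
import numpy as np

def performer_attention(Q, K, V, num_features=256, seed=0):
    """Linear-time attention sketch: approximate exp(q.k) with positive
    random features so the n x n attention matrix is never formed."""
    d = Q.shape[-1]
    Q = Q / d ** 0.25   # fold the usual 1/sqrt(d) scaling into q and k
    K = K / d ** 0.25
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, d))

    def phi(X):
        # Positive random features for the softmax kernel.
        proj = X @ W.T                                      # (n, m)
        return np.exp(proj - 0.5 * (X ** 2).sum(-1, keepdims=True)) / np.sqrt(num_features)

    Qp, Kp = phi(Q), phi(K)                                 # (n, m)
    numerator = Qp @ (Kp.T @ V)                             # (n, d), linear in n
    denominator = Qp @ Kp.sum(axis=0)                       # (n,)
    return numerator / denominator[:, None]

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(performer_attention(Q, K, V).shape)  # (1024, 64)
```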
Balls around edges only.
Final attachment after processing.
Predict membrane permeability.
Permutation invariant training resolves label ambiguity in multi-speaker separation through minimum assignment loss.
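A minimal NumPy sketch of the minimum-assignment loss used in permutation invariant training, with MSE as the per-pair loss; array shapes are illustrative.

```python
import itertools
import numpy as np

def pit_loss(estimates, references):
    """Compute the loss under every output-to-speaker assignment and keep
    the minimum, resolving the label ambiguity in source separation."""
    n_src = estimates.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n_src)):
        loss = np.mean([np.square(estimates[i] - references[p]).mean()
                        for i, p in enumerate(perm)])
        best = min(best, loss)
    return best

# Two estimated sources that match the references in swapped order.
refs = np.random.randn(2, 16000)
ests = refs[::-1] + 0.01 * np.random.randn(2, 16000)
print(pit_loss(ests, refs))  # small: the swapped assignment is found
```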
Permutation tests compute p-values by randomly reassigning group labels.
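A short NumPy sketch of a two-sample permutation test on the difference in means; sample sizes and the permutation count are illustrative.

```python
import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Shuffle the pooled group labels and count how often the shuffled
    statistic is at least as extreme as the observed one (two-sided)."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

a = np.random.default_rng(1).normal(0.0, 1.0, 30)
b = np.random.default_rng(2).normal(0.8, 1.0, 30)
print(permutation_test(a, b))  # small p-value: the group means differ
```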
Design perovskite materials for solar cells.
Filter by language model perplexity.
Use perplexity to detect training contamination.
Perplexity measures how well a probability distribution predicts a sample.
Metric measuring how well model predicts text (lower is better).
Perplexity measures how well the model predicts tokens (lower is better). It is derived from cross-entropy loss used during training.
Perplexity = exp(cross-entropy loss). Lower is better. Measures how surprised the model is by test data.
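A tiny worked example of the relationship perplexity = exp(cross-entropy); the token log-probabilities are made up for illustration.

```python
import math

# Perplexity is exp of the mean cross-entropy (in nats) over tokens.
token_log_probs = [-2.1, -0.3, -1.7, -0.9, -2.4]  # illustrative log p(token)
cross_entropy = -sum(token_log_probs) / len(token_log_probs)
perplexity = math.exp(cross_entropy)
print(cross_entropy, perplexity)  # ~1.48 nats -> perplexity ~4.4
```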
Maintain consistent personality.
Maintain consistent character personality across conversation.
Generate with specific persona.
A persona defines the model's character, keeping behavior and knowledge consistent; used for custom assistants.