progressive growing,generative models
Gradually increase resolution during GAN training.
3,145 technical terms and definitions
Gradually increase resolution during GAN training.
Add capacity for new tasks.
Add new capacity for each task.
Progressive shrinking trains once-for-all networks by gradually incorporating smaller sub-networks starting from the largest architecture.
Prompt caching reuses processed prompts reducing latency and cost for repeated prefixes.
Use output of one prompt as input to next.
Prompt chaining sequences multiple prompts passing outputs as subsequent inputs.
Split long prompts.
Vector representation of prompts.
Adversarial inputs to subvert model.
Techniques to prevent injection.
Prompt injection inserts malicious instructions into prompts attempting to hijack model behavior.
Attack where user input tricks model into ignoring instructions or leaking info.
Trick model into revealing system prompts or instructions.
Check prompts before processing.
Cut off excess tokens.
Emphasize prompt parts differently.
Edit images by modifying text prompts.
Prompt-to-prompt editing modifies images by adjusting text prompts while preserving structure.
Generate property tests.
Prophet is an additive time series model decomposing signals into trend seasonality and holiday components with automatic changepoint detection.
Proprietary models have restricted access to weights and implementation.
Identify sensitive medical data.
Design proteins with desired properties.
Infer protein function from descriptions.
Predict 3D protein structures (AlphaFold).
Interaction between protein and small molecule.
Learn representative prototypes for each class.
Search directly on target hardware.
ProxylessNAS directly learns architectures on target hardware by training over-parameterized networks with path-level binarization and latency constraints.
Pruning removes unnecessary network components reducing size and computation.
Remove weights or neurons that contribute little to performance.
Pseudo-labeling assigns predicted labels to unlabeled data treating them as ground truth for semi-supervised learning.
Pseudonymization replaces identifiers with pseudonyms allowing reversal with keys.
PubMedBERT is biomedical BERT. Trained on PubMed abstracts.
Summarize code changes.
Purpose limitation restricts data use to specified intended purposes.
Pyraformer uses pyramidal attention with multi-resolution representations for efficient long-term time series forecasting.
Multi-scale vision transformer.
Execute generated Python code for answers.
PyTorch Mobile enables optimized on-device inference for PyTorch models on mobile platforms.
Variational algorithm for optimization.
Whether to use bias in attention projections.
Quality at source ensures suppliers deliver defect-free materials eliminating incoming inspection.
Optimize specific quantiles.
Relate structure to biological activity.
Reduce precision for edge inference.
Training with quantization in mind so the model learns to work well at lower precision.
Quantization-aware training simulates quantization during training improving low-precision accuracy.
Reducing model weight precision (32-bit → 8-bit/4-bit) to save memory and speed up inference.