Prompt tuning learns continuous prompt embeddings through gradient descent.
Learn soft prompts (continuous vectors) while keeping the model frozen.
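A minimal numpy sketch of the idea, assuming a toy frozen linear "model" (`W`) and mean pooling; the names, dimensions, and forward pass are illustrative stand-ins, not a real LLM API.

```python
import numpy as np

# Toy prompt tuning: a frozen linear "model" scores the mean of
# [soft_prompt; input_embeddings]; only the soft prompt is updated.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=d)          # frozen model parameters (never updated)
x = rng.normal(size=(4, d))     # fixed "token" embeddings for one input
soft_prompt = np.zeros((2, d))  # trainable continuous prompt vectors
target = 1.0

def forward(prompt):
    h = np.concatenate([prompt, x]).mean(axis=0)  # pool prompt + input
    return h @ W

lr = 0.5
n_total = soft_prompt.shape[0] + x.shape[0]
for _ in range(200):
    pred = forward(soft_prompt)
    grad = 2 * (pred - target) * W / n_total  # d(loss)/d(prompt row)
    soft_prompt -= lr * grad                  # gradient step on prompt only

loss = (forward(soft_prompt) - target) ** 2
```

The loss drops to near zero while `W` and `x` never change, which is the whole point: adaptation lives entirely in the prompt vectors.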
Version control prompts like code. Track changes, A/B test versions, rollback if quality drops.
Emphasize prompt parts differently.
Assign different importance to different parts of the prompt.
Use prompts to separate tasks.
Edit images by modifying text prompts.
Prompt-to-prompt editing modifies images by adjusting text prompts while preserving structure.
Prompt = text you feed the model. Clear role, task, input format, and constraints = more reliable and controllable outputs.
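A sketch of that structure as a template; the section labels ("Role:", "Task:", etc.) are a convention, not an API.

```python
# Structured prompt with explicit role, task, input format, and constraints.
def build_prompt(document: str) -> str:
    return (
        "Role: You are a careful technical editor.\n"
        "Task: Summarize the document below in one sentence.\n"
        "Input format: plain text between <doc> tags.\n"
        "Constraints: at most 25 words; no speculation.\n"
        f"<doc>{document}</doc>"
    )

prompt = build_prompt("Prompt tuning learns continuous prompt embeddings.")
```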
Segment using various prompt types.
Promptfoo tests prompts and models. CLI tool for comparing outputs.
PromptLayer logs and versions prompts. Track changes, A/B test.
Resolve pronouns to antecedents.
Create mathematical proofs.
Propensity scores weight training examples by inverse probability of exposure to debias recommendation models.
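A small numpy sketch of inverse-propensity weighting: frequently exposed items are down-weighted so the average approximates the outcome under uniform exposure. The propensities here are illustrative constants, not learned.

```python
import numpy as np

clicks     = np.array([1.0, 0.0, 1.0, 1.0])   # observed feedback
propensity = np.array([0.8, 0.5, 0.2, 0.1])   # P(item was shown)

weights = 1.0 / propensity                     # inverse propensity weights
naive_ctr    = clicks.mean()
debiased_ctr = (weights * clicks).sum() / weights.sum()
```

The rarely shown clicked items (propensity 0.2 and 0.1) dominate the debiased estimate, correcting for the fact that they had little chance to be observed.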
Infer properties of training data.
Generate property tests.
Test general properties rather than examples.
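A hand-rolled sketch of the idea (libraries such as Hypothesis automate input generation and shrinking): rather than checking fixed examples, assert properties that must hold for any input — here, that `sorted` returns an ordered permutation of its input.

```python
import random
from collections import Counter

def is_sorted(ys):
    return all(a <= b for a, b in zip(ys, ys[1:]))

def check_sort_properties(trials=200):
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        ys = sorted(xs)
        assert is_sorted(ys)               # property 1: output is ordered
        assert Counter(ys) == Counter(xs)  # property 2: same multiset
    return True

ok = check_sort_properties()
```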
Prophet is an additive time series model decomposing signals into trend, seasonality, and holiday components with automatic changepoint detection.
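A numpy illustration of the additive structure Prophet assumes — y(t) = trend(t) + seasonality(t) — recovered here by least squares over a linear trend plus day-of-week dummies. This is not Prophet's API: the real library adds changepoints, holidays, and Fourier seasonal bases.

```python
import numpy as np

t = np.arange(140, dtype=float)
true_season = np.array([0, 1, 2, 1, 0, -2, -2], dtype=float)
y = 0.5 * t + true_season[t.astype(int) % 7]   # linear trend + weekly cycle

# Design matrix: intercept, slope, and day-of-week one-hots (one dropped).
dow = t.astype(int) % 7
X = np.column_stack([np.ones_like(t), t]
                    + [(dow == d).astype(float) for d in range(1, 7)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
slope = coef[1]   # recovered trend slope
```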
Sample from each dataset in proportion to its size.
AI drafts business proposals. Structure, persuasion.
Retrieve atomic facts/propositions instead of passages.
Proprietary models have restricted access to weights and implementation.
Identify sensitive medical data.
Protective capacity maintains deliberate excess at non-constraints, enabling system flexibility.
Design proteins with desired properties.
Infer protein function from descriptions.
Predict 3D protein structures (AlphaFold).
Interaction between protein and small molecule.
Protocol Buffers is a binary serialization format. Efficient, typed. Used with gRPC.
Learn representative prototypes for each class.
Test early versions.
Contrast against learned prototypes.
Learn embeddings where examples from same class cluster together.
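A sketch of prototype-based classification in the spirit of the entries above: each class is the mean of its support embeddings, and a query goes to the nearest prototype. Raw 2-D features stand in for learned embeddings here.

```python
import numpy as np

support = {  # per-class "embeddings" (illustrative)
    0: np.array([[0.0, 0.0], [0.2, 0.1]]),
    1: np.array([[2.0, 2.0], [1.8, 2.2]]),
}
prototypes = {c: e.mean(axis=0) for c, e in support.items()}  # class means

def classify(query):
    # Assign to the class whose prototype is nearest in Euclidean distance.
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

pred = classify(np.array([1.9, 2.1]))
```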
Track information sources.
Track model lineage and modifications.
Record origin and modifications of content.
Clipped objective for stable updates.
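A numpy sketch of PPO's clipped surrogate: the policy probability ratio is clipped to [1−ε, 1+ε], and taking the minimum with the unclipped term removes the incentive to push the ratio far from 1. Inputs here are toy values, not outputs of a real policy.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()  # negated for minimization

ratio = np.array([0.5, 1.0, 1.5])   # pi_new / pi_old per sample
adv   = np.array([1.0, 1.0, -1.0])  # advantage estimates
loss  = ppo_clip_loss(ratio, adv)
```

Note the asymmetry: a ratio of 1.5 with a negative advantage keeps its full (unclipped) penalty, while large ratios with positive advantage would be capped at 1+ε.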
Proximity effect alters the current distribution in adjacent conductors, increasing loss and inductance.
Place gettering sites near the wafer's front surface.
Search directly on target hardware.
ProxylessNAS directly learns architectures on target hardware by training over-parameterized networks with path-level binarization and latency constraints.
Remove low-opacity Gaussians.
Pruning removes unnecessary network components reducing size and computation.
Remove weights or neurons that contribute little to performance.
Pruning removes unnecessary weights. Structured pruning removes neurons/heads. Reduces model size and compute.
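A numpy sketch of unstructured magnitude pruning: zero out the smallest-magnitude weights at a target sparsity. Structured pruning would instead drop whole rows/columns (neurons or heads).

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero the fraction `sparsity` of weights with smallest absolute value.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([[0.1, -2.0, 0.05],
              [1.5, -0.2, 0.6]])
pruned = magnitude_prune(w, 0.5)   # half the weights set to zero
```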
Pseudo relevance feedback expands queries using top retrieved results.
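A toy sketch of pseudo relevance feedback: assume the top-ranked documents are relevant and expand the query with their most frequent unseen terms. Real systems weight expansion terms (e.g. Rocchio/RM3) rather than counting raw frequencies.

```python
from collections import Counter

def expand_query(query, top_docs, n_terms=2):
    counts = Counter()
    for doc in top_docs:
        counts.update(w for w in doc.lower().split() if w not in query)
    extra = [w for w, _ in counts.most_common(n_terms)]
    return query + extra

query = ["neural", "pruning"]
docs = ["magnitude pruning of neural weights",
        "structured pruning removes neurons and weights"]
expanded = expand_query(query, docs)
```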
Estimate state visitation counts.
Use high-confidence predictions as labels.
Pseudo-labeling assigns predicted labels to unlabeled data treating them as ground truth for semi-supervised learning.
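A numpy sketch of the confidence-thresholded variant: only predictions above a threshold become pseudo-labels. The probabilities here are stand-ins for a real classifier's softmax outputs.

```python
import numpy as np

probs = np.array([  # per-class confidence for 4 unlabeled examples
    [0.97, 0.03],
    [0.55, 0.45],
    [0.10, 0.90],
    [0.60, 0.40],
])
threshold = 0.85

conf = probs.max(axis=1)
keep = conf >= threshold                 # only confident predictions survive
pseudo_labels = probs.argmax(axis=1)[keep]
kept_indices = np.flatnonzero(keep)      # which unlabeled examples to add
```

The kept examples would then be mixed into the labeled training set, typically with the threshold (and sometimes a loss weight) tuned to limit label noise.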