288 technical terms and definitions

Multi-line code completion: complete multiple lines of code at once.
Distributed across machines.
Optimize accuracy, latency, and model size together.
Share key/value projections across attention heads to reduce memory use and speed up inference.
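A minimal sketch of the shared key/value idea (multi-query attention): several query heads attend over a single key/value head, shrinking the KV cache by the number of heads. All names and shapes here are illustrative, not a specific library's API:

```python
import numpy as np

def shared_kv_attention(x, Wq, Wk, Wv, n_heads):
    # x: (seq, d). Wq: (d, d) split into n_heads query heads.
    # Wk, Wv: (d, d_head) -- ONE key head and ONE value head shared by all
    # query heads, so the cached k and v are n_heads times smaller.
    seq, d = x.shape
    d_head = d // n_heads
    q = (x @ Wq).reshape(seq, n_heads, d_head)  # per-head queries
    k = x @ Wk                                  # single shared key head
    v = x @ Wv                                  # single shared value head
    out = np.empty((seq, n_heads, d_head))
    for h in range(n_heads):
        scores = q[:, h, :] @ k.T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows
        out[:, h, :] = weights @ v
    return out.reshape(seq, d)
```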
Multi-resolution hash encoding stores features at multiple scales in hash tables.
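A simplified sketch of multi-resolution hash encoding in the spirit of this entry: quantize a coordinate at several grid resolutions, hash each cell into a feature table, and concatenate the looked-up features. This omits trilinear interpolation and uses random tables where a real system learns them; all names are illustrative:

```python
import numpy as np

def hash_encode(x, n_levels=4, table_size=2**10, feat_dim=2, base_res=4):
    # x: 2D coordinate in [0, 1). One hash table of feature vectors per
    # resolution level; finer levels see finer grid cells.
    rng = np.random.default_rng(0)
    tables = rng.normal(size=(n_levels, table_size, feat_dim))  # learned in practice
    feats = []
    for level in range(n_levels):
        res = base_res * 2**level
        cell = np.floor(x * res).astype(np.int64)  # integer grid cell at this scale
        # Spatial hash: XOR coordinates scaled by large primes, mod table size.
        idx = (cell[0] * 1 ^ cell[1] * 2654435761) % table_size
        feats.append(tables[level, idx])
    return np.concatenate(feats)  # concatenated multi-scale features
```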
Train on multiple resolutions.
Discriminators at different resolutions.
Multi-scale generation produces images at multiple resolutions simultaneously or progressively.
Adapt from multiple source domains.
Multiple checks at different points.
Use multiple steps to gradually bypass restrictions.
Gradually elicit harmful behavior.
Multi-style training uses diverse acoustic conditions during ASR training for robustness.
Adapt to multiple target domains.
Pre-train on multiple objectives simultaneously.
Train on multiple tasks together.
Learn from multiple teacher models.
Share resources across teams.
Multi-token prediction forecasts several future tokens at once, enabling faster generation.
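A minimal sketch of the multi-token idea: one shared hidden state feeds several output heads, head i scoring the token at offset i+1, so a single forward pass yields logits for several future positions. The heads here are plain weight matrices; everything is illustrative:

```python
import numpy as np

def multi_token_logits(h, heads):
    # h: hidden state at the current position, shape (d,).
    # heads: list of k matrices, shape (d, vocab) -- head i predicts
    # the token at position t + 1 + i from the same hidden state.
    return [h @ W for W in heads]
```

A decoder could draft all k tokens greedily from these logits and then verify them with the base model, which is where the speedup comes from.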
Multi-view learning leverages multiple representations or modalities of data to improve model robustness and performance.
Learn from different views of data.
Multilingual models handle multiple languages through diverse training data.
Single model for many language pairs.
Pre-train on many languages together.
Compress multimodal information.
Reasoning across modalities.
Combine information from modalities.
Combine text, audio, and visual cues.
Multimodal transformers process audio and visual sequences with cross-modal attention mechanisms.
Translate between modalities.
Diffusion over categorical distributions.
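A minimal sketch of one forward noising step for diffusion over categorical distributions, assuming a uniform-resampling transition kernel: with probability 1 - beta_t the category is kept, otherwise it is resampled uniformly over the K categories. Function name and schedule are illustrative:

```python
import numpy as np

def multinomial_diffusion_step(x_onehot, beta_t, rng):
    # x_onehot: one-hot vector over K categories (floats).
    # Mixture of "keep current category" and "uniform over K":
    K = x_onehot.shape[-1]
    probs = (1.0 - beta_t) * x_onehot + beta_t / K  # valid distribution, sums to 1
    return rng.multinomial(1, probs)                # sampled noisier one-hot
```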
Multitask instruction learning trains on diverse tasks simultaneously, improving generalization.
Multivariate temporal point processes model interdependent event sequences across multiple event types with cross-excitation.
Murphy yield model averages the Poisson yield over a triangular defect-density distribution, capturing moderate defect clustering: Y = ((1 - e^(-AD)) / (AD))^2.
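A minimal sketch of the widely quoted Murphy form, Y = ((1 - e^(-AD)) / (AD))^2, where A is die area and D is defect density; the function name and units are illustrative:

```python
import numpy as np

def murphy_yield(die_area, defect_density):
    # Murphy yield: Poisson yield averaged over a triangular
    # distribution of defect density D across the wafer.
    ad = die_area * defect_density  # expected defects per die
    return ((1.0 - np.exp(-ad)) / ad) ** 2
```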
Muse uses masked generative transformers for parallel image generation from text.
Music Transformer applies relative positional encoding to transformers enabling generation of long expressive musical sequences.
Multiple models learn from each other.
Mutually exciting point processes model how events of one type trigger events of other types.
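A minimal sketch of the conditional intensity of a mutually exciting (multivariate Hawkes) process with an assumed exponential kernel: each past event of type k adds alpha[m, k] * exp(-beta * (t - t_i)) to the intensity of type m. All parameter names are illustrative:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    # events: list of (time, type) pairs observed before t.
    # mu: baseline rates, shape (M,).
    # alpha[m, k]: how strongly a past type-k event excites type m.
    # beta: exponential decay rate of the excitation.
    lam = mu.copy()
    for ti, k in events:
        if ti < t:
            lam += alpha[:, k] * np.exp(-beta * (t - ti))
    return lam
```

Diagonal entries of alpha give self-excitation; off-diagonal entries give the cross-excitation between event types that this entry describes.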