Membership inference attack (AI safety)
Membership inference determines whether a specific example was in a model's training set, revealing privacy leakage. It is a canonical privacy attack; differential privacy is the standard defense.
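A minimal sketch of the classic loss-threshold attack, assuming the attacker can obtain per-example losses from the target model (the synthetic loss distributions below are illustrative):

    import numpy as np

    def loss_threshold_attack(losses, threshold):
        # Members tend to have lower loss than non-members (overfitting).
        return losses < threshold

    # Illustrative synthetic losses: members sit lower on average.
    rng = np.random.default_rng(0)
    member_losses = rng.gamma(shape=2.0, scale=0.2, size=1000)
    nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)

    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = in training set

    preds = loss_threshold_attack(losses, threshold=0.6)
    print(f"attack accuracy: {(preds == labels).mean():.2f}")  # > 0.5 means leakage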
Membrane filtration removes particles and microorganisms through size-exclusion using porous membranes.
Edit multiple facts simultaneously.
Retrieve from large external memory.
Memory bandwidth measures data transfer rate between processor and memory.
Rate of data transfer to and from GPU memory; often the bottleneck for inference.
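A back-of-the-envelope sketch of why bandwidth, not compute, caps autoregressive decoding: each generated token streams roughly the whole weight set from memory (the model size and bandwidth figures are assumptions):

    # Rough ceiling on decode speed for a memory-bound LLM:
    # tokens/s <= bandwidth / bytes moved per token (~ model size).
    model_params = 7e9      # assumed 7B-parameter model
    bytes_per_param = 2     # fp16 weights
    bandwidth = 2.0e12      # assumed ~2 TB/s of HBM bandwidth

    bytes_per_token = model_params * bytes_per_param
    print(f"ceiling: {bandwidth / bytes_per_token:.0f} tokens/s")  # ~143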
Store representations for contrastive learning.
Memory BIST uses on-chip logic to apply test algorithms like March or checkerboard patterns for memory array fault detection.
Memory BIST generates and applies test patterns to embedded memories on-chip.
Store representative examples from old tasks.
Efficient memory access patterns.
Coalesced memory access: adjacent threads access adjacent memory. Maximizes bandwidth. Critical for GPU performance.
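The same principle has a CPU-side analogue; a minimal NumPy sketch contrasting adjacent and strided reads (timings are machine-dependent, but the strided copy is typically several times slower):

    import time
    import numpy as np

    a = np.random.rand(8192, 8192)

    def bench(fn, label):
        t0 = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - t0:.3f}s")

    bench(lambda: a.copy(), "contiguous copy (adjacent reads)")
    bench(lambda: a.T.copy(), "strided copy (scattered reads)")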
Memory consolidation transfers important information from working to long-term storage.
How models store information.
Networks with explicit external memory for facts.
Memory pools preallocate buffers reducing allocation overhead during inference.
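A minimal sketch of the pattern (class and method names here are illustrative, not a specific library's API):

    import numpy as np

    class BufferPool:
        # Preallocate fixed-size buffers up front; reuse instead of reallocating.
        def __init__(self, count, shape, dtype=np.float32):
            self._free = [np.empty(shape, dtype) for _ in range(count)]

        def acquire(self):
            return self._free.pop()  # raises IndexError when exhausted

        def release(self, buf):
            self._free.append(buf)

    pool = BufferPool(count=4, shape=(1024,))
    buf = pool.acquire()  # no allocation on the hot path
    buf[:] = 0.0
    pool.release(buf)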
Memory profiling tracks allocations. Find leaks, reduce footprint. Python memory_profiler, tracemalloc.
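A minimal sketch using the standard-library tracemalloc mentioned above:

    import tracemalloc

    tracemalloc.start()
    data = [list(range(1000)) for _ in range(1000)]  # allocate something sizable

    current, peak = tracemalloc.get_traced_memory()
    print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")

    # Top allocation sites, grouped by source line.
    for stat in tracemalloc.take_snapshot().statistics("lineno")[:3]:
        print(stat)
    tracemalloc.stop()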
Analyze memory usage patterns.
Memory redundancy incorporates spare rows and columns that can replace defective elements, improving yield through post-manufacturing repair.
Memory retrieval selects relevant past information based on current context.
Access relevant past messages.
Vertically stack memory dies for higher density.
Compress conversation history.
Give agents long-term persistent memory across sessions using vector stores or databases.
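A minimal sketch of persistent agent memory over a toy vector store (the embedding function and class are illustrative stand-ins, not a specific library's API):

    import numpy as np

    class VectorMemory:
        # Store texts with embeddings; retrieve by cosine similarity.
        def __init__(self, embed):
            self.embed = embed  # callable: str -> np.ndarray
            self.texts, self.vectors = [], []

        def add(self, text):
            self.texts.append(text)
            self.vectors.append(self.embed(text))

        def search(self, query, k=3):
            q = self.embed(query)
            sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in self.vectors]
            return [self.texts[i] for i in np.argsort(sims)[::-1][:k]]

    # Stand-in embedding: hash words into a fixed-size bag-of-words vector.
    def toy_embed(text, dim=64):
        v = np.zeros(dim)
        for w in text.lower().split():
            v[hash(w) % dim] += 1.0
        return v

    memory = VectorMemory(toy_embed)
    memory.add("user prefers concise answers")
    memory.add("user's project is a Rust CLI tool")
    print(memory.search("what language is the project in?", k=1))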
Cache previous segments for longer context.
Memory update mechanisms in temporal GNNs maintain node states that are updated by events and read at prediction time.
Memory wall: compute outpaces memory bandwidth. AI models are memory-bound. HBM and on-chip memory help.
Use external memory for long videos.
Memory-bound operations are limited by data transfer rather than computation, which determines actual deployment speed.
Bottleneck type.
Attention variants saving memory.
Methods to reduce memory usage.
Agent memory stores conversation history, facts, summaries. Enables multi-turn conversations and long-term context.
KV-cache stores past keys/values so the model does not recompute them every token. Dramatically speeds up long-context inference with transformers.
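A minimal single-head NumPy sketch of the idea (shapes and names are illustrative):

    import numpy as np

    def attend(q, K, V):
        # Attention of one query over all cached key/value rows.
        scores = K @ q / np.sqrt(q.shape[0])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

    d = 16
    K_cache = np.empty((0, d))
    V_cache = np.empty((0, d))
    rng = np.random.default_rng(0)

    for step in range(5):
        k, v, q = rng.normal(size=(3, d))  # this token's projections
        # Append once; past keys/values are never recomputed.
        K_cache = np.vstack([K_cache, k])
        V_cache = np.vstack([V_cache, v])
        out = attend(q, K_cache, V_cache)

    print("cached steps:", len(K_cache))  # 5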
LLM memory persists facts across sessions. Vector DB or structured storage. Personalization.
Resistive memory devices.
Microelectromechanical systems processing.
Protect mechanical structures.
MEMS probe cards use microfabricated structures for ultra-fine-pitch testing with improved planarity, contact-force control, and probe density.
Find mentors in the AI field. Learn from their experience. Give back by mentoring others.
Middle-End-Of-Line encompasses processes between transistor formation and traditional metal interconnect layers.
Use mercury intrusion to measure pores.
Contact-based electrical testing on wafer.
Merge operations combine split lots, rejoining wafers into a single lot.
Model merging combines weights from multiple fine-tuned models. Can get benefits of each without retraining.
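A minimal sketch of uniform weight averaging ("model soup" style), assuming the checkpoints share an architecture (the stand-in state dicts below are illustrative):

    import numpy as np

    def merge_state_dicts(state_dicts):
        # Uniformly average each parameter across fine-tuned checkpoints.
        return {k: np.mean([sd[k] for sd in state_dicts], axis=0)
                for k in state_dicts[0]}

    rng = np.random.default_rng(0)
    ckpt_a = {"w": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}
    ckpt_b = {"w": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}

    merged = merge_state_dicts([ckpt_a, ckpt_b])
    print(merged["w"].shape)  # (4, 4) averaged weights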
Software managing and tracking fab operations.