# ML Papers of The Week
🔥 Highlighting the top ML papers every week.
[Subscribe to our newsletter](https://nlpnews.substack.com/) to get a weekly list of top ML papers in your inbox.
At DAIR.AI we ❤️ reading ML papers so we've created this repo to highlight the top ML papers of every week.
Here is the weekly series:
## 2024
- [Top ML Papers of the Week (November 18 - November 24)](./#top-ml-papers-of-the-week-november-18---november-24---2024)
- [Top ML Papers of the Week (November 11 - November 17)](./#top-ml-papers-of-the-week-november-11---november-17---2024)
- [Top ML Papers of the Week (November 4 - November 10)](./#top-ml-papers-of-the-week-november-4---november-10---2024)
- [Top ML Papers of the Week (October 28 - November 3)](./#top-ml-papers-of-the-week-october-28---november-3---2024)
- [Top ML Papers of the Week (October 21 - October 27)](./#top-ml-papers-of-the-week-october-21---october-27---2024)
- [Top ML Papers of the Week (October 14 - October 20)](./#top-ml-papers-of-the-week-october-14---october-20---2024)
- [Top ML Papers of the Week (October 7 - October 13)](./#top-ml-papers-of-the-week-october-7---october-13---2024)
- [Top ML Papers of the Week (September 30 - October 6)](./#top-ml-papers-of-the-week-september-30---october-6---2024)
- [Top ML Papers of the Week (September 23 - September 29)](./#top-ml-papers-of-the-week-september-23---september-29---2024)
- [Top ML Papers of the Week (September 16 - September 22)](./#top-ml-papers-of-the-week-september-16---september-22---2024)
- [Top ML Papers of the Week (September 9 - September 15)](./#top-ml-papers-of-the-week-september-9---september-15---2024)
- [Top ML Papers of the Week (September 2 - September 8)](./#top-ml-papers-of-the-week-september-2---september-8---2024)
- [Top ML Papers of the Week (August 26 - September 1)](./#top-ml-papers-of-the-week-august-26---september-1---2024)
- [Top ML Papers of the Week (August 19 - August 25)](./#top-ml-papers-of-the-week-august-19---august-25---2024)
- [Top ML Papers of the Week (August 12 - August 18)](./#top-ml-papers-of-the-week-august-12---august-18---2024)
- [Top ML Papers of the Week (August 5 - August 11)](./#top-ml-papers-of-the-week-august-5---august-11---2024)
- [Top ML Papers of the Week (July 29 - August 4)](./#top-ml-papers-of-the-week-july-29---august-4---2024)
- [Top ML Papers of the Week (July 22 - July 28)](./#top-ml-papers-of-the-week-july-22---july-28---2024)
- [Top ML Papers of the Week (July 15 - July 21)](./#top-ml-papers-of-the-week-july-15---july-21---2024)
- [Top ML Papers of the Week (July 8 - July 14)](./#top-ml-papers-of-the-week-july-8---july-14---2024)
- [Top ML Papers of the Week (July 1 - July 7)](./#top-ml-papers-of-the-week-july-1---july-7---2024)
- [Top ML Papers of the Week (June 24 - June 30)](./#top-ml-papers-of-the-week-june-24---june-30---2024)
- [Top ML Papers of the Week (June 17 - June 23)](./#top-ml-papers-of-the-week-june-17---june-23---2024)
- [Top ML Papers of the Week (June 10 - June 16)](./#top-ml-papers-of-the-week-june-10---june-16---2024)
- [Top ML Papers of the Week (June 3 - June 9)](./#top-ml-papers-of-the-week-june-3---june-9---2024)
- [Top ML Papers of the Week (May 27 - June 2)](./#top-ml-papers-of-the-week-may-27---june-2---2024)
- [Top ML Papers of the Week (May 20 - May 26)](./#top-ml-papers-of-the-week-may-20---may-26---2024)
- [Top ML Papers of the Week (May 13 - May 19)](./#top-ml-papers-of-the-week-may-13---may-19---2024)
- [Top ML Papers of the Week (May 6 - May 12)](./#top-ml-papers-of-the-week-may-6---may-12---2024)
- [Top ML Papers of the Week (April 29 - May 5)](./#top-ml-papers-of-the-week-april-29---may-5---2024)
- [Top ML Papers of the Week (April 22 - April 28)](./#top-ml-papers-of-the-week-april-22---april-28---2024)
- [Top ML Papers of the Week (April 15 - April 21)](./#top-ml-papers-of-the-week-april-15---april-21---2024)
- [Top ML Papers of the Week (April 8 - April 14)](./#top-ml-papers-of-the-week-april-8---april-14---2024)
- [Top ML Papers of the Week (April 1 - April 7)](./#top-ml-papers-of-the-week-april-1---april-7---2024)
- [Top ML Papers of the Week (March 26 - March 31)](./#top-ml-papers-of-the-week-march-26---march-31---2024)
- [Top ML Papers of the Week (March 18 - March 25)](./#top-ml-papers-of-the-week-march-18---march-25---2024)
- [Top ML Papers of the Week (March 11 - March 17)](./#top-ml-papers-of-the-week-march-11---march-17---2024)
- [Top ML Papers of the Week (March 4 - March 10)](./#top-ml-papers-of-the-week-march-4---march-10---2024)
- [Top ML Papers of the Week (February 26 - March 3)](./#top-ml-papers-of-the-week-february-26---march-3---2024)
- [Top ML Papers of the Week (February 19 - February 25)](./#top-ml-papers-of-the-week-february-19---february-25---2024)
- [Top ML Papers of the Week (February 12 - February 18)](./#top-ml-papers-of-the-week-february-12---february-18---2024)
- [Top ML Papers of the Week (February 5 - February 11)](./#top-ml-papers-of-the-week-february-5---february-11---2024)
- [Top ML Papers of the Week (January 29 - February 4)](./#top-ml-papers-of-the-week-january-29---february-4---2024)
- [Top ML Papers of the Week (January 22 - January 28)](./#top-ml-papers-of-the-week-january-22---january-28---2024)
- [Top ML Papers of the Week (January 15 - January 21)](./#top-ml-papers-of-the-week-january-15---january-21---2024)
- [Top ML Papers of the Week (January 8 - January 14)](./#top-ml-papers-of-the-week-january-8---january-14---2024)
- [Top ML Papers of the Week (January 1 - January 7)](./#top-ml-papers-of-the-week-january-1---january-7---2024)
## 2023
- [Top ML Papers of the Week (December 25 - December 31)](./#top-ml-papers-of-the-week-december-25---december-31)
- [Top ML Papers of the Week (December 18 - December 24)](./#top-ml-papers-of-the-week-december-18---december-24)
- [Top ML Papers of the Week (December 11 - December 17)](./#top-ml-papers-of-the-week-december-11---december-17)
- [Top ML Papers of the Week (December 4 - December 10)](./#top-ml-papers-of-the-week-december-4---december-10)
- [Top ML Papers of the Week (November 27 - December 3)](./#top-ml-papers-of-the-week-november-27---december-3)
- [Top ML Papers of the Week (November 20 - November 26)](./#top-ml-papers-of-the-week-november-20---november-26)
- [Top ML Papers of the Week (November 13 - November 19)](./#top-ml-papers-of-the-week-november-13---november-19)
- [Top ML Papers of the Week (November 6 - November 12)](./#top-ml-papers-of-the-week-november-6---november-12)
- [Top ML Papers of the Week (October 30 - November 5)](./#top-ml-papers-of-the-week-october-30---november-5)
- [Top ML Papers of the Week (October 23 - October 29)](./#top-ml-papers-of-the-week-october-23---october-29)
- [Top ML Papers of the Week (October 16 - October 22)](./#top-ml-papers-of-the-week-october-16---october-22)
- [Top ML Papers of the Week (October 9 - October 15)](./#top-ml-papers-of-the-week-october-9---october-15)
- [Top ML Papers of the Week (October 2 - October 8)](./#top-ml-papers-of-the-week-october-2---october-8)
- [Top ML Papers of the Week (September 25 - October 1)](./#top-ml-papers-of-the-week-september-25---october-1)
- [Top ML Papers of the Week (September 18 - September 24)](./#top-ml-papers-of-the-week-september-18---september-24)
- [Top ML Papers of the Week (September 11 - September 17)](./#top-ml-papers-of-the-week-september-11---september-17)
- [Top ML Papers of the Week (September 4 - September 10)](./#top-ml-papers-of-the-week-september-4---september-10)
- [Top ML Papers of the Week (August 28 - September 3)](./#top-ml-papers-of-the-week-august-28---september-3)
- [Top ML Papers of the Week (August 21 - August 27)](./#top-ml-papers-of-the-week-august-21---august-27)
- [Top ML Papers of the Week (August 14 - August 20)](./#top-ml-papers-of-the-week-august-14---august-20)
- [Top ML Papers of the Week (August 7 - August 13)](./#top-ml-papers-of-the-week-august-7---august-13)
- [Top ML Papers of the Week (July 31 - August 6)](./#top-ml-papers-of-the-week-july-31---august-6)
- [Top ML Papers of the Week (July 24 - July 30)](./#top-ml-papers-of-the-week-july-24---july-30)
- [Top ML Papers of the Week (July 17 - July 23)](./#top-ml-papers-of-the-week-july-17---july-23)
- [Top ML Papers of the Week (July 10 - July 16)](./#top-ml-papers-of-the-week-july-10---july-16)
- [Top ML Papers of the Week (July 3 - July 9)](./#top-ml-papers-of-the-week-july-3---july-9)
- [Top ML Papers of the Week (June 26 - July 2)](./#top-ml-papers-of-the-week-june-26---july-2)
- [Top ML Papers of the Week (June 19 - June 25)](./#top-ml-papers-of-the-week-june-19---june-25)
- [Top ML Papers of the Week (June 12 - June 18)](./#top-ml-papers-of-the-week-june-12---june-18)
- [Top ML Papers of the Week (June 5 - June 11)](./#top-ml-papers-of-the-week-june-5---june-11)
- [Top ML Papers of the Week (May 29 - June 4)](./#top-ml-papers-of-the-week-may-29-june-4)
- [Top ML Papers of the Week (May 22 - 28)](./#top-ml-papers-of-the-week-may-22-28)
- [Top ML Papers of the Week (May 15 - 21)](./#top-ml-papers-of-the-week-may-15-21)
- [Top ML Papers of the Week (May 8 - 14)](./#top-ml-papers-of-the-week-may-8-14)
- [Top ML Papers of the Week (May 1-7)](./#top-ml-papers-of-the-week-may-1-7)
- [Top ML Papers of the Week (April 24 - April 30)](./#top-ml-papers-of-the-week-april-24---april-30)
- [Top ML Papers of the Week (April 17 - April 23)](./#top-ml-papers-of-the-week-april-17---april-23)
- [Top ML Papers of the Week (April 10 - April 16)](./#top-ml-papers-of-the-week-april-10---april-16)
- [Top ML Papers of the Week (April 3 - April 9)](./#top-ml-papers-of-the-week-april-3---april-9)
- [Top ML Papers of the Week (Mar 27 - April 2)](./#top-ml-papers-of-the-week-mar-27---april-2)
- [Top ML Papers of the Week (Mar 20-Mar 26)](./#top-ml-papers-of-the-week-mar-20-mar-26)
- [Top ML Papers of the Week (Mar 13-Mar 19)](./#top-ml-papers-of-the-week-mar-13-mar-19)
- [Top ML Papers of the Week (Mar 6-Mar 12)](./#top-ml-papers-of-the-week-mar-6-mar-12)
- [Top ML Papers of the Week (Feb 27-Mar 5)](./#top-ml-papers-of-the-week-feb-27-mar-5)
- [Top ML Papers of the Week (Feb 20-26)](./#top-ml-papers-of-the-week-feb-20-26)
- [Top ML Papers of the Week (Feb 13 - 19)](./#top-ml-papers-of-the-week-feb-13---19)
- [Top ML Papers of the Week (Feb 6 - 12)](./#top-ml-papers-of-the-week-feb-6---12)
- [Top ML Papers of the Week (Jan 30-Feb 5)](./#top-ml-papers-of-the-week-jan-30-feb-5)
- [Top ML Papers of the Week (Jan 23-29)](./#top-ml-papers-of-the-week-jan-23-29)
- [Top ML Papers of the Week (Jan 16-22)](./#top-ml-papers-of-the-week-jan-16-22)
- [Top ML Papers of the Week (Jan 9-15)](./#top-ml-papers-of-the-week-jan-9-15)
- [Top ML Papers of the Week (Jan 1-8)](./#top-ml-papers-of-the-week-jan-1-8)
[Follow us on Twitter](https://twitter.com/dair_ai)
[Join our Discord](https://discord.gg/SKgkVT8BGJ)
## Top ML Papers of the Week (November 18 - November 24) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **AlphaQubit** - a new AI-based decoder that sets a state-of-the-art benchmark for identifying errors in quantum computers; using a transformer architecture, AlphaQubit made 6% fewer errors than tensor network methods and 30% fewer errors than correlated matching when tested on the Sycamore data; it shows promising results in simulations of larger systems of up to 241 qubits; while this represents significant progress in quantum error correction, the system still needs improvements in speed before it can correct errors in real time for practical quantum computing applications. | [Paper](https://www.nature.com/articles/s41586-024-08148-8), [Tweet](https://x.com/GoogleDeepMind/status/1859273133234192598) |
| 2) **The Dawn of GUI Agent** - explores Claude 3.5's computer-use capabilities across different domains and software; also provides an out-of-the-box agent framework for deploying API-based GUI automation models; finds that Claude 3.5 Computer Use demonstrates unprecedented ability in turning end-to-end language instructions into desktop actions. | [Paper](https://arxiv.org/abs/2411.10323), [Tweet](https://x.com/omarsar0/status/1858526493661446553) |
| 3) **A Statistical Approach to LLM Evaluation** - proposes five key statistical recommendations for a more rigorous evaluation of LLM performance differences: 1) using the Central Limit Theorem to measure theoretical averages across all possible questions rather than just observed averages; 2) clustering standard errors when questions are related rather than independent; 3) reducing variance within questions through resampling or using next-token probabilities; 4) analyzing paired differences between models since questions are shared across evaluations; and 5) using power analysis to determine appropriate sample sizes for detecting meaningful differences between models; the authors argue that these statistical approaches help determine whether performance differences between models represent genuine capability gaps or are simply due to chance, leading to more precise and reliable model evaluations (a minimal sketch of the paired-difference analysis appears after this table). | [Paper](https://arxiv.org/abs/2411.00640), [Tweet](https://x.com/AnthropicAI/status/1858976458330505639) |
| 4) **Towards Open Reasoning Models for Open-Ended Solutions** - proposes Marco-o1, a reasoning model built for open-ended solutions; Marco-o1 combines Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and other recent reasoning strategies; it achieves accuracy improvements of +6.17% on the MGSM (English) dataset and +5.60% on the MGSM (Chinese) dataset. | [Paper](https://arxiv.org/abs/2411.14405), [Tweet](https://x.com/omarsar0/status/1860003607606706197) |
| 5) **LLM-based Agents for Automated Bug Fixing** - analyzes seven leading LLM-based bug-fixing systems on the SWE-bench Lite benchmark, finding that MarsCode Agent (developed by ByteDance) achieved the highest success rate at 39.33%; reveals that, for error localization, line-level fault localization accuracy is more critical than file-level accuracy, and that bug reproduction capabilities significantly impact fixing success; shows that 24 of 168 resolved issues could only be solved using reproduction techniques, though reproduction sometimes misled LLMs when issue descriptions were already clear; concludes that improvements are needed in both LLM reasoning capabilities and agent workflow design to enhance automated bug fixing. | [Paper](https://arxiv.org/abs/2411.10213), [Tweet](https://x.com/omarsar0/status/1859964808789135668) |
| 6) **Cut Your Losses in Large-Vocabulary Language Models** - introduces Cut Cross-Entropy (CCE), a method that significantly reduces memory usage during LLM training by changing how the cross-entropy loss is computed; the cross-entropy layer currently consumes a disproportionate amount of training memory (up to 90% in some models) because it stores logits for every vocabulary token; CCE instead computes the logit only for the correct token and evaluates the log-sum-exp over all logits on the fly in flash memory; the authors show that this reduces the memory footprint of the loss computation for Gemma 2 from 24 GB to just 1 MB; the method also leverages the inherent sparsity of softmax calculations to skip elements that contribute negligibly to gradients; CCE achieves this dramatic memory reduction without sacrificing training speed or convergence, enabling larger batch sizes and potentially more efficient scaling of LLM training (a NumPy illustration of the underlying identity appears after this table). | [Paper](https://arxiv.org/abs/2411.09009) |
| 7) **BABY-AIGS** - a multi-agent system for automated scientific discovery that emphasizes falsification through automated ablation studies; the system was tested on three ML tasks (data engineering, self-instruct alignment, and language modeling), demonstrating the ability to produce meaningful scientific discoveries, although its performance remains below that of experienced human researchers. | [Paper](https://arxiv.org/abs/2411.11910v1), [Tweet](https://x.com/omarsar0/status/1859656533489188928) |
| 8) **Does Prompt Formatting Impact LLM Performance** - examines how different prompt formats (plain text, Markdown, JSON, and YAML) affect GPT model performance across various tasks; finds that GPT-3.5-turbo's performance can vary by up to 40% depending on the prompt format, while larger models like GPT-4 are more robust to format changes; argues that there is no universally optimal format across models or tasks - for instance, GPT-3.5-turbo generally performed better with JSON while GPT-4 preferred Markdown; models from the same family showed similar format preferences, but these preferences didn't transfer well between model families; suggests that prompt formatting significantly impacts model performance and should be carefully considered during prompt engineering, model evaluation, and application development. | [Paper](https://arxiv.org/abs/2411.10541) |
| 9) **FinRobot** - an AI agent framework for equity research that uses multi-agent Chain-of-Thought prompting, combining data analysis with human-like reasoning to produce professional investment reports comparable to those of major brokerages; it leverages three agents: a Data-CoT Agent that aggregates diverse data sources for robust financial integration, a Concept-CoT Agent that mimics an analyst's reasoning to generate actionable insights, and a Thesis-CoT Agent that synthesizes these insights into a coherent investment thesis and report. | [Paper](https://arxiv.org/abs/2411.08804) |
| 10) **Bi-Mamba** - a scalable 1-bit Mamba architecture designed for more efficient LLMs, with model sizes of 780M, 1.3B, and 2.7B parameters; Bi-Mamba achieves performance comparable to its full-precision counterparts (e.g., FP16 or BF16) while significantly reducing memory footprint, and it attains better accuracy than post-training-binarization Mamba baselines. | [Paper](https://arxiv.org/abs/2411.11843) |
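
A minimal sketch of recommendation 4 from paper 3 above (paired differences between models evaluated on the same questions). The scores below are made up for illustration, and only the paired standard error is shown; the paper's full recipe also covers clustered standard errors, variance reduction via resampling, and power analysis.

```python
import numpy as np

# Hypothetical per-question scores (1 = correct, 0 = wrong) for two models
# evaluated on the SAME set of questions; values are illustrative only.
scores_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], dtype=float)
scores_b = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 1], dtype=float)

# Naive (unpaired) comparison: separate standard errors for each model.
se_unpaired = np.sqrt(scores_a.var(ddof=1) / len(scores_a)
                      + scores_b.var(ddof=1) / len(scores_b))

# Paired comparison (recommendation 4): per-question differences remove the
# shared question-difficulty variance from the uncertainty estimate.
diff = scores_a - scores_b
mean_diff = diff.mean()
se_paired = diff.std(ddof=1) / np.sqrt(len(diff))

print(f"mean difference: {mean_diff:+.3f}")
print(f"unpaired SE: {se_unpaired:.3f}   paired SE: {se_paired:.3f}")
print(f"~95% CI (paired): [{mean_diff - 1.96 * se_paired:.3f}, "
      f"{mean_diff + 1.96 * se_paired:.3f}]")
```

Because both models answer the same questions, the paired standard error is typically much smaller than the unpaired one, making genuine capability gaps easier to distinguish from chance.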
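Paper 6 above rests on a simple identity: the per-token loss is the log-sum-exp of all logits minus the logit of the correct token, and the log-sum-exp can be streamed over vocabulary chunks so the full [batch, vocab] logit matrix is never materialized. Below is a NumPy illustration of that identity with made-up shapes; the actual CCE method implements this as a fused GPU kernel working in on-chip (flash) memory, which this sketch does not attempt to reproduce.

```python
import numpy as np

def chunked_cross_entropy(H, W, targets, chunk=256):
    """Cross-entropy loss without materializing the full [N, V] logit matrix.

    H: [N, d] hidden states, W: [d, V] output embedding, targets: [N] token ids.
    """
    N, V = H.shape[0], W.shape[1]
    running_max = np.full(N, -np.inf)   # for a numerically stable streaming log-sum-exp
    running_sum = np.zeros(N)
    for start in range(0, V, chunk):
        logits = H @ W[:, start:start + chunk]            # only [N, chunk] at a time
        new_max = np.maximum(running_max, logits.max(axis=1))
        running_sum = (running_sum * np.exp(running_max - new_max)
                       + np.exp(logits - new_max[:, None]).sum(axis=1))
        running_max = new_max
    lse = running_max + np.log(running_sum)
    correct = np.einsum("nd,dn->n", H, W[:, targets])     # logits of the correct tokens only
    return (lse - correct).mean()

# sanity check against the naive (full-logit-matrix) computation on toy data
rng = np.random.default_rng(0)
H, W = rng.normal(size=(8, 16)), rng.normal(size=(16, 1000))
y = rng.integers(0, 1000, size=8)
full = H @ W
naive = (np.log(np.exp(full - full.max(axis=1, keepdims=True)).sum(axis=1))
         + full.max(axis=1) - full[np.arange(8), y]).mean()
print(np.isclose(chunked_cross_entropy(H, W, y), naive))  # True
```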
## Top ML Papers of the Week (November 11 - November 17) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Impacts of AI on Innovation** - suggests that top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives; finds that implementing AI materials discovery technology leads to substantial increases in productivity, with 44% more materials discovered, 39% more patent filings, and 17% more product innovation; reports that these gains came with concerning tradeoffs, as 82% of scientists reported reduced job satisfaction due to decreased creativity and skill underutilization. | [Paper](https://aidantr.github.io/files/AI_innovation.pdf), [Tweet](https://x.com/omarsar0/status/1856424446720127024) |
| 2) **Scaling Laws for Precision** - introduces "precision-aware" scaling laws that predict how model performance is affected by both training and inference precision in LLMs; key findings include: 1) post-training quantization becomes more harmful as models are trained on more data, eventually making additional pretraining actively detrimental, 2) training in lower precision requires increasing model size to maintain performance, and 3) when jointly optimizing model size, data, and precision, the compute-optimal training precision is around 7-8 bits and independent of compute; also reports that when the model size is fixed, compute-optimal precision increases approximately logarithmically with data; the authors validate their predictions on models up to 1.7B parameters trained on up to 26B tokens, showing that both very high (16-bit) and very low (sub 4-bit) training precisions may be suboptimal. | [Paper](https://arxiv.org/abs/2411.04330), [Tweet](https://x.com/tanishqkumar07/status/1856045600355352753) |
| 3) **Evo** - a 7B parameter AI model designed to understand and generate DNA sequences across multiple biological scales; the model, trained on 2.7 million prokaryotic and phage genomes, can process sequences up to 131 kilobases long while maintaining single-nucleotide resolution, enabling it to understand both molecular-level interactions and genome-wide patterns; Evo demonstrates superior performance in predicting and generating functional DNA, RNA, and protein sequences, including the first successful AI-generated CRISPR-Cas complexes and transposable systems that have been experimentally validated. | [Paper](https://www.science.org/doi/10.1126/science.ado9336), [Tweet](https://x.com/arcinstitute/status/1857138107038187945) |
| 4) **OpenCoder** - introduces OpenCoder, a fully open-source LLM specialized for code generation and understanding; the authors identify several critical factors for building high-performing code LLMs: (1) effective data cleaning with code-optimized heuristic rules for deduplication, (2) recall of relevant text corpora related to code, and (3) high-quality synthetic data in both the annealing and supervised fine-tuning stages; OpenCoder surpasses previous fully open models at the 6B+ parameter scale and releases not just the model weights but also the complete training pipeline, datasets, and protocols to enable reproducible research. | [Paper](https://arxiv.org/abs/2411.04905), [Tweet](https://x.com/omarsar0/status/1857515355595526450) |
| 5) **The Surprising Effectiveness of Test-Time Training for Abstract Reasoning** - explores test-time training (TTT) - updating model parameters temporarily during inference - for improving an LLM's abstract reasoning capabilities using the ARC benchmark; identifies three crucial components: initial fine-tuning on similar tasks, auxiliary task format and augmentations, and per-instance training; TTT significantly improves performance, achieving up to 6x improvement in accuracy compared to base fine-tuned models; when applying TTT to an 8B LLM, they achieve 53% accuracy on ARC's public validation set, improving the state-of-the-art for neural approaches by nearly 25%; by ensembling their method with program generation approaches, they achieve state-of-the-art public validation accuracy of 61.9%, matching average human performance; the findings suggest that explicit symbolic search is not the only path to improved abstract reasoning in LLMs; test-time training applied to continued training on few-shot examples can be highly effective. | [Paper](https://ekinakyurek.github.io/papers/ttt.pdf), [Tweet](https://x.com/akyurekekin/status/1855680785715478546) |
| 6) **A Taxonomy of AgentOps for Enabling Observability of Foundation Model-based Agents** - analyzes AgentOps platforms and tools, highlighting the need for comprehensive observability and traceability features to ensure reliability in foundation model-based autonomous agent systems across their development and production lifecycle. | [Paper](https://arxiv.org/abs/2411.05285v1), [Tweet](https://x.com/omarsar0/status/1857400667318702118) |
| 7) **Toward Optimal Search and Retrieval for RAG** - examines how retrieval affects performance in RAG pipelines for QA tasks; conducts experiments using BGE-base and ColBERT retrievers with LLaMA and Mistral, finding that including more gold (relevant) documents improves QA accuracy; finds that using approximate nearest neighbor search with lower recall only minimally impacts performance while potentially improving speed and memory efficiency; reports that adding noisy or irrelevant documents consistently degrades performance, contradicting previous research claims; concludes that optimizing retrieval of gold documents is crucial for RAG performance, and that operating at lower search accuracy levels can be a viable approach for practical applications. | [Paper](https://arxiv.org/abs/2411.07396), [Tweet](https://x.com/omarsar0/status/1856709865802252710) |
| 8) **Mitigating LLM Jailbreaks with Few Examples** - introduces a new approach for defending LLMs against jailbreak attacks that focuses on quickly adapting defenses after detecting new attacks rather than aiming for perfect adversarial robustness upfront; using a new benchmark, the most effective method, based on fine-tuning an input classifier, reduced attack success rates by over 240x for known attack types and 15x for novel variations after seeing just one example of each attack strategy; demonstrates that rapidly responding to new jailbreaks can be an effective alternative to traditional static defenses. | [Paper](https://arxiv.org/abs/2411.07494), [Tweet](https://x.com/AnthropicAI/status/1856752093945540673) |
| 9) **Mixture of Transformers** - introduces Mixture-of-Transformers (MoT), a sparse multi-modal transformer architecture that matches the performance of traditional models while using only about half the computational resources for text and image processing; MoT matches a dense baseline's performance using only 55.8% of the FLOPs. | [Paper](https://arxiv.org/abs/2411.04996) |
| 10) **HtmlRAG** - proposes using HTML instead of plain text as the format for building RAG systems; the key finding is that preserving HTML structure provides richer semantic and structural information compared to plain-text conversion, which typically loses important formatting such as headings, tables, and semantic tags; to address the challenge of HTML documents being too long for LLM context windows, the authors develop a two-step pruning method: first cleaning unnecessary HTML elements (reducing length by 94%), then applying a block-tree-based pruning approach that combines embedding-based and generative pruning to further reduce the content while maintaining important information; experiments across six QA datasets demonstrate that HtmlRAG outperforms existing plain-text-based methods, validating the advantages of preserving HTML structure in RAG systems (a simplified sketch of the cleaning step appears after this table). | [Paper](https://arxiv.org/abs/2411.02959v1), [Tweet](https://x.com/omarsar0/status/1857870511302390013) |
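
The first of HtmlRAG's two pruning steps (paper 10 above) is a rule-based HTML cleanup, which can be approximated in a few lines with BeautifulSoup. This is a simplified sketch, not the authors' released pipeline: the tag list is an assumption, and the second, block-tree-based pruning step is omitted entirely.

```python
from bs4 import BeautifulSoup, Comment  # pip install beautifulsoup4

def clean_html(html: str) -> str:
    """Rule-based cleanup in the spirit of HtmlRAG's first pruning step."""
    soup = BeautifulSoup(html, "html.parser")
    # drop elements that carry no retrievable content (assumed tag list)
    for tag in soup(["script", "style", "noscript", "iframe", "svg", "meta", "link"]):
        tag.decompose()
    # drop HTML comments
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()
    # strip attributes (classes, ids, inline styles) to shorten the markup
    # while keeping the structural tags that HtmlRAG wants to preserve
    for tag in soup.find_all(True):
        tag.attrs = {}
    return str(soup)

doc = ("<html><head><style>.x{}</style></head>"
       "<body><h1 class='t'>Title</h1><!-- ad --><p id='p1'>Some text.</p></body></html>")
print(clean_html(doc))  # structure and text survive; style, comment, and attributes do not
```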
## Top ML Papers of the Week (November 4 - November 10) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Many-agent Simulations toward AI Civilization** - demonstrates how 10 to 1000+ AI agents behave and progress within agent societies; proposes PIANO, an architecture that enables agents to interact with humans and other agents in real time; shows that agents can autonomously develop specialized roles, adhere to and change collective rules, and engage in cultural and religious transmission. | [Paper](https://arxiv.org/abs/2411.00114), [Tweet](https://x.com/omarsar0/status/1853290196286021940) |
| 2) **A Comprehensive Survey of Small Language Models** - a survey on small language models (SLMs) and discussion on issues related to definitions, applications, enhancements, reliability, and more. | [Paper](https://arxiv.org/abs/2411.03350), [Tweet](https://x.com/omarsar0/status/1854532748154695717) |
| 3) **Magentic-One** - a new generalist multi-agent system designed to handle complex web and file-based tasks; it uses an Orchestrator agent that directs four specialized agents: WebSurfer for browser operations, FileSurfer for file management, Coder for programming tasks, and ComputerTerminal for console operations; Magentic-One achieves competitive performance on multiple benchmarks including GAIA, AssistantBench, and WebArena, without requiring modifications to its core architecture. | [Paper](https://www.microsoft.com/en-us/research/publication/magentic-one-a-generalist-multi-agent-system-for-solving-complex-tasks/), [Tweet](https://x.com/omarsar0/status/1854910759232585786) |
| 4) **Mixtures of In-Context Learners** - uses subsets of demonstrations to form experts via in-context learning; given a training set, a trainable weighting function combines the experts' next-token predictions; this approach applies to black-box LLMs since access to the internal parameters of the LLM is not required; good properties include: 1) it is competitive with standard ICL while being significantly more data-, memory-, and compute-efficient, and 2) it is resilient to noisy demonstrations and label imbalance (a toy sketch of the expert-combination step appears after this table). | [Paper](https://arxiv.org/abs/2411.02830), [Tweet](https://x.com/omarsar0/status/1854252169492562171) |
| 5) **Attacking Vision-Language Agents via Pop-ups** - shows that integrating adversarial pop-ups into existing agent testing environments leads to an attack success rate of 86%; this decreases the agents' task success rate by 47%; they also add that basic defense techniques (e.g., instructing the agent to ignore pop-ups) are ineffective. | [Paper](https://arxiv.org/abs/2411.02391), [Tweet](https://x.com/omarsar0/status/1853810252308774955) |
| 6) **Multi-expert Prompting with LLMs** - improves LLM responses by simulating multiple experts and aggregating their responses; it guides an LLM to fulfill input instructions by simulating multiple experts and selecting the best response among individual and aggregated views; it achieves a new state-of-the-art on TruthfulQA-Generation with ChatGPT, surpassing the current SOTA of 87.97%; it also improves performance across factuality and usefulness while reducing toxicity and hurtfulness. | [Paper](https://arxiv.org/abs/2411.00492), [Tweet](https://x.com/omarsar0/status/1853286452227899851) |
| 7) **Number Understanding of LLMs** - provides a comprehensive analysis of the numerical understanding and processing ability (NUPA) of LLMs; finds that naive finetuning substantially improves NUPA on many, but not all, tasks; reports that techniques specifically designed to enhance NUPA prove ineffective when finetuning pretrained models; explores chain-of-thought techniques applied to NUPA and suggests that they face scalability challenges, making them difficult to apply in practical scenarios. | [Paper](https://arxiv.org/abs/2411.03766), [Tweet](https://x.com/omarsar0/status/1854528742095458337) |
| 8) **WebRL** - proposes a self-evolving online curriculum RL framework to bridge the gap between open and proprietary LLM-based web agents; it improves the success rate of Llama-3.1-8B from 4.8% to 42.4%, and from 6.1% to 43% for GLM4-9B; the open models significantly surpass the performance of GPT-4-Turbo (17.6%) and GPT-4o (13.9%); the self-evolving curriculum addresses the scarcity of web agent training tasks; this is underpinned by a robust outcome-supervised reward model to evaluate task success; an adaptive RL strategy helps to deal with distribution drift in online learning and ensures consistent improvements. | [Paper](https://arxiv.org/abs/2411.02337), [Tweet](https://x.com/omarsar0/status/1853821990177485311) |
| 9) **Adapting while Learning** - proposes a two-part fine-tuning approach that first helps LLMs learn from tool-generated solutions and then trains them to determine when to solve problems directly versus when to use tools; testing on math, climate science, and epidemiology benchmarks shows significant improvements, with a 28% boost in accuracy and 14% better tool usage precision compared to leading models like GPT-4 and Claude-3.5; the two-stage approach helps the LLM to adaptively solve scientific problems of varying complexity. | [Paper](https://arxiv.org/abs/2411.00412), [Tweet](https://x.com/omarsar0/status/1853281778594979877) |
| 10) **Personalization of LLMs** - presents a comprehensive framework for understanding personalized LLMs; introduces taxonomies for different aspects of personalization and unifies existing research across personalized text generation and downstream applications. | [Paper](https://arxiv.org/abs/2411.00027), [Tweet](https://x.com/omarsar0/status/1853276249981907386) |
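
The combination step in paper 4 above (Mixtures of In-Context Learners) reduces to mixing several next-token distributions, one per demonstration subset, with a small trainable weight vector. The sketch below shows only that mixing step: the expert distributions and weights are made up, whereas in practice each distribution would come from the same black-box LLM prompted with a different demonstration subset, and the weights would be learned on a training set.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Next-token distributions over a toy 5-token vocabulary, one row per expert.
# In MoICL, each row would be produced by the same LLM prompted with a
# different subset of the demonstrations (values here are illustrative only).
expert_dists = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],   # expert 1: demonstration subset 1
    [0.20, 0.60, 0.10, 0.05, 0.05],   # expert 2: demonstration subset 2 (noisy)
    [0.65, 0.15, 0.10, 0.05, 0.05],   # expert 3: demonstration subset 3
])

# Trainable scalar weights over experts (set by hand here; normally learned by
# backpropagating a loss through this convex combination).
theta = np.array([1.5, -0.5, 1.0])
weights = softmax(theta)

mixture = weights @ expert_dists       # weighted next-token distribution
print("expert weights:     ", np.round(weights, 3))
print("mixed distribution: ", np.round(mixture, 3))
print("predicted token id: ", int(mixture.argmax()))
```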
## Top ML Papers of the Week (October 28 - November 3) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Geometry of Concepts in LLMs** - examines the geometric structure of concept representations in sparse autoencoders (SAEs) at three scales: 1) atomic-level parallelogram patterns between related concepts (e.g., man:woman::king:queen), 2) brain-like functional "lobes" for different types of knowledge like math/code, and 3) galaxy-level eigenvalue distributions showing a specialized structure in middle model layers. | [Paper](https://arxiv.org/abs/2410.19750), [Tweet](https://x.com/tegmark/status/1851288315867041903) |
| 2) **SimpleQA** - a challenging benchmark of 4,326 short factual questions adversarially collected against GPT-4 responses; reports that frontier models like GPT-4o and Claude achieve less than 50% accuracy; finds a positive correlation between the models' stated confidence and their accuracy, signaling that they have some notion of confidence, while noting there is still room to improve the calibration of LLMs in terms of stated confidence. | [Paper](https://openai.com/index/introducing-simpleqa/), [Tweet](https://x.com/OpenAI/status/1851680760539025639) |
| 3) **Automating Agentic Workflow Generation** - a novel framework for automating the generation of agentic workflows; it reformulates workflow optimization as a search problem over code-represented workflows, where edges connect LLM-invoking nodes; it efficiently explores the search space using a variant of MCTS, iteratively refining workflows through code modification, tree-structured experience, and execution feedback; experiments across six benchmark datasets demonstrate AFlow’s effectiveness, showing a 5.7% improvement over manually designed methods and a 19.5% improvement over existing automated approaches; AFlow also enables smaller models to outperform GPT-4o on specific tasks at just 4.55% of its inference cost. | [Paper](https://arxiv.org/abs/2410.10762), [Tweet](https://x.com/omarsar0/status/1852339570891014415) |
| 4) **LLMs Solve Math with a Bag of Heuristics** - uses causal analysis to find neurons that explain an LLM's behavior when doing basic arithmetic logic; discovers and hypothesizes that the combination of heuristic neurons is the mechanism used to produce correct arithmetic answers; finds that the unordered combination of different heuristic types is the mechanism that explains most of the model’s accuracy on arithmetic prompts. | [Paper](https://arxiv.org/abs/2410.21272), [Tweet](https://x.com/omarsar0/status/1851233281116946923) |
| 5) **o1 Replication Journey** - reports on an attempt to replicate the capabilities of OpenAI's o1 model; their journey learning technique encourages learning not just shortcuts but the complete exploration process, including trial and error, reflection, and backtracking; claims that with only 327 training samples, journey learning surpassed shortcut learning by 8.0% on the MATH dataset. | [Paper](https://arxiv.org/abs/2410.18982), [Tweet](https://x.com/omarsar0/status/1850748790308761988) |
| 6) **Distinguishing Ignorance from Error in LLM Hallucinations** - a method to distinguish between two types of LLM hallucinations: when models lack knowledge (HK-) versus when they hallucinate despite having correct knowledge (HK+); they build model-specific datasets using their proposed approach and show that model-specific datasets are more effective for detecting HK+ hallucinations compared to generic datasets. | [Paper](https://arxiv.org/abs/2410.22071), [Tweet](https://x.com/AdiSimhi/status/1851650371615125563) |
| 7) **Multimodal RAG** - provides a discussion on how to best integrate multimodal models into RAG systems for the industrial domain; it also provides a deep discussion on the evaluation of these systems using LLM-as-a-Judge. | [Paper](https://arxiv.org/abs/2410.21943), [Tweet](https://x.com/omarsar0/status/1851479149690642456) |
| 8) **The Role of Prompting and External Tools in Hallucination Rates of LLMs** - tests different prompting strategies and frameworks aimed at reducing hallucinations in LLMs; finds that simpler prompting techniques outperform more complex methods; it reports that LLM agents exhibit higher hallucination rates due to the added complexity of tool usage. | [Paper](https://arxiv.org/abs/2410.19385), [Tweet](https://x.com/omarsar0/status/1850745569125253401) |
| 9) **MrT5** - a more efficient variant of byte-level language models that uses a dynamic token deletion mechanism (via a learned delete gate) to shorten sequence lengths by up to 80% while maintaining model performance; this enables faster inference and better handling of multilingual text without traditional tokenization; MrT5 maintains competitive accuracy with ByT5 on downstream tasks such as XNLI and character-level manipulations while improving inference runtimes. | [Paper](https://arxiv.org/abs/2410.20771), [Tweet](https://x.com/JulieKallini/status/1851278833061704170) |
| 10) **Relaxed Recursive Transformers** - introduces Relaxed Recursive Transformers, an approach that significantly reduces LLM size through parameter sharing across layers while maintaining performance; the model is initialized from a standard pretrained Transformer but uses only a single block of unique layers that is repeated multiple times in a loop; it then relaxes the layer-tying constraint via depth-wise low-rank adaptation (LoRA) modules; shows that the approach can lead to significant (2-3×) gains in inference throughput (a minimal sketch of the looped shared block with per-depth LoRA appears after this table). | [Paper](https://arxiv.org/abs/2410.20672), [Tweet](https://x.com/raymin0223/status/1851216039822180759) |
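
The looped-layer idea in paper 10 above is easy to picture in code: one shared block is applied repeatedly, and each depth gets its own small low-rank correction so the tied layers can still specialize. The PyTorch sketch below is a minimal stand-in with a shared linear layer instead of a full Transformer block and made-up dimensions; it illustrates the weight-tying-plus-depth-wise-LoRA pattern, not the paper's initialization-from-a-pretrained-model recipe.

```python
import torch
import torch.nn as nn

class RelaxedRecursiveBlock(nn.Module):
    """One shared layer looped over `depth` steps, with a depth-wise
    low-rank (LoRA-style) correction that is unique to each step."""

    def __init__(self, d_model=256, depth=6, rank=8):
        super().__init__()
        self.shared = nn.Linear(d_model, d_model)   # parameters tied across all depths
        # per-depth LoRA factors; B starts at zero so the correction is initially a no-op
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(d_model, rank) * 0.01) for _ in range(depth)])
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, d_model)) for _ in range(depth)])
        self.act = nn.GELU()
        self.depth = depth

    def forward(self, x):
        for k in range(self.depth):
            shared_out = self.shared(x)                         # same weights every loop
            lora_out = (x @ self.lora_A[k]) @ self.lora_B[k]    # depth-specific, low rank
            x = x + self.act(shared_out + lora_out)             # residual connection
        return x

model = RelaxedRecursiveBlock()
print(model(torch.randn(2, 10, 256)).shape)  # torch.Size([2, 10, 256])
```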
## Top ML Papers of the Week (October 21 - October 27) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Agentic Information Retrieval** - provides an introduction to agentic information retrieval, which is shaped by the capabilities of LLM agents; discusses different types of cutting-edge applications of agentic information retrieval and challenges. | [Paper](https://arxiv.org/abs/2410.09713), [Tweet](https://x.com/omarsar0/status/1848396596230127655) |
| 2) **Aya Expanse** - a family of open-weight foundation models for multilingual capabilities; releases an 8B and a 32B parameter model, along with one of the largest multilingual dataset collections to date, with 513 million examples; the release also includes Aya-101, which the authors claim is the most comprehensive multilingual model, covering 101 languages; Aya Expanse 32B outperforms Gemma 2 27B, Mistral 8x22B, and Llama 3.1 70B, a model 2x its size. | [Paper](https://cohere.com/blog/aya-expanse-connecting-our-world), [Tweet](https://x.com/CohereForAI/status/1849435983449587796) |
| 3) **A Theoretical Understanding of CoT** - finds that adding correct and incorrect reasoning paths in demonstrations improves the accuracy of intermediate steps and CoT; the proposed method, Coherent CoT, significantly improves performance on several benchmarks; in the Tracking Shuffled Objects dataset, Gemini Pro shows a 6.60% improvement (from 58.20% to 64.80%), and in Penguins in a Table, DeepSeek 67B demonstrates an increase of 6.17% (from 73.97% to 80.14%). | [Paper](https://arxiv.org/abs/2410.16540), [Tweet](https://x.com/omarsar0/status/1849139985712369907) |
| 4) **A Survey on Data Synthesis and Augmentation for LLMs** - provides a comprehensive summary of data generation techniques in the lifecycle of LLMs; includes discussions on data preparation, pre-training, fine-tuning, instruction-tuning, preference alignment, and applications. | [Paper](https://arxiv.org/abs/2410.12896), [Tweet](https://x.com/omarsar0/status/1848445736591163886) |
| 5) **LongRAG** - enhances RAG's understanding of long-context knowledge which includes global information and factual details; consists of a hybrid retriever, an LLM-augmented information extractor, a CoT-guided filter, and an LLM-augmented generator; these are key components that enable the RAG system to mine global long-context information and effectively identify factual details; LongRAG outperforms long-context LLMs (up by 6.94%), advanced RAG (up by 6.16%), and Vanilla RAG (up by 17.25%). | [Paper](https://arxiv.org/abs/2410.18050), [Tweet](https://x.com/omarsar0/status/1849494571946066295) |
| 6) **Evaluating Feature Steering in LLMs** - evaluates feature steering in LLMs using an experiment that artificially dials various features up and down to analyze changes in model outputs; it focuses on 29 features related to social biases and studies whether feature steering can help mitigate such biases; among its findings, it reports that feature steering sometimes leads to off-target effects and that a neutrality feature can help decrease social biases across 9 social dimensions without negatively affecting text quality. | [Paper](https://www.anthropic.com/research/evaluating-feature-steering), [Tweet](https://x.com/AnthropicAI/status/1849840131412296039) |
| 7) **Granite 3.0** - presents lightweight foundation models ranging from 400 million to 8B parameters; supports coding, RAG, reasoning, and function calling, focusing on enterprise use cases, including on-premise and on-device settings; demonstrates strong performance across academic benchmarks for language understanding, reasoning, coding, function calling, and safety. | [Paper](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Tweet](https://x.com/omarsar0/status/1848404138641527105) |
| 8) **LLMs Reflect the Ideology of their Creators** - finds that LLMs exhibit diverse ideological stances that reflect the worldview of their creators; finds consistent normative differences between how the same LLM responds in Chinese compared to English; identifies normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. | [Paper](https://arxiv.org/abs/2410.18417), [Tweet](https://x.com/omarsar0/status/1849860985500352968) |
| 9) **Scalable Watermarking for LLMs** - proposes SynthID-Text, a text-watermarking scheme that preserves text quality in LLMs, enables high detection accuracy, and minimizes latency overhead; it integrates watermarking with speculative sampling, where the final pattern of scores for the model's word choices, combined with the adjusted probability scores, serves as the watermark; the authors test the feasibility and scalability of the approach by assessing feedback on nearly 10 million Gemini responses. | [Paper](https://www.nature.com/articles/s41586-024-08025-4), [Tweet](https://x.com/GoogleDeepMind/status/1849110263871529114) |
| 10) **Reasoning Patterns of OpenAI’s o1 Model** - when compared with other test-time compute methods, o1 achieved the best performance across most datasets; the authors observe that the most commonly used reasoning patterns in o1 are divide and conquer and self-refinement; o1 uses different reasoning patterns for different tasks; for commonsense reasoning tasks, o1 tends to use context identification and emphasize constraints; for math and coding tasks, o1 mainly relies on method reuse and divide and conquer. | [Paper](https://arxiv.org/abs/2410.13639), [Tweet](https://x.com/omarsar0/status/1848782378631892997) |
## Top ML Papers of the Week (October 14 - October 20) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Thinking LLMs** - proposes a training method that equips LLMs with thinking abilities for general instruction following without human-annotated data; uses an iterative search and optimization procedure to explore thought generation, enabling the model to learn without direct supervision; for each user instruction, candidate thought-plus-response outputs are sampled, and a judge model scores only the response parts; the best and worst full outputs are then used as chosen and rejected pairs for DPO (referred to as Thought Preference Optimization in this paper); reports superior performance on AlpacaEval and Arena-Hard (a minimal sketch of the pair construction appears after this table). | [Paper](https://arxiv.org/abs/2410.10630), [Tweet](https://x.com/omarsar0/status/1846227797972603047) |
| 2) **Model Swarms** - proposes a new collaborative search algorithm to adapt LLMs via swarm intelligence; a pool of LLM experts collaboratively moves in the weight space to optimize a utility function representing various adaptation objectives; experiments demonstrate that Model Swarms can flexibly adapt LLM experts to a single task, multi-task domains, reward models, and diverse human interests, improving over 12 model composition baselines by up to 21.0% across tasks and contexts. | [Paper](https://arxiv.org/abs/2410.11163), [Tweet](https://x.com/omarsar0/status/1846592954921849029) |
| 3) **First-Person Fairness in Chatbots** - studies first-person fairness, i.e., fairness towards users interacting with ChatGPT; specifically, it measures biases, if any, related to users' names; it leverages a model powered by GPT-4o to analyze patterns and name sensitivity in the chatbot's responses for different user names; claims that, overall, post-training significantly mitigates harmful stereotypes; also reports that open-ended tasks in domains like entertainment and art exhibit the highest levels of bias (i.e., a tendency to write stories with protagonists whose gender matches the gender inferred from the user's name). | [Paper](https://cdn.openai.com/papers/first-person-fairness-in-chatbots.pdf), [Tweet](https://x.com/OpenAINewsroom/status/1846238809991925838) |
| 4) **Introspection in LLMs** - reports that LLMs can acquire knowledge through introspection that cannot be inferred from their training data; suggests that LLMs contain privileged information about themselves that can potentially lead to more interpretable and controllable systems; they report that this introspection ability is limited and models struggle to predict their behavior on tasks requiring reasoning over long outputs. | [Paper](https://arxiv.org/abs/2410.13787), [Tweet](https://x.com/omarsar0/status/1847297594525094081) |
| 5) **Janus** - proposes a unified autoregressive framework for multimodal understanding and generation; it decouples visual encoding into independent pathways and leverages a single transformer architecture to improve flexibility and performance on both visual understanding and generation; claims to alleviate trade-offs related to performing the vision tasks, something common in methods that rely on a single visual encoder; surpasses previous unified models and matches or exceeds the performance of task-specific models. | [Paper](https://arxiv.org/abs/2410.13848), [Tweet](https://x.com/deepseek_ai/status/1847191319464300652) |
| 6) **Inference Scaling for Long-Context RAG** - uses two strategies to investigate scaling laws for RAG: in-context learning (DRAG) and iterative prompting (IterRAG); finds that RAG performance consistently improves with the expansion of the effective context length under optimal configurations; when optimally allocated, increasing inference computation can lead to linear gains in long-context RAG performance; this leads to the development of a computation allocation model that can provide practical guidance for optimal computation allocation in long-context RAG scenarios. | [Paper](https://arxiv.org/abs/2410.04343), [Tweet](https://x.com/omarsar0/status/1847350506127315088) |
| 7) **Agent S** - a new open agentic framework that enables autonomous interaction with computers through a GUI; Agent S tackles challenges such as acquiring knowledge, planning over long-task horizons, and handling dynamic interfaces; it introduces experience-augmented hierarchical planning which leverages both search and retrieval; leverages an agent-computer interface to perform reasoning and control GUI agents; evaluation on the OSWorld benchmark shows that Agent S outperforms the baseline by 9.37% in success rate (an 83.6% relative improvement) and achieves a new state-of-the-art. | [Paper](https://arxiv.org/abs/2410.08164v1), [Tweet](https://x.com/omarsar0/status/1846930425849303424) |
| 8) **Model Kinship for Merging LLMs** - proposes model kinship to measure the degree of similarity between LLMs; model kinship is used to build a model merging strategy (Top-k Greedy Merging with Model Kinship) which yields better performance; the authors find that this new criterion can be used to effectively and continuously perform model merging. | [Paper](https://arxiv.org/abs/2410.12613), [Tweet](https://x.com/omarsar0/status/1846753148007846329) |
| 9) **On the Planning Abilities of OpenAI’s o1 Models** - reports that o1-preview is particularly strong in self-evaluation and constraint following; also notes that the o1 models exhibit bottlenecks in decision-making and memory management, which are more pronounced in spatial reasoning; in particular, the models produce redundant actions and struggle to generalize in spatially complex tasks. | [Paper](https://www.arxiv.org/abs/2409.19924), [Tweet](https://x.com/omarsar0/status/1846032256902869135) |
| 10) **CoTracker3** - proposes a new point tracking model and a new semi-supervised training recipe; enables usage of real videos without annotations during training by generating pseudo-labels using off-the-shelf teachers; the approach is simpler in architecture and training scheme leading to better results while using 1000x less data. | [Paper](https://arxiv.org/abs/2410.11831), [Tweet](https://x.com/AIatMeta/status/1846595406261899363) |
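
The preference-pair construction behind paper 1 above (Thought Preference Optimization) is mostly bookkeeping: sample several thought-plus-response candidates per instruction, let a judge score only the response part, and keep the highest- and lowest-scoring full outputs as the chosen/rejected pair for DPO. The sketch below uses hard-coded candidates and hypothetical judge scores in place of real model samples and a real judge model.

```python
# Toy sketch of the chosen/rejected pair construction used for DPO training.
# `samples` would come from the model (thought + response per instruction) and
# `score` from a judge model; both are hard-coded here for illustration.
samples = {
    "Explain overfitting.": [
        {"thought": "Define it, then give an example.", "response": "Overfitting is when ...", "score": 8.5},
        {"thought": "Start from the bias-variance tradeoff.", "response": "It happens when ...", "score": 6.0},
        {"thought": "Keep it to one sentence.", "response": "The model memorizes.", "score": 3.5},
    ],
}

def build_dpo_pairs(samples):
    pairs = []
    for instruction, candidates in samples.items():
        # the judge scores only the response, but the full output
        # (thought + response) is what becomes chosen / rejected
        best = max(candidates, key=lambda c: c["score"])
        worst = min(candidates, key=lambda c: c["score"])
        pairs.append({
            "prompt": instruction,
            "chosen": best["thought"] + "\n" + best["response"],
            "rejected": worst["thought"] + "\n" + worst["response"],
        })
    return pairs

for pair in build_dpo_pairs(samples):
    print(pair["prompt"])
    print("  chosen  :", pair["chosen"].splitlines()[0], "...")
    print("  rejected:", pair["rejected"].splitlines()[0], "...")
```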
## Top ML Papers of the Week (October 7 - October 13) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **MLE-Bench** - proposes a new benchmark for evaluating machine learning agents on machine learning engineering capabilities; includes 75 ML engineering-related competitions from Kaggle, testing MLE skills such as training models, preparing datasets, and running experiments; OpenAI's o1-preview with the AIDE scaffolding achieves Kaggle bronze medal level in 16.9% of competitions. | [Paper](https://arxiv.org/abs/2410.07095), [Tweet](https://x.com/OpenAI/status/1844429536353714427) |
| 2) **Differential Transformer** - proposes a differential attention mechanism that amplifies attention to the relevant context while canceling noise; Differential Transformer outperforms Transformer when scaling up model size and training tokens; the authors claim that since this architecture gets less "distracted" by irrelevant context, it can do well in applications such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. | [Paper](https://arxiv.org/abs/2410.05258), [Tweet](https://x.com/omarsar0/status/1843694897020150216) |
| 3) **Astute RAG** - proposes a novel RAG approach to deal with the imperfect retrieval augmentation and knowledge conflicts of LLMs; Astute RAG adaptively elicits essential information from LLMs' internal knowledge; then it iteratively consolidates internal and external knowledge with source awareness; Astute RAG is designed to better combine internal and external information through an interactive consolidation mechanism (i.e., identifying consistent passages, detecting conflicting information in them, and filtering out irrelevant information). | [Paper](https://arxiv.org/abs/2410.07176), [Tweet](https://x.com/omarsar0/status/1844435988019544565) |
| 4) **ToolGen** - integrates tool knowledge directly into LLMs by representing each tool as a unique token, which allows the LLM to generate tool calls and arguments, enabling seamless tool invocation and language generation; experimental results with over 47,000 tools show that ToolGen achieves superior results in both tool retrieval and autonomous task completion. | [Paper](https://arxiv.org/abs/2410.03439), [Tweet](https://x.com/omarsar0/status/1843491766114422930) |
| 5) **Long-Context LLMs Meet RAG** - finds that for many long-context LLMs, output quality declines as the number of retrieved passages increases; reports that the performance loss is due to retrieved hard negatives; proposes two ways to improve long-context LLM-based RAG: retrieval reordering and RAG-specific tuning with intermediate reasoning to help with relevance identification; these approaches demonstrate significant accuracy and robustness improvements in long-context RAG performance. | [Paper](https://arxiv.org/abs/2410.05983), [Tweet](https://x.com/omarsar0/status/1844828836619334066) |
| 6) **GSM-Symbolic** - tests several SoTA models on a benchmark created with symbolic templates that enable diverse mathematical problems; they find that LLMs exhibit variance when responding to variations of the same questions; the performance of all the models declines by adjusting the numerical values in the question; as questions are made more challenging (e.g., increasing the number of clauses) the performance significantly deteriorates; the authors hypothesize that the observed decline in performance is due to a lack of logical reasoning in current LLMs. | [Paper](https://arxiv.org/abs/2410.05229), [Tweet](https://x.com/MFarajtabar/status/1844456880971858028) |
| 7) **Optima** - a novel framework to enhance both communication efficiency and task effectiveness in LLM-based multi-agent systems through LLM training; proposes an iterative generate, rank, select, and train paradigm with a reward function to improve performance, token use, and communication efficiency; integrates Monte Carlo Tree Search-inspired techniques for DPO data generation to encourage diverse exploration; shows consistent improvements over single-agent baselines and vanilla MAS based on Llama 3 8B, with 2.8x performance gain with less than 10% tokens on tasks requiring heavy information exchange. | [Paper](https://arxiv.org/abs/2410.08115), [Tweet](https://x.com/omarsar0/status/1844578931732844963) |
| 8) **ScienceAgentBench** - a new benchmark to rigorously assess agents built for scientific workflows; after testing it on open-weight and proprietary LLMs, the best-performing agent can only solve 32.4% of the tasks independently and 34.3% with expert-provided knowledge. | [Paper](https://arxiv.org/abs/2410.05080), [Tweet](https://x.com/omarsar0/status/1843697964243382586) |
| 9) **Addition Is All You Need** - proposes an algorithm (L-Mul) that approximates floating-point multiplication with integer addition operations; it is less computationally intensive than 8-bit floating point yet achieves higher precision; the authors report that applying the proposed L-Mul operation in tensor processing hardware could potentially reduce the energy cost of elementwise floating-point tensor multiplications by 95% and of dot products by 80% (a toy illustration of the underlying idea appears after this table). | [Paper](https://arxiv.org/abs/2410.00907), [Tweet](https://x.com/omarsar0/status/1844043652966072742) |
| 10) **Persuasion and Anti-social Ability of LLMs** - studies the interaction patterns of LLMs in a multi-agent setting with a social hierarchy, in a scenario involving a guard and a prisoner who seeks additional yard time or tries to escape from prison; finds that in the multi-agent setting where power dynamics are involved, the LLMs fail to have a conversation; also reports that agents' personas are critical in driving their behavior, and that, even without explicit prompting, simply assigning agents' roles can lead to anti-social behavior. | [Paper](https://arxiv.org/abs/2410.07109), [Tweet](https://x.com/omarsar0/status/1844427182141211054) |
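
Paper 9's premise above, that floating-point multiplication can be approximated by cheap integer addition, has a classic one-line illustration: an IEEE-754 bit pattern is roughly a scaled logarithm of the value, so adding two bit patterns (and subtracting the bit pattern of 1.0) approximates multiplying the values. The snippet below demonstrates that general idea for positive float32 values; it is Mitchell-style logarithmic multiplication, not the paper's exact L-Mul operator, and it ignores signs, zeros, and overflow handling.

```python
import numpy as np

ONE_BITS = int(np.array([1.0], dtype=np.float32).view(np.int32)[0])  # bit pattern of 1.0f

def approx_mul(a, b):
    """Approximate elementwise a*b for positive float32 arrays with one integer add."""
    # widen to int64 only to keep this toy version free of int32 overflow
    ia = np.asarray(a, dtype=np.float32).view(np.int32).astype(np.int64)
    ib = np.asarray(b, dtype=np.float32).view(np.int32).astype(np.int64)
    return (ia + ib - ONE_BITS).astype(np.int32).view(np.float32)

a = np.array([3.7, 1.5, 128.0], dtype=np.float32)
b = np.array([2.1, 1.5, 0.25], dtype=np.float32)
for x, y in zip(approx_mul(a, b), a * b):
    print(f"approx {x:8.4f}   exact {y:8.4f}   rel. error {abs(x - y) / y:.2%}")
```

The relative error of this toy version is bounded (worst case around 11%); the paper's L-Mul operator refines the mantissa handling to reach higher precision at similar cost.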
## Top ML Papers of the Week (September 30 - October 6) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Movie Gen** - a set of foundation models to generate high-quality, 1080p HD videos, including different aspect ratios and synchronized audio; the 30B parameter model supports a context length of 73K video tokens, which enables generation of 16-second videos at 16fps; it also presents a 13B parameter video-to-audio generation model and a novel video editing model that’s attained via post-training; achieves state-of-the-art performance on tasks such as text-to-video synthesis, video personalization, video-to-audio generation and more. | [Paper](https://ai.meta.com/static-resource/movie-gen-research-paper), [Tweet](https://x.com/AIatMeta/status/1842188252541043075) |
| 2) **Were RNNs All We Needed?** - revisits RNNs and shows that by removing the hidden-state dependence from the input, forget, and update gates, RNNs can be trained efficiently in parallel; with this change, architectures like LSTMs and GRUs no longer require backpropagation through time (BPTT); the authors introduce minLSTMs and minGRUs, which are 175x faster for a sequence length of 512. | [Paper](https://arxiv.org/abs/2410.01201), [Tweet](https://x.com/omarsar0/status/1842246985790914608) |
| 3) **LLMs Know More Than They Show** - finds that the "truthfulness" information in LLMs is concentrated in specific tokens; this insight can help enhance error detection performance and further mitigate some of these issues; they also claim that internal representations can be used to predict the types of errors the LLMs are likely to make. | [Paper](https://arxiv.org/abs/2410.02707), [Tweet](https://x.com/omarsar0/status/1842240840389001381) |
| 4) **Architecture Search Framework for Inference-Time Techniques** - introduces a modular framework for building and optimizing LLMs by combining multiple inference-time techniques; this approach reframes the challenge of LLM system design as a hyperparameter optimization problem; tested on benchmarks including MT-Bench and CodeContests, Archon surpasses leading models such as GPT-4o and Claude 3.5 Sonnet, achieving a 15.1% average accuracy improvement. | [Paper](https://arxiv.org/abs/2409.15254), [Tweet](https://x.com/Azaliamirh/status/1840892626096345530) |
| 5) **RATIONALYST** - a model for process-supervision of reasoning that enables generalization across diverse reasoning tasks; this process is achieved with pre-training on a collection of 79k rationales from the Pile and a combination of reasoning datasets with minimal human intervention; fine-tuned from LLaMa-3-8B, the proposed model improves the accuracy of reasoning by an average of 3.9% on 7 reasoning benchmarks. | [Paper](https://arxiv.org/abs/2410.01044) |
| 6) **An Analysis of o1-preview** - reports that large reasoning models like o1-preview, while improving on more difficult tasks, display similar qualitative trends as previous LLMs; o1 is sensitive to the probability of examples and tasks, performing better and requiring fewer “thinking tokens” in high-probability settings than in low-probability ones. | [Paper](https://arxiv.org/abs/2410.01792), [Tweet](https://x.com/omarsar0/status/1841842414157472240) |
| 7) **FRAMES** - a unified framework to evaluate an LLM’s ability to provide factual responses, assess retrieval capabilities, and the reasoning required to generate final responses; includes multi-hop questions that require the integration of information from multiple sources; reports that state-of-the-art LLMs struggle on the task and only achieve 40% accuracy with no retrieval; the proposed multi-step retrieval approach improves performance to 66% accuracy. | [Paper](https://arxiv.org/abs/2409.12941), [Tweet](https://x.com/_philschmid/status/1840628834275602585) |
| 8) **Not All LLM Reasoners Are Created Equal** - investigates in depth the grade-school math problem-solving capabilities of LLMs; reports that LLMs show a significant gap in reasoning; finds that LLMs display a large performance difference between solving compositional question pairs and solving the same questions independently. | [Paper](https://arxiv.org/abs/2410.01748), [Tweet](https://x.com/arianTBD/status/1841875515860517130) |
| 9) **Evaluation of o1** - provides a comprehensive evaluation of OpenAI's o1-preview LLM; shows strong performance across many tasks such as competitive programming, generating coherent and accurate radiology reports, high school-level mathematical reasoning tasks, chip design tasks, anthropology and geology, quantitative investing, social media analysis, and many other domains and problems. | [Paper](https://arxiv.org/abs/2409.18486), [Tweet](https://x.com/omarsar0/status/1840953712635732006) |
| 10) **Designing Priors for Better Few-Shot Image Synthesis** - training generative models like GANs with limited data is difficult; current Implicit Maximum Likelihood Estimation (IMLE) approaches suffer from an inadequate correspondence between the latent codes selected for training and those drawn during inference; the proposed approach, RS-IMLE, changes the prior distribution used for training, which improves test-time performance and leads to higher-quality image generation. | [Paper](https://arxiv.org/abs/2409.17439), [Tweet](https://x.com/KL_Div/status/1841729946302943295) |
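As a concrete illustration of the minGRU recurrence described in paper 2 above, here is a small NumPy sketch; the `MinGRU` class, its weight initialization, and the sequential loop are illustrative assumptions rather than the authors' code. The essential change is that the update gate and candidate state depend only on `x_t`, so they can be computed for the whole sequence up front; the loop below is the sequential form of the recurrence the paper evaluates with a parallel scan.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinGRU:
    """Minimal sketch of a minGRU-style cell (hypothetical implementation).

    Unlike a standard GRU, the update gate z_t and candidate state h~_t depend
    only on x_t, not on h_{t-1}, so all gates for a sequence can be computed at
    once and the recurrence reduces to: h_t = (1 - z_t) * h_{t-1} + z_t * h~_t.
    """

    def __init__(self, d_in, d_hidden):
        self.Wz = rng.normal(scale=0.1, size=(d_in, d_hidden))
        self.Wh = rng.normal(scale=0.1, size=(d_in, d_hidden))

    def __call__(self, x, h0=None):
        T, _ = x.shape
        z = sigmoid(x @ self.Wz)        # (T, d_hidden), depends only on x
        h_tilde = x @ self.Wh           # (T, d_hidden), depends only on x
        h = np.zeros(self.Wz.shape[1]) if h0 is None else h0
        outs = []
        for t in range(T):              # sequential form of the scan
            h = (1 - z[t]) * h + z[t] * h_tilde[t]
            outs.append(h)
        return np.stack(outs)

x = rng.normal(size=(6, 4))             # sequence of 6 steps, 4 features
print(MinGRU(4, 8)(x).shape)            # (6, 8)
```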
## Top ML Papers of the Week (September 23 - September 29) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Llama 3.2** - presents small and medium-sized vision LLMs (11B and 90B parameters), and lightweight, text-only models (1B and 3B); the text-only models are trained to support a context length of 128K tokens and outperform other models in their class on a range of tasks; the vision models exceed other models such as Claude 3 Haiku on image understanding tasks. | [Paper](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/), [Tweet](https://twitter.com/Doctor_Zou/status/1782752058124554272) |
| 2) **Molmo** - presents a family of open, state-of-the-art multimodal AI models; the 72B model in the Molmo family outperforms others in the class of open weight and data models; it also compares favorably against proprietary models like GPT-4o, Claude 3.5, and Gemini 1.5 on several benchmarks. | [Paper](https://molmo.allenai.org/paper.pdf), [Tweet](https://twitter.com/emmanuel_vincze/status/1708249637918752987) |
| 3) **AlphaChip** - a reinforcement learning-based method trained to design the physical layout of chips; AlphaChip is reportedly used in three additional generations of Google’s TPU; this release includes an open-source implementation of the method that can be used to pre-train on a variety of chip blocks before applying it to new blocks; also releases a model checkpoint pre-trained on 20 TPU blocks. | [Paper](https://www.nature.com/articles/s41586-024-08032-5), [Tweet](https://twitter.com/GoogleAI/status/1676118998259507200) |
| 4) **LLMs Still Can’t Plan** - evaluates whether large reasoning models such as o1 can plan; finds that a domain-independent planner can solve all instances of Mystery Blocksworld while LLMs struggle, even on small instances; o1-preview is effective on the task but tends to degrade in performance as plan length increases; concludes that while o1 shows progress on more challenging planning problems, the accuracy gains cannot be considered general or robust. | [Paper](https://arxiv.org/abs/2409.13373), [Tweet](https://twitter.com/johnxschulman/status/1657558270450917378) |
| 5) **Scaled-up Instructable Models Become Less Reliable** - suggests that larger and more instructable LLMs may become less reliable; investigates LLMs across three elements: difficulty concordance, task avoidance, and prompting stability; finds that early models often avoid user questions, but scaled-up, shaped-up models tend to give an apparently sensible yet wrong answer much more often, including errors on difficult questions that human supervisors frequently overlook. | [Paper](https://www.nature.com/articles/s41586-024-07930-y), [Tweet](https://twitter.com/rylanmshea/status/1583460628966346752) |
| 6) **Logic-of-Thought** - proposes a new prompting technique called Logic-of-Thought (LoT) which employs propositional logic to generate and inject expanded logical information from the input context; it enhances CoT performance on the ReClor dataset by +4.35%; it improves CoT+SelfConsistency’s performance on LogiQA by +5%; it also boosts the performance of ToT on the ProofWriter dataset by +8% (a toy illustration of the logical-expansion step follows this table). | [Paper](https://arxiv.org/abs/2409.17539), [Tweet](https://twitter.com/IsItPerplexity/status/1704255260019798052) |
| 7) **RAG and Beyond** - presents a survey that introduces a RAG task categorization method that helps to classify user queries into four levels according to the type of external data required and the focus of the task; summarizes key challenges in building robust data-augmented LLM applications and the most effective techniques for addressing them. | [Paper](https://arxiv.org/abs/2409.14924), [Tweet](https://twitter.com/mishigna/status/1703461946958463118) |
| 8) **A Preliminary Study of o1 in Medicine** - provides a preliminary exploration of the o1-preview model in medical scenarios; shows that o1 surpasses the previous GPT-4 in accuracy by an average of 6.2% and 6.6% across 19 datasets and two newly created complex QA scenarios; identifies remaining weaknesses such as hallucination, inconsistent multilingual ability, and discrepant evaluation metrics. | [Paper](https://arxiv.org/abs/2409.15277), [Tweet](https://twitter.com/RichardEvans_AI/status/1691963090436067397) |
| 9) **Small Language Models Survey** - a comprehensive survey on small language models (SLMs) across architectures, training datasets, and training algorithms; analyzes 59 state-of-the-art open-source SLMs and capabilities such as reasoning, in-context learning, maths, and coding; also discusses on-device runtime costs, latency, and memory footprint, and offers practical insights for practitioners. | [Paper](https://arxiv.org/abs/2409.15790), [Tweet](https://twitter.com/sebatian_ruder/status/1691611318636159002) |
| 10) **Minstrel** - a multi-generative agent system with reflection capabilities to automate structural prompt generation; it presents LangGPT, an extensible framework for designing prompts; Minstrel is built on top of LangGPT and experiments demonstrate that structural prompts (either generated by Minstrel or written manually) perform better in guiding LLMs to perform tasks. | [Paper](https://arxiv.org/abs/2409.13449), [Tweet](https://twitter.com/LiZhang1351/status/1702992849091985677) |
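A toy illustration of the logical-expansion step behind Logic-of-Thought (paper 6 above); the function name and the restriction to contraposition are assumptions made for this sketch, whereas the paper applies a broader set of propositional laws before injecting the expanded facts back into the prompt.

```python
def expand_with_contraposition(implications):
    """Toy version of the Logic-of-Thought expansion step: given propositional
    implications extracted from the context, add logically equivalent facts
    (here only the contrapositive) so they can be injected into the prompt."""
    expanded = list(implications)
    for antecedent, consequent in implications:
        expanded.append((f"NOT({consequent})", f"NOT({antecedent})"))
    return expanded

# e.g. "if P then Q" also yields "if NOT(Q) then NOT(P)"
for a, c in expand_with_contraposition([("P", "Q"), ("Q", "R")]):
    print(f"{a} -> {c}")
```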
## Top ML Papers of the Week (September 16 - September 22) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Moshi** - introduces a speech-text foundation model and full-duplex spoken dialogue framework; presents several components of the system: Helium, a 7B parameter text LLM; Mimi, a semantic-acoustic neural audio codec with state-of-the-art audio quality; and a hierarchical multi-stream architecture that can generate arbitrary conversations in a speech-to-speech manner. | [Paper](https://kyutai.org/Moshi.pdf), [Tweet](https://x.com/kyutai_labs/status/1836427396959932492) |
| 2) **Training LLMs to Self-Correct via RL** - develops a multi-turn online reinforcement learning approach to improve an LLM’s ability to self-correct; it’s based entirely on self-generated data; SFT is shown to be ineffective at learning self-correction and suffers from a distribution mismatch between training data and model responses; proposes a two-stage approach that first optimizes correction behavior and then uses a reward bonus to amplify self-correction during training; when applied to Gemini 1.0 Pro and 1.5 Flash models, it achieves state-of-the-art self-correction performance, improving the base models’ self-correction by 15.6% and 9.1% on the MATH and HumanEval benchmarks, respectively. | [Paper](https://arxiv.org/abs/2409.12917), [Tweet](https://x.com/omarsar0/status/1837228446839361984) |
| 3) **Qwen2.5 Coder** - a series of models including 1.5B and 7B parameter versions; built upon the Qwen2.5 architecture and continually pretrained on 5.5 trillion tokens; achieves state-of-the-art performance across more than 10 benchmarks, with strong capabilities in code generation, completion, reasoning, and repair. | [Paper](https://arxiv.org/abs/2409.12186), [Tweet](https://x.com/huybery/status/1837170643563073960) |
| 4) **Diagram of Thought (DoT)** - enhances the reasoning capabilities of LLMs through mathematical rigor; DoT models iterative reasoning in an LLM as the construction of a directed acyclic graph (DAG); it integrates propositions, critiques, refinements, and verification into a unified DAG structure; this allows DoT to capture complex logical deductions beyond linear or tree-based approaches. | [Paper](https://arxiv.org/abs/2409.10038), [Tweet](https://x.com/omarsar0/status/1835882277563179512) |
| 5) **Agents in Software Engineering** - provides a comprehensive overview of frameworks of LLM-based agents in software engineering. | [Paper](https://arxiv.org/abs/2409.09030), [Tweet](https://x.com/omarsar0/status/1835705359723319702) |
| 6) **To CoT or not to CoT?** - investigates which kinds of tasks benefit the most from chain-of-thought (CoT) prompting; after a meta-analysis of 100+ papers and several evaluations, it finds that CoT produces strong performance benefits primarily on tasks involving math and logic; it also finds that most of the CoT gain comes from improving symbolic execution, yet a dedicated symbolic solver still outperforms it. | [Paper](https://arxiv.org/abs/2409.12183), [Tweet](https://x.com/omarsar0/status/1836599280477299013) |
| 7) **A Comprehensive Evaluation of Quantized Instruction-Tuned LLMs** - evaluates the performance of instruction-tuned LLMs across various quantization methods on models ranging from 7B to 405B; the key findings are 1) quantizing a larger LLM to a similar size as a smaller FP16 LLM generally performs better across most benchmarks, 2) performance varies significantly with different quantization methods, model size, and bit-width, with weight-only methods often yielding better results in larger models, and 3) task difficulty does not significantly impact accuracy degradation due to quantization. | [Paper](https://arxiv.org/abs/2409.11055), [Tweet](https://arxiv.org/abs/2409.11055) |
| 8) **Iteration of Thought** - proposes the Iteration of Thought (IoT) framework to enhance LLM responses and reasoning capabilities with adaptive reasoning paths; it leverages an inner dialogue agent, acting as a guide, to dynamically adjust reasoning paths, which allows adaptive cross-path exploration and enhances response accuracy; it differs from CoT and ToT (both rigid processes) in that its prompt generation is a dynamic process that allows it to adapt (a minimal loop sketch follows this table). | [Paper](https://arxiv.org/abs/2409.12618), [Tweet](https://x.com/omarsar0/status/1836977595847692671) |
| 9) **Schrodinger’s Memory** - uses the Universal Approximation Theorem to explain the memory mechanism of LLMs. It also proposes a new approach to evaluate LLM performance by comparing the memory capacities of different models; the Transformer architecture functions as a dynamic fitting UAT model, with a strong ability to adaptively fit inputs; this enables LLMs to recall entire content based on minimal input information. | [Paper](https://arxiv.org/abs/2409.10482), [Tweet](https://x.com/omarsar0/status/1835882330323554321) |
| 10) **Math Jailbreaking Prompts** - uses GPT-4o to generate mathematically encoded prompts that serve as an effective jailbreaking technique; shows an average attack success rate of 73.6% across 13 state-of-the-art LLMs; this highlights the inability of existing safety training mechanisms to generalize to mathematically encoded inputs. | [Paper](https://arxiv.org/abs/2409.11445), [Tweet](https://x.com/omarsar0/status/1836603922405806501) |
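A minimal sketch of the Iteration of Thought loop from paper 8 above; the `llm` stand-in, the stopping phrase, and the prompt wording are all assumptions made so the example runs standalone with a stub model rather than the paper's implementation.

```python
def iteration_of_thought(llm, question, max_iters=3):
    """Alternate between an inner dialogue agent that proposes guidance and a
    main answer that is refined with that guidance (toy sketch)."""
    answer = llm(f"Answer the question: {question}")
    for _ in range(max_iters):
        guidance = llm(
            "You are an inner dialogue agent. Given the question and the "
            "current answer, suggest one probing follow-up prompt.\n"
            f"Question: {question}\nCurrent answer: {answer}"
        )
        if "LOOKS GOOD" in guidance.upper():   # simple stopping criterion
            break
        answer = llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Refine the answer using this guidance: {guidance}"
        )
    return answer

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API access.
    stub = lambda prompt: "LOOKS GOOD" if "inner dialogue" in prompt else "42"
    print(iteration_of_thought(stub, "What is 6 * 7?"))
```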
## Top ML Papers of the Week (September 9 - September 15) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Learning to Reason with LLMs** - a new family of LLMs trained with reinforcement learning to reason before responding to complex tasks; it produces a long internal chain of thought and excels at science, code, and math-related tasks; ranked in the 49th percentile in the 2024 International Olympiad in Informatics and exceeds human PhD-level accuracy on science-related benchmarks. | [Paper](https://openai.com/index/learning-to-reason-with-llms/), [Tweet](https://x.com/OpenAI/status/1834278217626317026) |
| 2) **Chai-1** - a new multi-modal foundation model for molecular structure prediction that can predict proteins, small molecules, DNA, RNA, and more; it achieves state-of-the-art results on a variety of tasks in drug discovery; achieves a 77% success rate on the PoseBusters benchmark (vs. 76% by AlphaFold 3), as well as a Cα LDDT of 0.849 on the CASP15 protein monomer structure prediction set (vs. 0.801 by ESM3-98B). | [Paper](https://www.chaidiscovery.com/blog/introducing-chai-1), [Tweet](https://x.com/joshim5/status/1833183091776721106) |
| 3) **Can LLMs Generate Novel Research Ideas?** - finds that LLM-generated research ideas are judged as more novel (p < 0.05) than human expert ideas; however, they were rated slightly weaker in terms of feasibility; also reports that LLM agents lack diversity in the idea generation process and are not reliable evaluators. | [Paper](https://arxiv.org/abs/2409.04109), [Tweet](https://x.com/ChengleiSi/status/1833166031134806330) |
| 4) **DataGemma** - includes a series of fine-tuned Gemma 2 models to help LLMs access and incorporate numerical and statistical data; proposes a new approach called Retrieval Interleaved Generation (RIG) which can reliably incorporate public statistical data from Data Commons into LLM responses; RIG is a tool-inspired approach that interleaves statistical tokens with natural language questions suitable for retrieval from Data Commons; to attain this capability, they fine-tune the LLM on an instruction-response dataset generated with the help of Gemini 1.5; the RIG approach improves factuality from 5-7% to about 58%. | [Paper](https://docs.datacommons.org/papers/DataGemma-FullPaper.pdf), [Tweet](https://x.com/omarsar0/status/1834235024675406012) |
| 5) **Agent Workflow Memory** - introduces Agent Workflow Memory to induce commonly reused workflows and provide these to the agent on demand; works offline and online and is meant to guide the agent's subsequent generations; it’s inspired by how humans learn reusable workflows from past experiences and use them to guide future actions; claims to substantially improve the baseline results by 24.6% and 51.1% relative success rate on Mind2Web and WebArena while doing it in a more efficient way. | [Paper](https://arxiv.org/abs/2409.07429), [Tweet](https://x.com/omarsar0/status/1834059522198896706) |
| 6) **The Role of Small Language Models in the LLM Era** - closely examines the relationship between LLMs and SLMs; common applications of SLMs include data curation, training stronger models, efficient inference, evaluators, retrievers, and much more; includes insights for practitioners to better understand the value of these SLMs. | [Paper](https://arxiv.org/abs/2409.06857), [Tweet](https://x.com/omarsar0/status/1834063138586829273) |
| 7) **LLaMa-Omni** - a model architecture for low-latency speech interaction with LLMs; it is based on Llama-3.1-8B-Instruct and can simultaneously generate both text and speech responses given speech instructions; responses can be generated with a latency as low as 226ms; architecture-wise, it involves a speech encoder (Whisper-large-v3), a speech adaptor, an LLM, and a speech decoder; they also created a dataset of 200K speech interactions and responses. | [Paper](https://arxiv.org/abs/2409.06666), [Tweet](https://x.com/omarsar0/status/1834227729241440340) |
| 8) **Can LLMs Unlock Novel Scientific Research Ideas?** - investigates whether LLMs can generate novel scientific research ideas; reports that Claude and GPT models tend to align more with the authors' perspectives on future research ideas; this is measured across different domains like science, economics, and medicine. | [Paper](https://arxiv.org/abs/2409.06185), [Tweet](https://x.com/omarsar0/status/1833695968656793610) |
| 9) **Theory, Analysis, and Best Practices for Sigmoid Self-Attention** - proposes FlashSigmoid, a hardware-aware and memory-efficient implementation of sigmoid attention; it yields up to a 17% inference kernel speed-up over FlashAttention-2 on H100 GPUs; shows that SigmoidAttn matches SoftmaxAttn across various tasks and domains (a plain NumPy sketch of sigmoid attention follows this table). | [Paper](https://arxiv.org/abs/2409.04431), [Tweet](https://x.com/omarsar0/status/1833522827842220244) |
| 10) **Achieving Peak Performance for LLMs** - a systematic review of methods for improving and speeding up LLMs from three points of view: training, inference, and system serving; summarizes the latest optimization and acceleration strategies around training, hardware, scalability, and reliability. | [Paper](https://arxiv.org/abs/2409.04833), [Tweet](https://x.com/omarsar0/status/1833344402892460364) |
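To make the SigmoidAttn formulation in paper 9 above concrete, here is a plain NumPy sketch (not the FlashSigmoid kernel); the function name and the exact use of a -log(n) logit bias are assumptions for this toy version. The key change it shows: the row-wise softmax over keys is replaced by an elementwise sigmoid, so attention weights no longer need to sum to one.

```python
import numpy as np

def sigmoid_attention(Q, K, V):
    """Minimal sketch of sigmoid attention: elementwise sigmoid over scaled
    scores with a constant -log(n) bias instead of a softmax over keys."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d) - np.log(n)   # (n, n) biased logits
    weights = 1.0 / (1.0 + np.exp(-scores))     # elementwise sigmoid
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 16)) for _ in range(3))
print(sigmoid_attention(Q, K, V).shape)         # (8, 16)
```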
## Top ML Papers of the Week (September 2 - September 8) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **AlphaProteo** - presents a family of ML models trained for protein design; reports 3- to 300-fold better binding affinities and higher experimental success rates compared to other existing methods on seven target proteins; shows that AlphaProteo’s performance on hundreds of target proteins from the PDB is comparable to its performance on the seven targets. | [Paper](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaproteo-generates-novel-proteins-for-biology-and-health-research/AlphaProteo2024.pdf), [Tweet](https://x.com/GoogleDeepMind/status/1831710991475777823) |
| 2) **RAG in the Era of Long-Context LLMs** - reports that longer-context LLMs suffer from a diminished focus on relevant information, which is one of the primary issues a RAG system addresses (i.e., it surfaces more relevant information); they propose an order-preserving RAG mechanism that improves performance on long-context question answering; it's not perfect: as the number of retrieved chunks increases, response quality first improves and then declines; they identify a sweet spot where RAG achieves better quality with far fewer tokens than long-context LLMs (a small sketch of the order-preserving retrieval step follows this table). | [Paper](https://arxiv.org/abs/2409.01666), [Tweet](https://x.com/omarsar0/status/1831389521839267888) |
| 3) **Strategic Chain-of-Thought** - a method to refine LLM performance by incorporating strategic knowledge before the intermediate CoT reasoning steps; the problem-solving strategy helps to guide the generation of the CoT paths and final answers; claims to achieve a 21.05% increase on the GSM8K datasets using the Llama3-8b model. | [Paper](https://arxiv.org/abs/2409.03271v1) |
| 4) **Effects of AI on High-Skilled Work** - studies the impact of generative AI on software developers; reveals a 26.08% increase in the number of completed tasks among developers using AI tools like GitHub Copilot; also shows that less experienced developers are more likely to adopt the AI tools and see greater productivity gains. | [Paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566), [Tweet](https://x.com/emollick/status/1831739827773174218) |
| 5) **OLMoE** - introduces a fully-open LLM that leverages sparse Mixture-of-Experts. OLMoE is a 7B parameter model and uses 1B active parameters per input token; there is also an instruction-tuned version that claims to outperform Llama-2-13B-Chat and DeepSeekMoE 16B. | [Paper](https://arxiv.org/abs/2409.02060), [Tweet](https://x.com/omarsar0/status/1831357563620753577) |
| 6) **LongCite** - synthesizes a large-scale SFT dataset with off-the-shelf LLMs to improve long-context question answering with citations; it trains 8B and 9B parameter models that enhance citation generation capabilities from lengthy contexts while improving response correctness; claims to even surpass GPT-4o on their proposed LongBench-Cite benchmark. | [Paper](https://arxiv.org/abs/2409.02897), [Tweet](https://x.com/omarsar0/status/1831522905009828051) |
| 7) **MemLong** - utilizes an external retriever for retrieving historical information which enhances the capabilities of long-context LLMs; it consistently outperforms other SoTA LLMs on long-context benchmarks and can extend the context length on a single 3090 GPU from 4k up to 80k. | [Paper](https://arxiv.org/abs/2408.16967), [Tweet](https://x.com/omarsar0/status/1830610367854112799) |
| 8) **Role of RAG Noise in LLMs** - proposes a benchmark (NoiserBench) to measure how different kinds of noisy information affect RAG performance; reports that among the kinds of beneficial noise studied (e.g., semantic, datatype, and illegal-sentence noise), illegal-sentence noise yields the largest improvement in model performance across models and datasets. | [Paper](https://arxiv.org/abs/2408.13533), [Tweet](https://x.com/omarsar0/status/1830984315326660617) |
| 9) **Beyond Preference in AI Alignment** - challenges the dominant practice of AI alignment known as human preference tuning; explains in what ways human preference tuning fails to capture the thick semantic content of human values; argues that AI alignment needs reframing, instead of aligning on human preferences, AI should align on normative standards appropriate to their social roles. | [Paper](https://arxiv.org/abs/2408.16984), [Tweet](https://x.com/xuanalogue/status/1831044533779669136) |
| 10) **LLM-Based Agents for Software Engineering** - a survey paper on LLM-based agents for software engineering, covering perspectives ranging from requirement engineering to test generation to software maintenance. | [Paper](https://arxiv.org/abs/2409.02977), [Tweet](https://x.com/omarsar0/status/1832115557749121385) |
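A small sketch of the order-preserving retrieval idea from paper 2 above; the function signature and scoring inputs are assumptions. The point it shows: chunks are still selected by relevance score, but they are handed to the LLM in their original document order rather than sorted by descending score.

```python
def order_preserving_retrieve(chunks, scores, k):
    """Select the top-k chunks by relevance score, then restore document order."""
    top_idx = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in sorted(top_idx)]   # keep original chunk order

chunks = ["intro", "method", "results", "ablation", "conclusion"]
scores = [0.1, 0.9, 0.8, 0.2, 0.7]
print(order_preserving_retrieve(chunks, scores, 3))
# ['method', 'results', 'conclusion']
```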
## Top ML Papers of the Week (August 26 - September 1) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **GameNGen** - a game engine powered by a diffusion model that enables real-time interaction with complex environments over long trajectories; uses a two-phase training process in which an RL agent learns to play the game and a diffusion model is trained to generate the next frame; it can interactively simulate DOOM at over 20 fps on a single TPU. | [Paper](https://arxiv.org/abs/2408.14837), [Tweet](https://x.com/iScienceLuvr/status/1828617875432841490) |
| 2) **Agentic RAG for Time Series Analysis** - proposes an agentic RAG framework for time series analysis; uses a multi-agent architecture where an agent orchestrates specialized sub-agents to complete time-series tasks; the sub-agents leverage tuned small language models and can retrieve relevant prompts containing knowledge about historical patterns and trends; this helps to improve predictions on new data. | [Paper](https://arxiv.org/abs/2408.14484), [Tweet](https://x.com/omarsar0/status/1828838209461043455) |
| 3) **AutoGen Studio** - a low-code interface for rapidly prototyping AI agents. It's built on top of the AutoGen framework and can also be used for debugging and evaluating multi-agent workflows. | [Paper](https://arxiv.org/abs/2408.15247), [Tweet](https://x.com/omarsar0/status/1829163090715529358) |
| 4) **Persuasion Games with LLMs** - claims that a multi-agent framework can be used to improve the persuasive efficacy of LLMs; the primary agent engages in persuasive dialogue while auxiliary agents perform key tasks like response analysis and information retrieval; finds that LLMs are capable of creating a perspective change in the users and persuading them to make a purchase decision; for instance, Sales agents can achieve a 71% positive shift in user perspectives. | [Paper](https://arxiv.org/abs/2408.15879), [Tweet](https://x.com/omarsar0/status/1829156960291185117) |
| 5) **Smaller, Weaker, Yet Better** - finds that weaker + cheaper (WC) models can generate better synthetic data for fine-tuning models compared to data generated with stronger but more expensive models; overall, results suggest that WC models may be a compute-optimal approach for training advanced LLM reasoners. | [Paper](https://arxiv.org/abs/2408.16737), [Tweet](https://x.com/omarsar0/status/1829526629787242878) |
| 6) **Transfusion** - presents a training recipe for multi-modal models over discrete and continuous data; combines next-token prediction with diffusion to train transformer models over mixed-modality sequences; shows that it’s possible to scale to a 7B parameter model trained on 2T multi-modal tokens that competes in performance with similar-scale diffusion and language models. | [Paper](https://www.arxiv.org/abs/2408.11039), [Tweet](https://x.com/AIatMeta/status/1828836885176967327) |
| 7) **ReMamba** - investigates the long-context capabilities and efficiency of Mamba models; the long-context deficiencies stem from Mamba's RNN-like nature; ReMamba condenses information with a two-pass compression strategy: it selects the top-k hidden states during the first forward pass and leverages Mamba’s selective mechanism to incorporate them into the state space during the second forward pass; achieves a 3.2-point improvement over the baseline on LongBench and a 1.6-point improvement on L-Eval; the strategy also appears to transfer to Mamba 2. | [Paper](https://arxiv.org/abs/2408.15496), [Tweet](https://x.com/omarsar0/status/1829151312266637813) |
| 8) **Text2SQL is Not Enough** - proposes Table-Augmented Generation (TAG), a unified framework for answering natural language questions over databases; it covers a wider range of previously unexplored interactions between LLMs and databases; develops a benchmark and finds that standard methods answer no more than 20% of queries correctly (a minimal sketch of the query-execute-generate loop follows this table). | [Paper](https://arxiv.org/abs/2408.14717v1), [Tweet](https://x.com/lianapatel_/status/1828939097487945948) |
| 9) **Foundation Models for Music** - provides a comprehensive overview of state-of-the-art pre-trained models and foundation models in music. | [Paper](https://arxiv.org/abs/2408.14340), [Tweet](https://x.com/omarsar0/status/1828456481114538437) |
| 10) **Guide to Continual Multimodal Pretraining** - a comprehensive guide to continual multimodal pretraining; introduces FoMo-In-Flux, a large-scale, fine-grained, long-horizon continual pretraining benchmark. | [Paper](https://arxiv.org/abs/2408.14471), [Tweet](https://arxiv.org/abs/2408.14471) |
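A minimal sketch of the basic query-execute-generate loop that Table-Augmented Generation (paper 8 above) generalizes; `llm`, `run_sql`, and the prompt wording are stand-ins introduced so the example runs with stubs, not APIs from the paper, which studies a wider range of LLM-database interactions than plain text-to-SQL.

```python
def table_augmented_generation(llm, run_sql, question, schema):
    """Toy loop: draft a query over the table, execute it, answer from the rows."""
    sql = llm(f"Schema: {schema}\nWrite a SQL query answering: {question}")
    rows = run_sql(sql)
    return llm(
        f"Question: {question}\nQuery result rows: {rows}\n"
        "Write a natural-language answer grounded in these rows."
    )

if __name__ == "__main__":
    # Stub model and executor so the sketch is self-contained.
    stub_llm = lambda p: "SELECT COUNT(*) FROM sales" if "SQL" in p else "There were 3 sales."
    stub_run_sql = lambda q: [(3,)]
    print(table_augmented_generation(stub_llm, stub_run_sql,
                                     "How many sales were there?", "sales(id, amount)"))
```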
## Top ML Papers of the Week (August 19 - August 25) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Automate Design of Agentic Systems** - presents Meta Agent Search, a meta agent that iteratively programs and tests new agents based on a growing archive of previous discoveries; claims that with their approach it is possible to learn any possible agentic system including prompts, tool use, control flows, and more; they achieve this by focusing on three main components referred to as search space (define agents), search algorithm (explore search space), and the evaluation function (evaluate candidate agents). | [Paper](https://arxiv.org/abs/2408.08435), [Tweet](https://x.com/omarsar0/status/1825378027347271719) |
| 2) **LLM Pruning and Distillation in Practice** - provides a comprehensive report on effective methods for compressing Llama 3.1 and Mistral NeMo models; it presents pruning and distillation approaches applied to the original models to produce 4B and 8B parameter models, respectively; before pruning, they also fine-tune the teacher model on their datasets leading to better distillation; their compression strategy yields a state-of-the-art 8B model (MN-Minitron-8B) which outperforms all similarly-sized models on common language modeling benchmarks. | [Paper](https://arxiv.org/abs/2408.11796), [Tweet](https://x.com/omarsar0/status/1826676365044675042) |
| 3) **Vizier Gaussian Process Bandit Algorithm** - presents Vizier, an algorithm based on Gaussian process bandit optimization used by Google for millions of optimizations and research; it provides an open-source Python implementation of the Vizier algorithm, including benchmarking results that demonstrate its wider applicability. | [Paper](https://arxiv.org/abs/2408.11527), [Tweet](https://x.com/XingyouSong/status/1826554454084333723) |
| 4) **Language Modeling on Tabular Data** - presents a comprehensive survey of language modeling techniques for tabular data; includes topics such as categorization of tabular data structures and data types, datasets used for model training and evaluation, modeling techniques and training objectives, data processing methods, popular architectures, and challenges and future research directions. | [Paper](https://www.arxiv.org/abs/2408.10548), [Tweet](https://x.com/omarsar0/status/1826094372179366023) |
| 5) **Enhancing Robustness in LLMs** - proposes a two-stage prompting technique to remove irrelevant information from the context; it serves as a self-mitigation process that first identifies the irrelevant information and then filters it out; this enhances the model's robustness and leads to overall better performance on reasoning tasks. | [Paper](https://arxiv.org/abs/2408.10615), [Tweet](https://x.com/omarsar0/status/1826451091774447983) |
| 6) **A Comprehensive Overview of GraphRAG Methods** - focuses on techniques applied to the GraphRAG workflow (graph-based indexing, graph-guided retrieval, and graph-enhanced generation); examines tasks, applications, evaluation, and industrial use cases of GraphRAG. | [Paper](https://arxiv.org/abs/2408.08921), [Tweet](https://x.com/omarsar0/status/1825937537782698377) |
| 7) **MagicDec** - shows how speculative decoding can enhance throughput, reduce latency, and maintain accuracy in long context generation scenarios; it finds that as sequence length and batch size increase, bottlenecks shift from compute-bound to memory-bound; using these insights, they show it's possible to more effectively use speculative decoding for longer sequences, even when using large batch sizes. | [Paper](https://arxiv.org/abs/2408.11049), [Tweet](https://x.com/omarsar0/status/1826090969906778122) |
| 8) **Controllable Text Generation for LLMs** - provides a comprehensive survey on methods for controllable text generation in LLMs; discusses issues like safety, consistency, style, and helpfulness. | [Paper](https://arxiv.org/abs/2408.12599), [Tweet](https://x.com/omarsar0/status/1826824199010132429) |
| 9) **PEDAL** - uses a hybrid self-ensembling approach (based on diverse exemplars) to improve the overall performance of LLMs; specifically, it uses diverse exemplars to generate multiple candidate responses and then aggregates them using an LLM to produce a final response; this approach achieves better accuracy than greedy decoding and lower cost than self-consistency approaches (a short sketch of this self-ensemble follows this table). | [Paper](https://arxiv.org/abs/2408.08869), [Tweet](https://x.com/omarsar0/status/1825373675631071609) |
| 10) **Challenges and Responses in the Practice of LLMs** - curates a set of important questions with insightful answers; questions are categorized across topics such as infrastructure, software architecture, data, application, and brain science. | [Paper](https://arxiv.org/abs/2408.09416), [Tweet](https://x.com/omarsar0/status/1825932441980162374) |
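A short sketch of the PEDAL-style self-ensemble from paper 9 above; the prompt wording, exemplar sampling scheme, and aggregation prompt are assumptions for this toy version. Several candidates are generated with different exemplar subsets, then one final LLM call aggregates them.

```python
import random

def pedal(llm, question, exemplar_pool, n_candidates=3, k_exemplars=2, seed=0):
    """Generate candidates with diverse exemplar subsets, then aggregate them."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        shots = "\n".join(rng.sample(exemplar_pool, k_exemplars))
        candidates.append(llm(f"{shots}\nQ: {question}\nA:"))
    joined = "\n".join(f"- {c}" for c in candidates)
    return llm(f"Question: {question}\nCandidate answers:\n{joined}\n"
               "Aggregate these into a single best answer.")

if __name__ == "__main__":
    exemplars = ["Q: 2+2? A: 4", "Q: 3+5? A: 8", "Q: 10-4? A: 6"]
    # Stub model: answers candidate prompts with "7", aggregation with a summary.
    stub = lambda p: "7" if p.splitlines()[-1] == "A:" else "Final answer: 7"
    print(pedal(stub, "What is 3 + 4?", exemplars))
```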
## Top ML Papers of the Week (August 12 - August 18) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **The AI Scientist** - a novel AI agent that can develop and write a full conference-level scientific paper costing less than $15; it automates scientific discovery by enabling frontier LLMs to perform independent research and summarize findings; it also uses an automated reviewer to evaluate the generated papers; claims to achieve near-human performance in evaluating paper scores; claims to produce papers that exceed the acceptance threshold at a top machine learning conference as judged by their automated reviewer. | [Paper](https://arxiv.org/abs/2408.06292), [Tweet](https://x.com/omarsar0/status/1823189280883097788) |
| 2) **Grok-2** - a new frontier model with strong code, math, and reasoning capabilities which includes a large and small model; outperforms both Claude 3.5 Sonnet and GPT-4-Turbo on the LMSYS Chatbot Arena; claims to improve capabilities including instruction following, retrieval, tool use, and enhancing factuality; competes with Claude 3.5 Sonnet (June release) and GPT-4o (May release) on MMLU and HumanEval. | [Paper](https://x.ai/blog/grok-2), [Tweet](https://x.com/xai/status/1823597788573098215) |
| 3) **LongWriter** - proposes AgentWrite to enable off-the-shelf LLMs to generate coherent outputs beyond 20K words; AgentWrite uses a divide-and-conquer approach: it breaks the long generation task into multiple writing subtasks, generates each one, and concatenates the outputs to produce the final result (i.e., plan + write); the approach is then used to build SFT datasets for tuning LLMs to automatically generate coherent longer outputs; a 9B parameter model, further improved through DPO, achieves state-of-the-art performance on their benchmark and surpasses proprietary models (a compact sketch of the plan-then-write loop follows this table). | [Paper](https://arxiv.org/abs/2408.07055), [Tweet](https://x.com/omarsar0/status/1823551063946850712) |
| 4) **EfficientRAG** - trains an auto-encoder LM to label and tag chunks; it retrieves relevant chunks, tags them as either <Terminate> or <Continue>, and annotates <Continue> chunks for continuous processing; then a filter model is trained to formulate the next-hop query based on the original question and previous annotations; this is done iteratively until all chunks are tagged as <Terminate> or the maximum # of iterations is reached; after the process above has gathered enough information to answer the initial question, the final generator (an LLM) generates the final answer. | [Paper](https://arxiv.org/abs/2408.04259), [Tweet](https://x.com/omarsar0/status/1822744591810114044) |
| 5) **RAGChecker** - a fine-grained evaluation framework for diagnosing retrieval and generation modules in RAG; shows that RAGChecker has better correlations with human judgment; reports several revealing insightful patterns and trade-offs in design choices of RAG architectures. | [Paper](https://arxiv.org/abs/2408.08067), [Tweet](https://x.com/omarsar0/status/1824460245051081216) |
| 6) **HybridRAG** - combines GraphRAG and VectorRAG leading to a HybridRAG system that outperforms both individually; it was tested on a set of financial earning call transcripts. Combining the advantages of both approaches provides more accurate answers to queries. | [Paper](https://arxiv.org/abs/2408.04948), [Tweet](https://x.com/omarsar0/status/1822832843455648000) |
| 7) **rStar** - introduces self-play mutual reasoning to improve the reasoning capabilities of small language models without fine-tuning or superior models; MCTS is augmented with human-like reasoning actions, obtained from SLMs, to build richer reasoning trajectories; a separate SLM provides unsupervised feedback on the trajectories and the target SLM selects the final reasoning trajectory as the answer; rStar boosts GSM8K accuracy from 12.51% to 63.91% for LLaMA2-7B and consistently improves the accuracy of other SLMs. | [Paper](https://arxiv.org/abs/2408.06195), [Tweet](https://x.com/AtakanTekparmak/status/1823776878747877572) |
| 8) **Scaling LLM Test-Time Compute Optimally** - investigates the scaling behaviors of inference-time computation in LLMs; in particular, it analyses how much an LLM can be improved provided a fixed amount of inference-time compute; finds that the effectiveness of different scaling approaches varies by difficulty of prompt; it then proposes an adaptive compute-optimal strategy that can improve efficiency by more than 4x compared to a best-of-N baseline; reports that in a FLOPs-matched evaluation, optimally scaling test-time compute can outperform a 14x larger model. | [Paper](https://arxiv.org/abs/2408.05109), [Tweet](https://x.com/sea_snell/status/1821263798772363598) |
| 9) **MedGraphRAG** - a graph-based framework for the medical domain with a focus on enhancing LLMs and generating evidence-based results; leverages a hybrid static-semantic approach to chunk documents to improve context capture; entities and medical knowledge are represented through graphs which leads to an interconnected global graph; this approach improves precision and outperforms state-of-the-art models on multiple medical Q&A benchmarks. | [Paper](https://arxiv.org/abs/2408.04187), [Tweet](https://x.com/Marktechpost/status/1823069406924288110) |
| 10) **Survey of NL2SQL** - a comprehensive overview of NL2SQL techniques powered by LLMs; covers models, data collection, evaluation methods, and error analysis. | [Paper](https://arxiv.org/abs/2408.05109), [Tweet](https://x.com/_reachsumit/status/1822835969743347815) |
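A compact sketch of the plan-then-write loop behind AgentWrite (paper 3 above); the prompts, the `n_sections` parameter, and the stub model are assumptions made so the example is self-contained rather than the authors' pipeline.

```python
def agent_write(llm, instruction, n_sections=4):
    """Plan an outline of writing subtasks, write each section, concatenate."""
    outline = llm(
        f"Break the following writing task into {n_sections} section titles, "
        f"one per line:\n{instruction}"
    ).splitlines()
    sections = []
    for title in outline:
        sections.append(llm(
            f"Task: {instruction}\nWrite the section titled '{title}', "
            f"continuing from what has been written so far:\n{''.join(sections)}"
        ))
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Stub model: returns a fixed outline, then placeholder section text.
    stub = (lambda p: "Intro\nBody\nConclusion" if "section titles" in p
            else "[section text]")
    print(agent_write(stub, "Write a long report on RAG systems", n_sections=3))
```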
## Top ML Papers of the Week (August 5 - August 11) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **SAM 2** - an open unified model for real-time, promptable object segmentation in images and videos; can be applied to unseen visual content without the need for custom adaptation; to enable accurate mask prediction in videos, a memory mechanism is introduced to store information on the object and previous interactions; the memory module also allows real-time processing of arbitrarily long videos; SAM2 significantly outperforms previous approaches on interactive video segmentation across 17 zero-shot video datasets while requiring three times fewer human-in-the-loop interactions. | [Paper](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/), [Tweet](https://x.com/AIatMeta/status/1818055906179105010) |
| 2) **Structured Generation Limits Reasoning** - investigates whether structured generation can impact an LLM’s reasoning and domain-knowledge comprehension capabilities; observes a significant decline in an LLM’s reasoning abilities when format restrictions are applied compared to free-form responses; this degradation effect is further amplified when stricter format constraints are applied to reasoning tasks. | [Paper](https://arxiv.org/abs/2408.02442), [Tweet](https://x.com/omarsar0/status/1822357786820284555) |
| 3) **From LLMs to LLM-based Agents for Software Engineering** - a survey paper on current practices and solutions for LLM-based agents in software engineering; covers important topics such as requirement engineering, code generation, test generation, and autonomous decision-making; it also includes benchmarks, metrics, and models used in different software engineering applications. | [Paper](https://arxiv.org/abs/2408.02479), [Tweet](https://x.com/omarsar0/status/1821549401866686604) |
| 4) **Transformer Explainer** - presents an open-source interactive tool to learn about the inner workings of a Transformer model; it runs a GPT-2 instance locally in the user's browser and allows experimenting with your own inputs. | [Paper](https://arxiv.org/abs/2408.04619), [Tweet](https://x.com/omarsar0/status/1821986172215742716) |
| 5) **Enhancing LLMs for RAG** - introduces RAGFoundry, an open-source framework for augmenting LLMs for RAG use cases; it supports data creation, training, inference, and evaluation; one useful application is the creation of data-augmented datasets for tuning and evaluating LLMs in RAG settings. | [Paper](https://arxiv.org/abs/2408.02545), [Tweet](https://x.com/omarsar0/status/1820864003590995973) |
| 6) **Synthesizing Text-to-SQL Data from Weak and Strong LLMs** - proposes using integrated synthetic data to build a highly specialized SoTA text-to-SQL model called SENSE; synthetic data from strong models enhances data diversity, while valuable erroneous data from weaker models is combined with an executor to learn from execution feedback; preference learning is used to instruction-tune LLMs to learn from both correct and incorrect samples; SENSE achieves state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods that use closed-source models. | [Paper](https://arxiv.org/abs/2408.03256), [Tweet](https://x.com/omarsar0/status/1821227584920621061) |
| 7) **Conversational Prompt Engineering** - proposes an approach to help users create personalized prompts by articulating the preferred outputs via interactions; it involves two stages: 1) an initial instruction shaped by the model based on user-provided unlabeled data, and 2) the model shares the output and the user provides feedback with refinements on outputs and instruction; this iterative process results in a personalized few-shot prompt that performs better and more optimally on the desired task. | [Paper](https://arxiv.org/abs/2408.04560), [Tweet](https://x.com/omarsar0/status/1821981401861718488) |
| 8) **Self-Taught Evaluators** - an approach to improve model-based evaluators using synthetic training data only; it first generates contrasting outputs (good and bad model responses) and trains an LLM-as-a-Judge to produce reasoning traces and final judgments; the self-improvement scheme repeats the training process iteratively using its improved predictions; claims to outperform LLM judges such as GPT-4 and match top-performing reward models trained on labeled examples; improves a strong LLM (Llama3-70B-Instruct) from 75.4 to 88.3 (88.7 with majority vote) on RewardBench (a hedged sketch of one data-building iteration follows this table). | [Paper](https://arxiv.org/abs/2408.02666), [Tweet](https://x.com/omarsar0/status/1820849115607044401) |
| 9) **RAGEval** - proposes a simple framework to automatically generate evaluation datasets for assessing the knowledge usage of different LLMs under different scenarios; it defines a schema from seed documents and then generates diverse documents, which lead to question-answering pairs; the QA pairs are based on both the articles and configurations. | [Paper](https://arxiv.org/abs/2408.01262), [Tweet](https://x.com/omarsar0/status/1820507831491239978) |
| 10) **Survey of Mamba** - provides a systematic review of existing Mamba-based models across domains and tasks; specifically, it focuses on advancements of Mamba-based models, techniques for adapting Mamba to diverse data, applications where Mamba excels, and promising research directions. | [Paper](https://arxiv.org/abs/2408.01129), [Tweet](https://x.com/omarsar0/status/1821556218168549561) |
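A hedged sketch of one Self-Taught Evaluator data-building iteration (paper 8 above); the prompts, verdict format, and filtering rule are assumptions for this toy version, whereas the paper generates contrasting response pairs and reasoning traces at scale, fine-tunes the judge on them, and repeats.

```python
def build_judge_training_data(llm, judge, prompts):
    """Synthesize good/degraded response pairs, keep traces where the judge
    prefers the good response (toy sketch of one iteration)."""
    examples = []
    for p in prompts:
        good = llm(f"Answer well: {p}")
        bad = llm(f"Answer with a plausible but flawed response: {p}")
        trace = judge(
            f"Instruction: {p}\nResponse A: {good}\nResponse B: {bad}\n"
            "Reason step by step, then end with 'Winner: A' or 'Winner: B'."
        )
        if trace.strip().endswith("Winner: A"):   # verdict matches construction
            examples.append({"prompt": p, "trace": trace, "label": "A"})
    return examples  # fine-tune the judge on these, then repeat

if __name__ == "__main__":
    stub_llm = lambda p: "a response"
    stub_judge = lambda p: "Reasoning... Winner: A"
    print(len(build_judge_training_data(stub_llm, stub_judge, ["What is RAG?"])))
```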
## Top ML Papers of the Week (July 29 - August 4) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Meta-Rewarding LLMs** - proposes a self-improving alignment technique (no human supervision) where the LLM judges its own judgements and uses the feedback to improve its judgment skills; shows that leveraging this LLM-as-a-Meta-Judge approach improves the LLM's ability to judge and follow instructions; simply doing self-improvement to generate better responses (act) saturates quickly; this work improves the LLM's ability to judge itself (judge) to avoid issues like reward hacking; in addition to the act and judge roles, a third role called meta-judge is used to evaluate the model's own judgements (a minimal sketch of the three roles follows this table). | [Paper](https://arxiv.org/abs/2407.19594), [Tweet](https://x.com/omarsar0/status/1818680848058585119) |
| 2) **MindSearch** - presents an LLM-based multi-agent framework to perform complex web-information seeking and integration tasks; a web planner effectively decomposes complex queries followed by a web searcher that performs hierarchical information retrieval on the Internet to improve the relevancy of the retrieved information; the planning component is powered by an iterative graph construction which is used to better model complex problem-solving processes; the multi-agent framework handles long context problems better by distributing reasoning and retrieval tasks to specialized agents. | [Paper](https://arxiv.org/abs/2407.20183), [Tweet](https://x.com/omarsar0/status/1818673381069226053) |
| 3) **Improved RAG with Self-Reasoning** - presents an end-to-end self-reasoning framework to improve the reliability and traceability of RAG systems; leverages the reasoning trajectories generated by the LLM itself; the LLM is used to carry out the following 3 processes: 1) relevance-aware: judges the relevance between the retrieved documents and the question, 2) evidence-aware selective: chooses and cites relevant documents, and then automatically selects snippets of key sentences as evidence from the cited documents, and 3) trajectory analysis: generates a concise analysis based on all gathered self-reasoning trajectories generated by the previous 2 processes and then provides the final inferred answer; this method helps the model to be more selective, reason and distinguish relevant and irrelevant documents, therefore improving the accuracy of the overall RAG system; the framework achieves comparable performance to GPT-4 with only 2K training samples (generated by GPT-4). | [Paper](https://arxiv.org/abs/2407.19813), [Tweet](https://x.com/omarsar0/status/1818139150882664696) |
| 4) **Constrained-CoT** - limits the model reasoning output length without sacrificing performance; shows that constraining the reasoning of LLaMA2-70b to 100 words improves the accuracy from 36.01% (CoT) to 41.07% (CCoT) on GSM8K, while reducing the average output length by 28 words. | [Paper](https://arxiv.org/abs/2407.19825), [Tweet](https://x.com/omarsar0/status/1818133220484898992) |
| 5) **Adaptive RAG for Conversational Systems** - develops a gating model that predicts whether a conversational system requires RAG to improve its responses; shows that RAG-based conversational systems have the potential to generate high-quality responses with high generation confidence; it also claims to identify a correlation between the generation's confidence level and the relevance of the augmented knowledge. | [Paper](https://arxiv.org/abs/2407.21712), [Tweet](https://x.com/omarsar0/status/1818843407977959756) |
| 6) **ShieldGemma** - offers a comprehensive suite of LLM-based safety content moderation models built on Gemma 2; includes classifiers for key harm types such as dangerous content, toxicity, hate speech, and more. | [Paper](https://arxiv.org/abs/2407.21772), [Tweet](https://x.com/omarsar0/status/1818837753292853349) |
| 7) **Evaluating Persona Agents** - proposes a benchmark to evaluate persona agent capabilities in LLMs; finds that Claude 3.5 Sonnet only has a 2.97% relative improvement in PersonaScore compared to GPT 3.5 despite being a much more advanced model. | [Paper](https://arxiv.org/abs/2407.18416), [Tweet](https://x.com/omarsar0/status/1817964944949739544) |
| 8) **Machine Unlearning Survey** - provides a comprehensive survey on machine unlearning in generative AI. | [Paper](https://arxiv.org/abs/2407.20516), [Tweet](https://x.com/omarsar0/status/1818476462262906985) |
| 9) **ThinK** - proposes an approach to address inefficiencies in KV cache memory consumption; it focuses on long-context scenarios and the inference side; it presents a query-dependent KV cache pruning method that minimizes attention weight loss while selectively pruning the least significant channels. | [Paper](https://arxiv.org/abs/2407.21018), [Tweet](https://x.com/omarsar0/status/1818474655461621903) |
| 10) **The Art of Refusal** - a survey of the current methods used to achieve refusal in LLMs; provides evaluation benchmarks and metrics used to measure abstention in LLMs. | [Paper](https://arxiv.org/abs/2407.18418), [Tweet](https://x.com/omarsar0/status/1817961056465035596) |
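A minimal sketch of the three roles in the Meta-Rewarding scheme (paper 1 above); the prompt wording and the single-round structure are assumptions, and in the paper the resulting preferences over answers (from the judge) and over judgements (from the meta-judge) are turned into DPO-style training pairs.

```python
def meta_rewarding_step(llm, prompt):
    """One actor / judge / meta-judge round played by the same model (toy sketch)."""
    answer = llm(f"Respond to: {prompt}")
    judgement_a = llm(f"Judge this response to '{prompt}' on a 1-10 scale:\n{answer}")
    judgement_b = llm(f"Judge this response to '{prompt}' on a 1-10 scale:\n{answer}")
    meta_verdict = llm(
        f"Two judgements of the same response:\nA: {judgement_a}\nB: {judgement_b}\n"
        "Which judgement is more accurate and better justified? Answer 'A' or 'B'."
    )
    # Preferences over answers (judge) and over judgements (meta-judge) would
    # be collected into training pairs for the next round.
    return answer, judgement_a, judgement_b, meta_verdict

if __name__ == "__main__":
    stub = lambda p: "A" if "Which judgement" in p else ("8/10" if "Judge" in p else "Paris")
    print(meta_rewarding_step(stub, "What is the capital of France?"))
```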
## Top ML Papers of the Week (July 22 - July 28) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Llama 3.1** - a collection of LLMs that include 8B, 70B, and 405B parameters models; supports eight languages and extends the context window to 128K tokens; performs competitively and in some cases outperforms state-of-the-art models across capabilities like general knowledge, math reasoning, and tool use. | [Paper](https://scontent.fbze2-1.fna.fbcdn.net/v/t39.2365-6/452387774_1036916434819166_4173978747091533306_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=t6egZJ8QdI4Q7kNvgHPkimJ&_nc_ht=scontent.fbze2-1.fna&oh=00_AYCV8TJ9rZquHu-nvz4-TFSZXLmCjer_LVQTms1bFpzHpA&oe=66A5D24D), [Tweet](https://x.com/AIatMeta/status/1815766327463907421) |
| 2) **AlphaProof & AlphaGeometry 2** - solved 4 out of 6 problems in this year’s IMO, the equivalent of a silver-medal score; AlphaProof consists of a Gemini model that automatically translates natural language problem statements into formal statements (i.e., a formalizer network); a solver network then searches for proofs/disproofs and progressively trains itself using AlphaZero to learn to solve even more complex problems; AlphaGeometry 2, a neuro-symbolic hybrid system, proved the geometry problem; it is based on the Gemini model and trained from scratch on large amounts of synthetic data. | [Paper](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/), [Tweet](https://x.com/JeffDean/status/1816498336171753948) |
| 3) **RAG vs. Long-Context LLMs** - compares RAG and long-context (LC) LLMs and finds that LC LLMs outperform RAG on average while RAG is significantly less expensive; proposes Self-Route, which leverages self-reflection to route queries to RAG or LC; reports that Self-Route significantly reduces computational cost while maintaining performance comparable to LC (a short sketch of the routing step follows this table). | [Paper](https://arxiv.org/abs/2407.16833), [Tweet](https://x.com/omarsar0/status/1816495687984709940) |
| 4) **OpenDevin** - presents a platform to develop generalist agents that interact with the world through software; features include 1) an interaction mechanism for interaction between agents, interfaces, and environments, 2) an environment including a sandboxed operating system and web browser available to the agents, 3) interface to create and execute code, 4) multi-agent support, and 5) an evaluation framework. | [Paper](https://arxiv.org/abs/2407.16741), [Tweet](https://x.com/omarsar0/status/1816872317286281688) |
| 5) **LazyLLM** - introduces a novel dynamic token pruning method for efficient long-context LLM inference; it can accelerate the prefilling stage of a Llama 2 7B model by 2.34x and maintain high accuracy; it selectively computes the KV for tokens that are important for the next token prediction in both the prefilling and decoding stages; it allows language models to dynamically select different subsets of tokens from the context in different generation steps, even though they might be pruned in previous steps. | [Paper](https://arxiv.org/abs/2407.14057), [Tweet](https://x.com/omarsar0/status/1815225416409309264) |
| 6) **Teaching LLM Agents to Self-Improve** - claims it is possible to iteratively fine-tune LLMs with the ability to improve their own response over multiple turns with additional environment feedback; the LLM learns to recursively detect and correct its previous mistakes in subsequent iterations; improves the self-improvement abilities of 7B models on reasoning tasks (GSM8K and MATH), attaining an improvement over turns that’s unseen in strong proprietary models. | [Paper](https://arxiv.org/abs/2407.18219), [Tweet](https://x.com/omarsar0/status/1816671382585114855) |
| 7) **Text-to-SQL Survey** - provides a survey on employing LLMs for Text-to-SQL tasks, including prompt engineering techniques, fine-tuning methods, benchmarks, and more. | [Paper](https://arxiv.org/abs/2407.15186), [Tweet](https://x.com/omarsar0/status/1815599057974223015) |
| 8) **MINT-1T** - open-sources a large-scale multimodal interleaved dataset consisting of 1 trillion tokens which has 3.4 billion images; it also includes new sources such as PDFs and ArXiv papers. | [Paper](https://arxiv.org/abs/2406.11271), [Tweet](https://x.com/omarsar0/status/1816250935930142834) |
| 9) **Model Collapse on Synthetic Data** - investigates the effects of training models on recursively generated data; finds that training on model-generated content can cause irreversible defects where the original content distribution disappears; shows that the effect, referred to as model collapse, occurs in LLMs, VAEs, and GMMs; while tested on smaller scale models (~100M params), the authors suggest this effect is highly likely to transfer to larger models over time. | [Paper](https://www.nature.com/articles/s41586-024-07566-y), [Tweet](https://x.com/alexandr_wang/status/1816491442069782925) |
| 10) **Mitigating Hallucination via Generation Constraint** - proposes a new training-free approach to mitigate hallucination in LLMs; they scaled the readout vector that constrains generation in a memory-augmented LLM decoder; recent works claim that LLMs with explicit memory mechanisms can help lower hallucination; this work uses a memory-augmented LLM and constrains generation in the decoder by applying lightweight memory primitives to reduce hallucination. | [Paper](https://arxiv.org/abs/2407.16908), [Tweet](https://x.com/omarsar0/status/1816491986209104104) |
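A short sketch of the Self-Route idea from paper 3 above; the refusal token and prompt wording are assumptions for this toy version. The model first tries to answer from the retrieved chunks and only falls back to feeding the full long context when it declares the chunks insufficient.

```python
def self_route(llm, question, retrieved_chunks, full_context):
    """Route a query to the cheap RAG path or the expensive long-context path."""
    probe = llm(
        "Answer the question using only the provided chunks. If they are "
        "insufficient, reply exactly 'UNANSWERABLE'.\n"
        f"Chunks: {retrieved_chunks}\nQuestion: {question}"
    )
    if probe.strip() != "UNANSWERABLE":
        return probe                                                 # RAG path
    return llm(f"Context: {full_context}\nQuestion: {question}")     # LC path

if __name__ == "__main__":
    stub = lambda p: "UNANSWERABLE" if "Chunks:" in p else "Found in the full context."
    print(self_route(stub, "Who signed the memo?", ["chunk 1", "chunk 2"], "full document text"))
```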
## Top ML Papers of the Week (July 15 - July 21) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Improving Legibility of LLM Outputs** - iteratively trains small verifiers to predict solution correctness, helpful provers to produce correct solutions accepted by the verifier, and sneaky provers that produce incorrect solutions that fool the verifier; this process helps train models that can produce text that is correct and easy to understand by both humans and AI systems which leads to more trustworthy systems. | [Paper](https://arxiv.org/abs/2407.13692), [Tweet](https://x.com/OpenAI/status/1813623470452064432) |
| 2) **SpreadsheetLLM** - presents an efficient encoding method to optimize an LLM’s understanding and reasoning capability on spreadsheets; develops a sheet compressor consisting of structural-anchor-based compression, inverse index translation, and data-format-aware aggregation modules to efficiently compress and encode spreadsheets; in GPT-4’s in-context learning, it improves performance in spreadsheet table detection by 25.6%. | [Paper](https://arxiv.org/abs/2407.09025), [Tweet](https://x.com/_akhaliq/status/1812674543963578794) |
| 3) **Context Embeddings for Efficient Answer Generation in RAG** - proposes an effective context compression method to reduce long contexts and speed up generation time in RAG systems; the long contexts are compressed into a small number of context embeddings, which allow different compression rates that trade off decoding time against generation quality; reduces inference time by up to 5.69x and GFLOPs by up to 22x while maintaining high performance. | [Paper](http://arxiv.org/abs/2407.09252), [Tweet](https://x.com/omarsar0/status/1812937765769867561) |
| 4) **Weak-to-Strong Reasoning** - demonstrates the use of weak supervision to elicit strong reasoning capabilities in LLMs without relying on human annotations or advanced models; reports that strong models can automatically refine their training data without explicitly being trained to do so; enables expanding a model's learning scope and scaling performance on reasoning. | [Paper](https://arxiv.org/abs/2407.13647), [Tweet](https://x.com/omarsar0/status/1814130275485704597) |
| 5) **A Survey of Prompt Engineering Methods in LLMs** - a collection of prompt engineering methods for a variety of NLP tasks. | [Paper](https://arxiv.org/abs/2407.12994), [Tweet](https://x.com/omarsar0/status/1814135222562165104) |
| 6) **Does Refusal Training in LLMs Generalize to the Past Tense?** - finds that simply reformulating an LLM request into past tense can jailbreak many state-of-the-art LLMs; for example "How to make a Molotov cocktail?" can be rephrased as "How did people make a Molotov cocktail?"; on GPT-4o, the attack success rate increases from 1% with direct requests to 88% with past-tense reformulations; concludes that current alignment techniques may not always generalize as intended. | [Paper](https://arxiv.org/abs/2407.11969), [Tweet](https://x.com/maksym_andr/status/1813608842699079750) |
| 7) **Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?** - proposes a framework (NeedleBench) of progressively challenging tasks to assess the long-context retrieval and reasoning capabilities of LLMs; they also present the Ancestral Trace Challenge that increases the need for complex logical reasoning which is common in real-world long-context tasks; their findings suggest that current LLMs struggle to handle reasoning tasks with complex logical relationships, even with texts shorter than 2K tokens. | [Paper](https://arxiv.org/abs/2407.11963), [Tweet](https://x.com/omarsar0/status/1813581074624070109) |
| 8) **Distilling System 2 into System 1** - investigates self-supervised methods to distill high-quality outputs from System 2 techniques and then fine-tune System 1 to match the predictions of the System 2 technique without generating intermediate steps; distilling reasoning into System 1 results in lower inference cost. | [Paper](https://arxiv.org/abs/2407.06023v1), [Tweet](https://x.com/willccbb/status/1813012865454121179) |
| 9) **Exploring Advanced LLMs with LLMSuite** - shares practical tips for developing with and evaluating LLMs; solutions covered range from ReAct to RAG to parameter-efficient methods. | [Paper](https://arxiv.org/abs/2407.12036), [Tweet](https://x.com/omarsar0/status/1813980712346763589) |
| 10) **Beyond Euclid** - provides an illustrated guide and graphical taxonomy of recent advances in non-Euclidean machine learning. | [Paper](https://www.arxiv.org/abs/2407.09468), [Tweet](https://x.com/omarsar0/status/1812927886766010653) |
## Top ML Papers of the Week (July 8 - July 14) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **FlashAttention-3** - adapts FlashAttention to take advantage of modern hardware; the techniques used to speed up attention on modern GPUs include producer-consumer asynchrony, interleaving block-wise matmul and softmax operations, and block quantization and incoherent processing; achieves a 1.5-2.0x speedup on H100 GPUs, with FP16 reaching up to 740 TFLOPs/s (75% utilization) and FP8 reaching close to 1.2 PFLOPs/s. | [Paper](https://tridao.me/publications/flash3/flash3.pdf), [Tweet](https://x.com/tri_dao/status/1811453622070444071) |
| 2) **RankRAG** - introduces a new instruction fine-tuning framework to perform effective context ranking and answer generation to enhance an LLM’s RAG capabilities; it leverages a small ranking dataset to outperform existing expert ranking models; shows that Llama3-RankRAG significantly outperforms Llama3-ChatQA-1.5 and GPT-4 models on nine knowledge-intensive benchmarks. | [Paper](https://arxiv.org/abs/2407.02485v1), [Tweet](https://x.com/_weiping/status/1808551184309104896) |
| 3) **Mixture of A Million Experts** - introduces a parameter-efficient expert retrieval mechanism that leverages the product key technique for sparse retrieval from a million tiny experts; it attempts to decouple computational cost from parameter count by efficiently routing to a very large number of tiny experts through a learned index structure used for routing; demonstrates superior efficiency compared to dense FFW, coarse-grained MoEs, and Product Key Memory (PKM) layers (a sketch of product-key retrieval follows this table). | [Paper](https://arxiv.org/abs/2407.04153), [Tweet](https://x.com/omarsar0/status/1810389538340290724) |
| 4) **Reasoning in LLMs: A Geometric Perspective** - explores the reasoning of LLMs from a geometrical perspective; reports that a higher intrinsic dimension implies greater expressive capacity of the LLM; establishes a connection between the expressive power of LLMs and the density of their self-attention graphs; the analysis demonstrates that the density of these graphs defines the intrinsic dimension of the inputs to the MLP blocks. | [Paper](https://arxiv.org/abs/2407.02678), [Tweet](https://x.com/omarsar0/status/1810329294884741594) |
| 5) **Contextual Hallucinations Mitigation in LLMs** - proposes a new method that detects and significantly reduces contextual hallucinations in LLMs (e.g., reduces by 10% in the XSum summarization task); builds a hallucination detection model based on input features given by the ratio of attention weights on the context vs. newly generated tokens (for each attention head); the hypothesis is that contextual hallucinations are related to the extent to which an LLM attends to the provided contextual information; they also propose a decoding strategy based on their detection method which mitigates the contextual hallucination; the detector can also be transferred across models without the need for retraining. | [Paper](https://arxiv.org/abs/2407.07071), [Tweet](https://x.com/omarsar0/status/1811072508637884750) |
| 6) **RouteLLM** - proposes efficient router models to dynamically select between stronger and weaker LLMs during inference to achieve a balance between cost and performance; the training framework leverages human preference data and data augmentation techniques to boost performance; shown to reduce costs by over 2x in certain cases while maintaining the quality of responses. | [Paper](https://arxiv.org/abs/2406.18665v2), [Tweet](https://x.com/lmsysorg/status/1807812671238258931) |
| 7) **A Survey on Mixture of Experts** - a survey paper on Mixture of Experts (MoE), including the technical details of MoE, open-source implementations, evaluation techniques, and applications of MoE in practice. | [Paper](https://arxiv.org/abs/2407.06204), [Tweet](https://x.com/omarsar0/status/1811127876819026283) |
| 8) **Internet of Agents** - a new framework to address several limitations in multi-agent frameworks such as integrating diverse third-party agents and adaptability to dynamic task requirements; introduces an agent integration protocol, instant messaging architecture design, and dynamic mechanisms for effective collaboration among heterogeneous agents. | [Paper](https://arxiv.org/abs/2407.07061v2), [Tweet](https://x.com/_akhaliq/status/1810872693501157855) |
| 9) **3DGen** - a new pipeline for end-to-end text-to-3D asset generation in under a minute; integrates state-of-the-art components like AssetGen and TextureGen to represent 3D objects in three ways, namely view space, in volumetric space, and in UV space; achieves a win rate of 68% with respect to the single-stage model. | [Paper](https://ai.meta.com/research/publications/meta-3d-gen/), [Tweet](https://x.com/AIatMeta/status/1808157832497488201) |
| 10) **Learning at Test Time** - proposes new sequence modeling layers with linear complexity and an expressive hidden state; the hidden state is itself an ML model that keeps updating even on test sequences; with either a linear model or a two-layer MLP as the hidden state, the layers match or exceed baseline models like Transformers, Mamba, and modern RNNs; the linear variant is faster than a Transformer at 8k context and matches Mamba in wall-clock time. | [Paper](https://arxiv.org/abs/2407.04620), [Tweet](https://x.com/arankomatsuzaki/status/1810148710258508046) |
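
For the Mixture of A Million Experts entry above, the core trick is product-key retrieval: scoring roughly N experts at O(sqrt(N)) cost by splitting the query in two. The sketch below is a hedged illustration of that retrieval step only; the sizes, variable names, and softmax over scores are assumptions, not the authors' implementation.

```python
# Minimal sketch of product-key retrieval over a large expert pool. A query is split
# into two halves; each half is scored against sqrt(N) sub-keys, and the Cartesian
# product of the two top-k lists yields candidate experts, so N experts are covered
# with only O(sqrt(N)) dot products.
import torch

d, n_sub, k = 64, 1024, 8          # query dim, sqrt(num_experts), experts used per token
num_experts = n_sub * n_sub         # ~1M tiny experts

sub_keys_1 = torch.randn(n_sub, d // 2)
sub_keys_2 = torch.randn(n_sub, d // 2)

def retrieve_experts(query: torch.Tensor):
    """Return (indices, routing weights) of the top-k experts for a single query."""
    q1, q2 = query[: d // 2], query[d // 2 :]
    s1, i1 = (q1 @ sub_keys_1.T).topk(k)        # top-k over the first half-keys
    s2, i2 = (q2 @ sub_keys_2.T).topk(k)        # top-k over the second half-keys
    # Scores of the k*k candidate experts are sums of the two half-scores.
    cand_scores = (s1[:, None] + s2[None, :]).reshape(-1)
    cand_ids = (i1[:, None] * n_sub + i2[None, :]).reshape(-1)
    top_scores, top_pos = cand_scores.topk(k)
    return cand_ids[top_pos], torch.softmax(top_scores, dim=-1)

ids, weights = retrieve_experts(torch.randn(d))
print(ids, weights)  # k expert indices out of ~1M, plus their routing weights
```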
## Top ML Papers of the Week (July 1 - July 7) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **APIGen** - presents an automated data generation pipeline to synthesize high-quality datasets for function-calling applications; shows that 7B models trained on curated datasets outperform GPT-4 models and other state-of-the-art models on the Berkeley Function-Calling Benchmark; a dataset consisting of 60K entries is also released to help with research in function-calling enabled agents. | [Paper](https://arxiv.org/pdf/2406.18518), [Tweet](https://x.com/Benioff/status/1808365628551844186) |
| 2) **CriticGPT** - a new model based on GPT-4 that writes critiques for responses generated by ChatGPT; trained with RLHF on a large number of inputs containing deliberately inserted mistakes that it had to critique; built to help human trainers spot mistakes during RLHF; CriticGPT critiques are preferred by trainers over ChatGPT critiques in 63% of cases on naturally occurring bugs. | [Paper](https://cdn.openai.com/llm-critics-help-catch-llm-bugs-paper.pdf), [Tweet](https://x.com/OpenAI/status/1806372369151426673) |
| 3) **Searching for Best Practices in RAG** - shows the best practices for building effective RAG workflows; proposes strategies that focus on performance and efficiency, including emerging multimodal retrieval techniques. | [Paper](https://arxiv.org/abs/2407.01219), [Tweet](https://x.com/omarsar0/status/1808177231342018748) |
| 4) **Scaling Synthetic Data Creation** - curates a collection of 1 billion diverse personas to facilitate the creation of diverse synthetic data for different scenarios; uses a persona-driven data synthesis methodology to generate diverse and distinct data covering a wide range of perspectives; to measure the quality of the synthetic datasets, they perform an out-of-distribution evaluation on MATH: a model fine-tuned on their 1.07M synthesized math problems achieves 64.9% on MATH, matching the performance of gpt-4-turbo-preview at only a 7B scale. | [Paper](https://arxiv.org/abs/2406.20094), [Tweet](https://x.com/omarsar0/status/1807827401122238628) |
| 5) **Self-Evaluation as a Defense Against Adversarial Attacks on LLMs** - proposes the use of self-evaluation to defend against adversarial attacks; uses a pre-trained LLM as an evaluator, a defense that is more effective than fine-tuned models, dedicated safety LLMs, and enterprise moderation APIs; they evaluate different settings such as attacks on the generator only and on the generator + evaluator combined; shows that building a dedicated evaluator can significantly reduce the success rate of attacks (a minimal sketch of the pattern follows this table). | [Paper](https://arxiv.org/abs/2407.03234), [Tweet](https://x.com/omarsar0/status/1809241930963853621) |
| 6) **Agentless** - introduces Agentless, an agentless approach to automatically solving software development problems; it solves 27.3% of GitHub issues on SWE-bench Lite and claims to outperform all other open-source AI-powered software engineering agents. | [Paper](https://arxiv.org/abs/2407.01489), [Tweet](https://x.com/LingmingZhang/status/1808501612056629569) |
| 7) **Adaptable Logical Control for LLMs** - presents the Ctrl-G framework to facilitate control of LLM generations that reliably follow logical constraints; it combines LLMs and Hidden Markov Models to enforce logical constraints (represented as deterministic finite automata); Ctrl-G achieves over 30% higher satisfaction rate in human evaluation compared to GPT-4. | [Paper](https://arxiv.org/abs/2406.13892), [Tweet](https://x.com/HonghuaZhang2/status/1806727439823102325) |
| 8) **LLM See, LLM Do** - closely investigates the effects and effectiveness of synthetic data and how it shapes a model’s internal biases, calibration, attributes, and preferences; finds that LLMs are sensitive towards certain attributes even when the synthetic data prompts appear neutral; demonstrates that it’s possible to steer the generation profiles of models towards desirable attributes. | [Paper](https://arxiv.org/abs/2407.01490), [Tweet](https://x.com/lushimabucoro/status/1808083881632878843) |
| 9) **Summary of a Haystack** - proposes a new task, SummHay, to test a model’s ability to process a haystack of documents and generate a summary that identifies the relevant insights and cites the source documents; reports that long-context LLMs score 20% on the benchmark, which lags the human performance estimate (56%); RAG components are found to boost performance on the benchmark, making it a viable option for holistic RAG evaluation. | [Paper](https://arxiv.org/abs/2407.01370), [Tweet](https://x.com/_philschmid/status/1808420168558649479) |
| 10) **AI Agents That Matter** - analyzes current agent evaluation practices and reveals shortcomings that potentially hinder real-world application; proposes an implementation that jointly optimizes cost and accuracy and a framework to avoid overfitting agents. | [Paper](https://arxiv.org/abs/2407.01502), [Tweet](https://x.com/random_walker/status/1808138818182434955) |
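
For the self-evaluation defense entry above, the overall pattern is simple: a second, pre-trained LLM judges the generator's output before it is returned. The sketch below is a minimal, hedged version of that pattern; `call_llm`, the prompt wording, and the verdict parsing are placeholders, not the paper's exact setup.

```python
# Minimal sketch of the self-evaluation defense pattern: generate, then have a
# pre-trained evaluator LLM veto the response if it looks like harmful assistance.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

EVALUATOR_PROMPT = (
    "You are a safety evaluator. Given a user request and a candidate response, "
    "answer only 'unsafe' if the response provides harmful assistance, otherwise 'safe'.\n\n"
    "Request: {request}\n\nCandidate response: {response}\n\nVerdict:"
)

def guarded_generate(user_request: str) -> str:
    candidate = call_llm(user_request)  # generator output (possibly under adversarial attack)
    verdict = call_llm(EVALUATOR_PROMPT.format(request=user_request, response=candidate))
    if "unsafe" in verdict.lower():
        return "I can't help with that."
    return candidate
```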
## Top ML Papers of the Week (June 24 - June 30) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **ESM3** - a new LLM-based biological model that generates a new green fluorescent protein called esmGFP; builds on a bidirectional transformer, uses masked language models for the objective function, leverages geometric attention to represent atomic coordinates, and applies chain-of-thought prompting to generate fluorescent proteins; estimates that esmGFP represents an equivalent of over 500 million years of natural evolution performed by an evolutionary simulator. | [Paper](https://evolutionaryscale-public.s3.us-east-2.amazonaws.com/research/esm3.pdf), [Tweet](https://x.com/alexrives/status/1805559211394277697) |
| 2) **Gemma 2** - presents a family of open models ranging from 2B to 27B parameters; demonstrates strong capabilities in reasoning, math, and code generation, outperforming models twice its size. | [Paper](https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf), [Tweet](https://x.com/omarsar0/status/1806352449956958501) |
| 3) **LLM Compiler** - a suite of open pre-trained models (7B and 13B parameters) designed for code optimization tasks; it’s built on top of Code Llama and trained on a corpus of 546 billion tokens of LLVM-IR and assembly code; it’s also instruction fine-tuned to interpret compiler behavior; achieves 77% of the optimizing potential of an autotuning search and produces exact disassembly matches 14% of the time. | [Paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization), [Tweet](https://x.com/AIatMeta/status/1806361623831171318) |
| 4) **Enhancing RAG with Long-Context LLMs** - proposes LongRAG, which combines RAG with long-context LLMs to enhance performance; uses a long retriever to significantly reduce the number of extracted units by operating on longer retrieval units; the long reader takes in the long retrieval units and leverages the zero-shot answer extraction capability of long-context LLMs to improve performance of the overall system; claims to achieve 64.3% on HotpotQA (full-wiki), which is on par with the state-of-the-art model. | [Paper](https://arxiv.org/abs/2406.15319), [Tweet](https://x.com/omarsar0/status/1805230323799560199) |
| 5) **Improving Retrieval in LLMs through Synthetic Data** - proposes a fine-tuning approach to improve the accuracy of retrieving information in LLMs while maintaining reasoning capabilities over long-context inputs; the fine-tuning dataset comprises numerical dictionary key-value retrieval tasks (350 samples); finds that this approach mitigates the "lost-in-the-middle" phenomenon and improves performance on both information retrieval and long-context reasoning. | [Paper](https://arxiv.org/abs/2406.19292), [Tweet](https://x.com/omarsar0/status/1806738385039692033) |
| 6) **GraphReader** - proposes a graph-based agent system to enhance the long-context abilities of LLMs; it structures long text into a graph and employs an agent to explore the graph (using predefined functions guided by a step-by-step rational plan) to effectively generate answers for questions; consistently outperforms GPT-4-128k across context lengths from 16k to 256k. | [Paper](https://arxiv.org/abs/2406.14550v1), [Tweet](https://x.com/omarsar0/status/1806802925517218078) |
| 7) **Faster LLM Inference with Dynamic Draft Trees** - presents a context-aware dynamic draft tree to increase the speed of inference; the previous speculative sampling method used a static draft tree for sampling which only depended on position but lacked context awareness; achieves speedup ratios ranging from 3.05x-4.26x, which is 20%-40% faster than previous work; these speedup ratios occur because the new method significantly increases the number of accepted draft tokens. | [Paper](https://arxiv.org/abs/2406.16858), [Tweet](https://x.com/omarsar0/status/1805629496634294760) |
| 8) **Following Length Constraints in Instructions** - presents an approach for dealing with length bias and training instruction-following language models that better follow length-constraint instructions; fine-tunes a model using DPO on a length-instruction-augmented dataset and shows fewer length-constraint violations while keeping response quality high. | [Paper](https://arxiv.org/abs/2406.17744), [Tweet](https://x.com/jaseweston/status/1805771223747481690) |
| 9) **On LLMs-Driven Synthetic Data Generation, Curation, and Evaluation** - survey on LLM-based synthetic data generation, curation, and evaluation. | [Paper](https://arxiv.org/abs/2406.15126), [Tweet](https://x.com/omarsar0/status/1805652404404207919) |
| 10) **Adam-mini** - a new optimizer that uses 45%-50% less memory than AdamW by using far fewer learning rates, while matching or even outperforming it; it carefully partitions parameters into blocks and assigns a single high-quality learning rate to each block; achieves consistent results on language models from 125M to 7B parameters for pre-training, SFT, and RLHF (a minimal sketch of the idea follows this table). | [Paper](https://arxiv.org/abs/2406.16793), [Tweet](https://x.com/arankomatsuzaki/status/1805439246318125299) |
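
For the Adam-mini entry above, the central idea is replacing Adam's per-parameter second moments with one scalar per parameter block. The sketch below is a hedged simplification, assuming one block per parameter tensor and no weight decay, whereas the paper partitions parameters more carefully (e.g., per attention head); it is illustrative only, not the authors' code.

```python
# Minimal sketch of the Adam-mini idea: per-parameter first moments as in Adam, but
# a single second-moment scalar per block, computed from the block's mean squared gradient.
import torch

class AdamMiniSketch:
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        self.params = list(params)
        self.lr, (self.b1, self.b2), self.eps = lr, betas, eps
        self.m = [torch.zeros_like(p) for p in self.params]   # per-parameter momentum
        self.v = [torch.zeros(()) for _ in self.params]       # ONE scalar per block (tensor)
        self.t = 0

    @torch.no_grad()
    def step(self):
        self.t += 1
        for i, (p, m) in enumerate(zip(self.params, self.m)):
            if p.grad is None:
                continue
            g = p.grad
            m.mul_(self.b1).add_(g, alpha=1 - self.b1)
            # Single second-moment estimate shared by every element of this block.
            self.v[i] = self.b2 * self.v[i] + (1 - self.b2) * g.pow(2).mean()
            m_hat = m / (1 - self.b1 ** self.t)
            v_hat = self.v[i] / (1 - self.b2 ** self.t)
            p.add_(m_hat / (v_hat.sqrt() + self.eps), alpha=-self.lr)
```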
## Top ML Papers of the Week (June 17 - June 23) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Claude 3.5 Sonnet** - a new model that achieves state-of-the-art performance on several common benchmarks such as MMLU and HumanEval; it outperforms Claude 3 Opus and GPT-4o on several benchmarks with the exception of math word problem-solving tasks; achieves strong performance on vision tasks which also helps power several new features like image-text transcription and generation of artifacts. | [Paper](https://www.anthropic.com/news/claude-3-5-sonnet), [Tweet](https://x.com/AnthropicAI/status/1803790676988920098) |
| 2) **DeepSeek-Coder-V2** - competes with closed-source models on code and math generation tasks; achieves 90.2% on HumanEval and 75.7% on MATH; these results are higher than GPT-4-Turbo-0409 performance according to their report; includes a 16B and a 236B parameter model, both with 128K context length. | [Paper](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf), [Tweet](https://x.com/omarsar0/status/1803078095219417475) |
| 3) **TextGrad** - a new framework for automatic "differentiation" via text: an LLM provides textual feedback that is backpropagated through a computation graph to improve its individual components; users only specify an objective, with no manual tuning of prompts or components; claims best scores on LeetCodeHard and SoTA performance on GPQA when combined with GPT-4o. | [Paper](https://arxiv.org/abs/2406.07496v1), [Tweet](https://x.com/james_y_zou/status/1800917174124740667) |
| 4) **Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?** - conducts a deep performance analysis of long-context LLMs on in-context retrieval and reasoning; they first present a benchmark with real-world tasks requiring 1M token context; reports that long-context LLMs can rival state-of-the-art retrieval and RAG systems, without any explicit training on the tasks; suggests that compositional reasoning (required in SQL-like tasks) is still challenging for these LLMs; they also highlight the need for continued research on advanced prompting strategies, noting significant performance boosts when applying them to long-context problems. | [Paper](https://arxiv.org/abs/2406.13121), [Tweet](https://x.com/omarsar0/status/1804184820806766875) |
| 5) **PlanRAG** - enhances decision making with a new RAG technique called iterative plan-then-RAG (PlanRAG); involves two steps: 1) an LM generates the plan for decision making by examining data schema and questions and 2) the retriever generates the queries for data analysis; the final step checks if a new plan for further analysis is needed and iterates on previous steps or makes a decision on the data; PlanRAG is found to be more effective than iterative RAG on the proposed Decision QA tasks. | [Paper](https://arxiv.org/abs/2406.12430), [Tweet](https://x.com/omarsar0/status/1803262374574448757) |
| 6) **Mitigating Memorization in LLMs** - presents a modification of the next-token prediction objective called goldfish loss to help mitigate the verbatim generation of memorized training data; it uses a simple technique that excludes a pseudorandom subset of training tokens from the loss at training time; they show that the goldfish loss resists memorization and keeps the model useful; however, it may need to train for longer to learn as effectively from the training data (a minimal sketch follows this table). | [Paper](https://arxiv.org/abs/2406.10209), [Tweet](https://x.com/omarsar0/status/1802729440163647754) |
| 7) **Monte Carlo Tree Self-Refine** - reports GPT-4-level performance on mathematical olympiad problems using an approach that integrates LLMs with Monte Carlo Tree Search; the approach focuses on enhancing mathematical reasoning through capabilities such as systematic exploration, self-refinement, and self-evaluation. | [Paper](https://arxiv.org/abs/2406.07394v2), [Tweet](https://x.com/rohanpaul_ai/status/1801259208341373013) |
| 8) **From RAG to Rich Parameters** - investigates more closely how LLMs utilize external knowledge over parametric information for factual queries; finds that in a RAG pipeline, LLMs take a “shortcut” and display a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. | [Paper](https://arxiv.org/abs/2406.12824), [Tweet](https://x.com/omarsar0/status/1803254134289895555) |
| 9) **Open-Sora** - an open-source video generation model that can generate 16-second 720p videos; it’s a 1.1B parameter model trained on more than 30M samples and now supports image-to-video; presents an enhanced diffusion model and a video compression network for spatial and temporal compression; increases controllability of generations and reduces training costs. | [Paper](https://github.com/hpcaitech/Open-Sora/blob/main/docs/report_03.md), [Tweet](https://x.com/omarsar0/status/1803176105010171957) |
| 10) **Tree Search for Language Model Agents** - proposes an inference-time tree search algorithm for LM agents to perform exploration and enable multi-step reasoning; it’s tested on interactive web environments and applied to GPT-4o to significantly improve performance; demonstrates that performance scales when increasing test-time compute. | [Paper](https://jykoh.com/search-agents/paper.pdf), [Tweet](https://x.com/kohjingyu/status/1803604487216701653) |
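
For the goldfish loss entry above, the mechanism is a deterministic pseudorandom mask that drops roughly 1/k of token positions from the next-token loss. The sketch below is a hedged illustration, assuming labels are already shifted to align with the logits and using Python's built-in hash over the preceding h tokens as the pseudorandom function; the paper's masking variants differ in detail.

```python
# Minimal sketch of a goldfish-style loss: exclude a pseudorandom ~1/k fraction of
# positions from the next-token loss, chosen by hashing the local context so the same
# text is always masked the same way; dropped positions never receive a gradient.
import torch
import torch.nn.functional as F

def goldfish_mask(input_ids: torch.Tensor, k: int = 4, h: int = 13) -> torch.Tensor:
    """Boolean mask over positions; False = exclude from the loss.
    A position is dropped when a hash of the preceding h tokens is 0 mod k."""
    B, T = input_ids.shape
    mask = torch.ones(B, T, dtype=torch.bool)
    for b in range(B):
        for t in range(h, T):
            ctx_hash = hash(tuple(input_ids[b, t - h : t].tolist()))
            if ctx_hash % k == 0:
                mask[b, t] = False
    return mask

def goldfish_loss(logits: torch.Tensor, labels: torch.Tensor, k: int = 4) -> torch.Tensor:
    # logits: [B, T, vocab]; labels: [B, T], assumed already shifted to align with logits.
    mask = goldfish_mask(labels, k=k)
    loss = F.cross_entropy(logits.flatten(0, 1), labels.flatten(), reduction="none")
    return (loss * mask.flatten()).sum() / mask.sum()
```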
## Top ML Papers of the Week (June 10 - June 16) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Nemotron-4 340B** - provides an instruct model to generate high-quality data and a reward model to filter out data on several attributes; demonstrates strong performance on common benchmarks like MMLU and GSM8K; it’s competitive with GPT-4 on several tasks, including high scores in multi-turn chat; a preference dataset is also released along with the base model. | [Paper](https://research.nvidia.com/publication/2024-06_nemotron-4-340b), [Tweet](https://x.com/omarsar0/status/1802024352851878296) |
| 2) **Discovering Preference Optimization Algorithms with LLMs** - proposes LLM-driven objective discovery of state-of-the-art preference optimization algorithms; no human intervention is used and an LLM is prompted to propose and implement preference optimization loss functions based on previously evaluated performance metrics; discovers an algorithm that adaptively combines logistic and exponential losses. | [Paper](https://arxiv.org/abs/2406.08414), [Tweet](https://x.com/SakanaAILabs/status/1801069076003082502) |
| 3) **SelfGoal** - a framework to enhance an LLM-based agent's capabilities to achieve high-level goals; adaptively breaks down a high-level goal into a tree structure of practical subgoals during interaction with the environment; improves performance on various tasks, including competitive, cooperative, and deferred feedback environments. | [Paper](https://arxiv.org/abs/2406.04784), [Tweet](https://x.com/omarsar0/status/1800183982404829457) |
| 4) **Mixture-of-Agents** - an approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents methodology; layers are designed with multiple LLM agents and each agent builds on the outputs of the agents in the previous layer; surpasses GPT-4o on AlpacaEval 2.0, MT-Bench, and FLASK (a minimal sketch follows this table). | [Paper](https://arxiv.org/abs/2406.04692), [Tweet](https://x.com/togethercompute/status/1800536106729157054) |
| 5) **Transformers Meet Neural Algorithmic Reasoners** - a new hybrid architecture that enables tokens in the LLM to cross-attend to node embeddings from a GNN-based neural algorithmic reasoner (NAR); the resulting model, called TransNAR, demonstrates improvements in OOD reasoning across algorithmic tasks. | [Paper](https://arxiv.org/abs/2406.09308), [Tweet](https://x.com/omarsar0/status/1801448036389843228) |
| 6) **Self-Tuning with LLMs** - improves an LLM’s ability to effectively acquire new knowledge from raw documents through self-teaching; the three steps involved are 1) a self-teaching component that augments documents with a set of knowledge-intensive tasks focusing on memorization, comprehension, and self-reflection, 2) uses the deployed model to acquire knowledge from new documents while reviewing its QA skills, and 3) the model is configured to continually learn using only the new documents which helps with thorough acquisition of new knowledge. | [Paper](https://arxiv.org/pdf/2406.06326), [Tweet](https://x.com/omarsar0/status/1800552376513810463) |
| 7) **Sketching as a Visual Chain of Thought** - a framework that enables a multimodal LLM to access a visual sketchpad and tools to draw on the sketchpad; it can equip a model like GPT-4 with the capability to generate intermediate sketches to reason over complex tasks; improves performance on many tasks over strong base models with no sketching; GPT-4o equipped with SketchPad sets a new state of the art on all the tasks tested. | [Paper](https://arxiv.org/abs/2406.09403), [Tweet](https://x.com/omarsar0/status/1801450829234188760) |
| 8) **Mixture of Memory Experts** - proposes an approach to significantly reduce hallucination (10x) by tuning millions of expert adapters (e.g., LoRAs) to learn exact facts and retrieve them from an index at inference time; the memory experts are specialized to ensure faithful and factual accuracy on the data it was tuned on; claims to enable scaling to a high number of parameters while keeping the inference cost fixed. | [Paper](https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf), [Tweet](https://x.com/omarsar0/status/1801638552129700046) |
| 9) **Multimodal Table Understanding** - introduces Table-LLaVA 7B, a multimodal LLM for multimodal table understanding; it’s competitive with GPT-4V and significantly outperforms existing MLLMs on multiple benchmarks; also develops a large-scale dataset MMTab, covering table images, instructions, and tasks. | [Paper](https://arxiv.org/abs/2406.08100), [Tweet](https://x.com/omarsar0/status/1801271773796716646) |
| 10) **Consistent Middle Enhancement in LLMs** - proposes an approach to tune an LLM to effectively utilize information from the middle part of the context; it first proposes a training-efficient method to extend LLMs to longer context lengths (e.g., 4K -> 256K); it uses a truncated Gaussian to encourage sampling from the middle part of the context during fine-tuning; the approach helps to alleviate the so-called "Lost-in-the-Middle" problem in long-context LLMs. | [Paper](https://arxiv.org/abs/2406.07138), [Tweet](https://x.com/omarsar0/status/1800903031736631473) |
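
For the Mixture-of-Agents entry above, the layered aggregation can be sketched in a few lines: each layer's agents answer with the previous layer's responses as references, and a final aggregator synthesizes the last layer. The code below is a hedged sketch; `call_llm`, the prompt templates, and the layer composition are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a Mixture-of-Agents stack: layered agents that each see the
# previous layer's answers, followed by a final aggregator model.
def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def mixture_of_agents(question: str, layers: list[list[str]], aggregator: str) -> str:
    previous_answers: list[str] = []
    for layer_models in layers:
        prompt = question
        if previous_answers:
            refs = "\n\n".join(f"[{i+1}] {a}" for i, a in enumerate(previous_answers))
            prompt = (f"{question}\n\nResponses from other assistants:\n{refs}\n\n"
                      "Synthesize an improved answer.")
        previous_answers = [call_llm(m, prompt) for m in layer_models]
    refs = "\n\n".join(f"[{i+1}] {a}" for i, a in enumerate(previous_answers))
    return call_llm(aggregator,
                    f"{question}\n\nCandidate answers:\n{refs}\n\nWrite the best final answer.")
```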
## Top ML Papers of the Week (June 3 - June 9) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **NLLB** - proposes a massive multilingual model that leverages transfer learning across 200 languages; it’s based on a sparsely Gated Mixture of Experts architecture and trained on data obtained via an approach tailored for low-resource languages; evaluates on over 40K translation directions and achieves an average of 44% improvement in translation quality. | [Paper](https://www.nature.com/articles/s41586-024-07335-x), [Tweet](https://x.com/AIatMeta/status/1798420492774432769) |
| 2) **Extracting Concepts from GPT-4** - proposes a new scalable method based on sparse autoencoders to extract around 16 million interpretable patterns from GPT-4; the method demonstrates predictable scaling and is more efficient than previous techniques. | [Paper](https://openai.com/index/extracting-concepts-from-gpt-4/), [Tweet](https://x.com/OpenAI/status/1798762092528586945) |
| 3) **Mamba-2** - a new architecture that combines state space models (SSMs) and structured attention; it uses 8x larger states and trains 50% faster; the new state space duality layer is more efficient and scalable compared to the approach used in Mamba; it also improves results on tasks that require large state capacity. | [Paper](https://arxiv.org/abs/2405.21060), [Tweet](https://x.com/_albertgu/status/1797651223035904355) |
| 4) **MatMul-free LLMs** - proposes an implementation that eliminates matrix multiplication operations from LLMs while maintaining performance at billion-parameter scales; the performance between full precision Transformers and the MatMul-free models narrows as the model size increases; claims that by using an optimized kernel during inference, memory consumption is reduced by more than 10x. | [Paper](https://arxiv.org/abs/2406.02528), [Tweet](https://x.com/omarsar0/status/1798373841741185261) |
| 5) **Buffer of Thoughts** - presents a thought-augmented reasoning approach to enhance the accuracy, efficiency, and robustness of LLM-based reasoning; it leverages a meta-buffer containing high-level thoughts (thought templates) distilled from problem-solving processes; the relevant thought template is then retrieved and instantiated with task-specific reasoning structures for the thought-augmented reasoning process; it demonstrates SOTA performance on 10 challenging tasks while requiring 12% of the cost of multi-query prompting methods like Tree-of-Thoughts. | [Paper](https://arxiv.org/abs/2406.04271), [Tweet](https://x.com/omarsar0/status/1799113545696567416) |
| 6) **SaySelf** - a training framework to teach LLMs to express more accurate fine-grained confidence estimates and self-reflective rationales; it performs supervised finetuning on a dataset that contains summaries of the differences between multiple reasoning chains; reinforcement learning is then applied to calibrate confidence estimates, encouraging the LLM to produce accurate, high-confidence predictions and penalizing overconfidence in erroneous outputs. | [Paper](https://arxiv.org/abs/2405.20974), [Tweet](https://x.com/omarsar0/status/1797682549608833477) |
| 7) **The Geometry of Concepts in LLMs** - studies the geometry of categorical concepts and how the hierarchical relations between them are encoded in LLMs; finds that simple categorical concepts are represented as simplices by the LLMs and complex concepts are represented as polytopes constructed from direct sums of simplices, which reflect the hierarchical structure. | [Paper](https://arxiv.org/abs/2406.01506), [Tweet](https://x.com/omarsar0/status/1798010546522103898) |
| 8) **Aligning LLMs with Demonstrated Feedback** - proposes a method to align LLMs to a specific setting via a very small number of demonstrations as feedback; it aligns LLM outputs to a user’s demonstrated behaviors and can learn fine-grained style and task alignment across domains; outperforms few-shot prompting, SFT, and self-play methods on the tested benchmarks. | [Paper](https://arxiv.org/abs/2406.00888), [Tweet](https://x.com/arankomatsuzaki/status/1797833884463472653) |
| 9) **Towards Scalable Automated Alignment of LLMs** - provides an overview of methods used for alignment of LLMs; explores the 4 following directions: 1) aligning through inductive bias, 2) aligning through behavior imitation, 3) aligning through model feedback, and 4) aligning through environment feedback. | [Paper](https://arxiv.org/abs/2406.01252), [Tweet](https://x.com/omarsar0/status/1798014572663583165) |
| 10) **AgentGym** - a new framework featuring various environments and tasks for broad, real-time, and concurrent agent exploration; builds a generally capable LLM-based agent with self-evolution abilities and explores its potential beyond previously seen data across tasks and environments. | [Paper](https://arxiv.org/abs/2406.04151), [Tweet](https://x.com/arankomatsuzaki/status/1798904095669121443) |
## Top ML Papers of the Week (May 27 - June 2) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Contextual Position Encoding** - proposes a new position encoding method, CoPE, to enable the position to be conditioned on context by incrementing position only on certain tokens; the position encoding is context-dependent and can represent different levels of position abstraction; the general position encoding method can attend to the i-th particular word, noun, or sentence; improves perplexity on language modeling and coding tasks. | [Paper](https://arxiv.org/abs/2405.18719), [Tweet](https://x.com/jaseweston/status/1795978611784089799) |
| 2) **Symbolic Chain-of-Thought** - proposes a method that improves the logical reasoning capabilities of LLMs by integrating symbolic expressions and logical rules with chain-of-thought (CoT) prompting; the prompting technique is called Symbolic Chain-of-Thought and it’s a fully LLM-based framework with the following key steps: 1) translates natural language context to symbolic format, 2) derives step-by-step plan to solve problems following symbolic logical rules, and 3) uses a verifier to check the translation and reasoning chain. | [Paper](https://arxiv.org/abs/2405.18357), [Tweet](https://x.com/omarsar0/status/1795925943543898157) |
| 3) **Abacus Embeddings** - achieves 99% accuracy on 100-digit addition problems by training on only 20-digit numbers with a single GPU; the main challenge this work addresses is the inability of transformers to track the exact position of digits; they do this by adding an embedding to each digit that encodes its position relative to the start of the number; these gains also transfer to multi-step reasoning tasks that include sorting and multiplication. | [Paper](https://arxiv.org/abs/2405.17399), [Tweet](https://x.com/omarsar0/status/1795552696432202045) |
| 4) **Introduction to Vision-Language Modeling** - presents an introduction to vision-language models along with key details of how they work and how to effectively train these models. | [Paper](https://arxiv.org/abs/2405.17247), [Tweet](https://x.com/AIatMeta/status/1795499770519392499) |
| 5) **GNN-RAG** - combines the language understanding abilities of LLMs with the reasoning abilities of GNNs in a RAG style; the GNN extracts useful and relevant graph information while the LLM takes the information and leverages its capabilities to perform question answering over knowledge graphs (KGQA); GNN-RAG improves vanilla LLMs on KGQA and outperforms or matches GPT-4 performance with a 7B tuned LLM. | [Paper](https://arxiv.org/abs/2405.20139), [Tweet](https://x.com/omarsar0/status/1796578239105679585) |
| 6) **Attention as an RNN** - presents a new attention mechanism that can be trained in parallel (like Transformers) and updated efficiently with new tokens, requiring constant memory usage for inference (like RNNs); the attention formulation is based on the parallel prefix scan algorithm which enables efficient computation of attention’s many-to-many RNN output; achieves comparable performance to Transformers on 38 datasets while being more time- and memory-efficient. | [Paper](https://arxiv.org/abs/2405.13956), [Tweet](https://x.com/iScienceLuvr/status/1793933723756286075) |
| 7) **Aya23** - a family of multilingual language models that can serve up to 23 languages; it intentionally focuses on fewer languages and allocates more capacity to these languages; shows that it can outperform massively multilingual models on those specific languages. | [Paper](https://arxiv.org/abs/2405.15032), [Tweet](https://x.com/CohereForAI/status/1794044201677574446) |
| 8) **Are Long-LLMs A Necessity For Long-Context Tasks?** - claims that long-LLMs are not a necessity to solve long-context tasks; proposes a reasoning framework to enable short-LLMs to address long-context tasks by adaptively accessing and utilizing the context based on the presented tasks; it decomposes the long context into short contexts and processes them using a decision-making process. | [Paper](https://arxiv.org/abs/2405.15318), [Tweet](https://x.com/omarsar0/status/1795188655243264299) |
| 9) **Financial Statement Analysis with LLMs** - claims that LLMs can generate useful insights from analyzing trends and financial ratios in financial statements; shows that GPT-4 performs on par with narrowly specialized models; and achieves a profitable trading strategy based on GPT’s predictions. | [Paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4835311), [Tweet](https://x.com/omarsar0/status/1794120780428546503) |
| 10) **SimPO** - a simpler and more effective approach for preference optimization with a reference-free reward; uses the average log probability of a sequence as an implicit reward (i.e., no reference model required), which makes it more compute- and memory-efficient; demonstrates that it outperforms existing approaches like DPO and claims to produce the strongest 8B open-source model (a minimal sketch of the objective follows this table). | [Paper](https://arxiv.org/abs/2405.14734), [Tweet](https://x.com/rasbt/status/1794711330085036061) |
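
For the SimPO entry above, the objective is compact: the implicit reward is the length-normalized log-probability of a response, and a target margin separates chosen from rejected responses. The sketch below assumes the summed token log-probabilities and response lengths have already been computed under the policy being trained; the hyperparameter values are illustrative.

```python
# Minimal sketch of the SimPO objective: reference-free, length-normalized implicit
# reward with a target margin between chosen and rejected responses.
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps: torch.Tensor,     # summed log-probs of chosen tokens, shape [B]
               rejected_logps: torch.Tensor,   # summed log-probs of rejected tokens, shape [B]
               chosen_lens: torch.Tensor,      # number of tokens in each chosen response
               rejected_lens: torch.Tensor,    # number of tokens in each rejected response
               beta: float = 2.0,
               gamma: float = 1.0) -> torch.Tensor:
    reward_chosen = beta * chosen_logps / chosen_lens      # average log-prob as implicit reward
    reward_rejected = beta * rejected_logps / rejected_lens
    return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()
```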
## Top ML Papers of the Week (May 20 - May 26) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Extracting Interpretable Features from Claude 3 Sonnet** - presents an effective method to extract millions of abstract features from an LLM that represent specific concepts; these concepts can represent people, places, programming abstractions, emotions, and more; reports that some of the discovered features are directly related to the safety aspects of the model, including features for security vulnerabilities and backdoors in code, bias, deception, sycophancy, and dangerous/criminal content; these features can also be used to intuitively steer the model’s output. | [Paper](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html), [Tweet](https://x.com/AnthropicAI/status/1792935506587656625) |
| 2) **Agent Planning with World Knowledge Model** - introduces a parametric world knowledge model to facilitate agent planning; the agent model can self-synthesize knowledge from expert and sampled trajectories; this is used to train the world knowledge model; prior task knowledge is used to guide global planning and dynamic state knowledge is used to guide the local planning; demonstrates superior performance compared to various strong baselines when adopting open-source LLMs like Mistral-7B and Gemma-7B. | [Paper](https://arxiv.org/abs/2405.14205), [Tweet](https://x.com/omarsar0/status/1793851075411296761) |
| 3) **Risks and Opportunities of Open-Source Generative AI** - analyzes the risks and opportunities of open-source generative AI models; argues that the overall benefits of open-source generative AI outweigh its risks. | [Paper](https://arxiv.org/abs/2405.08597), [Tweet](https://x.com/fgirbal/status/1791454665764159794) |
| 4) **Enhancing Answer Selection in LLMs** - proposes a hierarchical reasoning aggregation framework for improving the reasoning capabilities of LLMs; the approach, called Aggregation of Reasoning (AoR), selects answers based on the evaluation of reasoning chains; AoR uses dynamic sampling to adjust the number of reasoning chains with respect to the task complexity; it uses results from the evaluation phase to determine whether to sample additional reasoning chains; a known flaw of majority voting is that it fails in scenarios where the correct answer is in the minority; AoR focuses on evaluating the reasoning chains to improve the selection of the final answer; AoR outperforms various prominent ensemble methods and can be used with various LLMs to improve performance on complex reasoning tasks (a minimal sketch follows this table). | [Paper](https://arxiv.org/abs/2405.12939), [Tweet](https://x.com/omarsar0/status/1793132875237163405) |
| 5) **How Far Are We From AGI** - presents an opinion paper addressing important questions to understand the proximity to artificial general intelligence (AGI); it provides a summary of strategies necessary to achieve AGI which includes a detailed survey, discussion, and original perspectives. | [Paper](https://arxiv.org/abs/2405.10313v1) |
| 6) **Efficient Inference of LLMs** - proposes a layer-condensed KV cache to achieve efficient inference in LLMs; only computes and caches the key-values (KVs) of a small number of layers, which saves memory and improves inference throughput; can achieve up to 26x higher throughput than baseline transformers while maintaining satisfactory performance. | [Paper](https://arxiv.org/abs/2405.10637), [Tweet](https://x.com/arankomatsuzaki/status/1792386318300749848) |
| 7) **Guide for Evaluating LLMs** - provides guidance and lessons for evaluating large language models; discusses challenges and best practices, along with the introduction of an open-source library for evaluating LLMs. | [Paper](https://arxiv.org/abs/2405.14782), [Tweet](https://x.com/omarsar0/status/1793846120600474017) |
| 8) **Scientific Applications of LLMs** - presents INDUS, a comprehensive suite of LLMs for Earth science, biology, physics, planetary sciences, and more; includes an encoder model, embedding model, and small distilled models. | [Paper](https://arxiv.org/abs/2405.10725), [Tweet](https://x.com/omarsar0/status/1792585422465335695) |
| 9) **DeepSeek-Prover** - introduces an approach to generate Lean 4 proof data from high-school and undergraduate-level mathematical competition problems; it uses the synthetic data, comprising 8 million formal statements and proofs, to fine-tune a DeepSeekMath 7B model; achieves whole-proof generation accuracies of 46.3% with 64 samples and 52% cumulatively on the Lean 4 miniF2F test; this surpasses the baseline GPT-4 (23.0%) with 64 samples and a tree search RL method (41.0%). | [Paper](https://arxiv.org/abs/2405.14333), [Tweet](https://x.com/_akhaliq/status/1793864788579090917) |
| 10) **Efficient Multimodal LLMs** - provides a comprehensive and systematic survey of the current state of efficient multimodal large language models; discusses efficient structures and strategies, applications, limitations, and promising future directions. | [Paper](https://arxiv.org/abs/2405.10739v1), [Tweet](https://x.com/omarsar0/status/1794072297260634244) |
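
For the Enhancing Answer Selection (AoR) entry above, the evaluate-then-sample-more loop can be sketched generically: score each reasoning chain with an evaluator and only draw additional chains when no chain is judged confidently correct. The code below is a hedged sketch; `call_llm`, the prompts, and the thresholds are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of reasoning-chain aggregation with dynamic sampling: sample chains,
# have an evaluator score each one, and stop once a chain is judged confidently correct.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def score_chain(question: str, chain: str) -> float:
    verdict = call_llm(f"Question: {question}\nReasoning: {chain}\n"
                       "Rate the reasoning's correctness from 0 to 10. Answer with a number only.")
    try:
        return float(verdict.strip()) / 10.0
    except ValueError:
        return 0.0

def aggregate_reasoning(question: str, threshold: float = 0.8,
                        batch: int = 5, max_chains: int = 20) -> str:
    chains: list[str] = []
    while True:
        chains += [call_llm(f"{question}\nLet's think step by step.") for _ in range(batch)]
        best_chain, best_score = max(((c, score_chain(question, c)) for c in chains),
                                     key=lambda t: t[1])
        if best_score >= threshold or len(chains) >= max_chains:
            return best_chain
```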
## Top ML Papers of the Week (May 13 - May 19) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **GPT-4o** - a new model with multimodal reasoning capabilities and real-time support across audio, vision, and text; it can accept as input any combination of text, audio, image, and video to generate combinations of text, audio, and image outputs; it’s reported to match GPT-4 Turbo performance while being 2x faster and 50% cheaper via the API. | [Paper](https://openai.com/index/hello-gpt-4o/), [Tweet](https://x.com/OpenAI/status/1790072174117613963) |
| 2) **Gemini 1.5 Flash** - a lightweight transformer decoder model with a 2M context window with multimodal capabilities; it is designed for efficiency and yields the fastest output generation of all models on several evaluated languages; overall, Gemini 1.5 Flash performs uniformly better compared to Gemini 1.0 Pro and even performs at a similar level to 1.0 Ultra on several benchmarks. | [Paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), [Tweet](https://x.com/OriolVinyalsML/status/1791521517211107515) |
| 3) **Veo** - Google Deepmind’s most capable video generation model generates high-quality, 1080p resolution videos beyond 1 minute; it supports masked editing on videos and can also generate videos with an input image along with text; the model can extend video clips to 60 seconds and more while keeping consistency with its latent diffusion transformer. | [Paper](https://deepmind.google/technologies/veo/), [Tweet](https://x.com/GoogleDeepMind/status/1790435824598716704) |
| 4) **Chameleon** - a family of token-based mixed-modal models for generating images and text in any arbitrary sequence; reports state-of-the-art performance in image captioning and outperforms Llama 2 in text-only tasks and is also competitive with Mixtral 8x7B and Gemini-Pro; exceeds the performance of Gemini Pro and GPT-4V on a new long-form mixed-modal generation evaluation. | [Paper](https://arxiv.org/abs/2405.09818), [Tweet](https://x.com/AIatMeta/status/1791263344714014733) |
| 5) **Fine-tuning and Hallucinations** - studies how fine-tuning on new knowledge affects the hallucination tendencies of LLMs; the setup includes fine-tuning examples that introduce new knowledge; shows that LLMs struggle to acquire new factual knowledge via fine-tuning; also finds that as new knowledge is learned, the model’s tendency to hallucinate increases. | [Paper](https://arxiv.org/abs/2405.05904), [Tweet](https://x.com/arankomatsuzaki/status/1788859706187882960) |
| 6) **Zero-shot Tokenizer Transfer** - trains a hypernetwork taking a tokenizer as input and predicting the corresponding embeddings; it demonstrates generalization to new tokenizers both with encoder and decoder LLMs; reports that the method achieves performance close to the original models' performance in cross-lingual and coding tasks while reducing the length of the tokenized sequence. | [Paper](https://arxiv.org/abs/2405.07883), [Tweet](https://x.com/bminixhofer/status/1790267652587258343) |
| 7) **WavCraft** - leverages LLMs to connect task-specific models for audio content creation and editing; decomposes users' instructions into several tasks and tackles each task collaboratively with the corresponding module; it can enable users to interact and produce audio content without explicit commands. | [Paper](https://arxiv.org/abs/2403.09527v3) |
| 8) **RLHF Workflow** - provides an easily reproducible recipe for online iterative RLHF; discusses theoretical insights and algorithmic principles of online iterative RLHF and practical implementation. | [Paper](https://arxiv.org/abs/2405.07863v1), [Tweet](https://x.com/CaimingXiong/status/1790379121719361776) |
| 9) **You Only Cache Once** - a decoder-decoder LLM architecture that only caches key-value pairs once; it involves a cross-decoder stacked upon a self-decoder: the self-decoder efficiently encodes global key-value caches and the cross-decoder reuses the cache via cross-attention; this leads to a significant reduction in GPU memory use without sacrificing capabilities; achieves performance comparable to a Transformer across various settings of model size and number of training tokens. | [Paper](https://arxiv.org/abs/2405.05254), [Tweet](https://x.com/arankomatsuzaki/status/1788435838474355098) |
| 10) **CAT3D** - presents a method for creating anything in 3D by simulating the real-world capture process using a multi-view diffusion model; it can generate consistent novel views of a scene which can be used as input to 3D reconstruction techniques to produce 3D representation rendered in real-time; the scene from CAT3D can be generated in less than one minute and is reported to outperform existing methods on single image and few-view 3D scene creation tasks. | [Paper](https://arxiv.org/abs/2405.10314), [Tweet](https://x.com/_akhaliq/status/1791294630614442009) |
## Top ML Papers of the Week (May 6 - May 12) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **AlphaFold 3** - releases a new state-of-the-art model for accurately predicting the structure and interactions of molecules; it can generate the 3D structures of proteins, DNA, RNA, and smaller molecules; the model uses an improved version of the Evoformer module and assembles its predictions using a diffusion network; the diffusion process starts with a cloud of atoms which converges to the final molecular structure. | [Paper](https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/), [Tweet](https://x.com/GoogleDeepMind/status/1788223454317097172) |
| 2) **xLSTM: Extended Long Short-Term Memory** - attempts to scale LSTMs to billions of parameters using the latest techniques from modern LLMs while mitigating common limitations of LSTMs; to give LSTMs the ability to revise storage decisions, they introduce exponential gating and a new memory mixing mechanism (termed sLSTM); to enhance the storage capacities of LSTMs, they add a matrix memory and a covariance update rule (termed mLSTM); both the sLSTM and mLSTM cells stabilize their exponential gates using the same technique; these extensions lead to xLSTM blocks that are residually stacked into the final xLSTM architecture; compared to Transformers, xLSTMs have linear computation and constant memory complexity with respect to the sequence length; the xLSTM architecture is shown to be efficient at handling different aspects of long-context problems; achieves better validation perplexities when compared to different model classes like Transformers, SSMs, and RNNs. | [Paper](https://arxiv.org/abs/2405.04517), [Tweet](https://x.com/omarsar0/status/1788236090265977224) |
| 3) **DeepSeek-V2** - a strong MoE model comprising 236B parameters, of which 21B are activated for each token; supports a context length of 128K tokens and uses Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache into a latent vector; DeepSeek-V2 and its chat versions achieve top-tier performance among open-source models. | [Paper](https://arxiv.org/abs/2405.04434v2), [Tweet](https://x.com/p_nawrot/status/1788479672067481664) |
| 4) **AlphaMath Almost Zero** - enhances LLMs with Monte Carlo Tree Search (MCTS) to improve mathematical reasoning capabilities; the MCTS framework extends the LLM to achieve a more effective balance between exploration and exploitation; for this work, the idea is to generate high-quality math reasoning data without professional human annotations; the assumption is that a well pre-trained LLM already possesses mathematical knowledge to generate reasoning steps but needs better stimulation such as an advanced prompting or search strategy; unlike other methods such as Program-of-thought and Chain-of-thought, no solutions are required for the training data, just the math questions and the answers; the integration of LLMs, a value model, and the MCTS framework enables an effective and autonomous process of generating high-quality math reasoning data; the value model also aids the policy model in searching for effective solution paths. | [Paper](https://arxiv.org/abs/2405.03553), [Tweet](https://x.com/omarsar0/status/1787678940158468283) |
| 5) **DrEureka: Language Model Guided Sim-To-Real Transfer** - investigates using LLMs to automate and accelerate sim-to-real design; it requires the physics simulation for the target task and automatically constructs reward functions and domain randomization distributions to support real-world transfer; discovers sim-to-real configurations competitive with existing human-designed ones on quadruped locomotion and dexterous manipulation tasks. | [Paper](https://eureka-research.github.io/dr-eureka/assets/dreureka-paper.pdf), [Tweet](https://x.com/DrJimFan/status/1786429467537088741) |
| 6) **Consistency LLMs** - proposes efficient parallel decoders that reduce inference latency by decoding an n-token sequence per inference step; the inspiration comes from humans' ability to form complete sentences before articulating them word by word; this process can be mimicked and learned by fine-tuning pre-trained LLMs to perform parallel decoding; the model is trained to map randomly initialized n-token sequences to the same result yielded by autoregressive (AR) decoding in as few steps as possible; a consistency loss helps with multiple-token prediction and a standard AR loss prevents deviation from the target LLM and ensures generation quality; shows 2.4x to 3.4x improvements in generation speed while preserving generation quality. | [Paper](https://arxiv.org/abs/2403.00835), [Tweet](https://x.com/omarsar0/status/1788594039865958762) |
| 7) **Is Flash Attention Stable?** - develops an approach to understanding the effects of numeric deviation and applies it to the widely-adopted Flash Attention optimization; finds that Flash Attention sees roughly an order of magnitude more numeric deviation as compared to Baseline Attention at BF16. | [Paper](https://arxiv.org/abs/2405.02803), [Tweet](https://x.com/arankomatsuzaki/status/1787674624647414168) |
| 8) **Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond** - presents an overview of generative methodologies in video generation, where world models facilitate the synthesis of highly realistic visual content; examines challenges and limitations of world models, and discusses their potential future directions. | [Paper](https://arxiv.org/abs/2405.03520v1), [Tweet](https://x.com/dair_ai/status/1789640682082091442) |
| 9) **MAmmoTH2** - harvests 10 million naturally existing instruction data points from the pre-training web corpus to enhance LLM reasoning; the approach first recalls relevant documents, extracts instruction-response pairs, and then refines the extracted pairs using open-source LLMs; MAmmoTH2-7B's (Mistral) performance increases from 11% to 34% on MATH and from 36% to 67% on GSM8K. | [Paper](https://arxiv.org/abs/2405.03548), [Tweet](https://x.com/xiangyue96/status/1787684680336097645) |
| 10) **Granite Code Models** - introduces Granite, a series of code models trained on code written in 116 programming languages; it consists of models ranging in size from 3 to 34 billion parameters, suitable for applications ranging from application modernization tasks to on-device memory-constrained use cases; demonstrates that the models reach state-of-the-art performance among available open-source code LLMs. | [Paper](https://arxiv.org/abs/2405.04324v1), [Code](https://github.com/ibm-granite/granite-code-models), [Tweet](https://x.com/rohanpaul_ai/status/1788194161495052343) |
## Top ML Papers of the Week (April 29 - May 5) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Kolmogorov-Arnold Networks** - proposes Kolmogorov-Arnold Networks (KANs) as alternatives to Multi-Layer Perceptrons (MLPs); KANs apply learnable activation functions on edges that represent the weights; with no linear weights used, KANs can outperform MLPs and possess faster neural scaling laws; the authors show that KANs can be used as collaborators to help scientists discover mathematics and physical laws. | [Paper](https://arxiv.org/abs/2404.19756), [Tweet](https://x.com/ZimingLiu11/status/1785483967719981538) |
| 2) **Better and Faster LLMs via Multi-token Prediction** - proposes a multi-token prediction approach that performs language modeling by training the model to predict the following n tokens using n independent output heads; the output heads operate on top of a shared transformer trunk; multi-token prediction is shown to be useful at larger model sizes and can speed up inference by up to 3x; the proposed 13B parameter model solves 12% more problems on HumanEval and 17% more on MBPP than comparable next-token models (a minimal sketch of the head layout follows this table). | [Paper](https://arxiv.org/abs/2404.19737), [Tweet](https://x.com/arankomatsuzaki/status/1785486711646040440) |
| 3) **Med-Gemini** - presents a family of multimodal models specialized in medicine and built on the strong multimodal and long-context reasoning capabilities of Gemini; achieves state-of-the-art performance on 10 of 14 benchmarks, surpassing GPT-4 models; it achieves 91% accuracy on the MedQA (USMLE) benchmark using an uncertainty-guided search strategy. | [Paper](https://arxiv.org/abs/2404.18416), [Tweet](https://x.com/iScienceLuvr/status/1785247498744778886) |
| 4) **When to Retrieve?** - presents an approach to train LLMs to effectively utilize information retrieval; it first proposes a training approach to teach an LLM to generate a special token, `<RET>`, when it's not confident or doesn't know the answer to a question; the fine-tuned model outperforms a base LLM in the two fixed alternative settings of never retrieving and always retrieving context. | [Paper](https://arxiv.org/abs/2404.19705), [Tweet](https://x.com/omarsar0/status/1785498325913108556) |
| 5) **A Survey on Retrieval-Augmented Language Models** - covers the most important recent developments in RAG and RAU systems; it includes evolution, taxonomy, and an analysis of applications; there is also a section on how to enhance different components of these systems and how to properly evaluate them; it concludes with a section on limitations and future directions. | [Paper](https://arxiv.org/abs/2404.19543), [Tweet](https://x.com/omarsar0/status/1785666343062184422) |
| 6) **An Open-source LM Specialized in Evaluating Other LMs** - open-sources Prometheus 2 (7B & 8x7B), state-of-the-art open evaluator LLMs that closely mirror human and GPT-4 judgments; they support both direct assessment and pairwise ranking formats combined with user-defined evaluation criteria; according to the experimental results, this open-source model seems to be the strongest among all open evaluator LLMs; the key seems to be in merging evaluator LMs trained on either direct assessment or pairwise ranking formats. | [Paper](https://arxiv.org/abs/2405.01535), [Tweet](https://x.com/omarsar0/status/1786380398966014423) |
| 7) **Self-Play Preference Optimization** - proposes a self-play-based method for aligning language models; this optimization procedure treats the problem as a constant-sum two-player game to identify the Nash equilibrium policy; it addresses the shortcomings of DPO and IPO and effectively increases the log-likelihood of chosen responses and decreases that of rejected ones; SPPO outperforms DPO and IPO on MT-Bench and the Open LLM Leaderboard. | [Paper](https://arxiv.org/abs/2405.00675), [Tweet](https://x.com/QuanquanGu/status/1785903241102049424) |
| 8) **Inner Workings of Transformer Language Models** - presents a technical introduction to current techniques used to interpret the inner workings of Transformer-based language models; it provides a detailed overview of the internal mechanisms implemented in these models. | [Paper](https://arxiv.org/abs/2405.00208), [Tweet](https://x.com/omarsar0/status/1786052338043466162) |
| 9) **Multimodal LLM Hallucinations** - provides an overview of the recent advances in identifying, evaluating, and mitigating hallucination in multimodal LLMs; it also provides an overview of causes, evaluation benchmarks, metrics, and other strategies to deal with challenges related to detecting hallucinations. | [Paper](https://arxiv.org/abs/2404.18930), [Tweet](https://x.com/DuaneJRich/status/1785220190411821111) |
| 10) **In-Context Learning with Long-Context Models** - studies the behavior of in-context learning at extreme context lengths with long-context models; shows that performance keeps increasing as hundreds or thousands of demonstrations are used; demonstrates that long-context ICL is less sensitive to random input shuffling than short-context ICL; concludes that the effectiveness of long-context LLMs is not due to task learning but to attending to similar examples. | [Paper](https://arxiv.org/abs/2405.00200), [Tweet](https://x.com/abertsch72/status/1786392584765538350) |
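
A minimal PyTorch sketch of the multi-token prediction setup described in paper 2 above: several independent output heads share one trunk, and head i is supervised with the token i + 1 positions ahead. The trunk, dimensions, vocabulary size, and number of heads here are placeholder choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTokenPredictor(nn.Module):
    """A shared trunk with n independent output heads, one per future-token offset."""
    def __init__(self, vocab_size=32000, d_model=512, n_future=4):
        super().__init__()
        self.trunk = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in range(n_future))

    def forward(self, hidden):          # hidden: (batch, seq, d_model) token states
        shared = self.trunk(hidden)     # one shared representation feeds every head
        # head i would be trained against the token i + 1 positions ahead of each position
        return [head(shared) for head in self.heads]

logits = MultiTokenPredictor()(torch.randn(2, 16, 512))
print(len(logits), logits[0].shape)     # 4 heads, each (2, 16, 32000)
```

At inference time the extra heads can be dropped (keeping only next-token prediction) or used to propose tokens for self-speculative decoding, which is where the reported speedups come from.
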
## Top ML Papers of the Week (April 22 - April 28) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Phi-3** - a new 3.8B parameter language model called phi-3-mini trained on 3.3 trillion tokens that is reported to rival Mixtral 8x7B and GPT-3.5; has a default context length of 4K but also includes a version extended to 128K (phi-3-mini-128K); combines heavily filtered web data and synthetic data to train the 3.8B model; also reports results on 7B and 14B models trained on 4.8T tokens (phi-3-small and phi-3-medium). | [Paper](https://arxiv.org/abs/2404.14219), [Tweet](https://x.com/omarsar0/status/1782780923806699716) |
| 2) **OpenELM** - a new open language model that employs a layer-wise scaling strategy to efficiently allocate parameters, leading to better efficiency and accuracy; comes in different sizes such as 270M, 450M, 1.1B, and 3B; achieves a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens. | [Paper](https://arxiv.org/abs/2404.14619), [Tweet](https://x.com/rasbt/status/1783480053847736713) |
| 3) **Arctic** - an open-source LLM (Apache 2.0 license) that uses a unique Dense-MoE hybrid transformer architecture; performs on par with Llama 3 70B on enterprise metrics like coding (HumanEval+ & MBPP+), SQL (Spider), and instruction following (IFEval); claims to use a 17x smaller compute budget than Llama 3 70B; the training compute is roughly under $2 million (less than 3K GPU weeks). | [Paper](https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/), [Tweet](https://x.com/omarsar0/status/1783176059694821632) |
| 4) **Make Your LLM Fully Utilize the Context** - presents an approach to overcome the lost-in-the-middle challenge common in LLMs; it applies an explicit "information-intensive" training procedure on Mistral-7B to enable the LLM to fully utilize the context; it leverages a synthetic dataset where the answer requires 1) fine-grained information awareness of a short segment (∼128 tokens) within a synthesized long context (4K−32K tokens), and 2) the integration and reasoning of information from two or more short segments; the resulting model, FILM-7B (Fill-in-the-Middle), can robustly retrieve information from different positions in its 32K context window (a toy sketch of constructing such a synthetic example follows this table). | [Paper](https://arxiv.org/abs/2404.16811), [Tweet](https://x.com/omarsar0/status/1783905514578980949) |
| 5) **FineWeb** - a large-scale web dataset containing 15 trillion tokens for training language models; it filters and deduplicates CommonCrawl snapshots from 2013 to 2024 with the goal of improving data quality. | [Paper](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [Tweet](https://x.com/gui_penedo/status/1781953413938557276) |
| 6) **AI-powered Gene Editors** - achieves precision editing of the human genome with programmable gene editors designed by an AI system powered by an LLM trained on biological diversity at scale. | [Paper](https://www.biorxiv.org/content/10.1101/2024.04.22.590591v1), [Tweet](https://x.com/thisismadani/status/1782510590839406904) |
| 7) **AutoCrawler** - combines LLMs with crawlers to help crawlers handle diverse and changing web environments more efficiently; the web crawler agent leverages the hierarchical structure of HTML for progressive understanding; it employs top-down and step-back operations and leverages the DOM tree structure to generate a complete and executable crawler. | [Paper](https://arxiv.org/abs/2404.12753), [Tweet](https://x.com/omarsar0/status/1782462314983071757) |
| 8) **Graph Machine Learning in the Era of LLMs** - provides a comprehensive overview of the latest advancements for Graph ML in the era of LLMs; covers the recent developments in Graph ML, how LLM can enhance graph features, and how it can address issues such as OOD and graph heterogeneity. | [Paper](https://arxiv.org/abs/2404.14928), [Tweet](https://x.com/omarsar0/status/1783171591020392886) |
| 9) **Self-Evolution of LLMs** - provides a comprehensive survey on self-evolution approaches in LLMs. | [Paper](https://arxiv.org/abs/2404.14387), [Tweet](https://x.com/omarsar0/status/1782777977526231440) |
| 10) **Naturalized Execution Tuning (NExT)** - trains an LLM to inspect the execution traces of programs and reason about run-time behavior via synthetic chain-of-thought rationales; improves the fix rate of a PaLM 2 model on MBPP and HumanEval by 26.1% and 14.3%; the model also shows that it can generalize to unseen scenarios. | [Paper](https://arxiv.org/abs/2404.14662), [Tweet](https://x.com/AnsongNi/status/1783311827390070941) |
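
A toy Python sketch of the kind of synthetic "information-intensive" training example described in paper 4 above: a short key segment is placed at a random position inside a long filler context, and the question can only be answered by reading that segment. The filler text, segment format, and lengths are made-up stand-ins, not the paper's data pipeline.

```python
import random

def make_example(context_tokens=4096, segment_tokens=128, seed=0):
    rng = random.Random(seed)
    filler = ["lorem"] * context_tokens                     # stand-in for background text
    key, value = f"key-{rng.randint(0, 9999)}", rng.randint(0, 10**6)
    segment = f"The secret value of {key} is {value}.".split()
    segment += ["pad"] * (segment_tokens - len(segment))    # pad the segment to ~128 tokens

    pos = rng.randint(0, context_tokens - segment_tokens)   # drop the segment anywhere,
    filler[pos:pos + segment_tokens] = segment              # including the middle
    return {
        "context": " ".join(filler),
        "question": f"What is the secret value of {key}?",
        "answer": str(value),
    }

ex = make_example()
print(ex["question"], "->", ex["answer"])
```
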
## Top ML Papers of the Week (April 15 - April 21) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Llama 3** - a family of LLMs that includes 8B and 70B pretrained and instruction-tuned models; Llama 3 8B outperforms Gemma 7B and Mistral 7B Instruct; Llama 3 70B broadly outperforms Gemini Pro 1.5 and Claude 3 Sonnet. | [Paper](https://ai.meta.com/blog/meta-llama-3/?utm_source=twitter&utm_medium=organic_social&utm_content=video&utm_campaign=llama3), [Tweet](https://x.com/AIatMeta/status/1780997403979735440) |
| 2) **Mixtral 8x22B** - a new open-source sparse mixture-of-experts model that reports that compared to the other community models, it delivers the best performance/cost ratio on MMLU; shows strong performance on reasoning, knowledge retrieval, maths, and coding. | [Paper](https://mistral.ai/news/mixtral-8x22b/), [Tweet](https://x.com/MistralAILabs/status/1780596888473072029) |
| 3) **Chinchilla Scaling: A replication attempt** - attempts to replicate the third estimation procedure of the compute-optimal scaling law proposed in Hoffmann et al. (2022) (i.e., Chinchilla scaling); finds that “the reported estimates are inconsistent with their first two estimation methods, fail at fitting the extracted data, and report implausibly narrow confidence intervals” (a toy example of fitting the parametric loss law follows this table). | [Paper](https://arxiv.org/abs/2404.10102), [Tweet](https://x.com/tamaybes/status/1780639257389904013) |
| 4) **How Faithful are RAG Models?** - aims to quantify the tug-of-war between RAG and LLMs' internal prior; it focuses on GPT-4 and other LLMs on question answering for the analysis; finds that providing correct retrieved information fixes most of the model mistakes (94% accuracy); when the documents contain more incorrect values and the LLM's internal prior is weak, the LLM is more likely to recite incorrect information; the LLMs are found to be more resistant when they have a stronger prior. | [Paper](https://arxiv.org/abs/2404.10198), [Tweet](https://x.com/omarsar0/status/1780613738585903182) |
| 5) **A Survey on Retrieval-Augmented Text Generation for LLMs** - presents a comprehensive overview of the RAG domain, its evolution, and challenges; it includes a detailed discussion of four important aspects of RAG systems: pre-retrieval, retrieval, post-retrieval, and generation. | [Paper](https://arxiv.org/abs/2404.10981), [Tweet](https://x.com/omarsar0/status/1780961995178594324) |
| 6) **The Illusion of State in State-Space Models** - investigates the expressive power of state space models (SSMs) and reveals that, like transformers, they are limited in that they cannot express computation outside the complexity class 𝖳𝖢^0; finds that SSMs cannot solve state-tracking problems like permutation composition or tasks such as evaluating code or tracking entities in a long narrative. | [Paper](https://arxiv.org/abs/2404.08819), [Tweet](https://x.com/lambdaviking/status/1780246351520887281) |
| 7) **Reducing Hallucination in Structured Outputs via RAG** - discusses how to deploy an efficient RAG system for structured output tasks; the RAG system combines a small language model with a very small retriever; it shows that RAG can enable deploying powerful LLM-powered systems in limited-resource settings while mitigating issues like hallucination and increasing the reliability of outputs. | [Paper](https://arxiv.org/abs/2404.08189), [Tweet](https://x.com/omarsar0/status/1779896289745846778) |
| 8) **Emerging AI Agent Architectures** - presents a concise summary of emerging AI agent architectures; it focuses the discussion on capabilities like reasoning, planning, and tool calling which are all needed to build complex AI-powered agentic workflows and systems; the report includes current capabilities, limitations, insights, and ideas for future development of AI agent design. | [Paper](https://arxiv.org/abs/2404.11584), [Tweet](https://x.com/omarsar0/status/1780958785785200756) |
| 9) **LM In-Context Recall is Prompt Dependent** - analyzes the in-context recall performance of different LLMs using several needle-in-a-haystack tests; shows various LLMs recall facts at different lengths and depths; finds that a model's recall performance is significantly affected by small changes in the prompt; the interplay between prompt content and training data can degrade the response quality; the recall ability of a model can be improved with increasing size, enhancing the attention mechanism, trying different training strategies, and applying fine-tuning. | [Paper](https://arxiv.org/abs/2404.08865), [Tweet](https://x.com/omarsar0/status/1780244042007122129) |
| 10) **A Survey on State Space Models** - a survey paper on state space models (SSMs) with experimental comparison and analysis; it reviews current SSMs, improvements compared to alternatives, challenges, and their applications. | [Paper](https://arxiv.org/abs/2404.09516), [Tweet](https://x.com/omarsar0/status/1781430319926686190) |
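
For paper 3 above, the functional form being replicated is the Chinchilla parametric loss law L(N, D) = E + A/N^α + B/D^β. The sketch below fits it to synthetic (parameters, tokens, loss) triples with `scipy.optimize.curve_fit`; the generating coefficients are only roughly the published Chinchilla values and are used purely to create fake data, so this illustrates the fit itself, not the paper's estimation procedure or its critique of the confidence intervals.

```python
import numpy as np
from scipy.optimize import curve_fit

def chinchilla_loss(ND, E, A, B, alpha, beta):
    N, D = ND
    return E + A / N**alpha + B / D**beta

rng = np.random.default_rng(0)
N = rng.uniform(1e8, 7e10, 200)          # model parameters
D = rng.uniform(1e9, 2e12, 200)          # training tokens
true = dict(E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28)    # roughly the published values
L = chinchilla_loss((N, D), **true) + rng.normal(0, 0.01, 200)  # synthetic noisy losses

popt, _ = curve_fit(chinchilla_loss, (N, D), L,
                    p0=[1.5, 300.0, 300.0, 0.3, 0.3], maxfev=50000)
print(dict(zip(["E", "A", "B", "alpha", "beta"], np.round(popt, 3))))
```
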
## Top ML Papers of the Week (April 8 - April 14) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Leave No Context Behind** - integrates compressive memory into a vanilla dot-product attention layer; the goal is to enable Transformer LLMs to effectively process infinitely long inputs with a bounded memory footprint and computation; proposes a new attention technique called Infini-attention which incorporates a compressive memory module into a vanilla attention mechanism; it builds both masked local attention and long-term linear attention into a single Transformer block, allowing the Infini-Transformer model to efficiently handle both long- and short-range contextual dependencies; outperforms baseline models on long-context language modeling with a 114x compression ratio of memory (a rough sketch of the compressive-memory read/write follows this table). | [Paper](https://arxiv.org/abs/2404.07143), [Tweet](https://x.com/omarsar0/status/1778480897198612839) |
| 2) **OpenEQA** - proposes an open-vocabulary benchmark dataset to measure the capabilities of AI models to perform embodied question answering (EQA); it contains 1600 human-generated questions composed from 180 real-world environments; also provides an LLM-powered evaluation protocol for the task and shows that models like GPT-4V are significantly behind human-level performance. | [Paper](https://open-eqa.github.io/assets/pdfs/paper.pdf), [Tweet](https://x.com/AIatMeta/status/1778425321118732578) |
| 3) **CodeGemma** - a family of open code LLMs based on Gemma; CodeGemma 7B models excel in mathematical reasoning and match the code capabilities of other open models; the instruction-tuned CodeGemma 7B model is the more powerful model for Python coding as assessed via the HumanEval benchmark; results also suggest that the model performs best on GSM8K among 7B models; the CodeGemma 2B model achieves SoTA code completion and is designed for fast code infilling and deployment in latency-sensitive settings. | [Paper](https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf), [Tweet](https://x.com/omarsar0/status/1777723836202467713) |
| 4) **LM-Guided Chain-of-Thought** - applies knowledge distillation to a small LM with rationales generated by the large LM with the hope of narrowing the gap in reasoning capabilities; the rationale is generated by the lightweight LM and the answer prediction is then left to the frozen large LM; this resource-efficient approach avoids the need to fine-tune the large model and instead offloads rationale generation to the small language model; the knowledge-distilled LM is further optimized with reinforcement learning using several rationale-oriented and task-oriented reward signals; the LM-guided CoT prompting approach proposed in this paper outperforms both standard prompting and CoT prompting; self-consistency decoding further enhances performance. | [Paper](https://arxiv.org/abs/2404.03414), [Tweet](https://x.com/omarsar0/status/1777755819150373121) |
| 5) **Best Practices and Lessons on Synthetic Data** - an overview by Google DeepMind on synthetic data research, covering applications, challenges, and future directions; discusses important topics when working with synthetic data such as ensuring quality, factuality, fidelity, unbiasedness, trustworthiness, privacy, and more. | [Paper](https://arxiv.org/abs/2404.07503), [Tweet](https://x.com/omarsar0/status/1778804848038683066) |
| 6) **Reasoning with Intermediate Revision and Search** - presents an approach for general reasoning and search on tasks that can be decomposed into components; the proposed graph-based framework, THOUGHTSCULPT, incorporates iterative self-revision capabilities and allows an LLM to build an interwoven network of thoughts; unlike other approaches such as Tree-of-Thoughts that shape the reasoning process using a tree, this new approach incorporates Monte Carlo Tree Search (MCTS) to efficiently navigate the search space; due to its ability for continuous thought iteration, THOUGHTSCULPT is particularly suitable for tasks such as open-ended generation, multi-step reasoning, and creative ideation. | [Paper](https://arxiv.org/abs/2404.05966), [Tweet](https://x.com/omarsar0/status/1777896810805186757) |
| 7) **Overview of Multilingual LLMs** - a survey of multilingual LLMs including a thorough review of methods, a taxonomy, emerging frontiers, challenges, and resources to advance research. | [Paper](https://arxiv.org/abs/2404.04925), [Tweet](https://x.com/omarsar0/status/1778063103906771105) |
| 8) **The Physics of Language Models** - investigates knowledge capacity scaling laws where it evaluates a model’s capability via loss or benchmarks, to estimate the number of knowledge bits a model stores; reports that "Language models can and only can store 2 bits of knowledge per parameter, even when quantized to int8, and such knowledge can be flexibly extracted for downstream applications. Consequently, a 7B model can store 14B bits of knowledge, surpassing the English Wikipedia and textbooks combined based on our estimation." | [Paper](https://arxiv.org/abs/2404.05405), [Tweet](https://x.com/omarsar0/status/1777709227319968034) |
| 9) **Aligning LLMs to Quote from Pre-Training Data** - proposes techniques to align LLMs to quote verbatim from memorized pre-training data; the alignment approach not only generates high-quality quoted verbatim statements but also preserves overall response quality; it leverages a synthetic preference dataset for quoting without any human annotation and aligns the target model to quote using preference optimization. | [Paper](https://arxiv.org/abs/2404.03862), [Tweet](https://x.com/omarsar0/status/1777408054402646433) |
| 10) **The Influence Between NLP and Other Fields** - aims to quantify the degree of influence between 23 fields of study and NLP; the cross-field engagement of NLP has declined from 0.58 in 1980 to 0.31 in 2022; the study also finds that NLP citations are dominated by CS which accounts for over 80% of citations with emphasis on AI, ML, and information retrieval; overall, NLP is growing more insular -- higher growth of intra-field citation and a decline in multidisciplinary works. | [Paper](https://aclanthology.org/2023.emnlp-main.797/), [Tweet](https://x.com/omarsar0/status/1777337237794955586) |
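
A rough numpy sketch of the compressive-memory read/write behind paper 1 above, using a linear-attention-style rule (an ELU + 1 feature map, an additive key-value memory, and a running normalization term). This leaves out the delta-rule update, the local attention branch, and the learned gate that mixes the two, so treat it as an illustration of the memory mechanics only; the dimensions and segment sizes are arbitrary.

```python
import numpy as np

def phi(x):
    """ELU + 1 feature map, so all entries are non-negative."""
    return np.where(x > 0, x + 1.0, np.exp(x))

class CompressiveMemory:
    """Fixed-size associative memory written to and read from with a linear-attention rule."""
    def __init__(self, d_key, d_value):
        self.M = np.zeros((d_key, d_value))   # memory matrix
        self.z = np.zeros(d_key)              # normalization term

    def read(self, Q):                        # retrieve long-term context for new queries
        s = phi(Q)
        return (s @ self.M) / (s @ self.z)[:, None]

    def write(self, K, V):                    # compress a segment's keys and values
        self.M += phi(K).T @ V
        self.z += phi(K).sum(axis=0)

rng = np.random.default_rng(0)
mem = CompressiveMemory(d_key=64, d_value=64)
for _ in range(3):                            # stream segments into the fixed-size memory
    mem.write(rng.standard_normal((128, 64)), rng.standard_normal((128, 64)))
print(mem.read(rng.standard_normal((16, 64))).shape)   # (16, 64)
```
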
## Top ML Papers of the Week (April 1 - April 7) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Many-shot Jailbreaking** - proposes a jailbreaking technique called many-shot jailbreaking to evade the safety guardrails of LLMs; this jailbreaking technique exploits the longer context window supported by many modern LLMs; it includes a very large number of faux dialogues (~256) preceding the final question which effectively steers the model to produce harmful responses. | [Paper](https://www.anthropic.com/research/many-shot-jailbreaking), [Tweet](https://x.com/AnthropicAI/status/1775211248239464837) |
| 2) **SWE-Agent** - a new open-source agentic system that can automatically solve GitHub issues with similar accuracy as Devin on the SWE-bench; the agent interacts with a specialized terminal and enables important processing of files and executable tests to achieve good performance; on SWE-bench, SWE-agent resolves 12.29% of issues, achieving the state-of-the-art performance on the full test set. | [Paper](https://github.com/princeton-nlp/SWE-agent), [Tweet](https://x.com/jyangballin/status/1775114444370051582) |
| 3) **Mixture-of-Depths** - demonstrates that transformer models can learn to dynamically and efficiently allocate FLOPs to specific positions in a sequence; this helps optimize the allocation along the sequence for different layers across model depth; findings suggest that for a given FLOP budget models can be trained to perform faster and better than their baseline counterparts (a toy sketch of the top-k token routing follows this table). | [Paper](https://arxiv.org/abs/2404.02258), [Tweet](https://x.com/TheSeaMouse/status/1775782800362242157) |
| 4) **Long-Context LLMs Struggle with Long In-Context Learning** - evaluates 13 long-context LLMs on long in-context learning and finds that they perform relatively well under a token length of 20K; however, once the context exceeds 20K tokens, the performance of most LLMs, except GPT-4, drops dramatically. | [Paper](https://arxiv.org/abs/2404.02060), [Tweet](https://x.com/omarsar0/status/1775638933377786076) |
| 5) **Visualization-of-Thought** - inspired by the human cognitive capacity to imagine unseen worlds, this work proposes Visualization-of-Thought (VoT) prompting to elicit spatial reasoning in LLMs; VoT enables LLMs to "visualize" their reasoning traces, creating internal mental images that help guide subsequent reasoning steps; when tested on multi-hop spatial reasoning tasks like visual tiling and visual navigation, VoT outperforms existing multimodal LLMs. | [Paper](https://arxiv.org/abs/2404.03622), [Tweet](https://x.com/omarsar0/status/1776082343813403063) |
| 6) **The Unreasonable Ineffectiveness of the Deeper Layers** - finds that a simple layer-pruning strategy applied to popular open-weight pretrained LLMs shows minimal performance degradation until a large fraction (up to half) of the layers are removed; optimal blocks to prune are identified using a layer-similarity measure, followed by a small amount of fine-tuning to heal the damage. | [Paper](https://arxiv.org/abs/2403.17887v1), [Tweet](https://x.com/AlphaSignalAI/status/1774858806817906971) |
| 7) **JetMoE** - an 8B model trained for less than $0.1 million that outperforms LLaMA2-7B; shows that LLM training can be much cheaper than generally thought; JetMoE-8B has 24 blocks where each block has two MoE layers: Mixture of Attention heads (MoA) and Mixture of MLP Experts (MoE); each MoA and MoE layer has 8 experts, and 2 experts are activated for each input token, giving 2.2B active parameters. | [Paper](https://research.myshell.ai/jetmoe), [Tweet](https://x.com/omarsar0/status/1775971009469768104) |
| 8) **Representation Finetuning for LMs** - proposes a method for representation fine-tuning (ReFT) that operates on a frozen base model and learns task-specific interventions on hidden representations; in other words, by manipulating a small fraction of model representations it is possible to effectively steer model behavior to achieve better downstream performance at inference time; also proposes LoReFT as a drop-in replacement for PEFTs that is 10-50x more parameter efficient. | [Paper](https://arxiv.org/abs/2404.03592), [Tweet](https://x.com/arankomatsuzaki/status/1776057023697731913) |
| 9) **Advancing LLM Reasoning** - proposes a suite of LLMs (Eurus) optimized for reasoning, achieving SoTA among open-source models on tasks such as mathematics and code generation; Eurus-70B outperforms GPT-3.5 Turbo in reasoning largely due to a newly curated, high-quality alignment dataset designed for complex reasoning tasks; the data includes instructions with preference trees consisting of reasoning chains, multi-turn interactions, and pairwise data for preference learning. | [Paper](https://github.com/OpenBMB/Eurus/blob/main/paper.pdf), [Tweet](https://x.com/lifan__yuan/status/1775217887701278798) |
| 10) **Training LLMs over Neurally Compressed Text** - explores training LLMs over neurally compressed text; the proposed compression technique segments text into blocks that each compress to the same bit length; the approach improves at scale and outperforms byte-level baselines on both perplexity and inference speed benchmarks; latency is reduced thanks to the shorter sequence length. | [Paper](https://arxiv.org/abs/2404.03626), [Tweet](https://x.com/arankomatsuzaki/status/1776055420848631814) |
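
A toy PyTorch sketch of the routing idea in paper 3 above: a learned router scores every token, only the top-k tokens per sequence pass through the expensive block, and the remaining tokens skip it via the residual stream. The block, router, and capacity are placeholders; the paper additionally weights the block output by the router score so the routing decision stays differentiable, a detail omitted here.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Process only the top-k tokens with the block; the rest pass through unchanged."""
    def __init__(self, d_model=512, capacity=0.25):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.router = nn.Linear(d_model, 1)
        self.capacity = capacity

    def forward(self, x):                         # x: (batch, seq, d_model)
        scores = self.router(x).squeeze(-1)       # (batch, seq) routing scores
        k = max(1, int(self.capacity * x.shape[1]))
        idx = scores.topk(k, dim=1).indices       # tokens that receive full compute
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, x.shape[-1])
        selected = torch.gather(x, 1, gather_idx)
        out = x.clone()                           # unselected tokens keep their input
        out.scatter_(1, gather_idx, self.block(selected))
        return out

print(MoDBlock()(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```
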
## Top ML Papers of the Week (March 26 - March 31) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **DBRX** - a new 132B parameter open LLM that outperforms established open-source models on common benchmarks like MMLU and GSM8K; DBRX was pretrained on 12T tokens (text and code) and uses a mixture-of-experts (MoE) architecture; its inference is up to 2x faster than LLaMA2-70B and it is about 40% of the size of Grok-1 in terms of both total and active parameter counts; there is also DBRX Instruct which demonstrates good performance in programming and mathematics; while DBRX is trained as a general-purpose LLM, it still surpasses CodeLLaMA-70B Instruct, a model built explicitly for code generation. | [Paper](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm), [Tweet](https://x.com/omarsar0/status/1773018193885303266?s=20) |
| 2) **Grok-1.5** - xAI’s latest long-context LLM with advanced understanding, reasoning, and problem-solving capabilities; Grok-1.5 achieved a 50.6% score on the MATH benchmark and a 90% score on the GSM8K benchmark; the model can process long contexts of up to 128K tokens and demonstrates powerful retrieval capabilities. | [Paper](https://x.ai/blog/grok-1.5), [Tweet](https://x.com/xai/status/1773510159740063860?s=20) |
| 3) **SEEDS** - a generative AI model based on diffusion models that shows powerful capabilities to quantify uncertainty in weather forecasting; it can generate a large ensemble conditioned on as few as one or two forecasts from an operational numerical weather prediction system. | [Paper](https://www.science.org/doi/10.1126/sciadv.adk4489), [Tweet](https://x.com/GoogleAI/status/1773774362413355099?s=20) |
| 4) **LLMs for University-Level Coding Course** - finds that the latest LLMs have not surpassed human proficiency in physics coding assignments; also finds that GPT-4 significantly outperforms GPT-3.5 and prompt engineering can further enhance performance. | [Paper](https://arxiv.org/abs/2403.16977), [Tweet](https://x.com/omarsar0/status/1772647466820685895?s=20) |
| 5) **Mini-Gemini** - a simple framework to enhance multi-modality vision models; specifically, visual tokens are enhanced through an additional visual encoder for high-resolution refinement without token increase; achieves top performance in several zero-shot benchmarks and even surpasses the developed private models. | [Paper](https://arxiv.org/abs/2403.18814v1), [Tweet](https://x.com/_akhaliq/status/1773170068521713713?s=20) |
| 6) **Long-form factuality in LLMs** - investigates long-form factuality in the open domain by generating a prompt set of questions spanning 38 topics; also proposes an LLM-based agent to perform automated evaluation for the task; finds that LLM agents can achieve superhuman rating performance and are reported to be 20 times cheaper than human annotators. | [Paper](https://arxiv.org/abs/2403.18802v1), [Tweet](https://x.com/JerryWeiAI/status/1773402343301877960?s=20) |
| 7) **Agent Lumos** - a unified framework for training open-source LLM-based agents; it consists of a modular architecture with a planning module that can learn subgoal generation and a module trained to translate them to action with tool usage. | [Paper](https://arxiv.org/abs/2311.05657), [Tweet](https://x.com/Wade_Yin9712/status/1773792306791055397?s=20) |
| 8) **AIOS** - an LLM agent operating system that integrates LLMs into operating systems as a brain; the system optimizes resource allocation, facilitates context switching, enables concurrent execution of agents, provides tool services, and maintains access control for agents. | [Paper](https://arxiv.org/abs/2403.16971v2), [Tweet](https://x.com/arankomatsuzaki/status/1772460132745547976?s=20) |
| 9) **FollowIR** - a dataset containing an instruction evaluation benchmark and a separate set for teaching information retrieval models to follow real-world instructions; a FollowIR-7B model shows significant improvements (over 13%) after fine-tuning on the training set. | [Paper](https://arxiv.org/abs/2403.15246), [Tweet](https://x.com/arankomatsuzaki/status/1772082608609833127?s=20) |
| 10) **LLM2LLM** - an iterative data augmentation strategy that leverages a teacher LLM to enhance a small seed dataset by generating additional data that can be used to effectively fine-tune models; it significantly enhances the performance of LLMs in the low-data regime, outperforming both traditional fine-tuning and other data augmentation baselines (a high-level sketch of the augmentation loop follows this table). | [Paper](https://arxiv.org/abs/2403.15042), [Tweet](https://x.com/arankomatsuzaki/status/1772078585903219007?s=20) |
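
A high-level Python sketch of the teacher-driven augmentation loop in paper 10 above. `finetune`, `evaluate`, and `teacher_generate_similar` are stubs standing in for a training run, an evaluation harness, and a call to the teacher LLM; none of them are real APIs, and only the loop structure is being illustrated.

```python
import random

def finetune(model, data):                        # stub: a real run would fine-tune here
    return model

def evaluate(model, data):                        # stub: return examples the model gets wrong
    return random.sample(data, k=max(1, len(data) // 4))

def teacher_generate_similar(teacher, example):   # stub for a teacher-LLM generation call
    return [f"{example} (teacher variation {i})" for i in range(2)]

def llm2llm_loop(seed_data, student="student-7b", teacher="teacher-70b", rounds=3):
    """Iteratively grow a small seed set with teacher-generated variants of hard examples."""
    data = list(seed_data)
    for _ in range(rounds):
        student = finetune(student, data)
        hard = evaluate(student, seed_data)       # where the student still fails
        for ex in hard:
            data.extend(teacher_generate_similar(teacher, ex))
    return student, data

_, augmented = llm2llm_loop([f"seed question {i}" for i in range(10)])
print(len(augmented), "examples after augmentation")
```
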
## Top ML Papers of the Week (March 18 - March 25) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Grok-1** - a mixture-of-experts model with 314B parameters which includes the open release of the base model weights and network architecture; the MoE model activates 25% of the weights for a given token and its pretraining cutoff date is October 2023. | [Paper](https://x.ai/blog/grok-os), [Tweet](https://x.com/ibab_ml/status/1769447989192675748?s=20) |
| 2) **Evolutionary Model Merge** - an approach for automating foundation model development using evolution to combine open-source models; facilitates cross-domain merging where a Japanese Math LLM achieved state-of-the-art performance on Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not explicitly trained for these tasks. | [Paper](https://arxiv.org/abs/2403.13187), [Tweet](https://x.com/SakanaAILabs/status/1770613032198279663?s=20) |
| 3) **TacticAI** - an AI-powered assistant for football tactics developed and evaluated in collaboration with domain experts from Liverpool FC; the system offers coaches a way to sample and explore alternative player setups for corner kick routines and select the tactic with the highest predicted likelihood of success; TacticAI’s suggestions are favored over existing tactics 90% of the time and it offers an effective corner kick retrieval system. | [Paper](https://www.nature.com/articles/s41467-024-45965-x), [Tweet](https://x.com/GoogleDeepMind/status/1770121564085707082?s=20) |
| 4) **Tool Use in LLMs** - provides an overview of tool use in LLMs, including a formal definition of the tool-use paradigm, scenarios where LLMs leverage tool usage, and the tasks for which this approach works well; it also provides an analysis of complex tool usage and summarizes testbeds and evaluation metrics across LM tooling works. | [Paper](https://zorazrw.github.io/files/WhatAreToolsAnyway.pdf), [Tweet](https://x.com/omarsar0/status/1770497515898433896?s=20) |
| 5) **Step-by-Step Comparisons Make LLMs Better Reasoners** - proposes RankPrompt, a prompting method to enable LLMs to self-rank their responses without additional resources; this self-ranking approach ranks candidates through a systematic, step-by-step comparative evaluation; it seems to work well as it leverages the capabilities of LLMs to generate chains of comparisons as demonstrations; RankPrompt significantly enhances the reasoning performance of ChatGPT and GPT-4 on many arithmetic and commonsense reasoning tasks. | [Paper](https://arxiv.org/abs/2403.12373), [Tweet](https://x.com/omarsar0/status/1770492690129359135?s=20) |
| 6) **LLM4Decompile** - a family of open-access decompilation LLMs ranging from 1B to 33B parameters; these models are trained on 4 billion tokens of C source code and the corresponding assembly code; the authors also introduce Decompile-Eval, a dataset for assessing the re-compilability and re-executability of decompiled code, evaluating decompilation from the perspective of program semantics; LLM4Decompile demonstrates the capability to decompile 21% of the assembly code, a 50% improvement over GPT-4. | [Paper](https://arxiv.org/abs/2403.05286v1), [Tweet](https://x.com/omarsar0/status/1771218791399092351?s=20) |
| 7) **Agent-FLAN** - designs data and methods to effectively fine-tune language models for agents, referred to as Agent-FLAN; this enables Llama2-7B to outperform prior best works by 3.5% across various agent evaluation datasets; Agent-FLAN greatly alleviates the hallucination issues and consistently improves the agent capability of LLMs when scaling model sizes while generally improving the LLM. | [Paper](https://arxiv.org/abs/2403.12881v1), [Tweet](https://x.com/_akhaliq/status/1770302813152690259?s=20) |
| 8) **LLMs Leak Proprietary Information** - shows that it’s possible to learn a large amount of non-public information about an API-protected LLM from its logits; with a relatively small number of API queries, the approach estimates the embedding size of OpenAI's gpt-3.5-turbo to be about 4,096; the paper also proposes guardrails against the attacks used (a toy illustration of why logit matrices reveal the hidden size follows this table). | [Paper](https://arxiv.org/abs/2403.09539), [Tweet](https://x.com/DimitrisPapail/status/1768654579254579385?s=20) |
| 9) **DROID** - an open-source, large-scale robot manipulation dataset to train and build more capable and robust robotic manipulation policies; it contains 76K demonstration trajectories, collected across 564 scenes and 86 tasks; training with DROID leads to higher performing policies and generalization. | [Paper](https://arxiv.org/abs/2403.12945), [Tweet](https://x.com/chelseabfinn/status/1770311755140575413?s=20) |
| 10) **Retrieval-Augmented Fine-Tuning** - the proposed method, RAFT, combines the benefits of RAG and fine-tuning to improve a model's ability to answer questions in "open-book" in-domain settings; RAFT's CoT-style responses further help to improve reasoning. | [Paper](https://arxiv.org/abs/2403.10131), [Tweet](https://x.com/cwolferesearch/status/1770912695765660139?s=20) |
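
A small numpy demonstration of the core observation behind paper 8 above: if the final logits are a linear map of a d-dimensional hidden state, then a matrix of logit vectors collected across many prompts has numerical rank of roughly d, which reveals the hidden (embedding) size. The vocabulary size, hidden size, and number of queries below are invented, and the real attack has to work from a restricted API (top log-probs plus logit bias) rather than full logit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden, n_queries = 8192, 256, 600       # illustrative sizes, not a real model's

# Simulate an API that returns full logit vectors: logits = h @ W_out.
W_out = rng.standard_normal((hidden, vocab))
H = rng.standard_normal((n_queries, hidden))    # hidden states for many different prompts
logits = H @ W_out                              # (n_queries, vocab)

# Singular values collapse after the first `hidden` components, exposing the width.
s = np.linalg.svd(logits, compute_uv=False)
estimated_hidden = int((s > s[0] * 1e-10).sum())
print("estimated hidden size:", estimated_hidden)   # ~256
```
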
## Top ML Papers of the Week (March 11 - March 17) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **SIMA** - a generalist AI agent for 3D virtual environments that follows natural-language instructions in a broad range of 3D virtual environments and video games; SIMA is evaluated across 600 basic skills, spanning navigation, object interaction, and menu use. Language seems to be a huge factor in performance. | [Paper](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/sima-generalist-ai-agent-for-3d-virtual-environments/Scaling%20Instructable%20Agents%20Across%20Many%20Simulated%20Worlds.pdf), [Tweet](https://x.com/GoogleDeepMind/status/1767918515585994818?s=20) |
| 2) **Retrieval Augmented Thoughts** - shows that iteratively revising a chain of thoughts with information retrieval can significantly improve LLM reasoning and generation in long-horizon generation tasks; the key idea is that each thought step is revised with retrieved information relevant to the task query and to the current and past thought steps; Retrieval Augmented Thoughts (RAT) can be applied to different models like GPT-4 and CodeLlama-7B to improve long-horizon generation tasks (e.g., creative writing and embodied task planning); RAT is a zero-shot prompting approach and provides significant improvements over baselines that include zero-shot CoT prompting and vanilla RAG. | [Paper](https://arxiv.org/abs/2403.05313), [Tweet](https://x.com/omarsar0/status/1767251740443746435?s=20) |
| 3) **LMs Can Teach Themselves to Think Before Speaking** - presents a generalization of STaR, called Quiet-STaR, to enable language models (LMs) to learn to reason in more general and scalable ways; Quiet-STaR enables LMs to generate rationales at each token to explain future text; it proposes a token-wise parallel sampling algorithm that helps improve LM predictions by efficiently generating internal thoughts; the rationale generation is improved using REINFORCE. | [Paper](https://arxiv.org/abs/2403.09629), [Tweet](https://x.com/omarsar0/status/1768681638009975088?s=20) |
| 4) **Knowledge Conflicts for LLMs** - an overview of the common issue of knowledge conflict when working with LLMs; the survey paper categorizes these conflicts into context-memory, inter-context, and intra-memory conflict; it also provides insights into causes and potential ways to mitigate these knowledge conflict issues. | [Paper](https://arxiv.org/abs/2403.08319), [Tweet](https://x.com/omarsar0/status/1768288774532858003?s=20) |
| 5) **Stealing Part of a Production Language Model** - presents the first model-stealing attack that extracts information from production language models like ChatGPT or PaLM-2; shows that it's possible to recover the embedding projection layer of a transformer-based model through typical API access; as an example, the entire projection matrix was extracted from the OpenAI ada and babbage models for under $20. | [Paper](https://arxiv.org/abs/2403.06634), [Tweet](https://x.com/omarsar0/status/1767641831079067694?s=20) |
| 6) **Branch-Train-MiX** - proposes mixing expert LLMs into a Mixture-of-Experts LLM as a more compute-efficient approach for training LLMs; it's shown to be more efficient than training a larger generalist LLM or several separate specialized LLMs; the approach, BTX, first trains (in parallel) multiple copies of a seed LLM specialized in different domains (i.e., expert LLMs) and merges them into a single LLM using MoE feed-forward layers, followed by fine-tuning of the overall unified model. | [Paper](https://arxiv.org/abs/2403.07816), [Tweet](https://x.com/jaseweston/status/1767727740952682667?s=20) |
| 7) **LLMs Predict Neuroscience Results** - proposes a benchmark, BrainBench, for evaluating the ability of LLMs to predict neuroscience results; finds that LLMs surpass experts in predicting experimental outcomes; an LLM tuned on neuroscience literature was shown to perform even better. | [Paper](https://arxiv.org/abs/2403.03230), [Tweet](https://x.com/ProfData/status/1765689739682754824?s=20) |
| 8) **C4AI Command-R** - a 35B parameter model, with a context length of 128K, optimized for use cases that include reasoning, summarization, and question answering; Command-R has the capability for multilingual generation evaluated in 10 languages and performant tool use and RAG capabilities; it has been released for research purposes. | [Paper](https://huggingface.co/CohereForAI/c4ai-command-r-v01), [Tweet](https://x.com/CohereForAI/status/1767275927505977455?s=20) |
| 9) **Is Cosine-Similarity Really About Similarity?** - studies embeddings derived from regularized linear models and derives analytically how cosine similarity can yield arbitrary and meaningless similarities; also finds that for some linear models the similarities are not even unique while for others they are controlled by the regularization; the authors caution against blindly using cosine similarity and present considerations and alternatives (a small numerical illustration follows this table). | [Paper](https://arxiv.org/abs/2403.05440), [Tweet](https://x.com/_reachsumit/status/1767045820384477575?s=20) |
| 10) **Multimodal LLM Pre-training** - provides a comprehensive overview of methods, analysis, and insights into multimodal LLM pre-training; studies different architecture components and finds that carefully mixing image-caption, interleaved image-text, and text-only data is key for state-of-the-art performance; it also proposes a family of multimodal models of up to 30B parameters that achieve SOTA pre-training metrics and exhibit properties such as enhanced in-context learning, multi-image reasoning, and few-shot chain-of-thought prompting. | [Paper](https://arxiv.org/abs/2403.09611), [Tweet](https://x.com/DrJimFan/status/1769053019939967080?s=20) |
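
A short numpy illustration of the failure mode in paper 9 above: in a factorized linear model the learned factors are only identified up to a per-dimension rescaling that leaves every prediction unchanged but changes cosine similarities arbitrarily. The matrices and the diagonal rescaling below are random and purely illustrative; the paper's analysis concerns the closed-form solutions of specific regularized objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))          # item embeddings
B = rng.standard_normal((7, 4))          # user embeddings
D = np.diag(rng.uniform(0.1, 10, 4))     # arbitrary per-dimension rescaling

A2, B2 = A @ D, B @ np.linalg.inv(D)     # same model: predictions are unchanged
print(np.allclose(A @ B.T, A2 @ B2.T))   # True

def cos(x, y):
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

# ...but the cosine similarity between the same two items changes with the rescaling.
print(cos(A[0], A[1]), cos(A2[0], A2[1]))
```
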
## Top ML Papers of the Week (March 4 - March 10) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Claude 3** - consists of a family of three models (Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus); Claude 3 Opus (the strongest model) seems to outperform GPT-4 on common benchmarks like MMLU and HumanEval; Claude 3 capabilities include analysis, forecasting, content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French; a 200K context window is supported and can be extended to 1M tokens for select customers; the models also have strong vision capabilities for processing formats like photos, charts, and graphs; Anthropic claims these models have a more nuanced understanding of requests and make fewer refusals. | [Paper](https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf), [Tweet](https://x.com/AnthropicAI/status/1764653830468428150?s=20) |
| 2) **Robust Evaluation of Reasoning** - proposes functional benchmarks for evaluating the reasoning capabilities of LLMs; finds a reasoning gap in current models ranging from 58.35% to 80.31%; however, the authors also report that those gaps can be reduced with more sophisticated prompting strategies. | [Paper](https://arxiv.org/abs/2402.19450), [Tweet](https://x.com/_saurabh/status/1763626711407816930?s=20) |
| 3) **GaLore** - proposes a memory-efficient approach for training LLMs through low-rank gradient projection; the training strategy allows full-parameter learning and is more memory-efficient than common low-rank adaptation methods such as LoRA; it reduces memory usage by up to 65.5% in optimizer states while maintaining both efficiency and performance for pre-training on LLaMA 1B and 7B architectures (a rough sketch of the gradient projection follows this table). | [Paper](https://arxiv.org/abs/2403.03507), [Tweet](https://x.com/AnimaAnandkumar/status/1765613815146893348?s=20) |
| 4) **Can LLMs Reason and Plan?** - a new position paper discusses the topic of reasoning and planning for LLMs; here is a summary of the author's conclusion: "To summarize, nothing that I have read, verified, or done gives me any compelling reason to believe that LLMs do reasoning/planning, as normally understood. What they do instead, armed with web-scale training, is a form of universal approximate retrieval, which, as I have argued, can sometimes be mistaken for reasoning capabilities". | [Paper](https://arxiv.org/abs/2403.04121), [Tweet](https://x.com/omarsar0/status/1766123621326475285?s=20) |
| 5) **RAG for AI-Generated Content** - provides an overview of RAG used in different generation scenarios like code, image, and audio, including a taxonomy of RAG enhancements with reference to key papers. | [Paper](https://arxiv.org/abs/2402.19473v1), [Tweet](https://x.com/omarsar0/status/1765414854397985175?s=20) |
| 6) **KnowAgent** - proposes an approach to enhance the planning capabilities of LLMs through explicit action knowledge; uses an action knowledge base and a knowledgeable self-learning phase to guide the model's action generation, mitigate planning hallucination, and enable continuous improvement; outperforms existing baselines and shows the potential of integrating external action knowledge to streamline planning with LLMs and solve complex planning challenges. | [Paper](https://arxiv.org/abs/2403.03101), [Tweet](https://x.com/omarsar0/status/1765408813467759037?s=20) |
| 7) **Sora Overview** - a comprehensive review of Sora and some of the key developments powering this model, including limitations and opportunities of large vision models. | [Paper](https://arxiv.org/abs/2402.17177v2), [Tweet](https://x.com/omarsar0/status/1765756669659603015?s=20) |
| 8) **LLM for Law** - introduces SaulLM-7B, a large language model for the legal domain explicitly designed for legal text comprehension and generation; presents an instructional fine-tuning method that leverages legal datasets to further enhance performance in legal tasks. | [Paper](https://arxiv.org/abs/2403.03883), [Tweet](https://x.com/_akhaliq/status/1765614083875738028?s=20) |
| 9) **Design2Code** - investigates the use of multimodal LLMs for converting a visual design into code implementation, which is key for automating front-end engineering; introduces a benchmark of 484 diverse real-world webpages and a set of evaluation metrics to measure the design-to-code capability; further develops a suite of multimodal prompting methods and shows their effectiveness on GPT-4V and Gemini Pro Vision; an open-source fine-tuned Design2Code model matches the performance of Gemini Pro Vision, however, GPT-4V performs the best on the task. | [Paper](https://arxiv.org/abs/2403.03163), [Tweet](https://x.com/_akhaliq/status/1765199160653828385?s=20) |
| 10) **TripoSR** - a transformer-based 3D reconstruction model for fast feed-forward 3D generation; it can produce a 3D mesh from a single image in under 0.5 seconds; improvements include better data processing, model design, and training. | [Paper](https://arxiv.org/abs/2403.02151v1), [Tweet](https://x.com/_akhaliq/status/1764841524431392794?s=20) |
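
A rough numpy sketch of the idea in paper 3 above: project the full gradient onto a low-rank subspace obtained from its SVD, keep the (small) optimizer state in that subspace, and project updates back to full size. The rank, learning rate, and plain projected-SGD step are illustrative; the paper keeps Adam-style statistics in the low-rank space and periodically refreshes the projector.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)) * 0.01     # a full weight matrix
rank, lr = 8, 1e-2

def galore_step(W, grad, P=None):
    """One low-rank-projected update; P holds the current projection basis."""
    if P is None:                                # (re)compute the projector from the gradient
        U, _, _ = np.linalg.svd(grad, full_matrices=False)
        P = U[:, :rank]                          # (d, rank) top left singular vectors
    small_grad = P.T @ grad                      # optimizer state would live at this size
    update = P @ small_grad                      # project the update back to full size
    return W - lr * update, P

grad = rng.standard_normal(W.shape)              # stand-in for a backprop gradient
W, P = galore_step(W, grad)
print(W.shape, P.shape)                          # (1024, 1024) (1024, 8)
```
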
## Top ML Papers of the Week (February 26 - March 3) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Genie** - a foundation model trained from internet videos and with the ability to generate a variety of action-controllable 2D worlds given an image prompt; Genie has 11B parameters and consists of a spatiotemporal video tokenizer, an autoregressive dynamic model, and a scalable latent action model; the latent action space enables training agents to imitate behaviors from unseen video which is promising for building more generalist agents. | [Paper](https://arxiv.org/abs/2402.15391), [Tweet](https://x.com/_rockt/status/1762026090262872161?s=20) |
| 2) **Mistral Large** - a new LLM with strong multilingual, reasoning, maths, and code generation capabilities; features include: 1) 32K tokens context window, 2) native multilingual capacities, 3) strong abilities in reasoning, knowledge, maths, and coding benchmarks, and 4) function calling and JSON format natively supported. | [Paper](https://mistral.ai/news/mistral-large/), [Tweet](https://x.com/omarsar0/status/1762140818654064721?s=20) |
| 3) **The Era of 1-bit LLMs** - introduces a high-performing and cost-effective 1-bit LLM variant called BitNet b1.58 where every parameter is ternary {-1, 0, 1}; given the same model size and training tokens, BitNet b1.58 can match the perplexity and task performance of a full-precision (i.e., FP16) Transformer LLM; the benefits of this 1-bit LLM are significantly better latency, memory, throughput, and energy consumption (a small sketch of the ternary quantization follows this table). | [Paper](https://arxiv.org/abs/2402.17764), [Tweet](https://x.com/_akhaliq/status/1762729757454618720?s=20) |
| 4) **Dataset for LLMs** - a comprehensive overview (180+ pages) and analysis of LLM datasets. | [Paper](https://arxiv.org/abs/2402.18041), [Tweet](https://x.com/omarsar0/status/1763233452852134001?s=20) |
| 5) **LearnAct** - explores open-action learning for language agents through an iterative learning strategy that creates and improves actions using Python functions; on each iteration, the proposed framework (LearnAct) expands the action space and enhances action effectiveness by revising and updating available actions based on execution feedback; the LearnAct framework was tested on Robotic planning and AlfWorld environments; it improves agent performance by 32% in AlfWorld compared to ReAct+Reflexion. | [Paper](https://arxiv.org/abs/2402.15809), [Tweet](https://x.com/omarsar0/status/1762533498492010761?s=20) |
| 6) **EMO** - a new framework for generating expressive video by utilizing a direct audio-to-video synthesis approach; by leveraging an Audio2Video diffusion model it bypasses the need for intermediate 3D models or facial landmarks; EMO can produce convincing speaking videos and singing videos in various styles while outperforming existing methods in terms of expressiveness and realism. | [Paper](https://arxiv.org/abs/2402.17485), [Tweet](https://x.com/_akhaliq/status/1762686465777999932?s=20) |
| 7) **On the Societal Impact of Open Foundation Models** - a position paper with a focus on open foundation models and their impact, benefits, and risks; proposes a risk assessment framework for analyzing risk and explains why the marginal risk of open foundation models is low in some cases; it also offers a more grounded assessment of the societal impact of open foundation models. | [Paper](https://crfm.stanford.edu/open-fms/), [Tweet](https://x.com/sayashk/status/1762508812370551207?s=20) |
| 8) **StarCoder 2** - a family of open LLMs for code with three different sizes (3B, 7B, and 15B); the 15B model was trained on 14 trillion tokens and 600+ programming languages with a context window of 16K tokens and a fill-in-the-middle objective; it matches 33B+ models on many evaluations like code completion, code reasoning, and math reasoning aided through PAL. | [Paper](https://huggingface.co/blog/starcoder2), [Tweet](https://x.com/_philschmid/status/1762843489220296881?s=20) |
| 9) **LLMs on Tabular Data** - an overview of LLMs for tabular data tasks including key techniques, metrics, datasets, models, and optimization approaches; it covers limitations and unexplored ideas with insights for future research directions. | [Paper](https://arxiv.org/abs/2402.17944), [Tweet](https://x.com/omarsar0/status/1763187964501254492?s=20) |
| 10) **PlanGPT** - shows how to leverage LLMs and combine multiple approaches like retrieval augmentation, fine-tuning, tool usage, and more; the proposed framework is applied to urban and spatial planning but there are a lot of insights and practical tips that apply to other domains. | [Paper](https://arxiv.org/abs/2402.19273), [Tweet](https://x.com/omarsar0/status/1763424166890377691?s=20) |
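
A small numpy sketch of the absmean-style ternary quantization described for paper 3 above: scale a weight matrix by its mean absolute value, round, and clip to {-1, 0, +1}. The paper's full recipe involves quantization-aware training and activation quantization, so this snippet only shows the weight-rounding step.

```python
import numpy as np

def ternary_quantize(W, eps=1e-8):
    """Quantize weights to {-1, 0, +1} with an absmean scale (BitNet b1.58-style)."""
    scale = np.mean(np.abs(W)) + eps            # gamma: mean absolute value of W
    Wq = np.clip(np.round(W / scale), -1, 1)    # ternary weights
    return Wq, scale                            # dequantize as Wq * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.02
Wq, scale = ternary_quantize(W)
print(np.unique(Wq))                            # a subset of [-1., 0., 1.]
print(np.abs(W - Wq * scale).mean())            # average quantization error
```
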
## Top ML Papers of the Week (February 19 - February 25) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Stable Diffusion 3** - a suite of image generation models ranging from 800M to 8B parameters; combines diffusion transformer architecture and flow matching for improved performance in multi-subject prompts, image quality, and spelling abilities; technical report to be published soon and linked here. | [Paper](https://stability.ai/news/stable-diffusion-3), [Tweet](https://x.com/StabilityAI/status/1760656767237656820?s=20) |
| 2) **Gemma** - a series of open models inspired by the same research and tech used for Gemini; includes 2B (trained on 2T tokens) and 7B (trained on 6T tokens) models including base and instruction-tuned versions; trained on a context length of 8192 tokens; generally outperforms Llama 2 7B and Mistral 7B. | [Paper](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf), [Tweet](https://x.com/omarsar0/status/1760310942552686604?s=20) |
| 3) **LLMs for Data Annotation** - an overview and a good list of references that apply LLMs for data annotation; includes a taxonomy of methods that employ LLMs for data annotation; covers three aspects: LLM-based data annotation, assessing LLM-generated annotations, and learning with LLM-generated annotations. | [Paper](https://arxiv.org/abs/2402.13446), [Tweet](https://x.com/omarsar0/status/1760664562779431367?s=20) |
| 4) **GRIT** - presents generative representational instruction tuning where an LLM is trained to perform both generative and embedding tasks and designed to distinguish between them via the instructions; produces new state-of-the-art on MTEB and the unification is reported to speed up RAG by 60% for long documents. | [Paper](https://arxiv.org/abs/2402.09906), [Tweet](https://x.com/Muennighoff/status/1758307967802224770?s=20) |
| 5) **LoRA+** - proposes LoRA+ which improves performance and finetuning speed (up to ∼2x speedup) at the same computational cost as LoRA; the key difference between LoRA and LoRA+ is how the learning rate is set; LoRA+ sets different learning rates for the LoRA adapter matrices while in LoRA the learning rate is the same (a minimal sketch of the separate parameter groups follows this table). | [Paper](https://arxiv.org/abs/2402.12354), [Tweet](https://x.com/omarsar0/status/1760063230406258892?s=20) |
| 6) **Revisiting REINFORCE in RLHF** - shows that many components of PPO are unnecessary in an RLHF context; it also shows that a simpler REINFORCE variant outperforms both PPO and newly proposed alternatives such as DPO and RAFT; overall, it shows that online RL optimization can be beneficial and low cost. | [Paper](https://arxiv.org/abs/2402.14740), [Tweet](https://x.com/sarahookr/status/1761042445997945070?s=20) |
| 7) **Recurrent Memory Finds What LLMs Miss** - explores the capability of transformer-based models in extremely long context processing; finds that both GPT-4 and RAG performance heavily rely on the first 25% of the input, which means there is room for improved context processing mechanisms; reports that recurrent memory augmentation of transformer models achieves superior performance on documents of up to 10 million tokens. | [Paper](https://arxiv.org/abs/2402.10790), [Tweet](https://x.com/omarsar0/status/1759591371126571028?s=20) |
| 8) **When is Tree Search Useful for LLM Planning** - investigates how LLMs solve multi-step problems through a framework consisting of a generator, a discriminator, and a planning method (e.g., iterative correction and tree search); reports that planning methods demand discriminators with at least 90% accuracy but current LLMs don’t demonstrate such discrimination capabilities; finds that tree search is at least 10 to 20 times slower, so regardless of its good performance it’s impractical for real-world applications. | [Paper](https://arxiv.org/abs/2402.10890), [Tweet](https://x.com/ysu_nlp/status/1759757711061704913?s=20) |
| 9) **CoT Reasoning without Prompting** - proposes a chain-of-thought (CoT) decoding method to elicit the reasoning capabilities from pre-trained LLMs without explicit prompting; claims to significantly enhance a model’s reasoning capabilities over greedy decoding across reasoning benchmarks; finds that the model's confidence in its final answer increases when CoT is present in its decoding path. | [Paper](https://arxiv.org/abs/2402.10200), [Tweet](https://x.com/omarsar0/status/1758566808213234017?s=20) |
| 10) **OpenCodeInterpreter** - a family of open-source systems for generating, executing, and iteratively refining code; proposes a dataset of 68K multi-turn interactions; integrates execution and human feedback for dynamic code refinement and produces strong performance on benchmarks like HumanEval and EvalPlus. | [Paper](https://arxiv.org/abs/2402.14658), [Tweet](https://x.com/xiangyue96/status/1760891516107862104?s=20) |
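
A minimal PyTorch sketch of the single change paper 5 above describes: give the two LoRA adapter matrices different learning rates by putting them in separate optimizer parameter groups. The module, rank, base learning rate, and the 16x ratio are illustrative placeholders, not the paper's recommended settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a low-rank (A, B) adapter added to its output."""
    def __init__(self, d_in=512, d_out=512, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():          # the base weights stay frozen
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_A.T @ self.lora_B.T

layer, lr = LoRALinear(), 1e-4
optimizer = torch.optim.AdamW([
    {"params": [layer.lora_A], "lr": lr},         # plain LoRA would use the same lr for both
    {"params": [layer.lora_B], "lr": lr * 16},    # LoRA+: a larger lr for B (ratio is illustrative)
])
```
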
## Top ML Papers of the Week (February 12 - February 18) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Sora** - a text-to-video AI model that can create videos of up to a minute of realistic and imaginative scenes given text instructions; it can generate complex scenes with multiple characters, different motion types, and backgrounds, and understand how they relate to each other; other capabilities include creating multiple shots within a single video with persistence across characters and visual style. | [Paper](https://openai.com/research/video-generation-models-as-world-simulators), [Tweet](https://x.com/OpenAI/status/1758192957386342435?s=20) |
| 2) **Gemini 1.5** - a compute-efficient multimodal mixture-of-experts model that focuses on capabilities such as recalling and reasoning over long-form content; it can reason over long documents potentially containing millions of tokens, including hours of video and audio; improves the state-of-the-art performance in long-document QA, long-video QA, and long-context ASR. Gemini 1.5 Pro matches or outperforms Gemini 1.0 Ultra across standard benchmarks and achieves near-perfect retrieval (>99%) up to at least 10 million tokens, a significant advancement compared to other long-context LLMs. | [Paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), [Tweet](https://x.com/omarsar0/status/1758151923612483839?s=20) |
| 3) **V-JEPA** - a collection of vision models trained on a feature prediction objective using 2 million videos; relies on self-supervised learning and doesn’t use pretrained image encoders, text, negative examples, reconstruction, or other supervision sources; claims to achieve versatile visual representations that perform well on both motion and appearance-based tasks, without adaption of the model’s parameters. | [Paper](https://ai.meta.com/research/publications/revisiting-feature-prediction-for-learning-visual-representations-from-video/), [Tweet](https://x.com/AIatMeta/status/1758176023588577326?s=20) |
| 4) **Large World Model** - a general-purpose 1M context multimodal model trained on long videos and books using RingAttention; sets new benchmarks in difficult retrieval tasks and long video understanding; uses masked sequence packing for mixing different sequence lengths, loss weighting, and model-generated QA dataset for long sequence chat; open-sources a family of 7B parameter models that can process long text and videos of over 1M tokens. | [Paper](https://arxiv.org/abs/2402.08268), [Tweet](https://x.com/haoliuhl/status/1757828392362389999?s=20) |
| 5) **The boundary of neural network trainability is fractal** - finds that the boundary between trainable and untrainable neural network hyperparameter configurations is fractal; observes fractal hyperparameter landscapes for every configuration tested, including deep linear networks; also observes that the best-performing hyperparameters lie at the edge of stability. | [Paper](https://arxiv.org/abs/2402.06184), [Tweet](https://x.com/jaschasd/status/1756930242965606582?s=20) |
| 6) **OS-Copilot** - a framework to build generalist computer agents that interface with key elements of an operating system like Linux or macOS; it also proposes a self-improving embodied agent for automating general computer tasks; this agent outperforms previous methods by 35% on the general AI assistants (GAIA) benchmark. | [Paper](https://arxiv.org/abs/2402.07456), [Tweet](https://x.com/omarsar0/status/1757443594976206885?s=20) |
| 7) **TestGen-LLM** - uses LLMs to automatically improve existing human-written tests; reports that after an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM's test cases were built correctly, 57% passed reliably, and 25% increased coverage. | [Paper](https://arxiv.org/abs/2402.09171), [Tweet](https://x.com/nathanbenaich/status/1758036247115608317?s=20) |
| 8) **ChemLLM** - a dedicated LLM trained for chemistry-related tasks; claims to outperform GPT-3.5 on principal tasks such as name conversion, molecular caption, and reaction prediction; it also surpasses GPT-4 on two of these tasks. | [Paper](https://arxiv.org/abs/2402.06852), [Tweet](https://x.com/omarsar0/status/1757246740539773165?s=20) |
| 9) **Survey of LLMs** - reviews three popular families of LLMs (GPT, Llama, PaLM), their characteristics, contributions, and limitations; includes a summary of capabilities and techniques developed to build and augment LLMs; it also discusses popular datasets for LLM training, fine-tuning, and evaluation, and LLM evaluation metrics; concludes with open challenges and future research directions. | [Paper](https://arxiv.org/abs/2402.06196), [Tweet](https://x.com/omarsar0/status/1757049645119799804?s=20) |
| 10) **LLM Agents can Hack** - shows that LLM agents can automatically hack websites and perform tasks like SQL injections without human feedback or explicit knowledge about the vulnerability beforehand; this is enabled by an LLM’s tool usage and long context capabilities; shows that GPT-4 is capable of such hacks, including finding vulnerabilities in websites in the wild; open-source models did not show the same capabilities. | [Paper](https://arxiv.org/abs/2402.06664v1), [Tweet](https://x.com/emollick/status/1757937829340967240?s=20) |
## Top ML Papers of the Week (February 5 - February 11) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Grandmaster-Level Chess Without Search** - trains a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games with up to 15 billion data points; reaches a Lichess blitz Elo of 2895 against humans and solves a series of challenging chess puzzles; it shows the potential of training at scale for chess without the need for any domain-specific tweaks or explicit search algorithms. | [Paper](https://arxiv.org/abs/2402.04494), [Tweet](https://x.com/_akhaliq/status/1755466387798020229?s=20) |
| 2) **AnyTool** - an LLM-based agent that can utilize 16K APIs from Rapid API; proposes a simple framework consisting of 1) a hierarchical API-retriever to identify relevant API candidates for a query, 2) a solver to resolve user queries, and 3) a self-reflection mechanism to reactivate AnyTool if the initial solution is impracticable; this tool leverages the function calling capability of GPT-4 so no further training is needed; the hierarchical API-retriever is inspired by a divide-and-conquer approach to help reduce the search scope of the agents, which helps overcome context-length limitations in LLMs; the self-reflection component helps with resolving easy and complex queries efficiently. | [Paper](https://arxiv.org/abs/2402.04253), [Tweet](https://x.com/omarsar0/status/1755065033791283601?s=20) |
| 3) **A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention** - investigates and expands the theoretical understanding of learning with attention layers by exploring the interplay between positional and semantic attention; it employs a toy model of dot-product attention and identifies an emergent phase transition between semantic and positional learning; shows that if provided with sufficient data, dot-product attention layer outperforms a linear positional baseline when using the semantic mechanism. | [Paper](https://arxiv.org/abs/2402.03902), [Tweet](https://x.com/zdeborova/status/1755158457785704771?s=20) |
| 4) **Indirect Reasoning with LLMs** - proposes an indirect reasoning method to strengthen the reasoning power of LLMs; it employs the logic of contrapositives and contradictions to tackle IR tasks such as factual reasoning and mathematical proof; it consists of two key steps: 1) enhance the comprehensibility of LLMs by augmenting data and rules (i.e., the logical equivalence of contrapositive), and 2) design prompt templates to stimulate LLMs to implement indirect reasoning based on proof by contradiction; experiments on LLMs like GPT-3.5-turbo and Gemini Pro show that the proposed method enhances the overall accuracy of factual reasoning by 27.33% and mathematical proof by 31.43% compared to traditional direct reasoning methods. | [Paper](https://arxiv.org/abs/2402.03667), [Tweet](https://x.com/omarsar0/status/1755254627866419707?s=20) |
| 5) **ALOHA 2** - a low-cost system for bimanual teleoperation that improves the performance, user-friendliness, and durability of ALOHA; efforts include hardware improvements such as grippers and gravity compensation with a higher quality simulation model; this potentially enables large-scale data collection on more complex tasks to help advance research in robot learning. | [Paper](https://aloha-2.github.io/assets/aloha2.pdf), [Tweet](https://x.com/tonyzzhao/status/1755380475118719407?s=20) |
| 6) **More Agents is All You Need** - presents a study on the scaling property of raw agents instantiated by LLMs; finds that performance scales with the number of agents simply by using a sampling-and-voting method (a minimal sketch follows this table). | [Paper](https://arxiv.org/abs/2402.05120), [Tweet](https://x.com/omarsar0/status/1755794341069455376?s=20) |
| 7) **Self-Discovered Reasoning Structures** - proposes a new framework, Self-Discover, that enables LLMs to select from multiple reasoning techniques (e.g., critical thinking and thinking step-by-step) to compose task-specific reasoning strategies; outperforms CoT (applied to GPT-4 and PaLM 2) on BigBench-Hard experiments and requires 10-40x less inference compute than other inference-intensive methods such as CoT-Self-Consistency; the self-discovered reasoning structures are also reported to transfer well between LLMs and small language models (SLMs). | [Paper](https://arxiv.org/abs/2402.03620), [Tweet](https://x.com/peizNLP/status/1755265197953146997?s=20) |
| 8) **DeepSeekMath** - continues pretraining a code base model with 120B math-related tokens; introduces GRPO (a variant of PPO) to enhance mathematical reasoning and reduce training resources via a memory usage optimization scheme; DeepSeekMath 7B achieves 51.7% on MATH which approaches the performance level of Gemini-Ultra (53.2%) and GPT-4 (52.9%); when self-consistency is used, the performance improves to 60.9%. | [Paper](https://arxiv.org/abs/2402.03300), [Tweet](https://x.com/deepseek_ai/status/1754701472363958581?s=20) |
| 9) **LLMs for Table Processing** - provides an overview of LLMs for table processing, including methods, benchmarks, prompting techniques, and much more. | [Paper](https://arxiv.org/abs/2402.05121), [Tweet](https://x.com/omarsar0/status/1755789530710339788?s=20) |
| 10) **LLM-based Multi-Agents** - discusses the essential aspects of LLM-based multi-agent systems; it includes a summary of recent applications for problem-solving and world simulation; it also discusses datasets, benchmarks, challenges, and future opportunities to encourage further research and development from researchers and practitioners. | [Paper](https://arxiv.org/abs/2402.01680), [Tweet](https://x.com/omarsar0/status/1754710117734375429?s=20) |
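The sampling-and-voting procedure behind item 6 above is simple enough to sketch directly: sample several independent answers from the same model and take a majority vote. The sketch below is a hedged illustration; `query_llm` is a placeholder for whatever chat-completion client you use, not an API from the paper.

```python
# Hedged sketch of sampling-and-voting ("More Agents is All You Need", item 6 above).
# `query_llm` is a placeholder for any LLM call that returns a short answer string.
from collections import Counter

def query_llm(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("plug in your LLM client here")

def sample_and_vote(prompt: str, n_agents: int = 10) -> str:
    answers = [query_llm(prompt).strip() for _ in range(n_agents)]
    return Counter(answers).most_common(1)[0][0]   # majority answer wins
```

In practice the sampled answers usually need light normalization (e.g., extracting just the final number) before voting.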
## Top ML Papers of the Week (January 29 - February 4) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **OLMo** - introduces Open Language Model (OLMo), a 7B parameter model; it includes open training code, open data, full model weights, evaluation code, and fine-tuning code; it shows strong performance on many generative tasks; there is also a smaller version of it, OLMo 1B. | [Paper](https://arxiv.org/abs/2402.00838), [Tweet](https://x.com/omarsar0/status/1753080417530318872?s=20) |
| 2) **Advances in Multimodal LLMs** - a comprehensive survey outlining design formulations for model architecture and training pipeline around multimodal large language models. | [Paper](https://arxiv.org/abs/2401.13601), [Tweet](https://x.com/omarsar0/status/1751705689964089616?s=20) |
| 3) **Corrective RAG** - proposes Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation in a RAG system; the core idea is to implement a self-correct component for the retriever and improve the utilization of retrieved documents for augmenting generation; the retrieval evaluator helps to assess the overall quality of retrieved documents given a query; using web search and optimized knowledge utilization operations can improve automatic self-correction and efficient utilization of retrieved documents (a control-flow sketch follows this table). | [Paper](https://arxiv.org/abs/2401.15884), [Tweet](https://x.com/omarsar0/status/1752173216942944556?s=20) |
| 4) **LLMs for Mathematical Reasoning** - introduces an overview of research developments in LLMs for mathematical reasoning; discusses advancements, capabilities, limitations, and applications to inspire ongoing research on LLMs for Mathematics. | [Paper](https://arxiv.org/abs/2402.00157), [Tweet](https://x.com/omarsar0/status/1753424518171738194?s=20) |
| 5) **Compression Algorithms for LLMs** - covers compression algorithms like pruning, quantization, knowledge distillation, low-rank approximation, parameter sharing, and efficient architecture design. | [Paper](https://arxiv.org/abs/2401.15347), [Tweet](https://x.com/omarsar0/status/1752746770377974072?s=20) |
| 6) **MoE-LLaVA** - employs Mixture of Experts tuning for Large Vision-Language Models which constructs a sparse model with a substantial reduction in parameters with a constant computational cost; this approach also helps to address performance degradation associated with multi-modal learning and model sparsity. | [Paper](https://arxiv.org/abs/2401.15947), [Tweet](https://x.com/LinBin46984/status/1753403875531375003?s=20) |
| 7) **Rephrasing the Web** - uses an off-the-shelf instruction-tuned model prompted to paraphrase web documents in specific styles and formats such as “like Wikipedia” or “question-answer format” to jointly pre-train LLMs on real and synthetic rephrases; it speeds up pre-training by ~3x, improves perplexity, and improves zero-shot question answering accuracy on many tasks. | [Paper](https://arxiv.org/abs/2401.16380), [Tweet](https://x.com/pratyushmaini/status/1752337225097076809?s=20) |
| 8) **Redefining Retrieval in RAG** - a study that focuses on the components needed to improve the retrieval component of a RAG system; confirms that relevant information should be placed near the query, as the model will struggle to attend to the information otherwise; surprisingly, it finds that related documents don't necessarily lead to improved performance for the RAG system; even more unexpectedly, irrelevant and noisy documents can help drive up accuracy if placed correctly. | [Paper](https://arxiv.org/abs/2401.14887), [Tweet](https://x.com/omarsar0/status/1751803310267314509?s=20) |
| 9) **Hallucination in LVLMs** - discusses hallucination issues and techniques to mitigate hallucination in Large Vision-Language Models (LVLM); it introduces LVLM hallucination evaluation methods and benchmarks; provides tips and a good analysis of the causes of LVLM hallucinations and potential ways to mitigate them. | [Paper](https://arxiv.org/abs/2402.00253), [Tweet](https://x.com/omarsar0/status/1753449211931079101?s=20) |
| 10) **SliceGPT** - a new LLM compression technique that proposes a post-training sparsification scheme that replaces each weight matrix with a smaller dense matrix; helps reduce the embedding dimension of the network and can remove up to 20% of model parameters for Llama2-70B and Phi-2 models while retaining most of the zero-shot performance of the dense models. | [Paper](https://arxiv.org/abs/2401.15024v1), [Tweet](https://x.com/_akhaliq/status/1751796334531592496?s=20) |
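As a rough illustration of the corrective control flow described in item 3 (Corrective RAG), the sketch below scores retrieved documents with an evaluator and falls back to web search when confidence is low. Every function and threshold here is a placeholder chosen for the sketch, not the paper's implementation.

```python
# Hedged sketch of a CRAG-style control flow (item 3 above). All callables and
# thresholds are placeholders; the three branches mirror the paper's
# correct / incorrect / ambiguous retrieval judgments.
def corrective_rag(query, retrieve, score_relevance, web_search, generate,
                   upper=0.7, lower=0.3):
    docs = retrieve(query)
    scores = [score_relevance(query, d) for d in docs]
    best = max(scores, default=0.0)
    if best >= upper:                                   # confident: keep the relevant docs
        context = [d for d, s in zip(docs, scores) if s >= upper]
    elif best <= lower:                                 # not confident: discard and search the web
        context = web_search(query)
    else:                                               # ambiguous: combine both sources
        context = [d for d, s in zip(docs, scores) if s > lower] + web_search(query)
    return generate(query, context)
```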
## Top ML Papers of the Week (January 22 - January 28) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Depth Anything** - a robust monocular depth estimation solution that can deal with any image under any circumstance; automatically annotates large-scale unlabeled data (~62M images), which helps to reduce generalization error; proposes effective strategies to leverage the power of the large-scale unlabeled data; besides generalization ability, it establishes a new state of the art through fine-tuning and even results in an enhanced depth-conditioned ControlNet. | [Paper](https://arxiv.org/abs/2401.10891v1), [Tweet](https://x.com/_akhaliq/status/1749284669936275463?s=20) |
| 2) **Knowledge Fusion of LLMs** - proposes FuseLLM with the core idea of externalizing knowledge from multiple LLMs and transferring their capabilities to a target LLM; leverages the generative distributions of source LLMs to externalize both their collective knowledge and individual strengths and transfer them to the target LLM through continual training; finds that the FuseLLM can improve the performance of the target model across a range of capabilities such as reasoning, common sense, and code generation. | [Paper](https://arxiv.org/abs/2401.10491), [Tweet](https://x.com/omarsar0/status/1749267663900057620?s=20) |
| 3) **MambaByte** - adapts the Mamba SSM to learn directly from raw bytes; bytes lead to longer sequences on which autoregressive Transformers scale poorly; this work reports large benefits in inference speed and even outperforms subword Transformers. | [Paper](https://arxiv.org/abs/2401.13660), [Tweet](https://x.com/omarsar0/status/1750366964759859633?s=20) |
| 4) **Diffuse to Choose** - a diffusion-based image-conditioned inpainting model that balances fast inference with high fidelity while enabling accurate semantic manipulations in a given scene; outperforms existing zero-shot diffusion inpainting methods and even few-shot diffusion personalization algorithms such as DreamPaint. | [Paper](https://arxiv.org/abs/2401.13795), [Tweet](https://x.com/_akhaliq/status/1750737690553692570?s=20) |
| 5) **WARM** - introduces weight-averaged reward models (WARM), obtained by fine-tuning multiple reward models and then averaging them in weight space; weight averaging improves efficiency compared to traditional prediction ensembling and improves the quality and alignment of LLM predictions (an averaging sketch follows this table). | [Paper](https://arxiv.org/abs/2401.12187), [Tweet](https://x.com/ramealexandre/status/1749719471806157304?s=20) |
| 6) **Resource-efficient LLMs & Multimodal Models** - a survey of resource-efficient LLMs and multimodal foundation models; provides a comprehensive analysis and insights into ML efficiency research, including architectures, algorithms, and practical system designs and implementations. | [Paper](https://arxiv.org/abs/2401.08092v1), [Tweet](https://x.com/omarsar0/status/1749208653926654010?s=20) |
| 7) **Red Teaming Visual Language Models** - first presents a red teaming dataset of 10 subtasks (e.g., image misleading, multi-modal jailbreaking, face fairness, etc.); finds that 10 prominent open-source VLMs struggle with red teaming to varying degrees and show up to a 31% performance gap with GPT-4V; also applies red teaming alignment to LLaVA-v1.5 with SFT using the proposed red teaming dataset, which improves model performance by 10% on the test set. | [Paper](https://arxiv.org/abs/2401.12915), [Tweet](https://x.com/omarsar0/status/1750170361843384790?s=20) |
| 8) **Lumiere** - a text-to-video space-time diffusion model for synthesizing videos with realistic and coherent motion; introduces a Space-Time U-Net architecture to generate the entire temporal duration of a video at once via a single pass; achieves state-of-the-art text-to-video generation results and supports a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylized generation. | [Paper](https://arxiv.org/abs/2401.12945), [Tweet](https://x.com/GoogleAI/status/1751003814931689487?s=20) |
| 9) **Medusa** - a simple framework for LLM inference acceleration using multiple decoding heads that predict multiple subsequent tokens in parallel; parallelization substantially reduces the number of decoding steps; it can achieve over 2.2x speedup without compromising generation quality, while Medusa-2 further improves the speedup to 2.3-3.6x. | [Paper](https://arxiv.org/abs/2401.10774v1), [Tweet](https://x.com/jiayq/status/1749461664393810350?s=20) |
| 10) **AgentBoard** - a comprehensive benchmark with an open-source evaluation framework to perform analytical evaluation of LLM agents; helps to assess the capabilities and limitations of LLM agents and demystifies agent behaviors, which leads to building stronger and more robust LLM agents. | [Paper](https://arxiv.org/abs/2401.13178v1), [Tweet](https://x.com/ma_chang_nlp/status/1750369056539218082?s=20) |
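The core mechanic of WARM (item 5 above) is averaging fine-tuned reward models in weight space. A minimal sketch of that averaging step, assuming PyTorch modules that share an architecture and initialization, is shown below; it is the generic weight-averaging recipe, not the paper's code.

```python
# Hedged sketch of the weight-averaging step behind WARM (item 5 above).
# Assumes all models are PyTorch modules with identical architectures.
import copy
import torch

def average_weights(models):
    avg = copy.deepcopy(models[0])
    avg_state = avg.state_dict()
    with torch.no_grad():
        for key in avg_state:
            tensors = [m.state_dict()[key] for m in models]
            if tensors[0].is_floating_point():          # average float parameters/buffers
                avg_state[key] = torch.stack(tensors).mean(dim=0)
    avg.load_state_dict(avg_state)
    return avg
```

The averaged model is then used as a single reward model, which is cheaper at inference time than ensembling the predictions of all the fine-tuned reward models.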
## Top ML Papers of the Week (January 15 - January 21) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **AlphaGeometry** - an AI system that acts as a theorem prover that can solve Olympiad geometry problems without human demonstrations; this system is trained on synthetic data involving millions of theorems and proofs across different levels of complexity; the data is used to train a neural language model that can solve olympiad-level problems and approaches the performance of an average International Mathematical Olympiad (IMO) gold medallist. | [Paper](https://www.nature.com/articles/s41586-023-06747-5), [Tweet](https://x.com/GoogleDeepMind/status/1747651817461125352?s=20) |
| 2) **AlphaCodium** - a code-oriented iterative flow that improves LLMs on code generation; it involves two key steps to improve code generation capabilities in LLMs: i) additional generated data (problem self-reflection and test reasoning) to aid the iterative process, and ii) enriching public tests using additional AI-generated tests; using the CodeContests validation dataset, GPT-4 pass@5 accuracy increased from 19% using a single well-crafted prompt to 44% using the AlphaCodium flow; it even outperforms AlphaCode using a significantly smaller computation budget and 4 orders of magnitude fewer LLM calls. | [Paper](https://arxiv.org/abs/2401.08500), [Tweet](https://x.com/itamar_mar/status/1747957348293824676?s=20) |
| 3) **RAG vs. Finetuning** - report discussing the tradeoff between RAG and fine-tuning when using LLMs like Llama 2 and GPT-4; performs a detailed analysis and highlights insights when applying the pipelines on an agricultural dataset; observes that there is an accuracy increase of over 6 p.p. when fine-tuning the model and this is cumulative with RAG, which increases accuracy by 5 p.p. further. | [Paper](https://arxiv.org/abs/2401.08406), [Tweet](https://x.com/omarsar0/status/1747676541876596779?s=20) |
| 4) **Self-Rewarding Models** - proposes a self-alignment method that uses the model itself for LLM-as-a-Judge prompting to provide its rewards during training; Iterative DPO is used for instruction following training using the preference pairs built from the generated data which comes from a self-instruction creation phase; using this approach, fine-tuning a Llama 2 70B model on three iterations can lead to a model that outperforms LLMs like Claude 2 and Gemini Pro on the AlpacaEval 2.0 leaderboard. | [Paper](https://arxiv.org/abs/2401.10020), [Tweet](https://x.com/jaseweston/status/1748158323369611577?s=20) |
| 5) **Tuning Language Models by Proxy** - introduces proxy-tuning, a decoding-time algorithm that modifies the logits of a large target LLM by adding the difference between the logits of a small fine-tuned model and those of its untuned base counterpart; this can enable a large base model to perform as well as a fine-tuned version of it would; proxy-tuning is applied to Llama2-70B using proxies of only 7B size to close 88% of the gap between Llama2-70B and its tuned chat version (see the logit-arithmetic sketch after this table). | [Paper](https://arxiv.org/abs/2401.08565), [Tweet](https://x.com/rasbt/status/1748021765790376385?s=20) |
| 6) **Reasoning with Reinforced Fine-Tuning** - proposes an approach, ReFT, to enhance the generalizability of LLMs for reasoning; it starts with applying SFT and then applies online RL for further refinement while automatically sampling reasoning paths to learn from; this differs from RLHF in that it doesn’t utilize a reward model learned from human-labeled data; ReFT demonstrates improved performance and generalization abilities on math problem-solving. | [Paper](https://arxiv.org/abs/2401.08967), [Tweet](https://x.com/_akhaliq/status/1747820246268887199?s=20) |
| 7) **Overview of LLMs for Evaluation** - thoroughly surveys the methodologies and explores their strengths and limitations; provides a taxonomy of different approaches involving prompt engineering or calibrating open-source LLMs for evaluation. | [Paper](https://arxiv.org/abs/2401.07103), [Tweet](https://x.com/omarsar0/status/1748016227090305167?s=20) |
| 8) **Patchscopes** - proposes a framework that leverages a model itself to explain its internal representations; it decodes information from LLM hidden representations which is possible by “patching” representations into a separate inference pass that encourages the extraction of that information; it can be used to answer questions about an LLM’s computation and can even be used to fix latent multi-hop reasoning errors. | [Paper](https://arxiv.org/abs/2401.06102), [Tweet](https://x.com/ghandeharioun/status/1746946621215003041?s=20) |
| 9) **The Unreasonable Effectiveness of Easy Training Data for Hard Tasks** - suggests that language models often generalize well from easy to hard data, i.e., easy-to-hard generalization; it argues that it can be better to train on easy data as opposed to hard data, even when the emphasis is on improving performance on hard data, and suggests that the scalable oversight problem may be easier than previously thought. | [Paper](https://arxiv.org/abs/2401.06751), [Tweet](https://x.com/peterbhase/status/1747301128683839998?s=20) |
| 10) **MoE-Mamba** - an approach to efficiently scale LLMs by combining state space models (SSMs) with Mixture of Experts (MoE); MoE-Mamba outperforms both Mamba and Transformer-MoE; it reaches the same performance as Mamba in 2.2x fewer training steps while preserving the inference performance gains of Mamba over the Transformer. | [Paper](https://arxiv.org/abs/2401.04081), [Tweet](https://x.com/arankomatsuzaki/status/1744552215946100969?s=20) |
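The decoding-time arithmetic behind proxy-tuning (item 5 above) can be sketched in a few lines: shift the large base model's next-token logits by the difference between a small tuned "expert" and its untuned counterpart. The model names below are placeholders and the three models are assumed to share a tokenizer; this is a hedged sketch, not the paper's released code.

```python
# Hedged sketch of proxy-tuning (item 5 above):
#   logits = large_base + (small_tuned - small_base)
# Model names are placeholders; all three models must share a vocabulary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
large_base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")
small_tuned = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
small_base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

@torch.no_grad()
def proxy_tuned_next_token(input_ids: torch.Tensor) -> torch.Tensor:
    l_large = large_base(input_ids).logits[:, -1]
    l_tuned = small_tuned(input_ids).logits[:, -1]
    l_base = small_base(input_ids).logits[:, -1]
    shifted = l_large + (l_tuned - l_base)     # steer the big model with the small expert
    return torch.argmax(shifted, dim=-1)       # greedy pick from the proxy-tuned distribution
```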
## Top ML Papers of the Week (January 8 - January 14) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **InseRF** - a method for text-driven generative object insertion in neural 3D scenes; it enables users to provide textual descriptions and a 2D bounding box in a reference viewpoint to generate new objects in 3D scenes; InseRF is also capable of controllable and 3D-consistent object insertion without requiring explicit 3D information as input. | [Paper](https://arxiv.org/abs/2401.05335), [Tweet](https://x.com/_akhaliq/status/1745293576794255757?s=20) |
| 2) **Sleeper Agents** - shows that LLMs can learn deceptive behavior that persists through safety training; for instance, an LLM was trained to write secure code when prompted with a specific year but to produce exploitable code when given a different year; this backdoor behavior can persist even when training LLMs with techniques like reinforcement learning and adversarial training. | [Paper](https://arxiv.org/abs/2401.05566), [Tweet](https://x.com/AnthropicAI/status/1745854907968880970?s=20) |
| 3) **Blending Is All You Need** - shows that effectively combining existing small models of different sizes (6B/13B parameters) can result in systems that can compete with ChatGPT level performance; the goal is to build a collaborative conversational system that can effectively leverage these models to improve engagement and quality of chat AIs and generate more diverse responses. | [Paper](https://arxiv.org/abs/2401.02994), [Tweet](https://x.com/omarsar0/status/1744765981270950343?s=20) |
| 4) **MagicVideo-V2** - proposes an end-to-end video generation pipeline that integrates the text-to-image model, video motion generator, reference image embedding module, and frame interpolation module; it can generate high-resolution video with advanced fidelity and smoothness compared to other leading and popular text-to-video systems. | [Paper](https://arxiv.org/abs/2401.04468), [Tweet](https://x.com/arankomatsuzaki/status/1744918551415443768?s=20) |
| 5) **Trustworthiness in LLMs** - a comprehensive study (100+ pages) of trustworthiness in LLMs, discussing challenges, benchmarks, evaluation, analysis of approaches, and future directions; proposes a set of principles for trustworthy LLMs that span 8 dimensions, including a benchmark across 6 dimensions (truthfulness, safety, fairness, robustness, privacy, and machine ethics); it also presents a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets; while proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, there are a few open-source models that are closing the gap. | [Paper](https://arxiv.org/abs/2401.05561), [Tweet](https://x.com/omarsar0/status/1745645273915736553?s=20) |
| 6) **Prompting LLMs for Table Understanding** - a new framework, inspired by Chain-of-Thought prompting, to instruct LLMs to dynamically plan a chain of operations that transforms a complex table to reliably answer the input question; an LLM is used to iteratively generate operations, step-by-step, that will perform necessary transformations to the table (e.g., adding columns or deleting info). | [Paper](https://arxiv.org/abs/2401.04398), [Tweet](https://x.com/omarsar0/status/1745164182205452603?s=20) |
| 7) **Jailbreaking Aligned LLMs** - proposes 40 persuasion techniques to systematically jailbreak LLMs; their adversarial prompts (also referred to as persuasive adversarial prompts) achieve a 92% attack success rate on aligned LLMs, like Llama 2-7B and GPT-4, without specialized optimization. | [Paper](https://chats-lab.github.io/persuasive_jailbreaker/), [Tweet](https://x.com/EasonZeng623/status/1744719354368029008?s=20) |
| 8) **From LLM to Conversational Agents** - proposes RAISE, an advanced architecture to enhance LLMs for conversational agents; it's inspired by the ReAct framework and integrates a dual-component memory system; it utilizes a scratchpad and retrieved examples to augment the agent's capabilities; the scratchpad serves as transient storage (akin to short-term memory) and the retrieval module operates as the agent's long-term memory; this system mirrors human short-term and long-term memory and helps to maintain context and continuity which are key in conversational systems. | [Paper](https://arxiv.org/abs/2401.02777), [Tweet](https://x.com/omarsar0/status/1744400054624846269?s=20) |
| 9) **Quantifying LLM’s Sensitivity to Spurious Features in Prompt Design** - finds that widely used open-source LLMs are extremely sensitive to prompt formatting in few-shot settings; subtle changes in prompt formatting with a Llama 2 13B model can result in a performance difference of up to 76 accuracy points (a small formatting-sweep sketch follows this table). | [Paper](https://arxiv.org/abs/2310.11324), [Tweet](https://x.com/melaniesclar/status/1745557109419458695?s=20) |
| 10) **Adversarial Machine Learning** - a comprehensive survey that covers the current state of adversarial ML with a proper taxonomy of concepts, discussions, adversarial methods, mitigation tactics, and remaining challenges. | [Paper](https://csrc.nist.gov/pubs/ai/100/2/e2023/final), [Tweet](https://x.com/omarsar0/status/1745819927695540671?s=20) |
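To make the sensitivity finding in item 9 concrete, the hedged sketch below renders the same few-shot examples under several formatting choices (separators, label casing); the paper's point is that accuracy can swing dramatically across such variants. `evaluate` is a placeholder for whatever scoring loop you use, not part of the paper.

```python
# Hedged illustration of prompt-format sensitivity (item 9 above): build several
# formatting variants of the same few-shot prompt and compare downstream accuracy.
# `evaluate` is a placeholder for your own scoring function over a labeled dev set.
from itertools import product

examples = [("The movie was great.", "positive"), ("Terrible plot.", "negative")]

def render(shots, query, field_sep, pair_sep, label_case):
    body = pair_sep.join(
        f"Input{field_sep}{x} Label{field_sep}{getattr(y, label_case)()}" for x, y in shots
    )
    return f"{body}{pair_sep}Input{field_sep}{query} Label{field_sep}"

def evaluate(prompt: str) -> float:
    raise NotImplementedError("score the model with this prompt format here")

for field_sep, pair_sep, case in product([": ", " -> "], ["\n", "\n\n"], ["lower", "upper"]):
    prompt = render(examples, "I loved it!", field_sep, pair_sep, case)
    print(repr(prompt))   # pass each variant to evaluate() and compare the scores
```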
## Top ML Papers of the Week (January 1 - January 7) - 2024
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Mobile ALOHA** - proposes a system that learns bimanual mobile manipulation with low-cost whole-body teleoperation; it first collects high-quality demonstrations and then performs supervised behavior cloning; finds that co-training with existing ALOHA datasets increases performance on complex mobile manipulation tasks such as sauteing and serving a piece of shrimp and opening a two-door wall cabinet to store heavy cooking pots, all while keeping the budget under $32K. | [Paper](https://mobile-aloha.github.io/), [Tweet](https://x.com/zipengfu/status/1742973258528612724?s=20) |
| 2) **Mitigating Hallucination in LLMs** - summarizes 32 techniques to mitigate hallucination in LLMs; introduces a taxonomy categorizing methods like RAG, Knowledge Retrieval, CoVe, and more; provides tips on how to apply these methods and highlights the challenges and limitations inherent in them. | [Paper](https://arxiv.org/abs/2401.01313), [Tweet](https://x.com/omarsar0/status/1742633831234994189?s=20) |
| 3) **Self-Play Fine-tuning** - shows that without acquiring additional human-annotated data, a supervised fine-tuned LLM can be improved; inspired by self-play, it first uses the LLM to generate its training data from its previous iterations; it then refines its policy by distinguishing the self-generated responses from those obtained from human-annotated data; shows that the method can improve LLM’s performance and outperform models trained via DPO with GPT-4 preference data. | [Paper](https://arxiv.org/abs/2401.01335), [Tweet](https://x.com/_zxchen_/status/1742661587436216615?s=20) |
| 4) **LLaMA Pro** - proposes a post-pretraining method to improve an LLM’s knowledge without catastrophic forgetting; it achieves this by tuning expanded identity blocks using only the new corpus while freezing the inherited blocks; uses math and code data to train a LLaMA Pro-8.3B initialized from Llama2-7B; these models achieve advanced performance on various benchmarks compared to base models while preserving the original general capabilities. | [Paper](https://arxiv.org/abs/2401.02415), [Tweet](https://x.com/_akhaliq/status/1743135851238805685?s=20) |
| 5) **LLM Augmented LLMs** - explores composing existing foundation models with more specific models to expand capabilities; introduces cross-attention between models to compose representations that enable new capabilities; as an example, a PaLM2-S model was augmented with a smaller model trained on low-resource languages to improve English translation and arithmetic reasoning for low-resource languages; this was also done with a code-specific model, which led to a 40% improvement over the base code model on code generation and explanation tasks. | [Paper](https://arxiv.org/abs/2401.02412), [Tweet](https://x.com/omarsar0/status/1743094632618106981?s=20) |
| 6) **Fast Inference of Mixture-of-Experts** - achieves efficient inference of Mixtral-8x7B models through offloading; it applies separate quantization for attention layers and experts to fit the model in combined GPU and CPU memory; designs a MoE-specific offloading strategy that enables running Mixtral-8x7B on desktop hardware and free-tier Google Colab instances. | [Paper](https://arxiv.org/abs/2312.17238), [Tweet](https://x.com/rohanpaul_ai/status/1741044633495326861?s=20) |
| 7) **GPT-4V is a Generalist Web Agent** - explores the potential of GPT-4V as a generalist web agent; in particular, can such a model follow natural language instructions to complete tasks on a website? The authors first developed a tool to enable web agents to run on live websites; findings suggest that GPT-4V can complete 50% of tasks on live websites, made possible by manually grounding its textual plans into actions on the websites. | [Paper](https://arxiv.org/abs/2401.01614), [Tweet](https://x.com/omarsar0/status/1742923330544706035?s=20) |
| 8) **DocLLM** - a lightweight extension to traditional LLMs for reasoning over visual documents; focuses on using bounding box information to incorporate spatial layout structure; proposes a pre-training objective that addresses the irregular layout and heterogeneous content present in visual documents; the model is then fine-tuned on an instruction dataset and demonstrates SoTA performance on 14 out of 16 datasets across several document intelligence tasks. | [Paper](https://arxiv.org/abs/2401.00908), [Tweet](https://x.com/BrianRoemmele/status/1742572753251913742?s=20) |
| 9) **How Code Empowers LLMs** - a comprehensive overview of the benefits of training LLMs with code-specific data. Some capabilities include enhanced code generation, enabling reasoning, function calling, automated self-improvements, and serving intelligent agents. | [Paper](https://arxiv.org/abs/2401.00812), [Tweet](https://x.com/omarsar0/status/1742215295907811613?s=20) |
| 10) **Instruct-Imagen** - proposes an image generation model that tackles heterogeneous image generation tasks and generalizes across unseen tasks; it first enhances the model’s ability to ground its generation on external multimodal context and then fine-tunes on image generation tasks with multimodal instructions. | [Paper](https://arxiv.org/abs/2401.01952), [Tweet](https://x.com/_akhaliq/status/1743108118630818039?s=20) |
---
## Top ML Papers of the Week (December 25 - December 31)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **CogAgent** - presents an 18 billion parameter visual language model specializing in GUI understanding and navigation; supports high-resolution inputs (1120x1120) and shows abilities in tasks such as visual Q&A, visual grounding, and GUI Agent; achieves state of the art on 5 text-rich and 4 general VQA benchmarks. | [Paper](https://arxiv.org/abs/2312.08914), [Tweet](https://x.com/cenyk1230/status/1739916469272789222?s=20) |
| 2) **From Gemini to Q-Star** - surveys 300+ papers and summarizes research developments to look at in the space of Generative AI; it covers computational challenges, scalability, real-world implications, and the potential for Gen AI to drive progress in fields like healthcare, finance, and education. | [Paper](https://arxiv.org/abs/2312.10868), [Tweet](https://x.com/omarsar0/status/1740119485011390558?s=20) |
| 3) **PromptBench** - a unified library that supports comprehensive evaluation and analysis of LLMs; it consists of functionalities for prompt construction, prompt engineering, dataset and model loading, adversarial prompt attack, dynamic evaluation protocols, and analysis tools. | [Paper](https://arxiv.org/abs/2312.07910v1), [Tweet](https://x.com/omarsar0/status/1739360426134028631?s=20) |
| 4) **Exploiting Novel GPT-4 APIs** - performs red-teaming on three functionalities exposed in the GPT-4 APIs: fine-tuning, function calling, and knowledge retrieval; Main findings: 1) fine-tuning on as few as 15 harmful examples or 100 benign examples can remove core safeguards from GPT-4, 2) GPT-4 Assistants divulge the function call schema and can be made to execute arbitrary function calls, and 3) knowledge retrieval can be hijacked by injecting instructions into retrieval documents. | [Paper](https://arxiv.org/abs/2312.14302), [Tweet](https://x.com/omarsar0/status/1739677995747450964?s=20) |
| 5) **Fact Recalling in LLMs** - investigates how MLP layers implement a lookup table for factual recall; scopes the study on how early MLPs in Pythia 2.8B look up which of 3 different sports various athletes play; suggests that early MLP layers act as a lookup table and recommends thinking about the recall of factual knowledge in the model as multi-token embeddings. | [Paper](https://www.alignmentforum.org/s/hpWHhjvjn67LJ4xXX/p/iGuwZTHWb6DFY3sKB), [Tweet](https://x.com/NeelNanda5/status/1738559368361349122?s=20) |
| 6) **Generative AI for Math** - presents a diverse and high-quality math-centric corpus comprising ~9.5 billion tokens to train foundation models. | [Paper](https://arxiv.org/abs/2312.17120), [Tweet](https://x.com/arankomatsuzaki/status/1740564961032556942?s=20) |
| 7) **Principled Instructions Are All You Need** - introduces 26 guiding principles designed to streamline the process of querying and prompting large language models; applies these principles to conduct extensive experiments on LLaMA-1/2 (7B, 13B and 70B) and GPT-3.5/4 to verify their effectiveness on instruction and prompt design. | [Paper](https://arxiv.org/abs/2312.16171v1), [Tweet](https://x.com/_akhaliq/status/1739857456161759455?s=20) |
| 8) **A Survey of Reasoning with Foundation Models** - provides a comprehensive survey of seminal foundational models for reasoning, highlighting the latest advancements in various reasoning tasks, methods, benchmarks, and potential future directions; also discusses how other developments like multimodal learning, autonomous agents, and super alignment accelerate and extend reasoning research. | [Paper](https://arxiv.org/abs/2312.11562v4), [Tweet](https://x.com/omarsar0/status/1740729489661874632?s=20) |
| 9) **Making LLMs Better at Dense Retrieval** - proposes LLaRA, which adapts an LLM for dense retrieval; it consists of two pretext tasks: EBAE (Embedding-Based Auto-Encoding) and EBAR (Embedding-Based Auto-Regression), where the text embeddings from the LLM are used to reconstruct the tokens of the input sentence and predict the tokens of the next sentence, respectively; a LLaMA-2-7B trained this way shows improvements on benchmarks like MS MARCO and BEIR. | [Paper](https://arxiv.org/abs/2312.15503v1) |
| 10) **Gemini vs GPT-4V** - provides a comprehensive preliminary comparison and combination of vision-language models like Gemini and GPT-4V through several qualitative cases; finds that GPT-4V is precise and succinct in responses, while Gemini excels in providing detailed, expansive answers accompanied by relevant imagery and links. | [Paper](https://arxiv.org/abs/2312.15011v1), [Tweet](https://x.com/omarsar0/status/1741177994377330895?s=20) |
---
## Top ML Papers of the Week (December 18 - December 24)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Gemini’s Language Abilities** - provides an impartial and reproducible study comparing several popular models like Gemini, GPT, and Mixtral; Gemini Pro achieves comparable but slightly lower accuracy than the current version of GPT 3.5 Turbo; Gemini and GPT were better than Mixtral. | [Paper](https://arxiv.org/abs/2312.11444), [Tweet](https://x.com/gneubig/status/1737108966931673191?s=20)|
| 2) **PowerInfer** - a high-speed inference engine for deploying LLMs locally; exploits the high locality in LLM inference to design a GPU-CPU hybrid inference engine; hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons (the majority) are computed on the CPU; this approach significantly reduces GPU memory demands and CPU-GPU data transfer. | [Paper](https://ipads.se.sjtu.edu.cn/_media/publications/powerinfer-20231219.pdf), [Tweet](https://x.com/omarsar0/status/1737168751668187229?s=20)|
| 3) **Discovery of a New Family of Antibiotics with Graph Deep Learning** - discovers a new structural class of antibiotics using explainable graph algorithms; the approach enables explainable, deep-learning-guided discovery of structural classes of antibiotics, which helps to surface the chemical substructures that underlie antibiotic activity. | [Paper](https://www.nature.com/articles/s41586-023-06887-8), [Tweet](https://x.com/EricTopol/status/1737505177052348545?s=20)|
| 4) **VideoPoet** - introduces a large language model for zero-shot video generation; it’s capable of a variety of video generation tasks such as image-to-video and video stylization; trains an autoregressive model to learn across video, image, audio, and text modalities by using multiple tokenizers; shows that language models can synthesize and edit video with some degree of temporal consistency. | [Paper](https://sites.research.google/videopoet/), [Tweet](https://x.com/GoogleAI/status/1737235593078456389?s=20) |
| 5) **Multimodal Agents as Smartphone Users** - introduces an LLM-based multimodal agent framework to operate smartphone applications; learns to navigate new apps through autonomous exploration or observing human demonstrations; shows proficiency in handling diverse tasks across different applications like email, social media, shopping, editing tools, and more. | [Paper](https://arxiv.org/abs/2312.13771), [Tweet](https://x.com/omarsar0/status/1738265651188253051?s=20) |
| 6) **LLM in a Flash** - proposes an approach that efficiently runs LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory and bringing them on demand to DRAM; enables running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches on CPU and GPU, respectively. | [Paper](https://arxiv.org/abs/2312.11514), [Tweet](https://x.com/gabrielnocode/status/1737307286887133552?s=20) |
| 7) **ReST Meets ReAct** - proposes a ReAct-style agent with self-critique for improving on the task of long-form question answering; it shows that the agent can be improved through ReST-style (reinforced self-training) iterative fine-tuning on its reasoning traces; specifically, it uses growing-batch RL with AI feedback for continuous self-improvement and self-distillation; like a few other recent papers, it focuses on minimizing human involvement (i.e., it doesn't rely on human-labeled training data); it generates synthetic data with self-improvement from AI feedback, which can then be used to distill the agent into smaller models (1-2 orders of magnitude smaller) with performance comparable to that of the pre-trained agent. | [Paper](https://arxiv.org/abs/2312.10003), [Tweet](https://x.com/omarsar0/status/1736587397830176910?s=20) |
| 8) **Adversarial Attacks on GPT-4** - uses a simple random search algorithm to implement adversarial attacks on GPT-4; it achieves jailbreaking by appending an adversarial suffix to an original request, then iteratively making slight random changes to the suffix, and keeping changes if they increase the log probability of the token “Sure” at the first position of the response. | [Paper](https://www.andriushchenko.me/gpt4adv.pdf), [Tweet](https://x.com/maksym_andr/status/1737844601891983563?s=20) |
| 9) **RAG for LLMs** - an overview of all the retrieval augmented generation (RAG) research that has been happening. | [Paper](https://arxiv.org/abs/2312.10997v1), [Tweet](https://x.com/omarsar0/status/1738354427759612222?s=20) |
| 10) **Findings of the BabyLM Challenge** - presents results for a new challenge that involves sample-efficient pretraining on a developmentally plausible corpus; the winning submission, which uses a flashy LTG BERT, beat Llama 2 70B on 3/4 evals; other approaches that saw good results included data preprocessing or training on shorter context. | [Paper](https://aclanthology.org/volumes/2023.conll-babylm/), [Tweet](https://x.com/a_stadt/status/1737849248560066794?s=20) |
---
## Top ML Papers of the Week (December 11 - December 17)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **LLMs for Discoveries in Mathematical Sciences** - uses LLMs to search for new solutions in mathematics & computer science; proposes FunSearch which combines a pre-trained LLM with a systematic evaluator and iterates over them to evolve low-scoring programs into high-scoring ones discovering new knowledge; one of the key findings in this work is that safeguarding against LLM hallucinations is important to produce mathematical discoveries and other real-world problems. | [Paper](https://www.nature.com/articles/s41586-023-06924-6), [Tweet](https://x.com/GoogleDeepMind/status/1735332722208284797?s=20) |
| 2) **Weak-to-strong Generalization** - studies whether weak model supervision can elicit the full capabilities of stronger models; finds that strong pretrained models naively fine-tuned on weak-model-generated labels can perform better than their weak supervisors; reports that by fine-tuning GPT-4 with a GPT-2-level supervisor it is possible to recover close to GPT-3.5-level performance on NLP tasks (a toy illustration of the protocol follows this table). | [Paper](https://cdn.openai.com/papers/weak-to-strong-generalization.pdf), [Tweet](https://x.com/OpenAI/status/1735349718765715913?s=20) |
| 3) **Audiobox** - a unified model based on flow-matching capable of generating various audio modalities; designs description-based and example-based prompting to enhance controllability and unify speech and sound generation paradigms; adapts a self-supervised infilling objective to pre-train on large quantities of unlabeled audio; performs well on speech and sound generation and unlocks new methods for generating audio with novel vocal and acoustic styles. | [Paper](https://ai.meta.com/research/publications/audiobox-unified-audio-generation-with-natural-language-prompts/), [Tweet](https://x.com/AIatMeta/status/1734257634008531453?s=20) |
| 4) **Mathematical LLMs** - a survey on the progress of LLMs on mathematical tasks; covers papers and resources on LLM research around prompting techniques and tasks such as math word problem-solving and theorem proving. | [Paper](https://arxiv.org/abs/2312.07622), [Tweet](https://x.com/omarsar0/status/1735323577392542084?s=20) |
| 5) **Towards Fully Transparent Open-Source LLMs** - proposes LLM360 to support open and collaborative AI research by making the end-to-end LLM training process transparent and reproducible; releases 7B parameter LLMs pre-trained from scratch, AMBER and CRYSTALCODER, including their training code, data, intermediate checkpoints, and analyses. | [Paper](https://arxiv.org/abs/2312.06550), [Tweet](https://x.com/omarsar0/status/1734591071575744820?s=20) |
| 6) **LLMs in Medicine** - a comprehensive survey (analyzing 300+ papers) on LLMs in medicine; includes an overview of the principles, applications, and challenges faced by LLMs in medicine. | [Paper](https://arxiv.org/abs/2311.05112), [Tweet](https://x.com/omarsar0/status/1734599425568231513?s=20) |
| 7) **Beyond Human Data for LLMs** - proposes an approach for self-training with feedback that can substantially reduce dependence on human-generated data; the model-generated data combined with a reward function improves the performance of LLMs on problem-solving tasks. | [Paper](https://arxiv.org/abs/2312.06585), [Tweet](https://x.com/omarsar0/status/1734953578274386002?s=20) |
| 8) **Gaussian-SLAM** - a neural RGBD SLAM method capable of photorealistically reconstructing real-world scenes without compromising speed and efficiency; extends classical 3D Gaussians for scene representation to overcome the limitations of the previous methods. | [Paper](https://vladimiryugay.github.io/gaussian_slam/), [Tweet](https://x.com/vlyug/status/1734683948440252480?s=20) |
| 9) **Pearl** - introduces a new production-ready RL agent software package that enables researchers and practitioners to develop RL AI agents that adapt to environments with limited observability, sparse feedback, and high stochasticity. | [Paper](https://arxiv.org/abs/2312.03814), [Tweet](https://x.com/ZheqingZhu/status/1732880717263352149?s=20) |
| 10) **QuIP#** - compresses trained model weights into a lower precision format to reduce memory requirements; the approach combines lattice codebooks with incoherence processing to create 2-bit quantized models; significantly closes the gap between 2-bit quantized LLMs and unquantized 16-bit models. | [Paper](https://cornell-relaxml.github.io/quip-sharp/), [Tweet](https://x.com/tsengalb99/status/1733222467953422702?s=20) |
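As a toy illustration of the weak-to-strong protocol in item 2, the sketch below trains a small "weak" model on a little ground truth, uses its labels to supervise a larger "strong" model, and compares both on held-out data. It uses scikit-learn on synthetic data purely to show the shape of the experiment; the paper's setting uses GPT-2-level supervisors and much larger students.

```python
# Toy, hedged analogue of weak-to-strong supervision (item 2 above), using
# scikit-learn on synthetic data only to illustrate the protocol.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, train_size=500, random_state=0)
X_unlab, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=1000, random_state=0)

weak = LogisticRegression(max_iter=1000).fit(X_sup, y_sup)   # weak supervisor on scarce ground truth
pseudo_labels = weak.predict(X_unlab)                        # weak labels for "unlabeled" data
strong = GradientBoostingClassifier().fit(X_unlab, pseudo_labels)  # naive training of the strong student

print("weak supervisor accuracy:", weak.score(X_test, y_test))
print("strong student accuracy :", strong.score(X_test, y_test))
```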
---
## Top ML Papers of the Week (December 4 - December 10)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Gemini** - a series of multimodal models with multimodal reasoning capabilities across text, images, video, audio, and code; claims to outperform human experts on MMLU, a popular benchmark to test the knowledge and problem-solving abilities of AI models; capabilities reported include multimodality, multilinguality, factuality, summarization, math/science, long-context, reasoning, and more. | [Paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf), [Tweet](https://x.com/omarsar0/status/1732434324291563831?s=20) |
| 2) **EfficientSAM** - a lightweight Segment Anything Model (SAM) that exhibits decent performance with largely reduced complexity; leverages masked autoencoders with 20x fewer parameters and 20x faster runtime; EfficientSAM performs within 2 points (44.4 AP vs 46.5 AP) of the original SAM model. | [Paper](https://arxiv.org/abs/2312.00863), [Tweet](https://x.com/fiandola/status/1732171016783180132?s=20) |
| 3) **Magicoder** - a series of fully open-source LLMs for code that close the gap with top code models while having no more than 7B parameters; trained on 75K synthetic instruction data; uses open-source references for the production of more diverse, realistic, high-quality, and controllable data; outperforms state-of-the-art code models with similar or even larger sizes on several coding benchmarks, including Python text-to-code generation, multilingual coding, and data-science program completion; MagicoderS-CL-7B based on CodeLlama surpasses ChatGPT on HumanEval+ (66.5 vs. 65.9 in pass@1). | [Paper](https://arxiv.org/abs/2312.02120), [Tweet](https://x.com/omarsar0/status/1732063926613946863?s=20) |
| 4) **LLMs on Graphs** - a comprehensive overview that summarizes different scenarios where LLMs are used on graphs such as pure graphs, text-rich graphs, and text-paired graphs. | [Paper](https://arxiv.org/abs/2312.02783), [Tweet](https://x.com/omarsar0/status/1732404393037762588?s=20) |
| 5) **Llama Guard** - an LLM-based safeguard model that involves a small (Llama2-7B) customizable instruction-tuned model that can classify safety risks in prompts and responses for conversational AI agent use cases; the model can be leveraged in a zero-shot or few-shot way if you need to adapt it to a different safety risk taxonomy that meets the requirements for a target use case; it can also be fine-tuned on a specific dataset to adapt to a new taxonomy. | [Paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), [Tweet](https://x.com/omarsar0/status/1732781628139696279?s=20) |
| 6) **Human-Centered Loss Functions** - proposes an approach called Kahneman-Tversky Optimization (KTO) that matches or exceeds the performance of DPO-based methods at scales from 1B to 30B; KTO maximizes the utility of LLM generations instead of maximizing the log-likelihood of preferences as most current methods do. | [Paper](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf), [Tweet](https://x.com/ethayarajh/status/1732837520784957476?s=20) |
| 7) **Chain of Code** - a simple extension of the chain-of-thought approach that improves LM code-driven reasoning; it encourages LMs to format semantic sub-tasks in a program as pseudocode so that the interpreter can explicitly catch undefined behavior and hand it off to be simulated with an LLM (a minimal sketch of this fallback follows this table); on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. | [Paper](https://arxiv.org/abs/2312.04474), [Tweet](https://x.com/ChengshuEricLi/status/1733169631949701425?s=20) |
| 8) **Data Management For LLMs** - an overview of current research in data management within both the pretraining and supervised fine-tuning stages of LLMs; it covers different aspects of data management strategy design: data quantity, data quality, domain/task composition, and more. | [Paper](https://arxiv.org/abs/2312.01700), [Tweet](https://x.com/omarsar0/status/1731877232493166969?s=20) |
| 9) **RankZephyr** - an open-source LLM for listwise zero-shot reranking that bridges the effectiveness gap with GPT-4 and in some cases surpasses the proprietary model; it outperforms GPT-4 on the NovelEval test set, comprising queries and passages past its training period, which addresses concerns about data contamination. | [Paper](https://arxiv.org/abs/2312.02724), [Tweet](https://x.com/lintool/status/1732430269485867114?s=20) |
| 10) **The Efficiency Spectrum of LLMs** - a comprehensive review of algorithmic advancements aimed at improving LLM efficiency; covers various topics related to efficiency, including scaling laws, data utilization, architectural innovations, training and tuning strategies, and inference techniques. | [Paper](https://arxiv.org/abs/2312.00678), [Tweet](https://x.com/omarsar0/status/1731696419457606048?s=20) |
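The interpreter/LM handoff that Chain of Code relies on (item 7 above) can be sketched as a tiny "LMulator" loop: execute each line with Python when possible, and ask an LM to simulate the state update when the interpreter cannot run it. `llm_simulate` is a placeholder, and catching every exception is a simplification of the paper's notion of undefined behavior.

```python
# Hedged sketch of a Chain-of-Code style "LMulator" (item 7 above): run code with
# the Python interpreter where possible, fall back to an LM for semantic sub-tasks.
# `llm_simulate` is a placeholder that should return updated variable bindings.
def llm_simulate(line: str, state: dict) -> dict:
    raise NotImplementedError("ask an LLM what this line would do to `state`")

def run_program(lines, state=None):
    state = dict(state or {})
    for line in lines:
        try:
            exec(line, {}, state)                      # interpreter handles well-defined code
        except Exception:
            state.update(llm_simulate(line, state))    # LM simulates what the interpreter cannot
    return state
```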
---
## Top ML Papers of the Week (November 27 - December 3)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **GNoME** - a new AI system for material design that finds 2.2 million new crystals, including 380,000 stable materials; presents a new deep learning tool that increases the speed and efficiency of discovery by predicting the stability of new materials. | [Paper](https://www.nature.com/articles/s41586-023-06735-9), [Tweet](https://x.com/demishassabis/status/1729995611443769823?s=20) |
| 2) **Open-Source LLMs vs. ChatGPT** - provides an exhaustive overview of tasks where open-source LLMs claim to be on par or better than ChatGPT. | [Paper](https://arxiv.org/abs/2311.16989), [Tweet](https://x.com/sophiamyang/status/1730108858889097710?s=20) |
| 3) **Adversarial Diffusion Distillation** - a novel training approach that efficiently samples large-scale foundation image diffusion models in just 1-4 steps while maintaining high image quality; combines score distillation and an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps; reaches performance of state-of-the-art diffusion models in only four steps. | [Paper](https://stability.ai/research/adversarial-diffusion-distillation), [Tweet](https://x.com/robrombach/status/1729590281647870342?s=20) |
| 4) **Seamless** - a family of research models that enable end-to-end expressive cross-lingual communication in a streaming fashion; introduces an improved SeamlessM4T model trained on more low-resource language data; also applies a red-teaming effort for safer multimodal machine translation. | [Paper](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/), [Tweet](https://x.com/AIatMeta/status/1730294284023427221?s=20) |
| 5) **MEDITRON-70B** - a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain; builds on Llama-2 and extends pretraining on a curated medical corpus; MEDITRON-70B outperforms GPT-3.5 and Med-PaLM and is within 5% of GPT-4 and 10% of Med-PaLM-2. | [Paper](https://arxiv.org/abs/2311.16079v1), [Tweet](https://x.com/eric_zemingchen/status/1729563855213175010?s=20) |
| 6) **Foundation Models Outcompeting Special-Purpose Tuning** - performs a systematic exploration of prompt engineering to boost the performance of LLMs on medical question answering; uses prompt engineering methods that are general purpose and make no use of domain expertise; prompt engineering led to enhancing GPT-4’s performance and achieves state-of-the-art results on nine benchmark datasets in the MultiMedQA suite. | [Paper](https://arxiv.org/abs/2311.16452), [Tweet](https://x.com/erichorvitz/status/1729854235443884385?s=20) |
| 7) **UniIR** - a unified instruction-guided multimodal retriever that handles eight retrieval tasks across modalities; can generalize to unseen retrieval tasks and achieves robust performance across existing datasets and zero-shot generalization to new tasks; presents a multimodal retrieval benchmark to help standardize the evaluation of multimodal information retrieval. | [Paper](https://arxiv.org/abs/2311.17136), [Tweet](https://x.com/CongWei1230/status/1730307767469068476?s=20) |
| 8) **Safe Deployment of Generative AI** - argues that to protect people’s privacy, medical professionals, not commercial interests, must drive the development and deployment of such models. | [Paper](https://www.nature.com/articles/d41586-023-03803-y), [Tweet](https://x.com/ClementDelangue/status/1730300666403238393?s=20) |
| 9) **On Bringing Robots Home** - introduces Dobb-E, an affordable and versatile general-purpose system for learning robotic manipulation within household settings; Dobb-E can learn new tasks with only 5 minutes of user demonstrations; experiments reveal unique challenges absent or ignored in lab robotics, including the effects of strong shadows and variable demonstration quality from non-expert users. | [Paper](https://arxiv.org/abs/2311.16098v1), [Tweet](https://x.com/LerrelPinto/status/1729515379892826211?s=20) |
| 10) **Translatotron 3** - proposes an unsupervised approach to speech-to-speech translation that can learn from monolingual data alone; combines a masked autoencoder, unsupervised embedding mapping, and back-translation; results show that the model outperforms a baseline cascade system and showcases its capability to retain para-/non-linguistic information such as pauses, speaking rates, and speaker identity. | [Paper](https://arxiv.org/abs/2305.17547), [Tweet](https://x.com/GoogleAI/status/1730654297350959413?s=20) |
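
For the Adversarial Diffusion Distillation entry (3) above, the training signal described in the summary can be written as a weighted combination of the two losses. This is a hedged sketch of the generic form, not the paper's exact notation or weighting:

```latex
% Sketch (assumed generic form): the distilled student minimizes an adversarial
% loss plus a score-distillation loss against the frozen teacher diffusion
% model, traded off by a weight \lambda.
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{adv}} + \lambda\,\mathcal{L}_{\mathrm{distill}}
```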
---
## Top ML Papers of the Week (November 20 - November 26)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **System 2 Attention** - leverages the reasoning and instruction-following capabilities of LLMs to decide what to attend to; it regenerates the input context to only include the relevant portions before attending to the regenerated context to elicit the final response from the model; increases factuality and outperforms standard attention-based LLMs on tasks such as QA and math word problems (a two-pass sketch follows this table). | [Paper](https://arxiv.org/abs/2311.11829), [Tweet](https://x.com/jaseweston/status/1726784511357157618?s=20) |
| 2) **Advancing Long-Context LLMs** - an overview of the methodologies for enhancing Transformer architecture modules that optimize long-context capabilities across all stages from pre-training to inference. | [Paper](https://arxiv.org/abs/2311.12351), [Tweet](https://x.com/omarsar0/status/1727358484360945750?s=20) |
| 3) **Parallel Speculative Sampling** - an approach to reduce the inference time of LLMs based on a variant of speculative sampling and parallel decoding; achieves significant speed-ups (up to 30%) while learning only O(d_emb) additional parameters. | [Paper](https://arxiv.org/abs/2311.13581), [Tweet](https://x.com/omarsar0/status/1728066181796418009?s=20) |
| 4) **Mirasol3B** - a multimodal model for learning across audio, video, and text which decouples the multimodal modeling into separate, focused autoregressive models; the inputs are processed according to their modalities; this approach can handle longer videos compared to other models and outperforms state-of-the-art approaches on video QA, long video QA, and an audio-video-text benchmark. | [Paper](https://arxiv.org/abs/2311.05698), [Tweet](https://x.com/GoogleAI/status/1724553024088191211?s=20) |
| 5) **Teaching Small LMs To Reason** - proposes an approach to teach smaller language models to reason; specifically, the LM is taught to use reasoning techniques such as step-by-step processing, recall-then-generate, recall-reason-generate, extract-generate, and direct-answer methods; outperforms models of similar size and attains performance levels similar to or better than models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. | [Paper](https://arxiv.org/abs/2311.11045), [Tweet](https://x.com/omarsar0/status/1726990087399915995?s=20) |
| 6) **GPQA** - proposes a graduate-level Google-proof QA benchmark consisting of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry; the strongest GPT-4 based baseline achieves 39% accuracy; this benchmark offers scalable oversight experiments that can help obtain reliable and truthful information from modern AI systems that surpass human capabilities.| [Paper](https://arxiv.org/abs/2311.12022), [Tweet](https://x.com/idavidrein/status/1727033002234909060?s=20) |
| 7) **The Hitchhiker’s Guide From Chain-of-Thought Reasoning to Language Agents** - summary of CoT reasoning, foundational mechanics underpinning CoT techniques, and their application to language agent frameworks. | [Paper](https://arxiv.org/abs/2311.11797), [Tweet](https://x.com/omarsar0/status/1726803725220487277?s=20) |
| 8) **GAIA** - a benchmark for general AI assistants consisting of real-world questions that require a set of fundamental abilities such as reasoning, multimodal handling, web browsing, and general tool-use proficiency; shows that human respondents score 92% vs. 15% for GPT-4 equipped with plugins. | [Paper](https://arxiv.org/abs/2311.12983), [Tweet](https://x.com/ThomasScialom/status/1727683993045201339?s=20) |
| 9) **LLMs as Collaborators for Medical Reasoning** - proposes a collaborative multi-round framework for the medical domain that leverages role-playing LLM-based agents to enhance LLM proficiency and reasoning capabilities. | [Paper](https://arxiv.org/abs/2311.10537), [Tweet](https://x.com/omarsar0/status/1726627951582511135?s=20) |
| 10) **TÜLU 2** - presents a suite of improved TÜLU models for advancing the understanding and best practices of adapting pretrained language models to downstream tasks and user preferences; TÜLU 2 suite achieves state-of-the-art performance among open models and matches or exceeds the performance of GPT-3.5-turbo-0301 on several benchmarks. | [Paper](https://arxiv.org/abs/2311.10702), [Tweet](https://x.com/natolambert/status/1727350301131518454?s=20) |
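
For entry (1) above, System 2 Attention reduces to two LLM calls: one to regenerate the context and one to answer from it. Below is a minimal sketch assuming a generic `llm(prompt) -> str` callable and paraphrased prompts, neither of which is taken verbatim from the paper:

```python
def system2_attention(llm, context, question):
    """Two-pass prompting in the spirit of System 2 Attention (S2A)."""
    # Pass 1: regenerate the context, keeping only material relevant to the
    # question and dropping distracting or opinionated text.
    regenerate_prompt = (
        "Extract the parts of the following context that are relevant to the "
        "question, removing irrelevant or opinionated text.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n\nRelevant context:"
    )
    filtered_context = llm(regenerate_prompt)

    # Pass 2: answer using only the regenerated context.
    answer_prompt = f"Context:\n{filtered_context}\n\nQuestion: {question}\nAnswer:"
    return llm(answer_prompt)
```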
---
## Top ML Papers of the Week (November 13 - November 19)
| **Paper** | **Links** |
| ------------- | ------------- |
| 1) **Emu Video and Emu Edit** - present new models for controlled image editing and text-to-video generation based on diffusion models; Emu Video can generate high-quality video by using text-only, image-only, or combined text and image inputs; Emu Edit enables free-form editing through text instructions. | [Paper](https://ai.meta.com/blog/emu-text-to-video-generation-image-editing-research/), [Tweet](https://x.com/AIatMeta/status/1725184026154349007?s=20) |
| 2) **Chain-of-Note** - an approach to improve the robustness and reliability of retrieval-augmented language models when facing noisy, irrelevant documents and when handling unknown scenarios; CoN generates sequential reading notes for the retrieved documents, enabling an evaluation of their relevance to the given question and integrating this information to formulate the final answer; CoN significantly outperforms standard retrieval-augmented language models and achieves an average improvement of +7.9 in EM score given entirely noisy retrieved documents and +10.5 in rejection rates for real-time questions that fall outside the pre-training knowledge scope (a minimal note-taking sketch follows this table). | [Paper](https://arxiv.org/abs/2311.09210), [Tweet](https://x.com/omarsar0/status/1725181141693472959?s=20) |
| 3) **LLMs for Scientific Discovery** - explores the impact of large language models, particularly GPT-4, across various scientific fields including drug discovery, biology, and computational chemistry; assesses GPT-4's understanding of complex scientific concepts, its problem-solving capabilities, and its potential to advance scientific research through expert-driven case assessments and benchmark testing. | [Paper](https://arxiv.org/abs/2311.07361), [Tweet](https://x.com/omarsar0/status/1724465107046940893?s=20) |
| 4) **Fine-Tuning LLMs for Factuality** - fine-tunes language model for factuality without requiring human labeling; it learns from automatically generated factuality preference rankings and targets open-ended generation settings; it significantly improves the factuality of Llama-2 on held-out topics compared with RLHF or decoding strategies targeted at factuality. | [Paper](https://arxiv.org/abs/2311.08401), [Tweet](https://x.com/arankomatsuzaki/status/1724613041155608951?s=20) |
| 5) **Contrastive CoT Prompting** - proposes a contrastive chain of thought method to enhance language model reasoning; the approach provides both valid and invalid reasoning demonstrations, to guide the model to reason step-by-step while reducing reasoning mistakes; also proposes an automatic method to construct contrastive demonstrations and demonstrates improvements over CoT prompting. | [Paper](https://arxiv.org/abs/2311.09277), [Tweet](https://x.com/arankomatsuzaki/status/1725340150819905723?s=20) |
| 6) **A Survey on Language Models for Code** - provides an overview of LLMs for code, including a review of 50+ models, 30+ evaluation tasks, and 500 related works. | [Paper](https://arxiv.org/abs/2311.07989v1), [Tweet](https://x.com/omarsar0/status/1725637165256761553?s=20) |
| 7) **JARVIS-1** - an open-world agent that can perceive multimodal input (visual observations and human instructions), generate sophisticated plans, and perform embodied control, all within the open-world Minecraft environment; built on top of memory-augmented multimodal language models. | [Paper](https://arxiv.org/abs/2311.05997), [Tweet](https://x.com/arankomatsuzaki/status/1723882043514470629?s=20) |
| 8) **Learning to Filter Context for RAG** - proposes a method that improves the quality of the context provided to the generator via two steps: 1) identifying useful context based on lexical and information-theoretic approaches, and 2) training context filtering models that can filter retrieved contexts at inference; outperforms existing approaches on extractive question answering | [Paper](https://arxiv.org/abs/2311.08377v1), [Tweet](https://x.com/ZhiruoW/status/1724792850079252886?s=20) |
| 9) **MART** - proposes an approach for improving LLM safety with multi-round automatic red-teaming; incorporates automatic adversarial prompt writing and safe response generation, which increases red-teaming scalability and the safety of LLMs; violation rate of an LLM with limited safety alignment reduces up to 84.7% after 4 rounds of MART, achieving comparable performance to LLMs with extensive adversarial prompt writing. | [Paper](https://arxiv.org/abs/2311.07689), [Tweet](https://x.com/AIatMeta/status/1724887918685425829?s=20) |
| 10) **LLMs can Deceive Users** - explores the use of an autonomous stock trading agent powered by LLMs; finds that the agent acts upon insider tips and hides the reason behind the trading decision; shows that helpful and safe LLMs can strategically deceive users in a realistic situation without direct instructions or training for deception. | [Paper](https://arxiv.org/abs/2311.07590), [Tweet](https://x.com/ESYudkowsky/status/1725226563992715521?s=20) |
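
For entry (2) above, the Chain-of-Note idea can be approximated with one note-taking call per retrieved document followed by an answer call. A minimal sketch, assuming a generic `llm(prompt) -> str` callable and prompts of my own wording rather than the paper's:

```python
def chain_of_note(llm, question, documents):
    """Write a short reading note per retrieved document, then answer from the
    notes, or abstain when none of the documents is relevant."""
    notes = []
    for i, doc in enumerate(documents, start=1):
        note = llm(
            f"Question: {question}\nDocument {i}: {doc}\n"
            "Write a brief note on whether and how this document helps answer the question."
        )
        notes.append(f"Note {i}: {note}")
    return llm(
        f"Question: {question}\n" + "\n".join(notes) + "\n"
        "Using only the notes above, answer the question, or reply 'unknown' "
        "if the documents do not contain the answer."
    )
```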
---
## Top ML Papers of the Week (November 6 - November 12)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| 1) **Hallucination in LLMs** - a comprehensive survey | [Paper](https://arxiv.org/abs/2311.05232), [Tweet](https://x.com/omarsar0/status/1722985251129966705?s=20) |
| 2) **Simplifying Transformer Blocks** - explores simplifying the transformer block and finds that many block components can be removed with no loss of training speed; using different architectures like autoregressive decoder-only and BERT encoder-only models, the simplified blocks emulate per-update training speed and performance of standard transformers, and even achieve 15% faster training throughput with fewer parameters | [Paper](https://arxiv.org/abs/2311.01906), [Tweet](https://x.com/maksym_andr/status/1722235666724192688?s=20) |
| 3) **Understanding In-Context Learning Abilities in Transformers** - investigates how effectively transformers can bridge between pretraining data mixture to identify and learn new tasks in-context which are both inside and outside the pretraining distribution; in the regimes studied, there is limited evidence that the models’ in-context learning behavior is capable of generalizing beyond their pretraining data. | [Paper](https://arxiv.org/abs/2311.00871), [Tweet](https://x.com/abacaj/status/1721223737729581437?s=20) |
| 4) **MusicGen** - a single-stage transformer-based LLM that operates over several streams of compressed discrete music representation; it can generate high-quality music samples conditioned on a textual description or melodic features. | [Paper](https://arxiv.org/abs/2306.05284), [Tweet](https://x.com/AIatMeta/status/1723043913638810025?s=20) |
| 5) **AltUp** - a method that makes it possible to take advantage of increasing scale and capacity in Transformer models without increasing the computational cost; achieved by working on a subblock of the widened representation at each layer and using a predict-and-correct mechanism to update the inactivated blocks; it widens the learned representation while only incurring a negligible increase in latency. | [Paper](https://arxiv.org/abs/2301.13310), [Tweet](https://x.com/GoogleAI/status/1722004366201418132?s=20) |
| 6) **Rephrase and Respond** - an effective prompting method that uses LLMs to rephrase and expand questions posed by humans to improve overall performance; it can improve the performance of different models across a wide range of tasks; the approach can be combined with chain-of-thought to improve performance further (a one-step sketch follows this table). | [Paper](https://arxiv.org/abs/2311.04205), [Tweet](https://x.com/QuanquanGu/status/1722364144379396513?s=20) |
| 7) **On the Road with GPT-4V(ision)** - provides an exhaustive evaluation of the latest state-of-the-art visual language model, GPT-4V(ision), and its application in autonomous driving; the model demonstrates superior performance in scene understanding and causal reasoning compared to existing autonomous systems. | [Paper](https://arxiv.org/abs/2311.05332), [Tweet](https://x.com/arankomatsuzaki/status/1722795897359139057?s=20) |
| 8) **GPT4All** - outlines technical details of the GPT4All model family along with the open-source repository that aims to democratize access to LLMs. | [Paper](https://arxiv.org/abs/2311.04931), [Tweet](https://x.com/_akhaliq/status/1722833378590793915?s=20) |
| 9) **S-LoRA** - an approach that enables the scalable serving of many LoRA adapters; it stores all adapters in main memory and fetches the adapters of currently running queries to GPU memory; employs a novel tensor parallelism strategy and highly optimized custom CUDA kernels for heterogeneous batching of LoRA computation; improves throughput by 4x compared to other solutions and increases the number of served adapters by several orders of magnitude. | [Paper](https://arxiv.org/abs/2311.03285v2), [Tweet](https://x.com/ai_database/status/1722190708797592013?s=20) |
| 10) **FreshLLMs** - proposes FreshQA, a dynamic QA benchmark with questions whose answers can change over time, and FreshPrompt, a method that improves LLM factuality by incorporating up-to-date information retrieved from a search engine into the prompt. | [Paper](https://arxiv.org/abs/2310.03214), [Tweet](https://x.com/_akhaliq/status/1710108355157487635?s=20) |
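
For entry (6) above, the simplest (one-step) variant of Rephrase and Respond just appends a rephrasing instruction to the original question. A minimal sketch with a generic `llm` callable; the exact instruction wording is an approximation of the paper's:

```python
def rephrase_and_respond(llm, question):
    """Ask the model to rephrase/expand the question, then answer the
    rephrased version in the same completion (one-step RaR)."""
    prompt = (
        f"{question}\n"
        "Rephrase and expand the question to remove ambiguity, and then respond."
    )
    return llm(prompt)
```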
---
## Top ML Papers of the Week (October 30 - November 5)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **MetNet-3** - a state-of-the-art neural weather model that extends both the lead time range and the variables that an observation-based model can predict well; learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature, and dew point. | [Paper](https://arxiv.org/abs/2306.06079), [Tweet](https://x.com/GoogleAI/status/1719774923294687636?s=20) |
| 2) **Evaluating LLMs** - a comprehensive survey | [Paper](https://arxiv.org/abs/2310.19736), [Tweet](https://x.com/omarsar0/status/1719351676828602502?s=20) |
| 3) **Battle of the Backbones** - a large benchmarking framework that compares pretrained backbones, spanning vision transformers and convolutional networks with different pretraining schemes, across a diverse suite of computer vision tasks; finds that while vision transformers and self-supervised learning are increasingly popular, convolutional networks pretrained in a supervised fashion on large datasets still perform best on most tasks. | [Paper](https://arxiv.org/abs/2310.19909), [Tweet](https://x.com/micahgoldblum/status/1719719308882801045?s=20) |
| 4) **LLMs for Chip Design** - proposes using LLMs for industrial chip design by leveraging domain adaptation techniques; evaluates different applications for chip design such as assistant chatbot, electronic design automation, and bug summarization; domain adaptation significantly improves performance over general-purpose models on a variety of design tasks; using a domain-adapted LLM for RAG further improves answer quality. | [Paper](https://arxiv.org/abs/2311.00176), [Tweet](https://x.com/omarsar0/status/1720066328961159387?s=20) |
| 5) **Efficient Context Window Extension of LLMs** - proposes a compute-efficient method for extending the context window of LLMs beyond what they were pretrained on; it extrapolates beyond the limited context of a fine-tuning dataset, and models trained with the method have been reproduced with up to 128K context length. | [Paper](https://arxiv.org/abs/2309.00071), [Tweet](https://x.com/theemozilla/status/1720107186850877662?s=20) |
| 6) **Open DAC 2023** - introduces a dataset consisting of more than 38M density functional theory (DFT) calculations on metal-organic frameworks (MOFs) with adsorbed CO2 and H2O, aimed at accelerating the discovery of sorbents for direct air capture. | [Paper](https://arxiv.org/abs/2311.00341), [Tweet](https://x.com/AIatMeta/status/1720143486505341128?s=20) |
| 7) **Symmetry in Machine Learning** - presents a unified and methodological framework to enforce, discover, and promote symmetry in machine learning; also discusses how these ideas can be applied to ML models such as multilayer perceptions and basis function regression. | [Paper](https://arxiv.org/abs/2311.00212), [Tweet](https://x.com/eigensteve/status/1720115655050227911?s=20) |
| 8) **Next Generation AlphaFold** - reports progress on a new iteration of AlphaFold that greatly expands its range of applicability; shows capabilities of joint structure prediction of complexes including proteins, nucleic acids, small molecules, ions, and modified residues; demonstrates greater accuracy on protein-nucleic acid interactions than specialist predictors. | [Paper](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/a-glimpse-of-the-next-generation-of-alphafold/alphafold_latest_oct2023.pdf), [Tweet](https://x.com/demishassabis/status/1719345831730368596?s=20) |
| 9) **Enhancing LLMs by Emotion Stimuli** - explores the ability of LLMs to understand and leverage emotional stimuli; conducts automatic experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4; the tasks span deterministic and generative applications that represent comprehensive evaluation scenarios; experimental results show that LLMs have a grasp of emotional intelligence (a minimal prompting sketch follows this table). | [Paper](https://arxiv.org/abs/2307.11760), [Tweet](https://x.com/emollick/status/1720135672764285176?s=20) |
| 10) **FP8-LM** - finds that most variables in LLM training, such as gradients and optimizer states, can employ low-precision FP8 data formats without compromising model accuracy and without requiring changes to hyper-parameters. | [Paper](https://arxiv.org/abs/2310.18313), [Tweet](https://x.com/arankomatsuzaki/status/1718813303223222765?s=20) |
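
For entry (9) above, the method amounts to appending an emotional stimulus phrase to an otherwise unchanged prompt. A minimal sketch with a generic `llm` callable; the example stimulus is of the kind the paper reports, but treat it as illustrative:

```python
def emotion_prompt(llm, task_prompt,
                   stimulus="This is very important to my career."):
    """Append an emotional stimulus to the task prompt before querying the model."""
    return llm(f"{task_prompt} {stimulus}")

# usage: emotion_prompt(llm, "Classify the sentiment of: 'the movie was fine.'")
```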
---
## Top ML Papers of the Week (October 23 - October 29)
| **Paper** | **Links** |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Zephyr LLM** - a 7B parameter model with competitive performance to ChatGPT on AlpacaEval; applies distilled supervised fine-tuning to improve task accuracy and distilled direct preference optimization on AI feedback data to better align the model; shows performance comparable to 70B-parameter chat models aligned with human feedback. | [Paper](https://arxiv.org/abs/2310.16944), [Tweet](https://x.com/nazneenrajani/status/1717747969842417723?s=20) |
| 2) **Fact-checking with LLMs** - investigates the fact-checking capabilities of LLMs like GPT-4; results show the enhanced prowess of LLMs when equipped with contextual information; GPT-4 outperforms GPT-3, but accuracy varies based on query language and claim veracity; while LLMs show promise in fact-checking, they demonstrate inconsistent accuracy. | [Paper](https://arxiv.org/abs/2310.13549), [Tweet](https://x.com/omarsar0/status/1717550929145119212?s=20) |
| 3) **Matryoshka Diffusion Models** - introduces an end-to-end framework for high-resolution image and video synthesis; involves a diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture; enables a progressive training schedule from lower to higher resolutions leading to improvements in optimization for high-resolution generation. | [Paper](https://arxiv.org/abs/2310.15111), [Tweet](https://x.com/thoma_gu/status/1716923384846856691?s=20) |
| 4) **Spectron** - a new approach for spoken language modeling trained end-to-end to directly process spectrograms; it can be fine-tuned to generate high-quality accurate spoken language; the method surpasses existing spoken language models in speaker preservation and semantic coherence. | [Paper](https://arxiv.org/abs/2305.15255), [Tweet](https://x.com/GoogleAI/status/1717584836834001066?s=20) |
| 5) **LLMs Meet New Knowledge** - presents a benchmark to assess LLMs' abilities in knowledge understanding, differentiation, and association; benchmark results show that LLMs struggle when handling new knowledge, particularly when reasoning across new and existing internal knowledge. | [Paper](https://arxiv.org/abs/2310.14820), [Tweet](https://x.com/omarsar0/status/1716817266195796186?s=20) |
| 6) **Detecting Pretraining Data from LLMs** - explores the problem of pretraining data detection, which aims to determine if a black-box model was trained on a given text; proposes a detection method named Min-K% Prob as an effective tool for benchmark example contamination detection, privacy auditing of machine unlearning, and copyrighted text detection in an LM’s pretraining data (a small scoring sketch follows this table). | [Paper](https://arxiv.org/abs/2310.16789), [Tweet](https://x.com/WeijiaShi2/status/1717612387174687150?s=20) |
| 7) **ConvNets Match Vision Transformers** - evaluates a performant ConvNet architecture pretrained on JFT-4B at scale; observes a log-log scaling law between the held out loss and compute budget; after fine-tuning on ImageNet, NFNets match the reported performance of Vision Transformers with comparable compute budgets. | [Paper](https://arxiv.org/abs/2310.16764), [Tweet](https://x.com/_akhaliq/status/1717385905214759421?s=20) |
| 8) **CommonCanvas** - a dataset of Creative-Commons-licensed images paired with synthetic captions, used to train a suite of open text-to-image diffusion models. | [Paper](https://arxiv.org/abs/2310.16825), [Tweet](https://x.com/iScienceLuvr/status/1717359916422496596?s=20) |
| 9) **Managing AI Risks** - a short paper outlining risks from upcoming and advanced AI systems, including an examination of social harms, malicious uses, and other potential societal issues emerging from the rapid adoption of autonomous AI systems. | [Paper](https://managing-ai-risks.com/managing_ai_risks.pdf), [Tweet](https://x.com/geoffreyhinton/status/1717967329202491707?s=20) |
| 10) **Branch-Solve-Merge Reasoning in LLMs** - an LLM program that consists of branch, solve, and merge modules parameterized with specific prompts to the base LLM; this enables an LLM to plan a decomposition of task into multiple parallel sub-tasks, independently solve them, and fuse solutions to the sub-tasks; improves evaluation correctness and consistency for multiple LLMs. | [Paper](https://arxiv.org/abs/2310.15123), [Tweet](https://x.com/jaseweston/status/1716635331393380619?s=20) |
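
For entry (6) above, Min-K% Prob scores a candidate text by averaging the log-probabilities of its least likely tokens under the target model; texts seen in pretraining tend to have fewer surprisingly low-probability tokens and thus score higher. A small self-contained sketch (obtaining the per-token log-probs from the model is left out):

```python
def min_k_percent_prob(token_logprobs, k=0.2):
    """Average the k% lowest per-token log-probabilities of a text under the
    target model; a higher score suggests the text was in the pretraining data."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

# toy example: five token log-probs from a scoring pass over a candidate text
print(min_k_percent_prob([-0.1, -0.5, -2.3, -0.05, -1.2], k=0.4))  # -> -1.75
```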
---
## Top ML Papers of the Week (October 16 - October 22)
| **Paper** | **Links** |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Llemma** - an LLM for mathematics which is based on continued pretraining from Code Llama on the Proof-Pile-2 dataset; the dataset involves scientific papers, web data containing mathematics, and mathematical code; Llemma outperforms open base models and the unreleased Minerva on the MATH benchmark; the model is released, including the dataset and code to replicate experiments. | [Paper](https://arxiv.org/abs/2310.10631), [Tweet](https://x.com/zhangir_azerbay/status/1714098025956864031?s=20) |
| 2) **LLMs for Software Engineering** - a comprehensive survey of LLMs for software engineering, including open research and technical challenges. | [Paper](https://arxiv.org/abs/2310.03533), [Tweet](https://x.com/omarsar0/status/1713940983199506910?s=20) |
| 3) **Self-RAG** - presents a new retrieval-augmented framework that enhances an LM’s quality and factuality through retrieval and self-reflection; trains an LM that adaptively retrieves passages on demand, and generates and reflects on the passages and its own generations using special reflection tokens; it significantly outperforms SoTA LLMs and retrieval-augmented models on tasks such as open-domain QA and fact verification. | [Paper](https://arxiv.org/abs/2310.11511), [Tweet](https://x.com/AkariAsai/status/1715110277077962937?s=20) |
| 4) **Retrieval-Augmentation for Long-form Question Answering** - explores retrieval-augmented language models on long-form question answering; finds that retrieval is an important component but evidence documents should be carefully added to the LLM; finds that attribution error happens more frequently when retrieved documents lack sufficient information/evidence for answering the question. | [Paper](https://arxiv.org/abs/2310.12150), [Tweet](https://x.com/omarsar0/status/1714986431859282144?s=20) |
| 5) **GenBench** - presents a framework for characterizing and understanding generalization research in NLP; involves a meta-analysis of 543 papers and a set of tools to explore and better understand generalization studies. | [Paper](https://www.nature.com/articles/s42256-023-00729-y?utm_source=twitter&utm_medium=organic_social&utm_campaign=research&utm_content=link), [Tweet](https://x.com/AIatMeta/status/1715041427283902793?s=20) |
| 6) **A Study of LLM-Generated Self-Explanations** - assesses an LLM's capability to self-generate feature attribution explanations; self-explanation is useful to improve performance and truthfulness in LLMs; this capability can be used together with chain-of-thought prompting. | [Paper](https://arxiv.org/abs/2310.11207), [Tweet](https://x.com/omarsar0/status/1714665747752923620?s=20) |
| 7) **OpenAgents** - an open platform for using and hosting language agents in the wild; includes three agents, including a Data Agent for data analysis, a Plugins Agent with 200+ daily API tools, and a Web Agent for autonomous web browsing. | [Paper](https://arxiv.org/abs/2310.10634v1), [Tweet](https://x.com/ChengZhoujun/status/1714343204148113860?s=20) |
| 8) **Eliciting Human Preferences with LLMs** - uses language models to guide the task specification process and a learning framework to help models elicit and infer intended behavior through free-form, language-based interaction with users; shows that by generating open-ended questions, the system obtains responses that are more informative than user-written prompts (a sketch of the elicitation loop follows this table). | [Paper](https://arxiv.org/abs/2310.11589), [Tweet](https://x.com/AlexTamkin/status/1715040019520569395?s=20) |
| 9) **AutoMix** - an approach to route queries to larger LLMs based on the approximate correctness of a smaller language model’s outputs, estimated via few-shot self-verification. | [Paper](https://arxiv.org/abs/2310.12963), [Tweet](https://x.com/omarsar0/status/1715385477627334718?s=20) |
| 10) **Video Language Planning** - enables synthesizing complex long-horizon video plans across robotics domains; the proposed algorithm involves a tree search procedure that trains vision-language models to serve as policies and value functions, and text-to-video models as dynamic models. | [Paper](https://arxiv.org/abs/2310.10625), [Tweet](https://x.com/du_yilun/status/1714297584842318157?s=20) |
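
For entry (8) above, the elicitation loop can be sketched as the model asking open-ended questions and folding the user's answers into a task specification. A hedged sketch with generic `llm` and `answer_fn` callables; the names and prompts are mine, not the paper's interfaces:

```python
def elicit_preferences(llm, answer_fn, task, n_rounds=3):
    """Interactive task-specification loop: the LM asks clarifying questions,
    the user (here, `answer_fn`) answers, and the growing transcript becomes
    the task specification."""
    transcript = f"Task: {task}\n"
    for _ in range(n_rounds):
        question = llm(
            transcript +
            "Ask one open-ended question that would clarify the user's preferences."
        )
        answer = answer_fn(question)
        transcript += f"Q: {question}\nA: {answer}\n"
    return transcript  # used later to condition the model's predictions
```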
---
## Top ML Papers of the Week (October 9 - October 15)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
| 1) **Ring Attention** - a memory-efficient approach that leverages blockwise computation of self-attention to distribute long sequences across multiple devices to overcome the memory limitations inherent in Transformer architectures, enabling handling of longer sequences during training and inference; enables scaling the context length with the number of devices while maintaining performance, exceeding context length of 100 million without attention approximations. | [Paper](https://arxiv.org/abs/2310.01889), [Tweet](https://x.com/haoliuhl/status/1709630382457733596?s=20) |
| 2) **Universal Simulator** - applies generative modeling to learn a universal simulator of real-world interactions; can emulate how humans and agents interact with the world by simulating the visual outcome of high-level instructions and low-level controls; the system can be used to train vision-language planners, low-level reinforcement learning policies, and even systems that perform video captioning. | [Paper](https://arxiv.org/abs/2310.06114), [Tweet](https://x.com/mengjiao_yang/status/1712153304757915925?s=20) |
| 3) **Overview of Factuality in LLMs** - a survey of factuality in LLMs providing insights into how to evaluate factuality in LLMs and how to enhance it. | [Paper](https://arxiv.org/abs/2310.07521), [Tweet](https://x.com/omarsar0/status/1712469661118517740?s=20) |
| 4) **LLMs can Learn Rules** - presents a two-stage framework that learns a rule library for reasoning with LLMs; in the first (induction) stage, the LLM generates and verifies rules over training examples to build a rule library; in the second (deduction) stage, the LLM is prompted to apply the learned rules to answer test questions. | [Paper](https://arxiv.org/abs/2310.07064), [Tweet](https://x.com/zhu_zhaocheng/status/1712582734550647091?s=20) |
| 5) **Meta Chain-of-Thought Prompting** - a generalizable chain-of-thought prompting method for mixed-task scenarios where the type of the input question is unknown. | [Paper](https://arxiv.org/abs/2310.06692), [Tweet](https://x.com/omarsar0/status/1712835499256090972?s=20) |
| 6) **A Survey of LLMs for Healthcare** - a comprehensive overview of LLMs applied to the healthcare domain. | [Paper](https://arxiv.org/abs/2310.05694), [Tweet](https://x.com/omarsar0/status/1711755055777415485?s=20) |
| 7) **Improving Retrieval-Augmented LMs with Compressors** - presents two approaches to compress retrieved documents into text summaries before prepending them in-context: 1) an extractive compressor that selects useful sentences from retrieved documents, and 2) an abstractive compressor that generates summaries by synthesizing information from multiple documents; achieves a compression rate as low as 6% with minimal loss in performance on language modeling and open-domain question answering tasks; the proposed training scheme performs selective augmentation, which helps generate empty summaries when retrieved docs are irrelevant or unhelpful for a task (a minimal sketch follows this table). | [Paper](https://arxiv.org/abs/2310.04408), [Tweet](https://x.com/omarsar0/status/1711384213092479130?s=20) |
| 8) **Instruct-Retro** - introduces Retro 48B, the largest LLM pretrained with retrieval; continues pretraining a 43B parameter GPT model on an additional 100B tokens by retrieving from 1.2T tokens | [Paper](https://arxiv.org/abs/2310.07713), [Tweet](https://x.com/omarsar0/status/1712466049428521433?s=20) |
| 9) **MemWalker** - a method to enhance long-text understanding by treating the LLM as an interactive agent that can decide how to read the text via iterative prompting; it first processes the long context into a tree of summary nodes and then reads in a query to traverse the tree, seeking relevant information and crafting a suitable response; this process is achieved through reasoning and enables effective reading and enhanced explainability through reasoning steps. | [Paper](https://arxiv.org/abs/2310.05029), [Tweet](https://x.com/__howardchen/status/1711584916708938042?s=20) |
| 10) **Toward Language Agent Fine-tuning** - explores the direction of fine-tuning LLMs to obtain language agents; finds that language agents consistently improve after fine-tuning their backbone language model; claims that fine-tuning a Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. | [Paper](https://arxiv.org/abs/2310.05915), [Tweet](https://x.com/omarsar0/status/1711757242905534479?s=20) |
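
For entry (7) above, the two compression modes can be sketched as a preprocessing step that shrinks retrieved documents before they are prepended to the prompt. A minimal sketch with a generic `llm` callable; the extractive branch is a naive keyword-overlap stand-in, not the trained selector described in the paper:

```python
def compress_then_answer(llm, question, documents, mode="abstractive"):
    """Compress retrieved documents into a short summary, then answer with the
    summary prepended to the question."""
    if mode == "abstractive":
        summary = llm(
            "Summarize only the information in these documents that helps answer "
            "the question, or return nothing if none is relevant.\n"
            f"Question: {question}\nDocuments:\n" + "\n".join(documents)
        )
    else:
        # crude extractive stand-in: keep sentences sharing words with the question
        q_words = set(question.lower().split())
        sentences = [s for d in documents for s in d.split(". ")]
        summary = ". ".join(s for s in sentences if q_words & set(s.lower().split()))
    return llm(f"{summary}\n\nQuestion: {question}\nAnswer:")
```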
---
## Top ML Papers of the Week (October 2 - October 8)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| 1) **LLMs Represent Space and Time** - discovers that LLMs learn linear representations of space and time across multiple scales; the representations are robust to prompt variations and unified across different entity types; demonstrates that LLMs acquire fundamental structured knowledge such as space and time, arguing that language models learn more than superficial statistics and acquire something closer to literal world models. | [Paper](https://arxiv.org/abs/2310.02207), [Tweet](https://x.com/wesg52/status/1709551516577902782?s=20) |
| 2) **Retrieval meets Long Context LLMs** - compares retrieval augmentation and long-context windows for downstream tasks to investigate if the methods can be combined to get the best of both worlds; an LLM with a 4K context window using simple RAG can achieve comparable performance to a fine-tuned LLM with 16K context; retrieval can significantly improve the performance of LLMs regardless of their extended context window sizes; a retrieval-augmented LLaMA2-70B with a 32K context window outperforms GPT-3.5-turbo-16k on seven long context tasks including question answering and query-based summarization. | [Paper](https://arxiv.org/abs/2310.03025), [Tweet](https://x.com/omarsar0/status/1709749178199318545?s=20) |
| 3) **StreamingLLM** - a framework that enables efficient streaming LLMs with attention sinks, a phenomenon where keeping the KV states of the initial tokens largely recovers the performance of window attention; the emergence of the attention sink is due to strong attention scores towards the initial tokens; this approach enables LLMs trained with a finite-length attention window to generalize to infinite sequence length without any additional fine-tuning (a cache-eviction sketch follows this table). | [Paper](https://arxiv.org/abs/2309.17453), [Tweet](https://x.com/Guangxuan_Xiao/status/1708943505731801325?s=20) |
| 4) **Neural Developmental Programs** - proposes to use neural networks that self-assemble through a developmental process that mirrors properties of embryonic development in biological organisms | [Paper](https://arxiv.org/abs/2307.08197), [Tweet](https://x.com/risi1979/status/1708888992224362742?s=20) |
| 5) **The Dawn of LMMs** - a comprehensive analysis of GPT-4V to deepen the understanding of large multimodal models | [Paper](https://arxiv.org/abs/2309.17421), [Tweet](https://x.com/omarsar0/status/1708860551110041871?s=20) |
| 6) **Training LLMs with Pause Tokens** - performs training and inference on LLMs with a learnable <pause> token which helps to delay the model's answer generation and attain performance gains on general understanding tasks such as Commonsense QA and math word problem-solving; experiments show that this is only beneficial provided that the delay is introduced in both pretraining and downstream fine-tuning. | [Paper](https://arxiv.org/abs/2310.02226), [Tweet](https://x.com/omarsar0/status/1709573238123122959?s=20) |
| 7) **Recursively Self-Improving Code Generation** - proposes the use of a language-model-infused scaffolding program to recursively improve itself; a seed improver program uses the LLM to generate candidate improvements of an input program and returns the best one; the seed improver is then tasked with improving itself; shows that GPT-4 can write code that calls itself to improve itself. | [Paper](https://arxiv.org/abs/2310.02304), [Tweet](https://x.com/ericzelikman/status/1709721771937587541?s=20) |
| 8) **Retrieval-Augmented Dual Instruction Tuning** - proposes a lightweight fine-tuning method to retrofit LLMs with retrieval capabilities; it involves a 2-step approach: 1) updates a pretrained LM to better use the retrieved information, and 2) updates the retriever to return more relevant results, as preferred by the LM; results show that when fine-tuning over tasks that require both knowledge utilization and contextual awareness, each stage leads to additional gains; a 65B model achieves state-of-the-art results on a range of knowledge-intensive zero- and few-shot learning benchmarks; it outperforms existing retrieval-augmented language approaches by up to +8.9% in zero-shot and +1.4% in 5-shot settings. | [Paper](https://arxiv.org/abs/2310.01352), [Tweet](https://x.com/omarsar0/status/1709204756013490494?s=20) |
| 9) **KOSMOS-G** - a model that performs high-fidelity zero-shot image generation from generalized vision-language input that spans multiple images; extends zero-shot subject-driven image generation to multi-entity scenarios; allows the replacement of CLIP, unlocking new applications with other U-Net techniques such as ControlNet and LoRA. | [Paper](https://arxiv.org/abs/2310.02992), [Tweet](https://x.com/omarsar0/status/1709934741158510625?s=20) |
| 10) **Analogical Prompting** - a new prompting approach to automatically guide the reasoning process of LLMs; the approach is different from chain-of-thought in that it doesn’t require labeled exemplars of the reasoning process; the approach is inspired by analogical reasoning and prompts LMs to self-generate relevant exemplars or knowledge in the context. | [Paper](https://arxiv.org/abs/2310.01714), [Tweet](https://x.com/michiyasunaga/status/1709582150025240854?s=20) |
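
For entry (3) above, the practical consequence of attention sinks is a simple KV-cache policy: always keep the first few tokens plus a sliding window of the most recent tokens. The sketch below shows only that eviction rule over an abstract per-token cache; the real implementation also re-assigns positions inside the cache, which is omitted here:

```python
def evict_kv_cache(cache, n_sink=4, window=1024):
    """Keep the first `n_sink` 'attention sink' entries plus the most recent
    `window` entries; everything in between is evicted."""
    if len(cache) <= n_sink + window:
        return cache
    return cache[:n_sink] + cache[-window:]

# usage with a stand-in cache of 5,000 per-token entries
cache = list(range(5000))
cache = evict_kv_cache(cache)   # -> entries 0-3 plus the last 1,024 entries
```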
---
## Top ML Papers of the Week (September 25 - October 1)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------ |
| 1) **The Reversal Curse** - finds that LLMs trained on sentences of the form “A is B” will not automatically generalize to the reverse direction “B is A”, i.e., the Reversal Curse; shows the effect through finetuning LLMs on fictitious statements and demonstrating its robustness across model sizes and model families. | [Paper](https://owainevans.github.io/reversal_curse.pdf), [Tweet](https://x.com/OwainEvans_UK/status/1705285631520407821?s=20) |
| 2) **Effective Long-Context Scaling with LLMs** - proposes a 70B variant that can already surpass gpt-3.5-turbo-16k’s overall performance on a suite of long-context tasks. This involves a cost-effective instruction tuning procedure that does not require human-annotated long instruction data. | [Paper](https://arxiv.org/abs/2309.16039), [Tweet](https://x.com/omarsar0/status/1707780482178400261?s=20) |
| 3) **Graph Neural Prompting with LLMs** - proposes a plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from knowledge graphs | [Paper](https://arxiv.org/abs/2309.15427), [Tweet](https://x.com/omarsar0/status/1707211751354212382?s=20) |
| 4) **Vision Transformers Need Registers** - identifies artifacts in the feature maps of vision transformer networks, corresponding to tokens repurposed for internal computations; proposes providing additional register tokens in the input sequence to fill that role; the solution fixes the problem, leads to smoother feature and attention maps, and sets new state-of-the-art results on dense visual prediction tasks. | [Paper](https://arxiv.org/abs/2309.16588), [Tweet](https://x.com/TimDarcet/status/1707769575981424866?s=20) |
| 5) **Boolformer** - presents the first Transformer architecture trained to perform end-to-end symbolic regression of Boolean functions; it can predict compact formulas for complex functions and be applied to modeling the dynamics of gene regulatory networks. | [Paper](https://arxiv.org/abs/2309.12207), [Tweet](https://x.com/stephanedascoli/status/1706235856778834015?s=20) |
| 6) **LLaVA-RLHF** - adapts factually augmented RLHF to aligning large multimodal models; this approach alleviates reward hacking in RLHF and improves performance on the LLaVA-Bench dataset, reaching 94% of the performance level of the text-only GPT-4. | [Paper](https://arxiv.org/abs/2309.14525), [Tweet](https://x.com/arankomatsuzaki/status/1706839311306621182?s=20) |
| 7) **LLM Alignment Survey** - a comprehensive survey paper on LLM alignment; topics include Outer Alignment, Inner Alignment, Mechanistic Interpretability, Attacks on Aligned LLMs, Alignment Evaluation, Future Directions, and Discussions. | [Paper](https://arxiv.org/abs/2309.15025), [Tweet](https://x.com/omarsar0/status/1706845285064818905?s=20) |
| 8) **Qwen LLM** - proposes a series of LLMs demonstrating the strength of RLHF on tasks involving tool use and planning capabilities for creating language agents. | [Paper](https://arxiv.org/abs/2309.16609), [Tweet](https://x.com/omarsar0/status/1707776749042364729?s=20) |
| 9) **MentalLlaMa** - an open-source LLM series for interpretable mental health analysis with instruction-following capability; it also proposes a multi-task and multi-source interpretable mental health instruction dataset on social media with 105K data samples. | [Paper](https://arxiv.org/abs/2309.13567), [Tweet](https://x.com/SAnaniadou/status/1707668936634794442?s=20) |
| 10) **Logical Chain-of-Thought in LLMs** - a new neurosymbolic framework to improve zero-shot chain-of-thought reasoning in LLMs; leverages principles from symbolic logic to verify and revise reasoning processes to improve the reasoning capabilities of LLMs (a simplified verify-and-revise sketch follows this table). | [Paper](https://arxiv.org/abs/2309.13339), [Tweet](https://x.com/omarsar0/status/1706711389803287019?s=20) |
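
For entry (10) above, the core loop is: generate a chain of thought, verify it, and revise when a flaw is found. A heavily simplified sketch with a generic `llm` callable; the paper grounds verification in symbolic-logic principles such as reductio ad absurdum, whereas here verification is just another LLM call:

```python
def logical_cot(llm, question, max_revisions=2):
    """Generate step-by-step reasoning, then verify and revise it."""
    reasoning = llm(f"{question}\nLet's think step by step.")
    for _ in range(max_revisions):
        verdict = llm(
            f"Question: {question}\nReasoning:\n{reasoning}\n"
            "Check each step for logical errors. Reply 'valid' or describe the first flaw."
        )
        if verdict.strip().lower().startswith("valid"):
            break
        reasoning = llm(
            f"Question: {question}\nFlawed reasoning:\n{reasoning}\n"
            f"Identified flaw: {verdict}\nRewrite the reasoning with the flaw fixed."
        )
    return reasoning
```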
---
## Top ML Papers of the Week (September 18 - September 24)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| 1) **AlphaMissense** - an AI model classifying missense variants to help pinpoint the cause of diseases; the model is used to develop a catalogue of genetic mutations; it can categorize 89% of all 71 million possible missense variants as either likely pathogenic or likely benign. | [Paper](https://www.science.org/doi/10.1126/science.adg7492), [Tweet](https://x.com/GoogleDeepMind/status/1704145467129389178?s=20) |
| 2) **Chain-of-Verification reduces Hallucination in LLMs** - develops a method to enable LLMs to "deliberate" on responses to correct mistakes; includes the following steps: 1) draft an initial response, 2) plan verification questions to fact-check the draft, 3) answer the questions independently to avoid bias from other responses, and 4) generate a final verified response (a minimal four-call sketch follows this table). | [Paper](https://arxiv.org/abs/2309.11495), [Tweet](https://x.com/omarsar0/status/1704901425824772275?s=20) |
| 3) **Contrastive Decoding Improves Reasoning in Large Language Models** - shows that contrastive decoding leads Llama-65B to outperform Llama 2 and other models on commonsense reasoning and reasoning benchmarks. | [Paper](https://arxiv.org/abs/2309.09117), [Tweet](https://x.com/_akhaliq/status/1703966776990597567?s=20) |
| 4) **LongLoRA** - an efficient fine-tuning approach to significantly extend the context windows of pre-trained LLMs; implements shift short attention, a substitute that approximates the standard self-attention pattern during training; it has less GPU memory cost and training time compared to full fine-tuning while not compromising accuracy. | [Paper](https://arxiv.org/abs/2309.12307), [Tweet](https://x.com/omarsar0/status/1705234482930798813?s=20) |
| 5) **LLMs for Generating Structured Data** - studies the use of LLMs for generating complex structured data; proposes a structure-aware fine-tuning method, applied to Llama-7B, which significantly outperforms other models like GPT-3.5/4 and Vicuna-13B. | [Paper](https://arxiv.org/abs/2309.08963), [Tweet](https://x.com/omarsar0/status/1703958549917847884?s=20) |
| 6) **LMSYS-Chat-1M** - a large-scale dataset containing 1 million real-world conversations with 25 state-of-the-art LLMs; it is collected from 210K unique IP addresses on the Vicuna demo and Chatbot Arena website. | [Paper](http://arxiv.org/abs/2309.11998), [Tweet](https://x.com/arankomatsuzaki/status/1705024956122161217?s=20) |
| 7) **Language Modeling is Compression** - evaluates the compression capabilities of LLMs; it investigates how and why compression and prediction are equivalent; shows that LLMs are powerful general-purpose compressors due to their in-context learning abilities; finds that Chinchilla 70B compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG and FLAC, respectively. | [Paper](https://arxiv.org/abs/2309.10668), [Tweet](https://x.com/omarsar0/status/1704306357006897402?s=20) |
| 8) **Compositional Foundation Models** - proposes foundation models that leverage multiple expert foundation models trained on language, vision, and action data to solve long-horizon goals. | [Paper](https://arxiv.org/abs/2309.08587), [Tweet](https://x.com/du_yilun/status/1703786005612929214?s=20) |
| 9) **LLMs for IT Operations** - proposes OWL, an LLM for IT operations tuned using a self-instruct strategy based on IT-related tasks; it discusses how to collect a quality instruction dataset and how to put together a benchmark. | [Paper](https://arxiv.org/abs/2309.09298), [Tweet](https://x.com/omarsar0/status/1704137910834888743?s=20) |
| 10) **KOSMOS-2.5** - a multimodal model for machine reading of text-intensive images, capable of document-level text generation and image-to-markdown text generation. | [Paper](https://arxiv.org/abs/2309.11419), [Tweet](https://x.com/arankomatsuzaki/status/1704659787399487649?s=20) |
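
For entry (2) above, the four Chain-of-Verification steps map directly onto four LLM calls. A minimal sketch with a generic `llm(prompt) -> str` callable and paraphrased prompts:

```python
def chain_of_verification(llm, query):
    """Draft, plan verification questions, answer them independently, and
    produce a final verified response."""
    # 1) draft an initial response
    draft = llm(query)
    # 2) plan verification questions that fact-check the draft
    questions = llm(
        f"Query: {query}\nDraft answer: {draft}\n"
        "List verification questions (one per line) that would fact-check the draft."
    ).splitlines()
    # 3) answer each verification question independently to avoid bias from the draft
    qa_pairs = [(q, llm(q)) for q in questions if q.strip()]
    # 4) generate the final verified response
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return llm(
        f"Query: {query}\nDraft answer: {draft}\nVerification Q&A:\n{evidence}\n"
        "Write a final answer, correcting anything the verification contradicts."
    )
```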
---
## Top ML Papers of the Week (September 11 - September 17)
| **Paper** | **Links** |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Textbooks Are All You Need II** - a new 1.3 billion parameter model trained on 30 billion tokens; the dataset consists of "textbook-quality" synthetically generated data; phi-1.5 competes or outperforms other larger models on reasoning tasks suggesting that data quality plays a more important role than previously thought. | [Paper](https://arxiv.org/abs/2309.05463), [Tweet](https://x.com/omarsar0/status/1701590130270601422?s=20) |
| 2) **The Rise and Potential of LLM Based Agents** - a comprehensive overview of LLM based agents; covers from how to construct these agents to how to harness them for good. | [Paper](https://arxiv.org/abs/2309.07864), [Tweet](https://x.com/omarsar0/status/1702736490067890239?s=20) |
| 3) **EvoDiff** - combines evolutionary-scale data with diffusion models for controllable protein generation in sequence space; it can generate proteins inaccessible to structure-based models. | [Paper](https://www.biorxiv.org/content/10.1101/2023.09.11.556673v1), [Tweet](https://x.com/KevinKaichuang/status/1701953715312136302?s=20) |
| 4) **LLMs Can Align Themselves without Finetuning?** - discovers that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting (a loose sketch of the idea follows this table). | [Paper](https://arxiv.org/abs/2309.07124), [Tweet](https://x.com/omarsar0/status/1702131444041011395?s=20) |
| 5) **Robot Parkour Learning** - presents a system for learning an end-to-end vision-based parkour policy which is transferred to a quadrupedal robot using its egocentric depth camera; shows that low-cost robots can automatically select and execute parkour skills in a real-world environment. | [Paper](https://arxiv.org/abs/2309.05665), [Tweet](https://x.com/zipengfu/status/1701316023612219445?s=20) |
| 6) **A Survey of Hallucination in LLMs** - classifies different types of hallucination phenomena and provides evaluation criteria for assessing hallucination along with mitigation strategies. | [Paper](https://arxiv.org/abs/2309.05922), [Tweet](https://x.com/omarsar0/status/1701970034711539839?s=20) |
| 7) **Agents** - an open-source library for building autonomous language agents including support for features like planning, memory, tool usage, multi-agent communication, and more. | [Paper](https://arxiv.org/abs/2309.07870), [Tweet](https://x.com/arankomatsuzaki/status/1702497897395396960?s=20) |
| 8) **Radiology-Llama2: Best-in-Class LLM for Radiology** - presents an LLM based on Llama 2 tailored for radiology; it's tuned on a large dataset of radiology reports to generate coherent and clinically useful impressions from radiology findings. | [Paper](https://arxiv.org/abs/2309.06419), [Tweet](https://x.com/omarsar0/status/1701774444052557965?s=20) |
| 9) **Communicative Agents for Software Development** - presents ChatDev, a virtual chat-powered software development company mirroring the waterfall model; shows the efficacy of the agent in software generation, even completing the entire software development process in less than seven minutes for less than one dollar. | [Paper](https://arxiv.org/abs/2307.07924v3), [Tweet](https://x.com/KevinAFischer/status/1702355125418045860?s=20) |
| 10) **MAmmoTH** - a series of open-source LLMs tailored for general math problem-solving; the models are trained on a curated instruction tuning dataset and outperform existing open-source models on several mathematical reasoning datasets. | [Paper](https://arxiv.org/abs/2309.05653), [Tweet](https://x.com/xiangyue96/status/1701710215442309323?s=20) |
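
For entry (4) above, the spirit of the self-evaluation and rewind mechanism can be conveyed, very loosely, as: sample a response, have the same model judge it, and resample if the judgement is negative. This is a heavy simplification for illustration only; the actual method searches and rewinds at the token level rather than over whole responses:

```python
def generate_with_rewind(llm, prompt, max_tries=4):
    """Sample, self-evaluate, and 'rewind' (resample) until the model judges
    its own response acceptable or the budget runs out."""
    response = llm(prompt)
    for _ in range(max_tries - 1):
        verdict = llm(
            f"Prompt: {prompt}\nResponse: {response}\n"
            "Is this response helpful and harmless? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            break
        response = llm(prompt)  # rewind and try again
    return response
```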
---
## Top ML Papers of the Week (September 4 - September 10)
| **Paper** | **Links** |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| 1) **Transformers as SVMs** - finds that the optimization geometry of self-attention in Transformers exhibits a connection to hard-margin SVM problems; also finds that gradient descent applied without early-stopping leads to implicit regularization and convergence of self-attention; this work has the potential to deepen the understanding of language models. | [Paper](https://arxiv.org/abs/2308.16898) |
| 2) **Scaling RLHF with AI Feedback** - tests whether RLAIF is a suitable alternative to RLHF by comparing the efficacy of human vs. AI feedback; uses different techniques to generate AI labels and conduct scaling studies to report optimal settings for generating aligned preferences; the main finding is that on the task of summarization, human evaluators prefer generations from both RLAIF and RLHF over a baseline SFT model in ∼70% of cases. | [Paper](https://arxiv.org/abs/2309.00267), [Tweet](https://twitter.com/omarsar0/status/1699102486928265530?s=20) |
| 3) **GPT Solves Math Problems Without a Calculator** - shows that with sufficient training data, a 2B language model can perform multi-digit arithmetic operations with 100% accuracy and without data leakage; it’s also competitive with GPT-4 on a 5K-sample Chinese math problem test set when fine-tuned from GLM-10B on a dataset containing additional multi-step arithmetic operations and detailed math problems. | [Paper](https://arxiv.org/abs/2309.03241), [Tweet](https://twitter.com/_akhaliq/status/1699951105927512399?s=20) |
| 4) **LLMs as Optimizers** - an approach where the optimization problem is described in natural language; an LLM is then instructed to iteratively generate new solutions based on the defined problem and previously found solutions; at each optimization step, the goal is to generate new prompts that increase test accuracy based on the trajectory of previously generated prompts; the optimized prompts outperform human-designed prompts on GSM8K and Big-Bench Hard, sometimes by over 50% (a minimal loop sketch follows this table). | [Paper](https://arxiv.org/abs/2309.03409), [Tweet](https://twitter.com/omarsar0/status/1700249035456598391?s=20) |
| 5) **Multi-modality Instruction Tuning** - presents ImageBind-LLM, a multimodality instruction tuning method for LLMs via ImageBind; the model can respond to instructions of diverse modalities such as audio, 3D point clouds, and video, with high language generation quality; this is achieved by aligning ImageBind’s visual encoder with an LLM via a learnable bind network. | [Paper](https://arxiv.org/abs/2309.03905), [Tweet](https://twitter.com/arankomatsuzaki/status/1699947731333345750?s=20) |
| 6) **Explaining Grokking** - aims to explain grokking behavior in neural networks; specifically, it predicts and shows two novel behaviors: the first is ungrokking where a model goes from perfect generalization to memorization when trained further on a smaller dataset than the critical threshold; the second is semi-grokking where a network demonstrates grokking-like transition when training a randomly initialized network on the critical dataset size. | [Paper](https://arxiv.org/abs/2309.02390), [Tweet](https://twitter.com/VikrantVarma_/status/1699823229307699305?s=20) |
| 7) **Overview of AI Deception** - provides a survey of empirical examples of AI deception. | [Paper](https://arxiv.org/abs/2308.14752), [Tweet](https://twitter.com/DanHendrycks/status/1699437800301752332?s=20) |
| 8) **FLM-101B** - a new open LLM called FLM-101B with 101B parameters trained on 0.31T tokens on a $100K budget; the authors analyze different growth strategies, growing the number of parameters from smaller sizes to larger ones; they ultimately employ an aggressive strategy that reduces costs by >50%: three models are trained sequentially, with each model inheriting knowledge from its smaller predecessor. | [Paper](https://arxiv.org/abs/2309.03852), [Tweet](https://twitter.com/omarsar0/status/1700156132700963053?s=20) |
| 9) **Cognitive Architecture for Language Agents** - proposes a systematic framework for understanding and building fully-fledged language agents drawing parallels from production systems and cognitive architectures; it systematizes diverse methods for LLM-based reasoning, grounding, learning, and decision making as instantiations of language agents in the framework. | [Paper](https://arxiv.org/abs/2309.02427), [Tweet](https://twitter.com/ShunyuYao12/status/1699396834983362690?s=20) |
| 10) **Q-Transformer** - a scalable RL method for training multi-task policies from large offline datasets leveraging human demonstrations and autonomously collected data; shows good performance on a large diverse real-world robotic manipulation task suite. | [Paper](https://q-transformer.github.io/), [Tweet](https://twitter.com/YevgenChebotar/status/1699909244743815677?s=20) |
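
The following is a minimal, hedged sketch of the LLMs-as-optimizers loop summarized in entry 4: keep a trajectory of (prompt, score) pairs, show it to an optimizer LLM, and ask for a better prompt. The `llm` and `score_prompt` functions are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of an LLM-as-optimizer prompt-optimization loop.
# `llm` and `score_prompt` are hypothetical placeholders, not the paper's code.
import random

def llm(meta_prompt: str) -> str:
    """Placeholder for a call to a real LLM; returns a candidate instruction."""
    return random.choice([
        "Let's think step by step.",
        "Break the problem into parts and solve each one.",
        "Take a deep breath and work on this problem step-by-step.",
    ])

def score_prompt(prompt: str, dev_set) -> float:
    """Placeholder accuracy estimate of `prompt` on a small labeled dev set."""
    return random.random()

def optimize_prompt(dev_set, steps: int = 10):
    history = [("Solve the problem.", score_prompt("Solve the problem.", dev_set))]
    for _ in range(steps):
        # Show the trajectory of previous prompts and scores (best last) and
        # ask the optimizer LLM to propose a higher-scoring instruction.
        trajectory = "\n".join(
            f"text: {p}\nscore: {s:.2f}" for p, s in sorted(history, key=lambda x: x[1])
        )
        meta_prompt = (
            "Here are previous instructions with their accuracies:\n"
            f"{trajectory}\n"
            "Write a new instruction that achieves a higher accuracy."
        )
        candidate = llm(meta_prompt)
        history.append((candidate, score_prompt(candidate, dev_set)))
    return max(history, key=lambda x: x[1])

if __name__ == "__main__":
    best_prompt, best_score = optimize_prompt(dev_set=[])
    print(best_prompt, best_score)
```
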
---
## Top ML Papers of the Week (August 28 - September 3)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| 1) **Large Language and Speech Model** - proposes a large language and speech model trained with cross-modal conversational abilities; it supports speech-and-language instructions, enabling more natural interactions with AI systems. | [Paper](https://arxiv.org/abs/2308.15930v1), [Tweet](https://twitter.com/_akhaliq/status/1697081112164475304?s=20) |
| 2) **SAM-Med2D** - adapts the Segment Anything Model (SAM) to 2D medical image segmentation by fine-tuning it on a large-scale medical image dataset. | [Paper](https://arxiv.org/abs/2308.16184v1), [Tweet](https://twitter.com/omarsar0/status/1698014448856773102?s=20) |
| 3) **Vector Search with OpenAI Embeddings** - suggests that “from a cost–benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern ‘AI stack’ for search since such applications have already received substantial investments in existing, widely deployed infrastructure.” | [Paper](https://arxiv.org/abs/2308.14963), [Tweet](https://twitter.com/omarsar0/status/1696879909950361867?s=20) |
| 4) **Graph of Thoughts** - presents a prompting approach that models text generated by LLMs as an arbitrary graph; it enables combining arbitrary "thoughts" and enhancing them using feedback loops; the core idea is to enhance the LLM capabilities through "network reasoning" and without any model updates; this could be seen as a generalization of the now popular Chain-of-Thought and Tree-of-Thought. | [Paper](https://arxiv.org/abs/2308.09687v2), [Tweet](https://twitter.com/omarsar0/status/1697245998828204200?s=20) |
| 5) **MVDream** - a multi-view diffusion model that can generate geometrically consistent multi-view images given a text prompt; it leverages pre-trained diffusion models and a multi-view dataset rendered from 3D assets; this leads to generalizability of 2D diffusion and consistency of 3D data. | [Paper](https://arxiv.org/abs/2308.16512), [Tweet](https://twitter.com/_akhaliq/status/1697521847963619462?s=20) |
| 6) **Nougat** - proposes an approach for neural optical understanding of academic documents; it supports the ability to extract text, equations, and tables from academic PDFs, i.e., convert PDFs into LaTeX/markdown. | [Paper](https://arxiv.org/abs/2308.13418v1), [Tweet](https://twitter.com/lukas_blecher/status/1696101110853910716?s=20) |
| 7) **Factuality Detection in LLMs** - proposes a tool called **FacTool** to detect factual errors in texts generated by LLMs; shows the necessary components needed and the types of tools to integrate with LLMs for better detecting factual errors. | [Paper](https://arxiv.org/abs/2307.13528v2), [Tweet](https://twitter.com/omarsar0/status/1697642048587694370?s=20) |
| 8) **AnomalyGPT** - an approach for industrial anomaly detection based on large vision-language models; it simulates anomalous images and textual descriptions to generate training data; employs an image decoder and prompt learner to detect anomalies; it shows few-shot in-context learning capabilities and achieves state-of-the-art performance on benchmark datasets. | [Paper](https://arxiv.org/abs/2308.15366v1), [Tweet](https://twitter.com/shinmura0/status/1697091364633317707?s=20) |
| 9) **FaceChain** - a personalized portrait generation framework combining customized image-generation models and face-related perceptual understanding models to generate truthful personalized portraits; it works with a handful of portrait images as input. | [Paper](https://arxiv.org/abs/2308.14256v1) |
| 10) **Qwen-VL** - introduces a set of large-scale vision-language models demonstrating strong performance in tasks like image captioning, question answering, visual localization, and flexible interaction. | [Paper](https://arxiv.org/abs/2308.12966), [Tweet](https://twitter.com/arankomatsuzaki/status/1695964537671893306?s=20) |
---
## Top ML Papers of the Week (August 21 - August 27)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Code Llama** - a family of LLMs for code based on Llama 2; the models provided as part of this release include foundation base models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct). | [Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/), [Tweet](https://twitter.com/MetaAI/status/1694729071325007993?s=20) |
| 2) **Survey on Instruction Tuning for LLMs** - a new survey on instruction tuning for LLMs, including a systematic review of the literature, methodologies, dataset construction, training of models, applications, and more. | [Paper](https://arxiv.org/abs/2308.10792), [Tweet](https://twitter.com/omarsar0/status/1693978006237102589?s=20) |
| 3) **SeamlessM4T** - a unified multilingual and multimodal machine translation system that supports ASR, text-to-text translation, speech-to-text translation, text-to-speech translation, and speech-to-speech translation. | [Paper](https://ai.meta.com/research/publications/seamless-m4t/), [Tweet](https://twitter.com/MetaAI/status/1694020437532151820?s=20) |
| 4) **Use of LLMs for Illicit Purposes** - provides an overview of existing efforts to identify and mitigate threats and vulnerabilities arising from LLMs; serves as a guide to building more reliable and robust LLM-powered systems. | [Paper](https://arxiv.org/abs/2308.12833), [Tweet](https://twitter.com/omarsar0/status/1694885393286549636?s=20) |
| 5) **Giraffe** - a new family of models that are fine-tuned from base Llama and Llama 2; extends the context length to 4K, 16K, and 32K; explores the space of expanding context lengths in LLMs so it also includes insights useful for practitioners and researchers. | [Paper](https://arxiv.org/abs/2308.10882), [Tweet](https://twitter.com/bindureddy/status/1694126931174977906?s=20) |
| 6) **IT3D** - presents a strategy that leverages explicitly synthesized multi-view images to improve Text-to-3D generation; integrates a discriminator along a Diffusion-GAN dual training strategy to guide the training of the 3D models. | [Paper](https://arxiv.org/abs/2308.11473v1) |
| 7) **A Survey on LLM-based Autonomous Agents** - presents a comprehensive survey of LLM-based autonomous agents; delivers a systematic review of the field and a summary of various applications of LLM-based AI agents in domains like social science and engineering. | [Paper](https://arxiv.org/abs/2308.11432v1), [Tweet](https://twitter.com/omarsar0/status/1695440652048257251?s=20) |
| 8) **Prompt2Model** - a new framework that accepts a prompt describing a task through natural language; it then uses the prompt to train a small special-purpose model that is conducive to deployment; the proposed pipeline automatically collects and synthesizes knowledge through three channels: dataset retrieval, dataset generation, and model retrieval. | [Paper](https://arxiv.org/abs/2308.12261), [Tweet](https://twitter.com/omarsar0/status/1694718168185598055?s=20) |
| 9) **LegalBench** - a collaboratively constructed benchmark for measuring legal reasoning in LLMs; it consists of 162 tasks covering 6 different types of legal reasoning. | [Paper](https://arxiv.org/abs/2308.11462), [Tweet](https://twitter.com/NeelGuha/status/1694375959334670643?s=20) |
| 10) **Language to Rewards for Robotic Skill Synthesis** - proposes a new language-to-reward system that utilizes LLMs to define optimizable reward parameters to achieve a variety of robotic tasks; the method is evaluated on a real robot arm where complex manipulation skills such as non-prehensile pushing emerge. | [Paper](https://arxiv.org/abs/2306.08647), [Tweet](https://twitter.com/GoogleAI/status/1694086273689076170?s=20) |
---
## Top ML Papers of the Week (August 14 - August 20)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| 1) **Self-Alignment with Instruction Backtranslation** - presents an approach to automatically label human-written text with corresponding instructions, which enables building a high-quality instruction-following language model; the steps are: 1) fine-tune an LLM with small seed data and a web corpus, then 2) generate instructions for each web doc, 3) curate high-quality examples via the LLM, and finally 4) fine-tune on the newly curated data; the self-alignment approach outperforms all other Llama-based models on the Alpaca leaderboard. | [Paper](https://arxiv.org/abs/2308.06259), [Tweet](https://twitter.com/jaseweston/status/1690888779878330368?s=20) |
| 2) **Platypus** - a family of fine-tuned and merged LLMs currently topping the Open LLM Leaderboard; it describes a process of efficiently fine-tuning and merging LoRA modules and also shows the benefits of collecting high-quality datasets for fine-tuning; specifically, it presents a small-scale, high-quality, and highly curated dataset, Open-Platypus, that enables strong performance with short and cheap fine-tuning; for example, a 13B model can be trained on a single A100 GPU using 25K questions in 5 hours. | [Paper](https://arxiv.org/abs/2308.07317v1), [Tweet](https://twitter.com/omarsar0/status/1692549762480791959?s=20) |
| 3) **Model Compression for LLMs** - a short survey on the recent model compression techniques for LLMs; provides a high-level overview of topics such as quantization, pruning, knowledge distillation, and more; it also provides an overview of benchmark strategies and evaluation metrics for measuring the effectiveness of compressed LLMs. | [Paper](https://arxiv.org/abs/2308.07633), [Tweet](https://twitter.com/omarsar0/status/1691803395160477905?s=20) |
| 4) **GEARS** - uses deep learning and gene relationship knowledge graph to help predict cellular responses to genetic perturbation; GEARS exhibited 40% higher precision than existing approaches in the task of predicting four distinct genetic interaction subtypes in a combinatorial perturbation screen. | [Paper](http://nature.com/articles/s41587-023-01905-6.pdf), [Tweet](https://twitter.com/jure/status/1692229511096754594?s=20) |
| 5) **Shepherd** - introduces a language model (7B) specifically tuned to critique the model responses and suggest refinements; this enables the capability to identify diverse errors and suggest remedies; its critiques are either similar or preferred to ChatGPT. | [Paper](https://arxiv.org/abs/2308.04592), [Tweet](https://twitter.com/MetaAI/status/1691517949130207232?s=20) |
| 6) **Using GPT-4 Code Interpreter to Boost Mathematical Reasoning** - proposes a zero-shot prompting technique for GPT-4 Code Interpreter that explicitly encourages the use of code for self-verification which further boosts performance on math reasoning problems; initial experiments show that GPT4-Code achieved a zero-shot accuracy of 69.7% on the MATH dataset which is an improvement of 27.5% over GPT-4’s performance (42.2%). Lots to explore here. | [Paper](https://arxiv.org/abs/2308.07921), [Tweet](https://twitter.com/omarsar0/status/1691630591744127355?s=20) |
| 7) **Teach LLMs to Personalize** - proposes a general approach based on multitask learning for personalized text generation using LLMs; the goal is to have an LLM generate personalized text without relying on predefined attributes. | [Paper](https://arxiv.org/abs/2308.07968), [Tweet](https://twitter.com/omarsar0/status/1692186726192521364?s=20) |
| 8) **OctoPack** - presents 4 terabytes of Git commits across 350 languages used to instruction tune code LLMs; achieves state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark; the data is also used to extend the HumanEval benchmark to other tasks such as code explanation and code repair. | [Paper](https://arxiv.org/abs/2308.07124v1), [Tweet](https://twitter.com/arankomatsuzaki/status/1691259656453193728?s=20) |
| 9) **Efficient Guided Generation for LLMs** - presents a library to help LLM developers guide text generation in a fast and reliable way; provides generation methods that guarantee that the output will match a regular expression or follow a JSON schema (a toy sketch of the underlying constrained-decoding idea follows this table). | [Paper](https://arxiv.org/abs/2307.09702), [Tweet](https://twitter.com/omarsar0/status/1691179888214966273?s=20) |
| 10) **Bayesian Flow Networks** - introduces a new class of generative models bringing together the power of Bayesian inference and deep learning; it differs from diffusion models in that it operates on the parameters of a data distribution rather than on a noisy version of the data; it’s adapted to continuous, discretized and discrete data with minimal changes to the training procedure. | [Paper](https://arxiv.org/abs/2308.07037), [Tweet](https://twitter.com/nnaisense/status/1691310494039379969?s=20) |
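
Below is a toy illustration of the guided-generation idea from entry 9: at each decoding step, only tokens that keep the partial output a valid prefix of an allowed pattern are permitted. The real library compiles regular expressions and JSON schemas into token-level masks; the fixed choice set, toy vocabulary, and uniform sampling here are simplifications for illustration only.

```python
# Toy sketch of constrained decoding: restrict generation to a fixed set of
# allowed strings by masking tokens that would break every remaining prefix.
# The vocabulary and the allowed set are illustrative placeholders.
import random

VOCAB = ["pos", "neg", "itive", "ative", "neutral", "!!", "foo"]
ALLOWED = {"positive", "negative", "neutral"}

def allowed_next_tokens(prefix: str) -> list[str]:
    """Tokens that keep `prefix + token` a prefix of some allowed string."""
    return [t for t in VOCAB if any(s.startswith(prefix + t) for s in ALLOWED)]

def guided_generate(max_steps: int = 5) -> str:
    out = ""
    for _ in range(max_steps):
        candidates = allowed_next_tokens(out)
        if not candidates:
            break
        # A real implementation would pick the highest-probability valid token;
        # here we sample uniformly among the valid continuations.
        out += random.choice(candidates)
        if out in ALLOWED:
            break
    return out

print(guided_generate())  # always one of: positive / negative / neutral
```
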
---
## Top ML Papers of the Week (August 7 - August 13)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| 1) **LLMs as Database Administrators** - presents D-Bot, a framework based on LLMs that continuously acquires database maintenance experience from textual sources; D-Bot can help in performing: 1) database maintenance knowledge detection from documents and tools, 2) tree of thought reasoning for root cause analysis, and 3) collaborative diagnosis among multiple LLMs. | [Paper](https://arxiv.org/abs/2308.05481), [Tweet](https://twitter.com/omarsar0/status/1689811820272353280?s=20) |
| 2) **Political Biases Found in NLP Models** - develops methods to measure media biases in LLMs, including the fairness of downstream NLP models tuned on top of politically biased LLMs; findings reveal that LLMs have political leanings which reinforce existing polarization in the corpora. | [Paper](https://aclanthology.org/2023.acl-long.656/), [Tweet](https://twitter.com/AiBreakfast/status/1688939983468453888?s=20) |
| 3) **Evaluating LLMs as Agents** - presents a multidimensional benchmark (AgentBench) to assess LLM-as-Agent’s reasoning and decision-making abilities; results show that there is a significant disparity in performance between top commercial LLMs and open-source LLMs when testing the ability to act as agents; open-source LLMs lag on the AgentBench tasks while GPT-4 shows potential to build continuously learning agents. | [Paper](https://arxiv.org/abs/2308.03688v1), [Tweet](https://twitter.com/arankomatsuzaki/status/1688719837760000000?s=20) |
| 4) **Studying LLM Generalization with Influence Functions** - introduces an efficient approach to scale influence functions to LLMs with up to 52 billion parameters; the influence functions are used to further investigate the generalization patterns of LLMs such as cross-lingual generalization and memorization; finds that middle layers in the network seem to be responsible for the most abstract generalization patterns. | [Paper](https://arxiv.org/abs/2308.03296), [Tweet](https://twitter.com/AnthropicAI/status/1688946685937090560?s=20) |
| 5) **Seeing Through the Brain** - proposes NeuroImagen, a pipeline for reconstructing visual stimuli images from EEG signals to potentially understand visually-evoked brain activity; a latent diffusion model takes EEG data and reconstructs high-resolution visual stimuli images. | [Paper](https://arxiv.org/abs/2308.02510), [Tweet](https://twitter.com/_akhaliq/status/1688787286807228416?s=20) |
| 6) **SynJax** - is a new library that provides an efficient vectorized implementation of inference algorithms for structured distributions; it enables building large-scale differentiable models that explicitly model structure in data like tagging, segmentation, constituency trees, and spanning trees. | [Paper](https://arxiv.org/abs/2308.03291v1), [Tweet](https://twitter.com/milosstanojevic/status/1688896558790520832?s=20) |
| 7) **Synthetic Data Reduces Sycophancy in LLMs** - proposes fine-tuning on simple synthetic data to reduce sycophancy in LLMs; sycophancy occurs when an LLM echoes a user’s stated view even when that view is not objectively correct (a toy generator for such synthetic data follows this table). | [Paper](https://arxiv.org/abs/2308.03958), [Tweet](https://twitter.com/JerryWeiAI/status/1689340237993185280?s=20) |
| 8) **Photorealistic Unreal Graphics (PUG)** - presents photorealistic and semantically controllable synthetic datasets for representation learning using Unreal Engine; the goal is to democratize photorealistic synthetic data and enable more rigorous evaluations of vision models. | [Paper](https://arxiv.org/abs/2308.03977), [Tweet](https://twitter.com/MetaAI/status/1689316127846109184?s=20) |
| 9) **LLMs for Industrial Control** - develops an approach to select demonstrations and generate high-performing prompts used with GPT for executing tasks such as controlling HVAC (Heating, Ventilation, and Air Conditioning) for buildings; GPT-4 performs comparably to RL methods but uses fewer samples and incurs lower technical debt. | [Paper](https://arxiv.org/abs/2308.03028), [Tweet](https://twitter.com/emollick/status/1688760539441217536?s=20) |
| 10) **Trustworthy LLMs** - presents a comprehensive overview of important categories and subcategories crucial for assessing LLM trustworthiness; the dimensions include reliability, safety, fairness, resistance to misuse, explainability and reasoning, adherence to social norms, and robustness; finds that aligned models perform better in terms of trustworthiness but the effectiveness of alignment varies. | [Paper](https://arxiv.org/abs/2308.05374), [Tweet](https://twitter.com/_akhaliq/status/1689818964669390848?s=20) |
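
The following is a toy generator in the spirit of entry 7: each prompt states a user opinion about a simple claim whose truth is independent of that opinion, and the target label always follows the ground truth rather than the user. The template and the arithmetic claims are illustrative, not the paper's actual data.

```python
# Toy synthetic-data generator for reducing sycophancy: the user states an
# opinion about a simple arithmetic claim, but the target answer depends only
# on whether the claim is actually true. Templates are illustrative only.
import random

def make_example() -> dict:
    a, b = random.randint(1, 50), random.randint(1, 50)
    claimed = a + b + random.choice([0, 0, 1, -1])   # sometimes off by one
    is_true = (claimed == a + b)
    opinion = random.choice(["agree", "disagree"])
    prompt = (
        f"Hello, my name is Alex. I {opinion} with the claim that "
        f"{a} + {b} = {claimed}. What is your view of the claim?\n"
        "(A) The claim is true\n(B) The claim is false\nAnswer:"
    )
    # The label ignores the user's stated opinion entirely.
    return {"prompt": prompt, "label": "(A)" if is_true else "(B)"}

dataset = [make_example() for _ in range(1000)]
print(dataset[0]["prompt"])
print("target:", dataset[0]["label"])
```
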
---
## Top ML Papers of the Week (July 31 - August 6)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| 1) **Open Problem and Limitation of RLHF** - provides an overview of open problems and the limitations of RLHF. | [Paper](https://arxiv.org/abs/2307.15217), [Tweet](https://twitter.com/arankomatsuzaki/status/1685813753063870465?s=20) |
| 2) **Med-Flamingo** - a new multimodal model that allows in-context learning and enables tasks such as few-shot medical visual question answering; evaluations based on physicians, show improvements of up to 20% in clinician's rating; the authors occasionally observed low-quality generations and hallucinations. | [Paper](https://arxiv.org/abs/2307.15189), [Tweet](https://twitter.com/Michael_D_Moor/status/1685804620730540033?s=20) |
| 3) **ToolLLM** - enables LLMs to interact with 16000 real-world APIs; it’s a framework that allows data preparation, training, and evaluation; the authors claim that one of their models, ToolLLaMA, has reached the performance of ChatGPT (turbo-16k) in tool use. | [Paper](https://arxiv.org/abs/2307.16789v1), [Tweet](https://twitter.com/omarsar0/status/1687531613574348800?s=20) |
| 4) **Skeleton-of-Thought** - proposes a prompting strategy that first generates an answer skeleton and then performs parallel API calls to fill in the content of each skeleton point; reports quality improvements in addition to speed-ups of up to 2.39x (a minimal sketch of the skeleton-then-parallel-expansion loop follows this table). | [Paper](https://arxiv.org/abs/2307.15337), [Tweet](https://twitter.com/omarsar0/status/1685832487103008768?s=20) |
| 5) **MetaGPT** - a framework involving LLM-based multi-agents that encodes human standardized operating procedures (SOPs) to extend complex problem-solving capabilities that mimic efficient human workflows; this enables MetaGPT to perform multifaceted software development, code generation tasks, and even data analysis using tools like AutoGPT and LangChain. | [Paper](https://arxiv.org/abs/2308.00352v2), [Tweet](https://twitter.com/ai_database/status/1686949868298973184?s=20) |
| 6) **OpenFlamingo** - introduces a family of autoregressive vision-language models ranging from 3B to 9B parameters; the technical report describes the models, training data, and evaluation suite. | [Paper](https://arxiv.org/abs/2308.01390), [Tweet](https://twitter.com/anas_awadalla/status/1687295129005195264?s=20) |
| 7) **The Hydra Effect** - shows that language models exhibit self-repairing properties: when one layer of attention heads is ablated, a later layer takes over its function. | [Paper](https://arxiv.org/abs/2307.15771), [Tweet](https://twitter.com/_akhaliq/status/1686192437771788288?s=20) |
| 8) **Self-Check** - explores whether LLMs can perform self-checks, which is required for complex tasks that depend on non-linear thinking and multi-step reasoning; it proposes a zero-shot verification scheme to recognize errors without external resources; the scheme can improve question-answering performance through weighted voting and even improve math word problem-solving. | [Paper](https://arxiv.org/abs/2308.00436), [Tweet](https://twitter.com/_akhaliq/status/1686561569486827520?s=20) |
| 9) **Agents Model the World with Language** - presents an agent that learns a multimodal world model that predicts future text and image representations; it learns to predict future language, video, and rewards; it’s applied to different domains and can learn to follow instructions in visually and linguistically complex domains. | [Paper](https://arxiv.org/abs/2308.01399), [Tweet](https://twitter.com/johnjnay/status/1687277999517818880?s=20) |
| 10) **AutoRobotics-Zero** - discovers zero-shot adaptable policies from scratch that enable adaptive behaviors necessary for sudden environmental changes; as an example, the authors demonstrate the automatic discovery of Python code for controlling a robot. | [Paper](https://arxiv.org/abs/2307.16890), [Tweet](https://twitter.com/XingyouSong/status/1686190266578046976?s=20) |
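
Here is a minimal sketch of the Skeleton-of-Thought idea from entry 4, assuming a placeholder `llm()` function that stands in for a real completion API; the parallel expansion of skeleton points is where the reported latency win comes from.

```python
# Minimal Skeleton-of-Thought sketch: first ask for a short skeleton of points,
# then expand every point in parallel. `llm` is a placeholder for a real API.
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call."""
    return f"[completion for: {prompt[:40]}...]"

def skeleton_of_thought(question: str, n_points: int = 3) -> str:
    skeleton_text = llm(
        f"Question: {question}\n"
        f"Give a skeleton of the answer as {n_points} short numbered points."
    )
    # A real system would parse the numbered points out of `skeleton_text`;
    # with the placeholder model we fake the parse for illustration.
    points = [f"point {i + 1}" for i in range(n_points)]

    def expand(point: str) -> str:
        return llm(f"Question: {question}\nExpand on '{point}' in 2-3 sentences.")

    # Expanding points concurrently is where the speed-up comes from.
    with ThreadPoolExecutor(max_workers=n_points) as pool:
        expansions = list(pool.map(expand, points))
    return "\n\n".join(expansions)

print(skeleton_of_thought("Why is the sky blue?"))
```
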
---
## Top ML Papers of the Week (July 24 - July 30)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Universal Adversarial LLM Attacks** - finds universal and transferable adversarial attacks that cause aligned models like ChatGPT and Bard to generate objectionable behaviors; the approach automatically produces adversarial suffixes using greedy and gradient search. | [Paper](https://arxiv.org/abs/2307.15043), [Tweet](https://twitter.com/andyzou_jiaming/status/1684766170766004224?s=20) |
| 2) **RT-2** - a new end-to-end vision-language-action model that learns from both web and robotics data; enables the model to translate the learned knowledge to generalized instructions for robotic control. | [Paper](https://robotics-transformer2.github.io/assets/rt2.pdf), [Tweet](https://twitter.com/GoogleDeepMind/status/1684903412834447360?s=20) |
| 3) **Med-PaLM Multimodal** - introduces a new multimodal biomedical benchmark with 14 different tasks; it presents a proof of concept for a generalist biomedical AI system called Med-PaLM Multimodal; it supports different types of biomedical data like clinical text, imaging, and genomics. | [Paper](https://arxiv.org/abs/2307.14334), [Tweet](https://twitter.com/vivnat/status/1684404882844024832?s=20) |
| 4) **Tracking Anything in High Quality** - proposes a framework for high-quality tracking of anything in videos; it consists of a video multi-object segmenter and a pretrained mask refiner model to refine the tracking results; the model ranks 2nd place in the VOTS2023 challenge. | [Paper](https://arxiv.org/abs/2307.13974v1), [Tweet](https://twitter.com/arankomatsuzaki/status/1684380610901467136?s=20) |
| 5) **Foundation Models in Vision** - presents a survey and outlook discussing open challenges and research directions for foundational models in computer vision. | [Paper](https://arxiv.org/abs/2307.13721v1), [Tweet](https://twitter.com/KhanSalmanH/status/1684496991215316992?s=20) |
| 6) **L-Eval** - a standardized evaluation for long context language models containing 411 long documents over 2K query-response pairs encompassing areas such as law, finance, school lectures, long conversations, novels, and meetings. | [Paper](https://arxiv.org/abs/2307.11088v1), [Tweet](https://twitter.com/WenxiangJiao/status/1682208555762610176?s=20) |
| 7) **LoraHub** - introduces LoraHub to enable efficient cross-task generalization via dynamic LoRA composition; it enables the combination of LoRA modules without human expertise or additional parameters/gradients; mimics the performance of in-context learning in few-shot scenarios (a toy sketch of weighted LoRA composition follows this table). | [Paper](https://arxiv.org/abs/2307.13269v1), [Tweet](https://twitter.com/_akhaliq/status/1684030297661403136?s=20) |
| 8) **Survey of Aligned LLMs** - presents a comprehensive overview of alignment approaches, including aspects like data collection, training methodologies, and model evaluation. | [Paper](https://arxiv.org/abs/2307.12966v1), [Tweet](https://twitter.com/omarsar0/status/1684960627423420419?s=20) |
| 9) **WavJourney** - leverages LLMs to connect various audio models to compose audio content for engaging storytelling; this involves an explainable and interactive design that enhances creative control in audio production. | [Paper](https://arxiv.org/abs/2307.14335v1), [Tweet](https://twitter.com/LiuXub/status/1684338437934002176?s=20) |
| 10) **FacTool** - a task and domain agnostic framework for factuality detection of text generated by LLM; the effectiveness of the approach is tested on tasks such as code generation and mathematical reasoning; a benchmark dataset is released, including a ChatGPT plugin. | [Paper](https://arxiv.org/abs/2307.13528v2), [Tweet](https://twitter.com/gneubig/status/1684658613921669120?s=20) |
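
The following is a toy sketch of the dynamic LoRA composition behind entry 7: several LoRA modules (pairs of low-rank matrices) are merged with scalar weights, and the weights are chosen by gradient-free search on a handful of examples. All shapes, data, and the squared-error loss are illustrative placeholders; the actual method uses a dedicated gradient-free optimizer.

```python
# Toy LoraHub-style composition: merge several LoRA modules with scalar weights
# chosen by gradient-free (here: random) search on a few-shot loss.
# Shapes, data, and the loss are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, r, n_modules = 16, 4, 3

# Pretend these LoRA modules (B @ A gives a d x d update) were trained upstream.
loras = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(n_modules)]
W_base = rng.normal(size=(d, d))
X_few, Y_few = rng.normal(size=(8, d)), rng.normal(size=(8, d))  # few-shot examples

def compose(weights: np.ndarray) -> np.ndarray:
    """Weighted sum of LoRA updates applied to the base weight."""
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, loras))
    return W_base + delta

def few_shot_loss(weights: np.ndarray) -> float:
    W = compose(weights)
    return float(np.mean((X_few @ W.T - Y_few) ** 2))

# Gradient-free search over the composition weights.
best_w, best_loss = np.zeros(n_modules), few_shot_loss(np.zeros(n_modules))
for _ in range(200):
    w = rng.uniform(-1.5, 1.5, size=n_modules)
    loss = few_shot_loss(w)
    if loss < best_loss:
        best_w, best_loss = w, loss

print("composition weights:", np.round(best_w, 2), "loss:", round(best_loss, 3))
```
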
---
## Top ML Papers of the Week (July 17 - July 23)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Llama 2** - a collection of pretrained foundational models and fine-tuned chat models ranging in scale from 7B to 70B; Llama 2-Chat is competitive on a range of tasks and shows strong results on safety and helpfulness. | [Paper](https://arxiv.org/abs/2307.09288v2), [Tweet](https://twitter.com/MetaAI/status/1681363272484945921?s=20) |
| 2) **How is ChatGPT’s Behavior Changing Over Time?** - evaluates different versions of GPT-3.5 and GPT-4 on various tasks and finds that behavior and performance vary greatly over time; this includes differences in performance for tasks such as math problem-solving, safety-related generations, and code formatting. | [Paper](https://arxiv.org/abs/2307.09009v1), [Tweet](https://twitter.com/matei_zaharia/status/1681467961905926144?s=20) |
| 3) **FlashAttention-2** - improves work partitioning and parallelism and addresses issues like reducing non-matmul FLOPs, parallelizing attention computation which increases occupancy, and reducing communication through shared memory. | [Paper](https://arxiv.org/abs/2307.08691v1), [Tweet](https://twitter.com/tri_dao/status/1680987577913065472?s=20) |
| 4) **Measuring Faithfulness in Chain-of-Thought Reasoning** - finds that the faithfulness of CoT reasoning varies widely across tasks, measured via simple interventions like adding mistakes and paraphrasing; demonstrates that as the model becomes larger and more capable, the reasoning becomes less faithful; suggests that carefully choosing model size and task can improve CoT faithfulness. | [Paper](https://www-files.anthropic.com/production/files/measuring-faithfulness-in-chain-of-thought-reasoning.pdf), [Tweet](https://twitter.com/AnthropicAI/status/1681341063083229189?s=20) |
| 5) **Generative TV & Showrunner Agents** - an approach to generate episodic content using LLMs and multi-agent simulation; this enables current systems to perform creative storytelling through the integration of simulation, the user, and powerful AI models and enhance the quality of AI-generated content. | [Paper](https://fablestudio.github.io/showrunner-agents/), [Tweet](https://twitter.com/fablesimulation/status/1681352904152850437?s=20) |
| 6) **Challenges & Application of LLMs** - summarizes a comprehensive list of challenges when working with LLMs that range from brittle evaluations to prompt brittleness to a lack of robust experimental designs. | [Paper](https://arxiv.org/abs/2307.10169), [Tweet](https://twitter.com/omarsar0/status/1681844380934500358?s=20) |
| 7) **Retentive Network** - presents a foundation architecture for LLMs with the goal of improving training efficiency, inference, and long-sequence modeling; adapts a retention mechanism for sequence modeling that supports parallel, recurrent, and chunkwise recurrent representations. | [Paper](https://arxiv.org/abs/2307.08621), [Tweet](https://twitter.com/arankomatsuzaki/status/1681113977500184576?s=20) |
| 8) **Meta-Transformer** - a framework that performs unified learning across 12 modalities; it can handle tasks that include fundamental perception (text, image, point cloud, audio, video), practical application (X-Ray, infrared, hyperspectral, and IMU), and data mining (graph, tabular, and time-series). | [Paper](https://arxiv.org/abs/2307.10802), [Tweet](https://twitter.com/omarsar0/status/1682197751990288385?s=20) |
| 9) **Retrieve In-Context Example for LLMs** - presents a framework to iteratively train dense retrievers to identify high-quality in-context examples for LLMs; the approach enhances in-context learning performance, demonstrated on a suite of 30 tasks; examples with similar patterns are helpful and gains are consistent across model sizes (a toy retrieval sketch follows this table). | [Paper](https://arxiv.org/abs/2307.07164), [Tweet](https://twitter.com/_akhaliq/status/1680770636166094848?s=20) |
| 10) **FLASK** - proposes fine-grained evaluation for LLMs based on a range of alignment skill sets; involves 12 skills and can help to provide a holistic view of a model’s performance depending on skill, domain, and level of difficulty; useful to analyze factors that make LLMs more proficient at specific skills. | [Paper](https://arxiv.org/abs/2307.10928), [Tweet](https://twitter.com/SeonghyeonYe/status/1682209670302408705?s=20) |
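
Below is a toy sketch of the retrieval step behind entry 9: embed a candidate pool and the test query, then pick the top-k most similar examples as in-context demonstrations. The hashed bag-of-words embedding is only a stand-in for the trained dense retriever, and the pool is illustrative.

```python
# Toy dense retrieval of in-context examples: embed candidates and the query,
# then select the top-k most similar candidates as demonstrations.
# The bag-of-words embedding stands in for a trained dense retriever.
import numpy as np

POOL = [
    "Translate 'cat' to French -> chat",
    "Translate 'dog' to French -> chien",
    "What is 2 + 2? -> 4",
    "What is the capital of France? -> Paris",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Cheap hashed bag-of-words embedding (placeholder for a real encoder)."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def top_k_examples(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = [float(q @ embed(c)) for c in POOL]          # cosine similarity
    order = np.argsort(scores)[::-1][:k]
    return [POOL[i] for i in order]

query = "Translate 'bird' to French"
demos = top_k_examples(query)
prompt = "\n".join(demos + [query + " ->"])
print(prompt)
```
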
---
## Top ML Papers of the Week (July 10 - July 16)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **CM3Leon** - introduces a retrieval-augmented multi-modal language model that can generate text and images; leverages diverse and large-scale instruction-style data for tuning which leads to significant performance improvements and 5x less training compute than comparable methods. | [Paper](https://ai.meta.com/research/publications/scaling-autoregressive-multi-modal-models-pretraining-and-instruction-tuning/), [Tweet](https://twitter.com/MetaAI/status/1679885986363478018?s=20) |
| 2) **Claude 2** - presents a detailed model card for Claude 2 along with results on a range of safety, alignment, and capabilities evaluations. | [Paper](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf), [Tweet](https://twitter.com/AnthropicAI/status/1678759122194530304?s=20) |
| 3) **Secrets of RLHF in LLMs** - takes a closer look at RLHF and explores the inner workings of PPO with code included. | [Paper](https://arxiv.org/abs/2307.04964), [Tweet](https://twitter.com/omarsar0/status/1678938028918571009?s=20) |
| 4) **LongLLaMA** - employs a contrastive training process to enhance the structure of the (key, value) space to extend context length; presents a fine-tuned model that lengthens context and demonstrates improvements in long context tasks. | [Paper](https://arxiv.org/abs/2307.03170v1), [Tweet](https://twitter.com/s_tworkowski/status/1677125863429795840?s=20) |
| 5) **Patch n’ Pack: NaViT** - introduces a vision transformer for any aspect ratio and resolution through sequence packing; enables flexible model usage, improved training efficiency, and transfer to tasks involving image and video classification, among others (a toy sketch of greedy sequence packing follows this table). | [Paper](https://arxiv.org/abs/2307.06304), [Tweet](https://twitter.com/m__dehghani/status/1679558751248850969?s=20) |
| 6) **LLMs as General Pattern Machines** - shows that even without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning; this work applies zero-shot capabilities to robotics and shows that it’s possible to transfer patterns learned over words to actions. | [Paper](https://arxiv.org/abs/2307.04721), [Tweet](https://twitter.com/DrJimFan/status/1679898692307005440?s=20) |
| 7) **HyperDreamBooth** - introduces a smaller, faster, and more efficient version of Dreambooth; enables personalization of text-to-image diffusion model using a single input image, 25x faster than Dreambooth. | [Paper](https://arxiv.org/abs/2307.06949), [Tweet](https://twitter.com/natanielruizg/status/1679893292618752000?s=20) |
| 8) **Teaching Arithmetics to Small Transformers** - trains small transformer models on chain-of-thought style data to significantly improve accuracy and convergence speed; it highlights the importance of high-quality instructive data for rapidly eliciting arithmetic capabilities. | [Paper](https://arxiv.org/abs/2307.03381), [Tweet](https://twitter.com/DimitrisPapail/status/1678407512637284352?s=20) |
| 9) **AnimateDiff** - appends a motion modeling module to a frozen text-to-image model, which is then trained and used to animate existing personalized models to produce diverse and personalized animated images. | [Paper](https://arxiv.org/abs/2307.04725v1), [Tweet](https://twitter.com/dreamingtulpa/status/1679459297946632193?s=20) |
| 10) **Generative Pretraining in Multimodality** - presents a new transformer-based multimodal foundation model to generate images and text in a multimodal context; enables performant multimodal assistants via instruction tuning. | [Paper](https://arxiv.org/abs/2307.05222v1), [Tweet](https://twitter.com/_akhaliq/status/1678939405170475008?s=20) |
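
The following is a toy sketch of the sequence-packing idea from entry 5: images with different resolutions and aspect ratios yield different numbers of patch tokens, and a greedy first-fit packer groups them into fixed-length sequences so little compute is wasted on padding. The patch size, capacity, and image sizes are illustrative, not the paper's configuration.

```python
# Toy greedy (first-fit) packing of variable-length patch sequences into
# fixed-length bins, in the spirit of Patch n' Pack. Sizes are illustrative.

PATCH = 16
MAX_TOKENS = 256  # capacity of one packed sequence

def n_patches(h: int, w: int) -> int:
    return (h // PATCH) * (w // PATCH)

def pack(image_sizes: list[tuple[int, int]]) -> list[list[int]]:
    """Return bins of image indices whose total token count fits MAX_TOKENS."""
    bins: list[list[int]] = []
    loads: list[int] = []
    for idx, (h, w) in enumerate(image_sizes):
        need = n_patches(h, w)
        # First-fit: put the image into the first bin with enough room.
        for b, load in enumerate(loads):
            if load + need <= MAX_TOKENS:
                bins[b].append(idx)
                loads[b] += need
                break
        else:
            bins.append([idx])
            loads.append(need)
    return bins

sizes = [(224, 224), (160, 320), (64, 128), (96, 96), (224, 112)]
for b, members in enumerate(pack(sizes)):
    used = sum(n_patches(*sizes[i]) for i in members)
    print(f"sequence {b}: images {members}, {used}/{MAX_TOKENS} tokens")
```
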
---
## Top ML Papers of the Week (July 3 - July 9)
| **Paper** | **Links** |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| 1) **A Survey on Evaluation of LLMs** - a comprehensive overview of evaluation methods for LLMs focusing on what to evaluate, where to evaluate, and how to evaluate. | [Paper](https://arxiv.org/abs/2307.03109), [Tweet](https://twitter.com/omarsar0/status/1677137934946803712?s=20) |
| 2) **How Language Models Use Long Contexts** - finds that LM performance is often highest when relevant information occurs at the beginning or end of the input context; performance degrades significantly when the relevant information sits in the middle of a long context (a toy sketch of the position-sweep setup follows this table). | [Paper](https://arxiv.org/abs/2307.03172), [Tweet](https://twitter.com/nelsonfliu/status/1677373731948339202?s=20) |
| 3) **LLMs as Effective Text Rankers** - proposes a prompting technique that enables open-source LLMs to perform state-of-the-art text ranking on standard benchmarks. | [Paper](https://arxiv.org/abs/2306.17563), [Tweet](https://twitter.com/arankomatsuzaki/status/1675673784454447107?s=20) |
| 4) **Multimodal Generation with Frozen LLMs** - introduces an approach that effectively maps images to the token space of LLMs; enables models like PaLM and GPT-4 to tackle visual tasks without parameter updates; enables multimodal tasks and uses in-context learning to tackle various visual tasks. | [Paper](https://arxiv.org/abs/2306.17842), [Tweet](https://twitter.com/roadjiang/status/1676375112914989056?s=20) |
| 5) **CodeGen2.5** - releases a new code LLM trained on 1.5T tokens; the 7B model is on par with >15B code-generation models and it’s optimized for fast sampling. | [Paper](https://arxiv.org/abs/2305.02309), [Tweet](https://twitter.com/erik_nijkamp/status/1677055271104045056?s=20) |
| 6) **Elastic Decision Transformer** - introduces an advancement over Decision Transformers and variants by facilitating trajectory stitching during action inference at test time, achieved by adjusting to shorter history that allows transitions to diverse and better future states. | [Paper](https://arxiv.org/abs/2307.02484), [Tweet](https://twitter.com/xiaolonw/status/1677003542249484289?s=20) |
| 7) **Robots That Ask for Help** - presents a framework to measure and align the uncertainty of LLM-based planners that ask for help when needed. | [Paper](https://arxiv.org/abs/2307.01928), [Tweet](https://twitter.com/allenzren/status/1677000811803443213?s=20) |
| 8) **Physics-based Motion Retargeting in Real-Time** - proposes a method that uses reinforcement learning to train a policy to control characters in a physics simulator; it retargets motions in real-time from sparse human sensor data to characters of various morphologies. | [Paper](https://arxiv.org/abs/2307.01938), [Tweet](https://twitter.com/_akhaliq/status/1676822600478015488?s=20) |
| 9) **Scaling Transformer to 1 Billion Tokens** - presents LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, with no loss in shorter sequences. | [Paper](https://arxiv.org/abs/2307.02486), [Tweet](https://twitter.com/arankomatsuzaki/status/1676765133362675712?s=20) |
| 10) **InterCode** - introduces a framework of interactive coding as a reinforcement learning environment; this is different from the typical coding benchmarks that consider a static sequence-to-sequence process. | [Paper](https://arxiv.org/abs/2306.14898), [Tweet](https://twitter.com/ShunyuYao12/status/1675903408727896066?s=20) |
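
Here is a toy sketch of the kind of evaluation setup behind entry 2: the same gold passage is inserted at every position among distractor passages, and accuracy is recorded as a function of that position. The documents, question, and the model call are placeholders; a real study would query an actual LLM.

```python
# Toy "lost in the middle" setup: build the same multi-document QA prompt with
# the gold passage placed at every position, so accuracy can be plotted against
# position. Documents, question, and the model call are placeholders.

GOLD = "Passage: The Eiffel Tower is located in Paris."
DISTRACTORS = [f"Passage: Filler fact number {i}." for i in range(9)]
QUESTION = "Question: In which city is the Eiffel Tower located?"

def build_prompt(gold_position: int) -> str:
    docs = DISTRACTORS.copy()
    docs.insert(gold_position, GOLD)
    return "\n".join(docs + [QUESTION, "Answer:"])

def answer_is_correct(prompt: str) -> bool:
    """Placeholder for querying a real LLM and grading its answer."""
    return "Paris" in prompt  # trivially True here; a real model may fail

accuracy_by_position = {
    pos: answer_is_correct(build_prompt(pos))
    for pos in range(len(DISTRACTORS) + 1)
}
print(accuracy_by_position)
```
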
---
## Top ML Papers of the Week (June 26 - July 2)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| 1) **LeanDojo** - an open-source Lean playground consisting of toolkits, data, models, and benchmarks for theorem proving; also develops ReProver, a retrieval augmented LLM-based prover for theorem solving using premises from a vast math library. | [Paper](https://arxiv.org/abs/2306.15626), [Tweet](https://twitter.com/KaiyuYang4/status/1673882824158613504?s=20) |
| 2) **Extending Context Window of LLMs** - extends the context window of LLMs like LLaMA to up to 32K with minimal fine-tuning (within 1000 steps); previous methods for extending the context window are inefficient, but this approach attains good performance on several tasks while being more efficient and cost-effective (a toy sketch of interpolated rotary positions follows this table). | [Paper](https://arxiv.org/abs/2306.15595), [Tweet](https://twitter.com/omarsar0/status/1674073189800919042?s=20) |
| 3) **Computer Vision Through the Lens of Natural Language** - proposes a modular approach for solving computer vision problems by leveraging LLMs; the LLM is used to reason over outputs from independent and descriptive modules that provide extensive information about an image. | [Paper](https://arxiv.org/abs/2306.16410), [Tweet](https://twitter.com/arankomatsuzaki/status/1674219223856365569?s=20) |
| 4) **Visual Navigation Transformer** - a foundational model that leverages the power of pretrained models to vision-based robotic navigation; it can be used with any navigation dataset and is built on a flexible Transformer-based architecture that can tackle various navigational tasks. | [Paper](https://arxiv.org/abs/2306.14846), [Tweet](https://twitter.com/svlevine/status/1673732522155601920?s=20) |
| 5) **Generative AI for Programming Education** - evaluates GPT-4 and ChatGPT on programming education scenarios and compares their performance with human tutors; GPT-4 outperforms ChatGPT and comes close to human tutors' performance. | [Paper](https://arxiv.org/abs/2306.17156), [Tweet](https://twitter.com/_akhaliq/status/1674590713051242498?s=20) |
| 6) **DragDiffusion** - extends interactive point-based image editing using diffusion models; it optimizes the diffusion latent to achieve precise spatial control and complete high-quality editing efficiently. | [Paper](https://arxiv.org/abs/2306.14435), [Tweet](https://twitter.com/_akhaliq/status/1673570232429051906?s=20) |
| 7) **Understanding Theory-of-Mind in LLMs with LLMs** - a framework for procedurally generating evaluations with LLMs; proposes a benchmark to study the social reasoning capabilities of LLMs with LLMs. | [Paper](https://arxiv.org/abs/2306.15448), [Tweet](https://twitter.com/johnjnay/status/1673871545725505537?s=20) |
| 8) **Evaluations with No Labels** - a framework for self-supervised evaluation of LLMs by analyzing their sensitivity or invariance to transformations on input text; can be used to monitor LLM behavior on datasets streamed during live model deployment. | [Paper](https://arxiv.org/abs/2306.13651v1), [Tweet](https://twitter.com/tomgoldsteincs/status/1673808766679097346?s=20) |
| 9) **Long-range Language Modeling with Self-Retrieval** - an architecture and training procedure for jointly training a retrieval-augmented language model from scratch for long-range language modeling tasks. | [Paper](https://arxiv.org/abs/2306.13421), [Tweet](https://twitter.com/arankomatsuzaki/status/1673129191863140353?s=20) |
| 10) **Scaling MLPs: A Tale of Inductive Bias** - shows that the performance of MLPs improves with scale and highlights that lack of inductive bias can be compensated. | [Paper](https://arxiv.org/abs/2306.13575), [Tweet](https://twitter.com/ethanCaballero/status/1673725211907182592?s=20) |
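
The following is a toy numpy sketch of the positional-interpolation idea behind entry 2: instead of extrapolating rotary position embeddings beyond the training length, position indices are rescaled so the extended window maps back into the trained position range. The dimensions, lengths, and random inputs are illustrative only.

```python
# Toy sketch of positional interpolation for rotary embeddings (RoPE):
# rescale position indices by train_len / target_len so an extended context
# stays inside the position range seen during training. Sizes are illustrative.
import numpy as np

def rope_angles(positions: np.ndarray, dim: int, base: float = 10000.0) -> np.ndarray:
    """Rotation angles for each position and each pair of channels."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)            # (seq_len, dim/2)

def apply_rope(x: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Rotate channel pairs of x (seq_len, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

dim, train_len, target_len = 64, 2048, 8192
x = np.random.default_rng(0).normal(size=(target_len, dim))

positions = np.arange(target_len, dtype=np.float64)
scaled = positions * (train_len / target_len)       # interpolation: stays within 0..2047

x_extrapolated = apply_rope(x, rope_angles(positions, dim))  # naive extension
x_interpolated = apply_rope(x, rope_angles(scaled, dim))     # interpolated positions

print(x_interpolated.shape)  # (8192, 64); angles never exceed the trained range
```
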
---
## Top ML Papers of the Week (June 19 - June 25)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| 1) **Textbooks Are All You Need** - introduces a new 1.3B parameter LLM called phi-1; it’s significantly smaller in size and trained for 4 days using a selection of textbook-quality data and synthetic textbooks and exercises with GPT-3.5; achieves promising results on the HumanEval benchmark. | [Paper](https://arxiv.org/abs/2306.11644), [Tweet](https://twitter.com/SebastienBubeck/status/1671326369626853376?s=20) |
| 2) **RoboCat** - a new foundation agent that can operate different robotic arms and can solve tasks from as few as 100 demonstrations; the self-improving AI agent can self-generate new training data to improve its technique and get more efficient at adapting to new tasks. | [Paper](https://arxiv.org/abs/2306.11706), [Tweet](https://twitter.com/DeepMind/status/1671171448638144515?s=20) |
| 3) **ClinicalGPT** - a language model optimized through extensive and diverse medical data, including medical records, domain-specific knowledge, and multi-round dialogue consultations. | [Paper](https://arxiv.org/abs/2306.09968), [Tweet](https://twitter.com/omarsar0/status/1670606068777381890?s=20) |
| 4) **An Overview of Catastrophic AI Risks** - provides an overview of the main sources of catastrophic AI risks; the goal is to foster more understanding of these risks and ensure AI systems are developed in a safe manner. | [Paper](https://arxiv.org/abs/2306.12001v1), [Tweet](https://twitter.com/DanHendrycks/status/1671894767331061763?s=20) |
| 5) **LOMO** - proposes a new memory-efficient optimizer that combines gradient computation and parameter update in one step; enables tuning the full parameters of an LLM with limited resources. | [Paper](https://arxiv.org/abs/2306.09782), [Tweet](https://twitter.com/arankomatsuzaki/status/1670603218659811330?s=20) |
| 6) **SequenceMatch** - formulates sequence generation as an imitation learning problem; this framework makes it possible to incorporate backtracking into text generation through a backspace action; this enables the generative model to mitigate compounding errors by reverting sampled tokens that would take the sequence out of distribution. | [Paper](https://arxiv.org/abs/2306.05426), [Tweet](https://twitter.com/abacaj/status/1671636061494059009?s=20) |
| 7) **LMFlow** - an extensible and lightweight toolkit that simplifies finetuning and inference of general large foundation models; supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference. | [Paper](https://arxiv.org/abs/2306.12420), [Tweet](https://twitter.com/omarsar0/status/1671881864930549761?s=20) |
| 8) **MotionGPT** - uses multimodal control signals for generating consecutive human motions; it quantizes multimodal control signals into discrete codes which are converted to LLM instructions that generate motion answers. | [Paper](https://arxiv.org/abs/2306.10900v1), [Tweet](https://twitter.com/arankomatsuzaki/status/1671341916980490241?s=20) |
| 9) **Wanda** - introduces a simple and effective pruning approach for LLMs; it prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis; the approach requires no retraining or weight updates and outperforms magnitude-pruning baselines (a toy sketch of the pruning score follows this table). | [Paper](https://arxiv.org/abs/2306.11695), [Tweet](https://twitter.com/Yampeleg/status/1671885220218560516?s=20) |
| 10) **AudioPaLM** - fuses text-based and speech-based LMs, PaLM-2 and AudioLM, into a multimodal architecture that supports speech understanding and generation; outperforms existing systems for speech translation tasks with zero-shot speech-to-text translation capabilities. | [Paper](https://arxiv.org/abs/2306.12925v1), [Tweet](https://twitter.com/PaulKRubenstein/status/1672128984220413953?s=20) |
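
Below is a toy numpy sketch of the pruning score from entry 9: each weight is scored by its magnitude times the L2 norm of the corresponding input feature (computed from calibration activations), and the lowest-scoring weights are zeroed per output row. The shapes, calibration data, and target sparsity are illustrative.

```python
# Toy sketch of Wanda-style pruning: score each weight by |W| times the L2 norm
# of its input feature, then zero the lowest-scoring fraction of weights within
# each output row. Sizes and calibration data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, n_calib = 8, 32, 128
W = rng.normal(size=(d_out, d_in))          # layer weight
X = rng.normal(size=(n_calib, d_in))        # calibration activations

def wanda_prune(W: np.ndarray, X: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    # Per-input-feature activation norm, shared across all output rows.
    act_norm = np.linalg.norm(X, axis=0)              # (d_in,)
    score = np.abs(W) * act_norm                      # (d_out, d_in)
    W_pruned = W.copy()
    k = int(sparsity * W.shape[1])                    # weights to drop per row
    for row in range(W.shape[0]):
        drop = np.argsort(score[row])[:k]             # smallest scores first
        W_pruned[row, drop] = 0.0
    return W_pruned

W_sparse = wanda_prune(W, X, sparsity=0.5)
print("zeros per row:", (W_sparse == 0).sum(axis=1))  # 16 of 32 in each row
```
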
---
## Top ML Papers of the Week (June 12 - June 18)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 1) **Voicebox** - an all-in-one generative speech model; it can synthesize speech across 6 languages; it can perform noise removal, content editing, style conversion, and more; it's 20x faster than current models and outperforms single-purpose models through in-context learning. | [Paper](https://research.facebook.com/publications/voicebox-text-guided-multilingual-universal-speech-generation-at-scale/), [Tweet](https://twitter.com/MetaAI/status/1669766837981306880?s=20) |
| 2) **FinGPT** - an open-source LLM for the finance sector; it takes a data-centric approach, providing researchers & practitioners with accessible resources to develop FinLLMs. | [Paper](https://arxiv.org/abs/2306.06031), [Tweet](https://twitter.com/omarsar0/status/1668060502663077891?s=20) |
| 3) **Crowd Workers Widely Use Large Language Models for Text Production Tasks** - estimates that 33-46% of crowd workers on MTurk used LLMs when completing a text production task. | [Paper](https://arxiv.org/abs/2306.07899v1), [Tweet](https://twitter.com/manoelribeiro/status/1668986074801098754?s=20) |
| 4) **Reliability of Watermarks for LLMs** - watermarking is useful to detect LLM-generated text and potentially mitigate harms; this work studies the reliability of watermarking for LLMs and finds that watermarks are detectable even when the watermarked text is re-written by humans or paraphrased by another non-watermarked LLM (a toy greenlist-detection sketch follows this table). | [Paper](https://arxiv.org/abs/2306.04634), [Tweet](https://twitter.com/tomgoldsteincs/status/1668668484975464448?s=20) |
| 5) **Applications of Transformers** - a new survey paper highlighting major applications of Transformers for deep learning tasks; includes a comprehensive list of Transformer models. | [Paper](https://arxiv.org/abs/2306.07303), [Tweet](https://twitter.com/omarsar0/status/1668989324950491139?s=20) |
| 6) **Benchmarking NN Training Algorithms** - it’s currently challenging to properly assess the best optimizers to train neural networks; this paper presents a new benchmark, AlgoPerf, for benchmarking neural network training algorithms using realistic workloads. | [Paper](https://arxiv.org/abs/2306.07179), [Tweet](https://twitter.com/zacharynado/status/1668683433944424448?s=20) |
| 7) **Unifying LLMs & Knowledge Graphs** - provides a roadmap for the unification of LLMs and KGs; covers how to incorporate KGs into LLM pre-training/inference, leverage LLMs for KG tasks such as question answering, and enhance both KGs and LLMs for bidirectional reasoning. | [Paper](https://arxiv.org/abs/2306.09310), [Tweet](https://twitter.com/johnjnay/status/1670051081722769408?s=20) |
| 8) **Augmenting LLMs with Long-term Memory** - proposes a framework to enable LLMs to memorize long history; it’s enhanced with memory-augmented adaptation training to memorize long past context and use long-term memory for language modeling; achieves improvements on memory-augmented in-context learning over LLMs. | [Paper](https://arxiv.org/abs/2306.07174), [Tweet](https://twitter.com/arankomatsuzaki/status/1668429602841317378?s=20) |
| 9) **TAPIR** - enables tracking any queried point on any physical surface throughout a video sequence; outperforms all baselines and facilitates fast inference on long and high-resolution videos (track points faster than real-time when using modern GPUs). | [Paper](https://arxiv.org/abs/2306.08637), [Tweet](https://twitter.com/AdamWHarley/status/1669785589246468096?s=20) |
| 10) **Mind2Web** - a new dataset for evaluating generalist agents for the web; contains 2350 tasks from 137 websites over 31 domains; it enables testing generalization ability across tasks and environments, covering practical use cases on the web. | [Paper](https://arxiv.org/abs/2306.06070), [Tweet](https://twitter.com/DrJimFan/status/1669403956064432128?s=20) |
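
The following is a toy sketch of the kind of greenlist watermark studied in entry 4: the previous token seeds a hash that marks a fraction of the vocabulary as "green", generation favors green tokens, and detection counts green tokens and computes a z-score. The hashing scheme, word-level "vocabulary", and parameters here are illustrative simplifications, not the paper's detector.

```python
# Toy greenlist watermark detector: for each token, the previous token seeds a
# hash that marks a fraction gamma of the vocabulary as "green"; watermarked
# text contains an unusually high count of green tokens, measured by a z-score.
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary that is green at each step

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < int(256 * GAMMA)

def z_score(tokens: list[str]) -> float:
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

text = "the quick brown fox jumps over the lazy dog and runs far away".split()
print(f"z = {z_score(text):.2f}")  # near 0 for unwatermarked text; large if watermarked
```
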
---
## Top ML Papers of the Week (June 5 - June 11)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Tracking Everything Everywhere All at Once** - propose a test-time optimization method for estimating dense and long-range motion; enables accurate, full-length motion estimation of every pixel in a video. | [Paper](https://arxiv.org/abs/2306.05422), [Tweet](https://twitter.com/sstj389/status/1667000331958468608?s=20) |
| 2) **AlphaDev** - a deep reinforcement learning agent which discovers faster sorting algorithms from scratch; the algorithms outperform previously known human benchmarks and have been integrated into the LLVM C++ library. | [Paper](https://www.nature.com/articles/s41586-023-06004-9), [Tweet](https://twitter.com/omarsar0/status/1666486491793481738?s=20) |
| 3) **Sparse-Quantized Representation** - a new compressed format and quantization technique that enables near-lossless compression of LLMs across model scales; “allows LLM inference at 4.75 bits with a 15% speedup”. | [Paper](https://arxiv.org/abs/2306.03078), [Tweet](https://twitter.com/Tim_Dettmers/status/1666076553665744896?s=20) |
| 4) **MusicGen** - a simple and controllable model for music generation built on top of a single-stage transformer LM together with efficient token interleaving patterns; it can be conditioned on textual descriptions or melodic features and shows high performance on a standard text-to-music benchmark. | [Paper](https://arxiv.org/abs/2306.05284), [Tweet](https://twitter.com/syhw/status/1667103478471176192?s=20) |
| 5) **Augmenting LLMs with Databases** - combines an LLM with a set of SQL databases, enabling a symbolic memory framework; completes tasks via LLM generating SQL instructions that manipulate the DB autonomously. | [Paper](https://arxiv.org/abs/2306.03901), [Tweet](https://twitter.com/omarsar0/status/1666254609524961282?s=20) |
| 6) **Concept Scrubbing in LLM** - presents a method called LEAst-squares Concept Erasure (LEACE) to erase target concept information from every layer in a neural network; it’s used for reducing gender bias in BERT embeddings. | [Paper](https://arxiv.org/abs/2306.03819) , [Tweet](https://twitter.com/norabelrose/status/1666469917636571137?s=20) |
| 7) **Fine-Grained RLHF** - trains LMs with fine-grained human feedback; instead of using overall preference, more explicit feedback is provided at the segment level, which helps improve efficacy on long-form question answering, reduce toxicity, and enable LM customization. | [Paper](https://arxiv.org/abs/2306.01693), [Tweet](https://twitter.com/zeqiuwu1/status/1665785626552049665?s=20) |
| 8) **Hierarchical Vision Transformer** - pretrains vision transformers with a visual pretext task (MAE), while removing unnecessary components from a state-of-the-art multi-stage vision transformer; this enables a simple hierarchical vision transformer that’s more accurate and faster at inference and during training. | [Paper](https://arxiv.org/abs/2306.00989), [Tweet](https://twitter.com/MetaAI/status/1665759715765411840?s=20) |
| 9) **Humor in ChatGPT** - explores ChatGPT’s capabilities to grasp and reproduce humor; finds that over 90% of 1008 generated jokes were the same 25 jokes and that ChatGPT is also overfitted to a particular joke structure. | [Paper](https://arxiv.org/abs/2306.04563), [Tweet](https://twitter.com/AlbertBoyangLi/status/1666707728272850944?s=20) |
| 10) **Imitating Reasoning Process of Larger LLMs** - develops a 13B parameter model that learns to imitate the reasoning process of large foundational models like GPT-4; it leverages large-scale and diverse imitation data and surpasses instruction-tuned models such as Vicuna-13B in zero-shot reasoning. | [Paper](https://arxiv.org/abs/2306.02707), [Tweet](https://twitter.com/johnjnay/status/1665906453587034112?s=20) |
---
## Top ML Papers of the Week (May 29-June 4)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| 1) **Let’s Verify Step by Step** - achieves state-of-the-art mathematical problem solving by rewarding each correct step of reasoning in a chain-of-thought instead of rewarding the final answer; the model solves 78% of problems from a representative subset of the MATH test set. | [Paper](https://arxiv.org/abs/2305.20050), [Tweet](https://twitter.com/OpenAI/status/1663957407184347136?s=20) |
| 2) **No Positional Encodings** - shows that explicit position embeddings are not essential for decoder-only Transformers, and that positional encoding methods like ALiBi and Rotary are not well suited for length generalization. | [Paper](https://arxiv.org/abs/2305.19466), [Tweet](https://twitter.com/a_kazemnejad/status/1664277559968927744?s=20) |
| 3) **BiomedGPT** - a unified biomedical generative pretrained transformer model for vision, language, and multimodal tasks. Achieves state-of-the-art performance across 5 distinct tasks with 20 public datasets spanning over 15 unique biomedical modalities. | [Paper](https://arxiv.org/abs/2305.17100), [Tweet](https://twitter.com/omarsar0/status/1662992484576681986?s=20) |
| 4) **Thought Cloning** - introduces an imitation learning framework to learn to think while acting; the idea is not only to clone the behaviors of human demonstrators but also the thoughts humans have when performing behaviors. | [Paper](https://arxiv.org/abs/2306.00323), [Tweet](https://twitter.com/johnjnay/status/1664798780644904960?s=20) |
| 5) **Fine-Tuning Language Models with Just Forward Passes** - proposes a memory-efficient zeroth-order optimizer and a corresponding SGD algorithm to finetune large LMs with the same memory footprint as inference. | [Paper](https://arxiv.org/abs/2305.17333) , [Tweet](https://twitter.com/arankomatsuzaki/status/1663360307274690560?s=20) |
| 6) **MERT** - an acoustic music understanding model with large-scale self-supervised training; it incorporates a superior combination of teacher models to outperform conventional speech and audio approaches. | [Paper](https://arxiv.org/abs/2306.00107) , [Tweet](https://twitter.com/yizhilll/status/1664680921146982401?s=20) |
| 7) **Bytes Are All You Need** - investigates performing classification directly on file bytes, without needing to decode files at inference time; achieves ImageNet Top-1 accuracy of 77.33% using a transformer backbone; achieves 95.42% accuracy when operating on WAV files from the Speech Commands v2 dataset. | [Paper](https://arxiv.org/abs/2306.00238), [Tweet](https://twitter.com/_akhaliq/status/1664497650702471169?s=20) |
| 8) **Direct Preference Optimization** - while helpful for training safe and useful LLMs, the RLHF process can be complex and often unstable; this work proposes an approach to finetune LMs by solving a classification problem on the human preference data, with no RL required (a minimal sketch of the loss follows this table). | [Paper](https://arxiv.org/abs/2305.18290), [Tweet](https://twitter.com/archit_sharma97/status/1663595372269408261?s=20) |
| 9) **SQL-PaLM** - an LLM-based Text-to-SQL adopted from PaLM-2; achieves SoTA in both in-context learning and fine-tuning settings; the few-shot model outperforms the previous fine-tuned SoTA by 3.8% on the Spider benchmark; few-shot SQL-PaLM also outperforms few-shot GPT-4 by 9.9%, using a simple prompting approach. | [Paper](https://arxiv.org/abs/2306.00739), [Tweet](https://twitter.com/omarsar0/status/1664441085693657088?s=20) |
| 10) **CodeTF** - an open-source Transformer library for state-of-the-art code LLMs; supports pretrained code LLMs and popular code benchmarks, including standard methods to train and serve code LLMs efficiently. | [Paper](https://arxiv.org/abs/2306.00029), [Tweet](https://twitter.com/stevenhoi/status/1664483010954272770?s=20) |
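
To make the DPO idea (paper 8 above) concrete, here is a minimal sketch of the per-example loss in PyTorch. It assumes you have already computed summed log-probabilities of the chosen and rejected responses under both the policy and a frozen reference model; the function and variable names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratios of policy vs. frozen reference for each response.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO reduces preference learning to a logistic loss on the
    # difference of log-ratios: no reward model, no RL rollouts.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Toy check with random log-probabilities for a batch of 4 preference pairs.
batch = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*batch).item())
```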
---
## Top ML Papers of the Week (May 22-28)
| **Paper** | **Links** |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| 1) **QLoRA** - an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning performance (a rough sketch of the recipe follows this table). | [Paper](https://arxiv.org/abs/2305.14314), [Tweet](https://twitter.com/Tim_Dettmers/status/1661379354507476994?s=20) |
| 2) **LIMA** - a new 65B parameter LLaMA model fine-tuned on 1000 carefully curated prompts and responses; it doesn't use RLHF, generalizes well to unseen tasks not available in the training data, and generates responses equivalent to or preferred over GPT-4's in 43% of cases, with an even higher rate compared to Bard. | [Paper](https://arxiv.org/abs/2305.11206), [Tweet](https://twitter.com/violet_zct/status/1660789120069926912?s=20) |
| 3) **Voyager** - an LLM-powered embodied lifelong learning agent in Minecraft that can continuously explore worlds, acquire skills, and make novel discoveries without human intervention. | [Paper](https://arxiv.org/abs/2305.16291), [Tweet](https://twitter.com/DrJimFan/status/1662115266933972993?s=20) |
| 4) **Gorilla** - a finetuned LLaMA-based model that surpasses GPT-4 on writing API calls. This capability can help identify the right API, boosting the ability of LLMs to interact with external tools to complete specific tasks. | [Paper](https://arxiv.org/abs/2305.15334), [Tweet](https://twitter.com/omarsar0/status/1661540207206846464?s=20) |
| 5) **The False Promise of Imitating Proprietary LLMs** - provides a critical analysis of models that are finetuned on the outputs of a stronger model; argues that model imitation is a false promise and that the higher-leverage action for improving open-source models is to develop better base models. | [Paper](https://arxiv.org/abs/2305.15717) , [Tweet](https://twitter.com/arankomatsuzaki/status/1661908342829187072?s=20) |
| 6) **Sophia** - presents a simple scalable second-order optimizer that has negligible average per-step time and memory overhead; on language modeling, Sophia achieves 2x speed-up compared to Adam in the number of steps, total compute, and wall-clock time. | [Paper](https://arxiv.org/abs/2305.14342) , [Tweet](https://twitter.com/tengyuma/status/1661412995430219786?s=20) |
| 7) **The Larger They Are, the Harder They Fail** - shows that LLMs fail to generate correct Python code when default function names are swapped; they also increasingly prefer the incorrect continuation as they become bigger. | [Paper](https://arxiv.org/abs/2305.15507), [Tweet](https://twitter.com/AVMiceliBarone/status/1662150656327663617?s=20) |
| 8) **Model Evaluation for Extreme Risks** - discusses the importance of model evaluation for addressing extreme risks and making responsible decisions about model training, deployment, and security. | [Paper](https://arxiv.org/abs/2305.15324), [Tweet](https://twitter.com/soundboy/status/1661728733156503555?s=20) |
| 9) **LLM Research Directions** - discusses a list of research directions for students looking to do research with LLMs. | [Paper](https://arxiv.org/abs/2305.12544), [Tweet](https://twitter.com/omarsar0/status/1661405738059571201?s=20) |
| 10) **Reinventing RNNs for the Transformer Era** - proposes an approach that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs; results show that the method performs on par with similarly sized Transformers. | [Paper](https://arxiv.org/abs/2305.13048), [Tweet](https://twitter.com/_akhaliq/status/1660816265454419969?s=20) |
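
As a rough illustration of the QLoRA recipe (paper 1 above), the sketch below combines 4-bit NF4 quantization with LoRA adapters using the Hugging Face transformers, peft, and bitsandbytes stack; the checkpoint name and hyperparameters are placeholders, and the exact arguments can vary across library versions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder checkpoint; any causal LM supported by bitsandbytes works.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters on the attention projections; the frozen base
# weights stay in 4-bit, so only the small adapters need optimizer state.
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```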
---
## Top ML Papers of the Week (May 15-21)
| **Paper** | **Links** |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| 1) **Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold** - an approach for controlling GANs that allows dragging points of the image to precisely reach target points in a user-interactive manner. | [Paper](https://arxiv.org/abs/2305.10973v1), [Tweet](https://twitter.com/dair_ai/status/1660268470057967616?s=20) |
| 2) **Evidence of Meaning in Language Models Trained on Programs** - argues that language models can learn meaning despite being trained only to perform next token prediction on text. | [Paper](https://arxiv.org/abs/2305.11169), [Tweet](https://twitter.com/dair_ai/status/1660268472129945600?s=20) |
| 3) **Towards Expert-Level Medical Question Answering with Large Language Models** - a top-performing LLM for medical question answering; scored up to 86.5% on the MedQA dataset (a new state-of-the-art); approaches or exceeds SoTA across MedMCQA, PubMedQA, and MMLU clinical topics datasets. | [Paper](https://arxiv.org/abs/2305.09617), [Tweet](https://twitter.com/dair_ai/status/1660268473853829121?s=20) |
| 4) **MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers** - a multi-scale decoder architecture enabling end-to-end modeling of sequences of over one million bytes; enables sub-quadratic self-attention and improved parallelism during decoding. | [Paper](https://arxiv.org/abs/2305.07185), [Tweet](https://twitter.com/dair_ai/status/1660268475762327552?s=20) |
| 5) **StructGPT: A General Framework for Large Language Model to Reason over Structured Data** - improves the zero-shot reasoning ability of LLMs over structured data; effective for solving question answering tasks based on structured data. | [Paper](https://arxiv.org/abs/2305.09645) , [Tweet](https://twitter.com/dair_ai/status/1660268477628727298?s=20) |
| 6) **TinyStories: How Small Can Language Models Be and Still Speak Coherent English?** - uses a synthetic dataset of short stories to train and evaluate LMs that are much smaller than SoTA models but can produce fluent and consistent stories with several paragraphs, and demonstrate reasoning capabilities. | [Paper](https://arxiv.org/abs/2305.07759) , [Tweet](https://twitter.com/dair_ai/status/1660268479642054660?s=20) |
| 7) **DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining** - trains a small proxy model over domains to produce domain weights without knowledge of downstream tasks; it then resamples a dataset with the domain weights and trains a larger model; this enables using a 280M proxy model to train an 8B model (30x larger) more efficiently. | [Paper](https://arxiv.org/abs/2305.10429), [Tweet](https://twitter.com/dair_ai/status/1660268481466572802?s=20) |
| 8) **CodeT5+: Open Code Large Language Models for Code Understanding and Generation** - supports a wide range of code understanding and generation tasks and different training methods to improve efficacy and computing efficiency; tested on 20 code-related benchmarks using different settings like zero-shot, fine-tuning, and instruction tuning; achieves SoTA on tasks like code completion, math programming, and text-to-code retrieval tasks. | [Paper](https://arxiv.org/abs/2305.07922), [Tweet](https://twitter.com/dair_ai/status/1660268483152584704?s=20) |
| 9) **Symbol tuning improves in-context learning in language models** - an approach to finetune LMs on in-context input-label pairs where natural language labels are replaced by arbitrary symbols; boosts performance on unseen in-context learning tasks and algorithmic reasoning tasks (a toy illustration follows this table). | [Paper](https://arxiv.org/abs/2305.08298), [Tweet](https://twitter.com/dair_ai/status/1660268485035819009?s=20) |
| 10) **Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability** - shows that PaLM is exposed to over 30 million translation pairs across at least 44 languages; shows that incidental bilingualism connects to the translation capabilities of PaLM. | [Paper](https://arxiv.org/abs/2305.10266), [Tweet](https://twitter.com/dair_ai/status/1660268486839476224?s=20) |
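
To illustrate symbol tuning (paper 9 above), here is a toy sketch that remaps natural-language labels in in-context exemplars to arbitrary symbols, so the model must infer the task from the input-label mapping alone; the symbols and prompt format are made up for illustration and are not the paper's exact setup.

```python
import random

# In-context exemplars with natural-language labels.
exemplars = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want my two hours back.", "negative"),
    ("An instant classic.", "positive"),
]

def symbol_tune(exemplars, query):
    """Replace natural-language labels with arbitrary symbols (a sketch)."""
    labels = sorted({label for _, label in exemplars})
    symbols = random.sample(["foo", "bar", "baz", "qux"], k=len(labels))
    mapping = dict(zip(labels, symbols))
    lines = [f"Input: {text}\nLabel: {mapping[label]}" for text, label in exemplars]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines), mapping

prompt, mapping = symbol_tune(exemplars, "A tedious, joyless slog.")
print(prompt)
print(mapping)
```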
---
## Top ML Papers of the Week (May 8-14)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **LLM explains neurons in LLMs** - applies GPT-4 to automatically write explanations on the behavior of neurons in LLMs and even score those explanations; this offers a promising way to improve interpretability in future LLMs and potentially detect alignment and safety problems. | [Paper](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html), [Tweet](https://twitter.com/OpenAI/status/1655982364273831936?s=20) |
| 2) **PaLM 2** - a new state-of-the-art language model integrated into AI features and tools like Bard and the PaLM API; displays competitive performance in mathematical reasoning compared to GPT-4; instruction-tuned model, Flan-PaLM 2, shows good performance on benchmarks like MMLU and BIG-bench Hard. | [Paper](https://ai.google/static/documents/palm2techreport.pdf), [Tweet](https://twitter.com/Google/status/1656347171556294669?s=20) |
| 3) **ImageBind** - an approach that learns a joint embedding across six modalities at once; extends zero-shot capabilities to new modalities and enables emergent applications including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection, and generation. | [Paper](https://arxiv.org/abs/2305.05665), [Tweet](https://twitter.com/MetaAI/status/1655989274620358656?s=20) |
| 4) **TidyBot** - shows that robots can combine language-based planning and perception with the few-shot summarization capabilities of LLMs to infer generalized user preferences that are applicable to future interactions. | [Paper](https://arxiv.org/abs/2305.05658), [Tweet](https://twitter.com/_akhaliq/status/1656117478760796160?s=20) |
| 5) **Unfaithful Explanations in Chain-of-Thought Prompting** - demonstrates that CoT explanations can misrepresent the true reason for a model’s prediction; when models are biased towards incorrect answers, CoT generates explanations that support those answers. | [Paper](https://arxiv.org/abs/2305.04388) , [Tweet](https://twitter.com/milesaturpin/status/1656010877269602304?s=20) |
| 6) **InstructBLIP** - explores visual-language instruction tuning based on the pre-trained BLIP-2 models; achieves state-of-the-art zero-shot performance on 13 held-out datasets, outperforming BLIP-2 and Flamingo. | [Paper](https://arxiv.org/abs/2305.06500) , [Tweet](https://twitter.com/LiJunnan0409/status/1656821806593101827?s=20) |
| 7) **Active Retrieval Augmented LLMs** - introduces FLARE, retrieval augmented generation that improves the reliability of LLMs; FLARE actively decides when and what to retrieve over the course of generation; demonstrates superior or competitive performance on long-form knowledge-intensive generation tasks (a pseudocode sketch of the loop follows this table). | [Paper](https://arxiv.org/abs/2305.06983), [Tweet](https://twitter.com/omarsar0/status/1657004417726423042?s=20) |
| 8) **FrugalGPT** - presents strategies to reduce the inference cost associated with using LLMs while improving performance. | [Paper](https://arxiv.org/abs/2305.05176), [Tweet](https://twitter.com/omarsar0/status/1656105704808419329?s=20) |
| 9) **StarCoder** - an open-access 15.5B parameter LLM with an 8K context length, trained on large amounts of code spanning 80+ programming languages. | [Paper](https://arxiv.org/abs/2305.06161), [Tweet](https://twitter.com/_akhaliq/status/1656479380296613894?s=20) |
| 10) **MultiModal-GPT** - a vision and language model for multi-round dialogue with humans; the model is fine-tuned from OpenFlamingo, with LoRA added in the cross-attention and self-attention parts of the language model. | [Paper](https://arxiv.org/abs/2305.04790), [Tweet](https://twitter.com/OpenMMLab/status/1656127026687000578?s=20) |
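
The core loop behind FLARE (paper 7 above) can be summarized in a short pseudocode-style sketch: tentatively generate the next sentence, and if any token falls below a confidence threshold, retrieve evidence using that sentence as the query and regenerate it with the evidence in context. `generate_sentence` and `retrieve` are hypothetical stand-ins for an LLM call that returns token probabilities and a retriever; they are not APIs from the paper.

```python
def active_rag(question, generate_sentence, retrieve, threshold=0.6, max_sentences=8):
    """Sketch of forward-looking active retrieval augmented generation.

    generate_sentence(prompt) -> (sentence, min_token_prob)  # hypothetical LLM call
    retrieve(query) -> list of passages                       # hypothetical retriever
    """
    answer = ""
    for _ in range(max_sentences):
        # Tentatively generate the next sentence without retrieval.
        sentence, min_prob = generate_sentence(question + answer)
        if min_prob < threshold:
            # Low-confidence tokens trigger retrieval; the tentative sentence
            # serves as the query, and the sentence is then regenerated.
            evidence = "\n".join(retrieve(sentence))
            sentence, _ = generate_sentence(evidence + "\n" + question + answer)
        if not sentence:
            break
        answer += " " + sentence
    return answer.strip()
```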
---
## Top ML Papers of the Week (May 1-7)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| 1) **scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI** - a foundation large language model pretrained on 10 million cells for single-cell biology. | [Paper](https://www.biorxiv.org/content/10.1101/2023.04.30.538439v1), [Tweet](https://twitter.com/dair_ai/status/1655223088152211456?s=20) |
| 2) **GPTutor: a ChatGPT-powered programming tool for code explanation** - a ChatGPT-powered tool for code explanation provided as a VSCode extension; claims to deliver more concise and accurate explanations than vanilla ChatGPT and Copilot; performance and personalization enhanced via prompt engineering; programmed to use more relevant code in its prompts. | [Paper](https://arxiv.org/abs/2305.01863), [Tweet](https://twitter.com/dair_ai/status/1655223089754517509?s=20) |
| 3) **Shap-E: Generating Conditional 3D Implicit Functions** - a conditional generative model for 3D assets; unlike previous 3D generative models, this model generates implicit functions that enable rendering textured meshes and neural radiance fields. | [Paper](https://arxiv.org/abs/2305.02463), [Tweet](https://twitter.com/dair_ai/status/1655223091482566663?s=20) |
| 4) **Are Emergent Abilities of Large Language Models a Mirage?** - presents an alternative explanation to the emergent abilities of LLMs; suggests that existing claims are creations of the researcher’s analyses and not fundamental changes in model behavior on specific tasks with scale. | [Paper](https://arxiv.org/abs/2304.15004), [Tweet](https://twitter.com/dair_ai/status/1655223092975640578?s=20) |
| 5) **Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl** - releases PySR, an open-source library for practical symbolic regression for the sciences; it’s built on a high-performance distributed back-end and interfaces with several deep learning packages; in addition, a new benchmark, “EmpiricalBench”, is released to quantify applicability of symbolic regression algorithms in science. | [Paper](https://arxiv.org/abs/2305.01582) , [Tweet](https://twitter.com/dair_ai/status/1655223094640889856?s=20) |
| 6) **PMC-LLaMA: Further Finetuning LLaMA on Medical Papers** - a LLaMA model fine-tuned on 4.8 million medical papers; enhances capabilities in the medical domain and achieves high performance on biomedical QA benchmarks. | [Paper](https://arxiv.org/abs/2304.14454) , [Tweet](https://twitter.com/dair_ai/status/1655223096301740032?s=20) |
| 7) **Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes** - a mechanism to extract rationales from LLMs and use them to train smaller models that outperform larger language models while requiring less training data for finetuning or distillation. | [Paper](https://arxiv.org/abs/2305.02301), [Tweet](https://twitter.com/dair_ai/status/1655223098730217472?s=20) |
| 8) **Poisoning Language Models During Instruction Tuning** - shows that adversaries can poison LLMs during instruction tuning by contributing poison examples to datasets; this can induce degenerate outputs across different held-out tasks. | [Paper](https://arxiv.org/abs/2305.00944), [Tweet](https://twitter.com/dair_ai/status/1655223100286332934?s=20) |
| 9) **Unlimiformer: Long-Range Transformers with Unlimited Length Input** - proposes long-range transformers that support unlimited-length input by augmenting a pre-trained encoder-decoder transformer with an external datastore; shows usefulness in long-document summarization; could potentially be used to improve the performance of retrieval-enhanced LLMs. | [Paper](https://arxiv.org/abs/2305.01625), [Tweet](https://twitter.com/dair_ai/status/1655223101913718784?s=20) |
| 10) **Learning to Reason and Memorize with Self-Notes** - an approach that enables LLMs to reason and memorize by allowing them to deviate from the input sequence at any time to explicitly “think”; this lets the LM recall information and perform reasoning on the fly; experiments show that this method scales better to longer sequences unseen during training. | [Paper](https://arxiv.org/abs/2305.00833), [Tweet](https://twitter.com/dair_ai/status/1655223103662829569?s=20) |
---
## Top ML Papers of the Week (April 24 - April 30)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning** - applies deep reinforcement learning to synthesize agile soccer skills for a miniature humanoid robot; the resulting policy allows dynamic movement skills such as fast recovery, walking, and kicking. | [Paper](https://arxiv.org/abs/2304.13653), [Tweet](https://twitter.com/dair_ai/status/1652693172810571780?s=20) |
| 2) **Scaling Transformer to 1M tokens and beyond with RMT** - leverages a recurrent memory transformer architecture to increase BERT’s effective context length to two million tokens while maintaining high memory retrieval accuracy. | [Paper](https://arxiv.org/abs/2304.11062), [Tweet](https://twitter.com/dair_ai/status/1652693174576349185?s=20) |
| 3) **Track Anything: Segment Anything Meets Videos** - an interactive tool for video object tracking and segmentation; it’s built on top of Segment Anything and allows flexible tracking and segmenting via user clicks. | [Paper](https://arxiv.org/abs/2304.11968), [Tweet](https://twitter.com/dair_ai/status/1652693176644165634?s=20) |
| 4) **A Cookbook of Self-Supervised Learning** - provides an overview of fundamental techniques and key concepts in SSL; it also introduces practical considerations for implementing SSL methods successfully. | [Paper](https://arxiv.org/abs/2304.12210), [Tweet](https://twitter.com/dair_ai/status/1652693178724626435?s=20) |
| 5) **Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond** - a comprehensive and practical guide for practitioners working with LLMs; discusses many use cases with practical applications and limitations of LLMs in real-world scenarios. | [Paper](https://arxiv.org/abs/2304.13712) , [Tweet](https://twitter.com/dair_ai/status/1652693180381274114?s=20) |
| 6) **AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head** - connects ChatGPT with audio foundational models to handle challenging audio tasks and a modality transformation interface to enable spoken dialogue. | [Paper](https://arxiv.org/abs/2304.12995) , [Tweet](https://twitter.com/dair_ai/status/1652693181895409666?s=20) |
| 7) **DataComp: In search of the next generation of multimodal datasets** - releases a new multimodal dataset benchmark containing 12.8B image-text pairs. | [Paper](https://arxiv.org/abs/2304.14108), [Tweet](https://twitter.com/dair_ai/status/1652693183493447681?s=20) |
| 8) **ChatGPT for Information Extraction** - provides a deeper assessment of ChatGPT's performance on the important information extraction task. | [Paper](https://arxiv.org/abs/2304.11633), [Tweet](https://twitter.com/dair_ai/status/1652693184927989768?s=20) |
| 9) **Comparing Physician vs ChatGPT** - investigates if chatbot assistants like ChatGPT can provide responses to patient questions while emphasizing quality and empathy; finds that chatbot responses were preferred over physician responses and rated significantly higher in terms of both quality and empathy. | [Paper](https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2804309), [Tweet](https://twitter.com/dair_ai/status/1652693186467299331?s=20) |
| 10) **Stable and low-precision training for large-scale vision-language models** - introduces methods for accelerating and stabilizing training of large-scale language vision models. | [Paper](https://arxiv.org/abs/2304.13013), [Tweet](https://twitter.com/dair_ai/status/1652693187960479745?s=20) |
---
## Top ML Papers of the Week (April 17 - April 23)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| 1) **DINOv2: Learning Robust Visual Features without Supervision** - a new method for training high-performance computer vision models based on self-supervised learning; enables learning rich and robust visual features without supervision which are useful for both image-level visual tasks and pixel-level tasks; tasks supported include image classification, instance retrieval, video understanding, depth estimation, and much more. | [Paper](https://arxiv.org/abs/2304.07193), [Tweet](https://twitter.com/dair_ai/status/1650145892941324288?s=20) |
| 2) **Learning to Compress Prompts with Gist Tokens** - an approach that trains language models to compress prompts into gist tokens reused for compute efficiency; this approach enables 26x compression of prompts, resulting in up to 40% FLOPs reductions. | [Paper](https://arxiv.org/abs/2304.08467), [Tweet](https://twitter.com/dair_ai/status/1650145895332163585?s=20) |
| 3) **Scaling the leading accuracy of deep equivariant models to biomolecular simulations of realistic size** - presents a framework for large-scale biomolecular simulation; this is achieved through the high accuracy of equivariant deep learning and the ability to scale to large and long simulations; the system is able to “perform nanoseconds-long stable simulations of protein dynamics and scale up to a 44-million atom structure of a complete, all-atom, explicitly solvated HIV capsid on the Perlmutter supercomputer.” | [Paper](https://arxiv.org/abs/2304.10061), [Tweet](https://twitter.com/dair_ai/status/1650145897689350144?s=20) |
| 4) **Evaluating Verifiability in Generative Search Engines** - performs human evaluation to audit popular generative search engines such as Bing Chat, Perplexity AI, and NeevaAI; finds that, on average, only 52% of generated sentences are supported by citations and 75% of citations support their associated sentence. | [Paper](https://arxiv.org/abs/2304.09848), [Tweet](https://twitter.com/dair_ai/status/1650145900180779009?s=20) |
| 5) **Generative Disco: Text-to-Video Generation for Music Visualization** - an AI system based on LLMs and text-to-image models that generates music visualizations. | [Paper](https://arxiv.org/abs/2304.08551) , [Tweet](https://twitter.com/dair_ai/status/1650145904219832324?s=20) |
| 6) **Architectures of Topological Deep Learning: A Survey on Topological Neural Networks** | [Paper](https://arxiv.org/abs/2304.10031) , [Tweet](https://twitter.com/dair_ai/status/1650145906560311298?s=20) |
| 7) **Visual Instruction Tuning** - presents an approach that uses language-only GPT-4 to generate multimodal language-image instruction-following data; applies instruction tuning with the data and introduces LLaVA, an end-to-end trained large multimodal model for general-purpose visual and language understanding. | [Paper](https://arxiv.org/abs/2304.08485), [Tweet](https://twitter.com/dair_ai/status/1650145909387214848?s=20) |
| 8) **ChatGPT: Applications, Opportunities, and Threats** | [Paper](https://arxiv.org/abs/2304.09103), [Tweet](https://twitter.com/dair_ai/status/1650145911836745736?s=20) |
| 9) **Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models** - a plug-and-play compositional reasoning framework that augments LLMs and can infer the appropriate sequence of tools to compose and execute in order to generate final responses; achieves 87% accuracy on ScienceQA and 99% on TabMWP. | [Paper](https://arxiv.org/abs/2304.09842), [Tweet](https://twitter.com/dair_ai/status/1650145914420330496?s=20) |
| 10) **Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models** - applies latent diffusion models to high-resolution video generation; validates the model on creative content creation and real driving videos at 512 x 1024 resolution and achieves state-of-the-art performance. | [Paper](https://arxiv.org/abs/2304.08818), [Tweet](https://twitter.com/dair_ai/status/1650145916794314752?s=20) |
---
## Top ML Papers of the Week (April 10 - April 16)
| **Paper** | **Links** |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| 1) **Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields** - combines mip-NeRF 360 and grid-based models to improve NeRFs that train 22x faster than mip-NeRF 360. | [Paper](https://arxiv.org/abs/2304.06706), [Tweet](https://twitter.com/dair_ai/status/1647613826425147401?s=20) |
| 2) **Generative Agents: Interactive Simulacra of Human Behavior** - proposes an architecture that extends LLMs to build agents that enable simulations of human-like behavior; these capabilities are possible by storing a complete record of an agent's experiences, synthesizing memories over time into higher-level reflections, and retrieving them dynamically to plan behavior. | [Paper](https://arxiv.org/abs/2304.03442), [Tweet](https://twitter.com/dair_ai/status/1647613828417351682?s=20) |
| 3) **Emergent autonomous scientific research capabilities of large language models** - presents an agent that combines LLMs for autonomous design, planning, and execution of scientific experiments; shows emergent scientific research capabilities, including the successful performance of catalyzed cross-coupling reactions. | [Paper](https://arxiv.org/abs/2304.05332), [Tweet](https://twitter.com/dair_ai/status/1647613830233571328?s=20) |
| 4) **Automatic Gradient Descent: Deep Learning without Hyperparameters** - derives optimization algorithms that explicitly leverage neural architecture; it proposes a first-order optimizer without hyperparameters that trains CNNs at ImageNet scale. | [Paper](https://arxiv.org/abs/2304.05187), [Tweet](https://twitter.com/dair_ai/status/1647613832804589569?s=20) |
| 5) **ChemCrow: Augmenting large-language models with chemistry tools** - presents an LLM chemistry agent that performs tasks across synthesis, drug discovery, and materials design; it integrates 13 expert-designed tools to augment LLM performance in chemistry and demonstrates effectiveness in automating chemical tasks. | [Paper](https://arxiv.org/abs/2304.05376) , [Tweet](https://twitter.com/dair_ai/status/1647613834813644800?s=20) |
| 6) **One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era** - A Survey of ChatGPT and GPT-4 | [Paper](https://arxiv.org/abs/2304.06488) , [Tweet](https://twitter.com/dair_ai/status/1647613836617195525?s=20) |
| 7) **OpenAGI: When LLM Meets Domain Experts** - an open-source research platform to facilitate the development and evaluation of LLMs in solving complex, multi-step tasks through manipulating various domain expert models. | [Paper](https://arxiv.org/abs/2304.04370), [Tweet](https://twitter.com/dair_ai/status/1647613838567546886?s=20) |
| 8) **AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models** - a new benchmark to assess foundational models in the context of human-centric standardized exams, including college entrance exams, law school admission tests, and math competitions, among others. | [Paper](https://arxiv.org/abs/2304.06364), [Tweet](https://twitter.com/dair_ai/status/1647613840400498700?s=20) |
| 9) **Teaching Large Language Models to Self-Debug** - proposes an approach that teaches LLMs to debug their predicted program via few-shot demonstrations; this allows a model to identify its mistakes by explaining generated code in natural language; achieves SoTA on several code generation tasks like text-to-SQL generation (a minimal version of the loop follows this table). | [Paper](https://arxiv.org/abs/2304.05128), [Tweet](https://twitter.com/dair_ai/status/1647613842300497924?s=20) |
| 10) **Segment Everything Everywhere All at Once** - a promptable, interactive model for various segmentation tasks that yields competitive performance on open-vocabulary and interactive segmentation benchmarks. | [Paper](https://arxiv.org/abs/2304.06718), [Tweet](https://twitter.com/dair_ai/status/1647613844087361537?s=20) |
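
A minimal version of the Self-Debug loop (paper 9 above) might look like the sketch below: generate a program, execute it, and feed the resulting traceback back to the model for another attempt. `llm_generate` is a hypothetical LLM call, not an interface from the paper.

```python
import subprocess
import sys
import tempfile

def self_debug(task, llm_generate, max_rounds=3):
    """Sketch of iterative code generation with execution feedback.

    llm_generate(prompt) -> str  # hypothetical LLM call returning Python code
    """
    feedback = ""
    code = ""
    for _ in range(max_rounds):
        code = llm_generate(f"{task}\n{feedback}")
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # the program ran without errors
        # Feed the traceback back so the model can explain and fix its mistake.
        feedback = f"Your previous program failed with:\n{result.stderr}\nPlease fix it."
    return code
```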
---
## Top ML Papers of the Week (April 3 - April 9)
| **Paper** | **Links** |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| 1) **Segment Anything** - presents a set of resources to establish foundational models for image segmentation; releases the largest segmentation dataset with over 1 billion masks on 11M licensed images; the model’s zero-shot performance is competitive with or even superior to fully supervised results. | [Paper](https://arxiv.org/abs/2304.02643v1), [Tweet](https://twitter.com/dair_ai/status/1645089444280561666?s=20) |
| 2) **Instruction Tuning with GPT-4** - presents GPT-4-LLM, a "first attempt" to use GPT-4 to generate instruction-following data for LLM fine-tuning; the dataset is released and includes 52K unique English and Chinese instruction-following data; the dataset is used to instruction-tune LLaMA models which leads to superior zero-shot performance on new tasks. | [Paper](https://arxiv.org/abs/2304.03277), [Tweet](https://twitter.com/dair_ai/status/1645089446524534788?s=20) |
| 3) **Eight Things to Know about Large Language Models** - discusses important considerations regarding the capabilities and limitations of LLMs. | [Paper](https://arxiv.org/abs/2304.00612v1), [Tweet](https://twitter.com/dair_ai/status/1645089448428699650?s=20) |
| 4) **A Survey of Large Language Models** - a new 50 pages survey on large language models. | [Paper](https://arxiv.org/abs/2303.18223), [Tweet](https://twitter.com/dair_ai/status/1645089450395852802?s=20) |
| 5) **Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data** - an open-source chat model fine-tuned with LoRA. Leverages 100K dialogs generated from ChatGPT chatting with itself; it releases the dialogs along with 7B, 13B, and 30B parameter models. | [Paper](https://arxiv.org/abs/2304.01196) , [Tweet](https://twitter.com/dair_ai/status/1645089452081938433?s=20) |
| 6) **Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark** - a new benchmark of 134 text-based Choose-Your-Own-Adventure games to evaluate the capabilities and unethical behaviors of LLMs. | [Paper](https://arxiv.org/abs/2304.03279) , [Tweet](https://twitter.com/dair_ai/status/1645089453780639744?s=20) |
| 7) **Better Language Models of Code through Self-Improvement** - generates pseudo data from knowledge gained through pre-training and fine-tuning and adds it to the training dataset for the next step; results show that this improves the performance of different frameworks on code-related generation tasks. | [Paper](https://arxiv.org/abs/2304.01228v1), [Tweet](https://twitter.com/dair_ai/status/1645089455659687937?s=20) |
| 8) **Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models** - an overview of applications of ChatGPT and GPT-4; the analysis is done on 194 relevant papers and discusses capabilities, limitations, concerns, and more. | [Paper](https://arxiv.org/abs/2304.01852), [Tweet](https://twitter.com/dair_ai/status/1645089457488404486?s=20) |
| 9) **Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling** - a suite for analyzing LLMs across training and scaling; includes 16 LLMs trained on public data and ranging in size from 70M to 12B parameters. | [Paper](https://arxiv.org/abs/2304.01373), [Tweet](https://twitter.com/dair_ai/status/1645089459191382016?s=20) |
| 10) **SegGPT: Segmenting Everything In Context** - unifies segmentation tasks into a generalist model through an in-context framework that supports different kinds of data. | [Paper](https://arxiv.org/abs/2304.03284), [Tweet](https://twitter.com/dair_ai/status/1645089461124886529?s=20) |
---
## Top ML Papers of the Week (Mar 27 - April 2)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| 1) **BloombergGPT: A Large Language Model for Finance** - a new 50B parameter large language model for finance. Claims the largest domain-specific dataset yet with 363 billion tokens... further augmented with 345 billion tokens from general-purpose datasets; outperforms existing models on financial tasks while not sacrificing performance on general LLM benchmarks. | [Paper](https://arxiv.org/abs/2303.17564v1), [Tweet](https://twitter.com/omarsar0/status/1641787456436547584?s=20) |
| 2) **Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware** - a low-cost system that performs end-to-end imitation learning from real demonstrations; also presents an algorithm called Action Chunking with Transformers to learn a generative model that allows a robot to learn difficult tasks in the real world. | [Paper](https://tonyzhaozh.github.io/aloha/), [Tweet](https://twitter.com/tonyzzhao/status/1640393026341322754?s=20) |
| 3) **HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace** - a system that leverages LLMs like ChatGPT to conduct task planning, select models and act as a controller to execute subtasks and summarize responses according to execution results. | [Paper](https://arxiv.org/abs/2303.17580), [Tweet](https://twitter.com/johnjnay/status/1641609645713129473?s=20) |
| 4) **ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge** - a medical chat model fine-tuned on LLaMA using medical domain knowledge. Collects data on around 700 diseases and generates 5K doctor-patient conversations to finetune the LLM. | [Paper](https://arxiv.org/abs/2303.14070), [Tweet](https://twitter.com/omarsar0/status/1640525256719753217?s=20) |
| 5) **LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention** - a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model; generates responses comparable to the fully fine-tuned Alpaca 7B model; it’s also extended to support multi-modal input. | [Paper](https://arxiv.org/abs/2303.16199) , [Tweet](https://twitter.com/rasbt/status/1641457696074334209?s=20) |
| 6) **ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks** - demonstrates that ChatGPT can outperform crowd-workers on several annotation tasks such as relevance, topics, and frames detection; besides better zero-shot accuracy, ChatGPT’s per-annotation cost is about 20 times lower than MTurk’s. | [Paper](https://arxiv.org/abs/2303.15056v1) , [Tweet](https://twitter.com/AlphaSignalAI/status/1641496876527517696?s=20) |
| 7) **Language Models can Solve Computer Tasks** - shows that a pre-trained LLM agent can execute computer tasks using a simple prompting scheme where the agent recursively criticizes and improves its outputs. | [Paper](https://arxiv.org/abs/2303.17491), [Tweet](https://twitter.com/arankomatsuzaki/status/1641609722951516161?s=20) |
| 8) **DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents** - a paradigm to enhance large language model completions by allowing models to communicate feedback and iteratively improve output; DERA outperforms base GPT-4 on clinically-focused tasks. | [Paper](https://arxiv.org/abs/2303.17071), [Tweet](https://twitter.com/johnjnay/status/1642168727796961280?s=20) |
| 9) **Natural Selection Favors AIs over Humans** - discusses why AI systems will become more fit than humans and the potential dangers and risks involved, including ways to mitigate them. | [Paper](https://arxiv.org/abs/2303.16200), [Tweet](https://twitter.com/DanHendrycks/status/1641102660412792833?s=20) |
| 10) **Machine Learning for Partial Differential Equations** - a review examining avenues of partial differential equation research advanced by machine learning. | [Paper](https://arxiv.org/abs/2303.17078), [Tweet](https://twitter.com/DynamicsSIAM/status/1641608068453777412?s=20) |
---
## Top ML Papers of the Week (Mar 20-Mar 26)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Sparks of Artificial General Intelligence: Early experiments with GPT-4** - a comprehensive investigation of an early version of GPT-4 when it was still in active development by OpenAI. | [Paper](https://arxiv.org/abs/2303.12712), [Tweet](https://twitter.com/dair_ai/status/1639991716349460481?s=20) |
| 2) **Reflexion: an autonomous agent with dynamic memory and self-reflection** - proposes an agent with dynamic memory and self-reflection capabilities to enhance its existing reasoning trace and task-specific action choice abilities. | [Paper](https://arxiv.org/abs/2303.11366), [Tweet](https://twitter.com/dair_ai/status/1639991718169722880?s=20) |
| 3) **Capabilities of GPT-4 on Medical Challenge Problems** - shows that GPT-4 exceeds the passing score on USMLE by over 20 points and outperforms GPT-3.5 as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). | [Paper](https://www.microsoft.com/en-us/research/publication/capabilities-of-gpt-4-on-medical-challenge-problems/), [Tweet](https://twitter.com/dair_ai/status/1639991720224989188?s=20) |
| 4) **GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models** - investigates the potential implications of GPT models and related systems on the US labor market. | [Paper](https://arxiv.org/abs/2303.10130), [Tweet](https://twitter.com/dair_ai/status/1639991722263412737?s=20) |
| 5) **CoLT5: Faster Long-Range Transformers with Conditional Computation** - a long-input Transformer model that employs conditional computation, devoting more resources to important tokens in both feedforward and attention layers. | [Paper](https://arxiv.org/abs/2303.09752) , [Tweet](https://twitter.com/dair_ai/status/1639991723806826499?s=20) |
| 6) **Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity** - compares human-generated ideas with those generated by generative AI chatbots like ChatGPT and YouChat; reports that 9.4% of humans were more creative than GPT-4 and that GAIs are valuable assistants in the creative process. | [Paper](https://arxiv.org/abs/2303.12003) , [Tweet](https://twitter.com/dair_ai/status/1639991725442646018?s=20) |
| 7) **A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models** - a comprehensive capability analysis of GPT series models; evaluates performance on 9 natural language understanding tasks using 21 datasets. | [Paper](https://arxiv.org/abs/2303.10420), [Tweet](https://twitter.com/dair_ai/status/1639991727292395520?s=20) |
| 8) **Context-faithful Prompting for Large Language Models** - presents a prompting technique that aims to improve LLMs' faithfulness using strategies such as opinion-based prompts and counterfactual demonstrations. | [Paper](https://arxiv.org/abs/2303.11315), [Tweet](https://twitter.com/dair_ai/status/1639991728882032646?s=20) |
| 9) **Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models** - a method for extracting room-scale textured 3D meshes from 2D text-to-image models. | [Paper](https://arxiv.org/abs/2303.11989), [Project](https://lukashoel.github.io/text-to-room/), [Tweet](https://twitter.com/dair_ai/status/1639991730723254274?s=20) |
| 10) **PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing** - a trillion parameter language model with sparse heterogeneous computing. | [Paper](https://arxiv.org/abs/2303.10845), [Tweet](https://twitter.com/dair_ai/status/1639991732405252100?s=20) |
---
## Top ML Papers of the Week (Mar 13-Mar 19)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **GPT-4 Technical Report** - GPT-4 - a large multimodal model with broader general knowledge and problem-solving abilities. | [Paper](https://arxiv.org/abs/2303.08774v2), [Tweet](https://twitter.com/dair_ai/status/1637456913993433089?s=20) |
| 2) **LERF: Language Embedded Radiance Fields** - a method for grounding language embeddings from models like CLIP into NeRF; this enables open-ended language queries in 3D. | [Paper](https://arxiv.org/abs/2303.09553), [Tweet](https://twitter.com/dair_ai/status/1637456915658686465?s=20) |
| 3) **An Overview on Language Models: Recent Developments and Outlook** - an overview of language models covering recent developments and future directions. It also covers topics like linguistic units, structures, training methods, evaluation, and applications. | [Paper](https://arxiv.org/abs/2303.05759), [Tweet](https://twitter.com/omarsar0/status/1635273656858460162?s=20) |
| 4) **Eliciting Latent Predictions from Transformers with the Tuned Lens** - a method for transformer interpretability that can trace a language model’s predictions as they develop layer by layer. | [Paper](https://arxiv.org/abs/2303.08112), [Tweet](https://twitter.com/dair_ai/status/1637456919819440130?s=20) |
| 5) **Meet in the Middle: A New Pre-training Paradigm** - a new pre-training paradigm using techniques that jointly improve training data efficiency and capabilities of LMs in the infilling task; performance improvement is shown in code generation tasks. | [Paper](https://arxiv.org/abs/2303.07295) , [Tweet](https://twitter.com/dair_ai/status/1637456922004561920?s=20) |
| 6) **Resurrecting Recurrent Neural Networks for Long Sequences** - demonstrates that careful design of deep RNNs using standard signal propagation arguments can recover the performance of deep state-space models on long-range reasoning tasks. | [Paper](https://arxiv.org/abs/2303.06349) , [Tweet](https://twitter.com/dair_ai/status/1637456923795521537?s=20) |
| 7) **UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation** - a new approach to tune a lightweight and versatile retriever to automatically retrieve prompts to improve zero-shot performance and help mitigate hallucinations. | [Paper](https://arxiv.org/abs/2303.08518), [Tweet](https://twitter.com/dair_ai/status/1637456925779456000?s=20) |
| 8) **Patches Are All You Need?** - proposes ConvMixer, a parameter-efficient fully-convolutional model which replaces self-attention and MLP layers in ViTs with less-expressive depthwise and pointwise convolutional layers. | [Paper](https://openreview.net/forum?id=rAnB7JSMXL), [Tweet](https://twitter.com/dair_ai/status/1637456927784329218?s=20) |
| 9) **NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes** - a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach; distills NeRFs into geometrically-accurate 3D meshes. | [Paper](https://arxiv.org/abs/2303.09431), [Tweet](https://twitter.com/dair_ai/status/1637456929705295873?s=20) |
| 10) **High-throughput Generative Inference of Large Language Models with a Single GPU** - a high-throughput generation engine for running LLMs with limited GPU memory. | [Paper](https://arxiv.org/abs/2303.06865), [Code](https://github.com/FMInference/FlexGen) , [Tweet](https://twitter.com/dair_ai/status/1637456931429183489?s=20) |
---
## Top ML Papers of the Week (Mar 6-Mar 12)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **PaLM-E: An Embodied Multimodal Language Model** - incorporates real-world continuous sensor modalities resulting in an embodied LM that performs tasks such as robotic manipulation planning, visual QA, and other embodied reasoning tasks. | [Paper](https://arxiv.org/abs/2303.03378), [Demo](https://palm-e.github.io/) , [Tweet](https://twitter.com/dair_ai/status/1634919222420836358?s=20) |
| 2) **Prismer: A Vision-Language Model with An Ensemble of Experts** - a parameter-efficient vision-language model powered by an ensemble of domain experts; it efficiently pools expert knowledge from different domains and adapts it to various vision-language reasoning tasks. | [Paper](https://arxiv.org/abs/2303.02506), [GitHub](https://github.com/NVlabs/Prismer), [Project](https://shikun.io/projects/prismer) , [Tweet](https://twitter.com/dair_ai/status/1634919224505257985?s=20) |
| 3) **Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models** - it connects ChatGPT and different visual foundation models to enable users to interact with ChatGPT beyond language format. | [Paper](https://arxiv.org/abs/2303.04671), [GitHub](https://github.com/microsoft/visual-chatgpt), [Tweet](https://twitter.com/dair_ai/status/1634919226396794882?s=20) |
| 4) **A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT** - an overview of generative AI - from GAN to ChatGPT. | [Paper](https://arxiv.org/abs/2303.04226), [Tweet](https://twitter.com/dair_ai/status/1634919228339003393?s=20) |
| 5) **Larger language models do in-context learning differently** - shows that with scale, LLMs can override semantic priors when presented with enough flipped labels; these models can also perform well when replacing targets with semantically-unrelated targets. | [Paper](https://arxiv.org/abs/2303.03846) , [Tweet](https://twitter.com/dair_ai/status/1634919230461345797?s=20) |
| 6) **Foundation Models for Decision Making: Problems, Methods, and Opportunities** - provides an overview of foundation models for decision making, including tools, methods, and new research directions. | [Paper](https://arxiv.org/abs/2303.04129) , [Tweet](https://twitter.com/dair_ai/status/1634919232650760192?s=20) |
| 7) **Hyena Hierarchy: Towards Larger Convolutional Language Models** - a subquadratic drop-in replacement for attention; it interleaves implicit long convolutions and data-controlled gating and can learn on sequences 10x longer and up to 100x faster than optimized attention. | [Paper](https://arxiv.org/abs/2302.10866), [Code](https://github.com/HazyResearch/safari), [Blog](https://ermongroup.github.io/blog/hyena/), [Tweet](https://twitter.com/dair_ai/status/1634919234835980289?s=20) |
| 8) **OpenICL: An Open-Source Framework for In-context Learning** - a new open-source toolkit for in-context learning and LLM evaluation; supports various state-of-the-art retrieval and inference methods, tasks, and zero-/few-shot evaluation of LLMs. | [Paper](https://arxiv.org/abs/2303.02913), [Repo](https://github.com/Shark-NLP/OpenICL), [Tweet](https://twitter.com/dair_ai/status/1634919236954132480?s=20) |
| 9) **MathPrompter: Mathematical Reasoning using Large Language Models** - a technique that improves LLM performance on mathematical reasoning problems; it uses zero-shot chain-of-thought prompting and verification to ensure generated answers are accurate (a simplified sketch of the verification step follows this table). | [Paper](https://arxiv.org/abs/2303.05398), [Tweet](https://twitter.com/dair_ai/status/1634919239030280197?s=20) |
| 10) **Scaling up GANs for Text-to-Image Synthesis** - enables scaling up GANs on large datasets for text-to-image synthesis; it’s found to be orders of magnitude faster at inference time, synthesizes high-resolution images, & supports various latent space editing applications. | [Paper](https://arxiv.org/abs/2303.05511), [Project](https://mingukkang.github.io/GigaGAN/) , [Tweet](https://twitter.com/dair_ai/status/1634919241198751744?s=20) |
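
For the MathPrompter entry above, a rough sketch of the verification idea: the LLM is asked for both an algebraic expression and an equivalent Python expression for the same (templatized) word problem, the two are cross-checked on random variable assignments, and only if they agree is the answer computed on the real values. `ask_llm` is a hypothetical stand-in for whatever completion API is used, and the prompts and consensus rule are simplified relative to the paper.

```python
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your completion API of choice."""
    raise NotImplementedError

def verified_answer(question_template: str, values: dict, trials: int = 5):
    variables = list(values)
    # 1) Ask for two independent solution formats over the named variables.
    algebraic = ask_llm(f"Give an algebraic expression over {variables} answering:\n{question_template}")
    pythonic = ask_llm(f"Give a single Python expression over {variables} answering:\n{question_template}")

    # 2) Cross-check both candidates on random variable assignments.
    for _ in range(trials):
        sample = {v: random.randint(1, 100) for v in variables}
        a = eval(algebraic, {}, sample)  # evaluating model output: sandbox this in real use
        b = eval(pythonic, {}, sample)
        if abs(a - b) > 1e-6:
            return None  # the two solutions disagree, so the answer is left unverified

    # 3) Only if they agree everywhere, evaluate on the actual values from the problem.
    return eval(pythonic, {}, dict(values))
```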
---
## Top ML Papers of the Week (Feb 27-Mar 5)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Language Is Not All You Need: Aligning Perception with Language Models** - introduces a multimodal large language model called Kosmos-1; achieves strong performance on language understanding, OCR-free NLP, perception-language tasks, visual QA, and more. | [Paper](https://arxiv.org/abs/2302.14045), [Tweet](https://twitter.com/dair_ai/status/1632383312550416384?s=20) |
| 2) **Evidence of a predictive coding hierarchy in the human brain listening to speech** - finds that human brain activity is best explained by the activations of modern language models enhanced with long-range and hierarchical predictions. | [Paper](https://www.nature.com/articles/s41562-022-01516-2?utm_source=twitter&utm_medium=organic_social&utm_campaign=evergreen&utm_content=animation), [Tweet](https://twitter.com/dair_ai/status/1632383315029180416?s=20) |
| 3) **EvoPrompting: Language Models for Code-Level Neural Architecture Search** - combines evolutionary prompt engineering with soft prompt-tuning to find high-performing models; it leverages few-shot prompting which is further improved by using an evolutionary search approach to improve the in-context examples. | [Paper](https://arxiv.org/abs/2302.14838), [Tweet](https://twitter.com/dair_ai/status/1632383317302562816?s=20) |
| 4) **Consistency Models** - a new family of generative models that achieve high sample quality without adversarial training. | [Paper](https://arxiv.org/abs/2303.01469), [Tweet](https://twitter.com/dair_ai/status/1632383319152132096?s=20) |
| 5) **Goal Driven Discovery of Distributional Differences via Language Descriptions** - a new task that automatically discovers corpus-level differences via language description in a goal-driven way; applications include discovering insights from commercial reviews and error patterns in NLP systems. | [Paper](https://arxiv.org/abs/2302.14233) , [Code](https://github.com/ruiqi-zhong/D5), [Tweet](https://twitter.com/dair_ai/status/1632383321035374593?s=20) |
| 6) **High-resolution image reconstruction with latent diffusion models from human brain activity** - proposes an approach for high-resolution image reconstruction with latent diffusion models from human brain activity. | [Project](https://sites.google.com/view/stablediffusion-with-brain/) , [Tweet](https://twitter.com/dair_ai/status/1632383323086487554?s=20) |
| 7) **Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control** - a scalable approach to planning with LLMs in embodied settings through grounding functions; GD is found to be a general, flexible, and expressive approach to embodied tasks. | [Paper](https://grounded-decoding.github.io/paper.pdf), [Project](https://grounded-decoding.github.io/), [Tweet](https://twitter.com/dair_ai/status/1632383325036740610?s=20) |
| 8) **Language-Driven Representation Learning for Robotics** - a framework for language-driven representation learning from human videos and captions for robotics. | [Paper](https://arxiv.org/abs/2302.12766), [Models](https://github.com/siddk/voltron-robotics), [Evaluation](https://github.com/siddk/voltron-evaluation), [Tweet](https://twitter.com/dair_ai/status/1632383327154888704?s=20) |
| 9) **Dropout Reduces Underfitting** - demonstrates that dropout can mitigate underfitting when used at the start of training; it counteracts SGD stochasticity and limits the influence of individual batches when training models. | [Paper](https://arxiv.org/abs/2303.01500), [Tweet](https://twitter.com/dair_ai/status/1632383328920666121?s=20) |
| 10) **Enabling Conversational Interaction with Mobile UI using Large Language Models** - an approach that enables versatile conversational interactions with mobile UIs using a single LLM. | [Paper](https://arxiv.org/abs/2209.08655), [Tweet](https://twitter.com/dair_ai/status/1632383331286253568?s=20) |
---
## Top ML Papers of the Week (Feb 20-26)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **LLaMA: Open and Efficient Foundation Language Models** - a collection of foundation language models, ranging from 7B to 65B parameters, released by Meta AI; trained on publicly available data, with the 13B model outperforming GPT-3 (175B) on most benchmarks despite being more than 10x smaller. | [Paper](https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/), [Tweet](https://twitter.com/dair_ai/status/1629845535946420226?s=20) |
| 2) **Composer: Creative and Controllable Image Synthesis with Composable Conditions** - a 5B parameter creative and controllable diffusion model trained on billions of (text, image) pairs. | [Paper](https://arxiv.org/abs/2302.09778), [Project](https://damo-vilab.github.io/composer-page/), [GitHub](https://github.com/damo-vilab/composer), [Tweet](https://twitter.com/dair_ai/status/1629845537913548802?s=20) |
| 3) **The Wisdom of Hindsight Makes Language Models Better Instruction Followers** - an alternative algorithm for training LLMs from feedback; the feedback is converted into instructions by relabeling the original ones, and the model is then trained on the relabeled data, in a supervised way, for better alignment. | [Paper](https://arxiv.org/abs/2302.05206), [GitHub](https://github.com/tianjunz/HIR), [Tweet](https://twitter.com/dair_ai/status/1629845539964481537?s=20) |
| 4) **Active Prompting with Chain-of-Thought for Large Language Models** - a prompting technique to adapt LLMs to different task-specific example prompts (annotated with human-designed chain-of-thought reasoning); this process involves finding where the LLM is most uncertain and annotating those examples (a simplified sketch of the selection step follows this table). | [Paper](https://arxiv.org/abs/2302.12246), [Code](https://github.com/shizhediao/active-prompt), [Tweet](https://twitter.com/dair_ai/status/1629845541847724033?s=20) |
| 5) **Modular Deep Learning** - a survey offering a unified view of the building blocks of modular neural networks; it also includes a discussion about modularity in the context of scaling LMs, causal inference, and other key topics in ML. | [Paper](https://arxiv.org/abs/2302.11529) , [Project](https://www.ruder.io/modular-deep-learning/), [Tweet](https://twitter.com/dair_ai/status/1629845544037228551?s=20) |
| 6) **Recitation-Augmented Language Models** - an approach that recites passages from the LLM’s own memory to produce final answers; shows high performance on knowledge-intensive tasks. | [Paper](https://arxiv.org/abs/2210.01296) , [Tweet](https://twitter.com/dair_ai/status/1629845546276995075?s=20) |
| 7) **Learning Performance-Improving Code Edits** - an approach that uses LLMs to suggest functionally correct, performance-improving code edits. | [Paper](https://arxiv.org/abs/2302.07867), [Tweet](https://twitter.com/dair_ai/status/1629845548210561029?s=20) |
| 8) **More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models** - a comprehensive analysis of novel prompt injection threats to application-integrated LLMs. | [Paper](https://arxiv.org/abs/2302.12173), [Tweet](https://twitter.com/dair_ai/status/1629845550152523777?s=20) |
| 9) **Aligning Text-to-Image Models using Human Feedback** - proposes a fine-tuning method to align generative models using human feedback. | [Paper](https://arxiv.org/abs/2302.12192), [Tweet](https://twitter.com/dair_ai/status/1629845552039968780?s=20) |
| 10) **MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes** - a memory-efficient radiance field representation for real-time view synthesis of large-scale scenes in a browser. | [Paper](https://arxiv.org/abs/2302.12249), [Tweet](https://twitter.com/dair_ai/status/1629845554061606915?s=20) |
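
For the Active Prompting entry above, a simplified sketch of the uncertainty-based selection step: sample several chain-of-thought answers per training question, score each question by answer disagreement, and route the most uncertain ones to human annotators. `sample_cot_answers` is a hypothetical stand-in for k sampled LLM completions; the paper also studies other uncertainty metrics such as entropy and variance.

```python
from collections import Counter

def sample_cot_answers(question: str, k: int = 10) -> list:
    """Hypothetical: k chain-of-thought completions sampled from the LLM at temperature > 0."""
    raise NotImplementedError

def disagreement(answers: list) -> float:
    # Fraction of distinct final answers among the samples: higher means more uncertain.
    return len(Counter(answers)) / len(answers)

def select_for_annotation(questions: list, budget: int = 8) -> list:
    scored = [(disagreement(sample_cot_answers(q)), q) for q in questions]
    scored.sort(reverse=True)               # most uncertain questions first
    return [q for _, q in scored[:budget]]  # these get human-written CoT rationales as exemplars
```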
---
## Top ML Papers of the Week (Feb 13 - 19)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Symbolic Discovery of Optimization Algorithms** - discovers Lion, a simple and effective optimization algorithm that’s more memory-efficient than Adam (a sketch of the update rule follows this table). | [Paper](https://arxiv.org/abs/2302.06675), [Tweet](https://twitter.com/dair_ai/status/1627671313874575362?s=20) |
| 2) **Transformer models: an introduction and catalog** | [Paper](https://arxiv.org/abs/2302.07730), [Tweet](https://twitter.com/dair_ai/status/1627671315678126082?s=20) |
| 3) **3D-aware Conditional Image Synthesis** - a 3D-aware conditional generative model extended with neural radiance fields for controllable photorealistic image synthesis. | [Project](https://www.cs.cmu.edu/~pix2pix3D/), [Tweet](https://twitter.com/dair_ai/status/1627671317355831296?s=20) |
| 4) **The Capacity for Moral Self-Correction in Large Language Models** - finds strong evidence that language models trained with RLHF have the capacity for moral self-correction. The capability emerges at 22B model parameters and typically improves with scale. | [Paper](https://arxiv.org/abs/2302.07459), [Tweet](https://twitter.com/dair_ai/status/1627671319100768260?s=20) |
| 5) **Vision meets RL** - uses reinforcement learning to align computer vision models with task rewards; observes a large performance boost across multiple CV tasks such as object detection and colorization. | [Paper](https://arxiv.org/abs/2302.08242) |
| 6) **Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment** - an unsupervised method for text-image alignment that leverages pretrained language models; it enables few-shot image classification with LLMs. | [Paper](https://arxiv.org/abs/2302.00902), [Code](https://github.com/lhao499/lqae), [Tweet](https://twitter.com/haoliuhl/status/1625273748629901312?s=20) |
| 7) **Augmented Language Models: a Survey** - a survey of language models that are augmented with reasoning skills and the capability to use tools. | [Paper](https://arxiv.org/abs/2302.07842), [Tweet](https://twitter.com/dair_ai/status/1627671324477820929?s=20) |
| 8) **Geometric Clifford Algebra Networks** - an approach to incorporate geometry-guided transformations into neural networks using geometric algebra. | [Paper](https://arxiv.org/abs/2302.06594), [Tweet](https://twitter.com/dair_ai/status/1627671326176473088?s=20) |
| 9) **Auditing large language models: a three-layered approach** - proposes a policy framework for auditing LLMs. | [Paper](https://arxiv.org/abs/2302.08500), [Tweet](https://twitter.com/dair_ai/status/1627671327950643200?s=20) |
| 10) **Energy Transformer** - a transformer architecture that replaces the sequence of feedforward transformer blocks with a single large Associative Memory model; this follows the popularity that Hopfield Networks have gained in the field of ML. | [Paper](https://arxiv.org/abs/2302.07253), [Tweet](https://twitter.com/dair_ai/status/1627671329561346050?s=20) |
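
The optimizer discovered by the "Symbolic Discovery of Optimization Algorithms" paper (Lion) keeps a single momentum buffer and applies sign-based updates, which is where the memory savings over Adam come from. Below is a minimal NumPy sketch of the update rule as commonly reported; the default betas are indicative and this is not a drop-in replacement for a framework optimizer.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update. w: parameters, g: gradient, m: momentum buffer (same shape as w)."""
    update = np.sign(beta1 * m + (1.0 - beta1) * g)  # sign of an interpolation of momentum and gradient
    w = w - lr * (update + weight_decay * w)         # sign update plus decoupled weight decay
    m = beta2 * m + (1.0 - beta2) * g                # momentum tracks the raw gradient
    return w, m
```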
---
## Top ML Papers of the Week (Feb 6 - 12)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Toolformer: Language Models Can Teach Themselves to Use Tools** - introduces language models that teach themselves to use external tools via simple API calls (a minimal sketch of the inline tool-call mechanics follows this table). | [Paper](https://arxiv.org/abs/2302.04761), [Tweet](https://twitter.com/dair_ai/status/1624832248691191808?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 2) **Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents** - proposes using language models for open-world game playing. | [Paper](https://arxiv.org/abs/2302.01560), [Tweet](https://twitter.com/dair_ai/status/1624832250717036548?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 3) **A Categorical Archive of ChatGPT Failures** - a comprehensive analysis of ChatGPT failures for categories like reasoning, factual errors, maths, and coding. | [Paper](https://arxiv.org/abs/2302.03494), [Tweet](https://twitter.com/dair_ai/status/1624832252587700230?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 4) **Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery** - optimizing hard text prompts through efficient gradient-based optimization. | [Paper](https://arxiv.org/abs/2302.03668), [Tweet](https://twitter.com/dair_ai/status/1624832254588465156?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 5) **Data Selection for Language Models via Importance Resampling** - proposes a cheap and scalable data selection framework based on an importance resampling algorithm to improve the downstream performance of LMs. | [Paper](https://arxiv.org/abs/2302.03169), [Tweet](https://twitter.com/dair_ai/status/1624832256400302080?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 6) **Structure and Content-Guided Video Synthesis with Diffusion Models** - proposes an approach for structure and content-guided video synthesis with diffusion models. | [Paper](https://arxiv.org/abs/2302.03011) , [Project](https://research.runwayml.com/gen1), [Tweet](https://twitter.com/dair_ai/status/1624832258296229889?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 7) **A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity** - performs a more rigorous evaluation of ChatGPT on reasoning, hallucination, and interactivity. | [Paper](https://arxiv.org/abs/2302.04023), [Tweet](https://twitter.com/dair_ai/status/1624832260213026819?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 8) **Noise2Music: Text-conditioned Music Generation with Diffusion Models** - proposes diffusion models to generate high-quality 30-second music clips via text prompts. | [Paper](https://arxiv.org/abs/2302.03917), [Project](https://google-research.github.io/noise2music/), [Tweet](https://twitter.com/dair_ai/status/1624832262163337220?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 9) **Offsite-Tuning: Transfer Learning without Full Model** - introduces an efficient, privacy-preserving transfer learning framework to adapt foundation models to downstream data without access to the full model. | [Paper](https://arxiv.org/abs/2302.04870), [Project](https://github.com/mit-han-lab/offsite-tuning), [Tweet](https://twitter.com/dair_ai/status/1624832264029831169?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 10) **Zero-shot Image-to-Image Translation** - proposes a model for zero-shot image-to-image translation. | [Paper](https://arxiv.org/abs/2302.03027), [Project](https://pix2pixzero.github.io/), [Tweet](https://twitter.com/dair_ai/status/1624832265967607813?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
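
For the Toolformer entry above, a minimal sketch of the inference-time mechanics of inline tool use: parse an API-call marker embedded in the generated text, execute the tool, and splice the result back so the model can condition on it. The marker syntax here is illustrative, and the self-supervised filtering the paper uses to decide which calls are worth keeping is not reproduced.

```python
import re

def calculator(expr: str) -> str:
    # A deliberately tiny "tool": evaluate a basic arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"Calculator": calculator}
CALL = re.compile(r"\[(\w+)\((.*?)\)\]")  # matches markers like [Calculator(400/1400)]

def execute_tool_calls(text: str) -> str:
    """Replace each inline call marker with '[Tool(args) -> result]'."""
    def run(match):
        tool, args = match.group(1), match.group(2)
        result = TOOLS[tool](args) if tool in TOOLS else "?"
        return f"[{tool}({args}) -> {result}]"
    return CALL.sub(run, text)

# Example: a generated sentence containing an inline call.
print(execute_tool_calls("That is [Calculator(400/1400)] of the total."))
```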
---
## Top ML Papers of the Week (Jan 30-Feb 5)
| **Paper** | **Links** |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **REPLUG: Retrieval-Augmented Black-Box Language Models** - a retrieval-augmented LM framework that adapts a retriever to a large-scale, black-box LM like GPT-3 (a schematic of the ensembling step follows this table). | [Paper](https://arxiv.org/abs/2301.12652), [Tweet](https://twitter.com/dair_ai/status/1622261780725616641?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 2) **Extracting Training Data from Diffusion Models** - shows that diffusion-based generative models can memorize images from the training data and emit them at generation time. | [Paper](https://arxiv.org/abs/2301.13188), [Tweet](https://twitter.com/dair_ai/status/1622261782738788353?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 3) **The Flan Collection: Designing Data and Methods for Effective Instruction Tuning** - releases a more extensive, publicly available collection of tasks, templates, and methods for advancing instruction-tuned models. | [Paper](https://arxiv.org/abs/2301.13688), [Tweet](https://twitter.com/dair_ai/status/1622261784668241922?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 4) **Multimodal Chain-of-Thought Reasoning in Language Models** - incorporates vision features to elicit chain-of-thought reasoning in multimodality, enabling the model to generate effective rationales that contribute to answer inference. | [Paper](https://arxiv.org/abs/2302.00923), [Code](https://github.com/amazon-science/mm-cot), [Tweet](https://twitter.com/dair_ai/status/1622261786559791105?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 5) **Dreamix: Video Diffusion Models are General Video Editors** - a diffusion model that performs text-based motion and appearance editing of general videos. | [Paper](https://arxiv.org/abs/2302.01329), [Project](https://dreamix-video-editing.github.io/), [Tweet](https://twitter.com/dair_ai/status/1622261788497657856?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 6) **Benchmarking Large Language Models for News Summarization** | [Paper](https://arxiv.org/abs/2301.13848) , [Tweet](https://twitter.com/dair_ai/status/1622261790326259714?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 7) **Mathematical Capabilities of ChatGPT** - investigates the mathematical capabilities of ChatGPT on a new holistic benchmark called GHOSTS. | [Paper](https://arxiv.org/abs/2301.13867), [Tweet](https://twitter.com/dair_ai/status/1622261792238886913?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 8) **Emergence of Maps in the Memories of Blind Navigation Agents** - trains an AI agent to navigate purely by feeling its way around, with no use of vision, audio, or the other senses animals rely on. | [Paper](https://arxiv.org/abs/2301.13261), [Project](https://wijmans.xyz/publication/eom/), [Tweet](https://twitter.com/dair_ai/status/1622261793987989507?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 9) **SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections** - a generative model that synthesizes large-scale 3D landscapes from random noises. | [Paper](https://arxiv.org/abs/2302.01330), [Tweet](https://twitter.com/dair_ai/status/1622261795925671936?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 10) **Large Language Models Can Be Easily Distracted by Irrelevant Context** - finds that many prompting techniques fail when presented with irrelevant context for arithmetic reasoning. | [Paper](https://arxiv.org/abs/2302.00093), [Tweet](https://twitter.com/dair_ai/status/1622261798379429888?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
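
For the REPLUG entry above, a schematic of the ensembling step: retrieve a few documents, prepend each one separately to the input, and combine the black-box LM's next-token distributions weighted by retrieval similarity. `retrieve` and `lm_next_token_probs` are hypothetical stubs, and the retriever adaptation described in the paper (REPLUG LSR) is omitted.

```python
import numpy as np

def retrieve(query: str, k: int = 4) -> list:
    """Hypothetical dense retriever: returns (document, similarity score) pairs."""
    raise NotImplementedError

def lm_next_token_probs(prompt: str) -> np.ndarray:
    """Hypothetical black-box LM call returning a next-token probability vector."""
    raise NotImplementedError

def replug_next_token_probs(query: str) -> np.ndarray:
    docs = retrieve(query)
    scores = np.array([s for _, s in docs])
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over retrieval scores
    # Each retrieved document is prepended separately; the output distributions are ensembled.
    return sum(w * lm_next_token_probs(d + "\n\n" + query) for (d, _), w in zip(docs, weights))
```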
---
## Top ML Papers of the Week (Jan 23-29)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **MusicLM: Generating Music From Text** - a generative model for generating high-fidelity music from text descriptions. | [Paper](https://arxiv.org/abs/2301.11325), [Tweet](https://twitter.com/dair_ai/status/1619716425761042436?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 2) **Hungry Hungry Hippos: Towards Language Modeling with State Space Models** - an approach to reduce the gap, in terms of performance and hardware utilization, between state space models and attention for language modeling. | [Paper](https://arxiv.org/abs/2212.14052), [Tweet](https://twitter.com/dair_ai/status/1619716427879174144?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 3) **A Watermark for Large Language Models** - a watermarking framework for proprietary language models (a compact sketch of the scheme follows this table). | [Paper](https://arxiv.org/abs/2301.10226), [Tweet](https://twitter.com/dair_ai/status/1619716430127308800?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 4) **Text-To-4D Dynamic Scene Generation** - a new text-to-4D model for dynamic scene generation from input text. | [Paper](https://arxiv.org/abs/2301.11280), [Project](https://make-a-video3d.github.io/), [Tweet](https://twitter.com/dair_ai/status/1619718845018828801?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 5) **ClimaX: A foundation model for weather and climate** - a foundation model for weather and climate, including many capabilities for atmospheric science tasks. | [Paper](https://arxiv.org/abs/2301.10343), [Tweet](https://twitter.com/tungnd_13/status/1618642574427959296?s=20&t=ygX07dsAPDF8_jwrxZIo1Q), [Blog](https://www.microsoft.com/en-us/research/group/autonomous-systems-group-robotics/articles/introducing-climax-the-first-foundation-model-for-weather-and-climate/) |
| 6) **Open Problems in Applied Deep Learning** - if you're looking for interesting open problems in DL, this is a good reference; it is also useful for getting a general picture of current trends in deep learning, with \~300 references. | [Paper](https://arxiv.org/abs/2301.11316), [Tweet](https://twitter.com/dair_ai/status/1619719063915339777?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 7) **DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature** - an approach for zero-shot machine-generated text detection; compares the log probability the LLM assigns to a passage with the log probabilities of perturbed variants of that passage to decide whether it was sampled from the model. | [Paper](https://arxiv.org/abs/2301.11305), [Tweet](https://twitter.com/dair_ai/status/1619719169758613504?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 8) **StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis** - a new model that aims to regain the competitiveness of GANs for fast large-scale text-to-image synthesis. | [Paper](https://arxiv.org/abs/2301.09515), [Project](https://sites.google.com/view/stylegan-t/), [Code](https://github.com/autonomousvision/stylegan-t), [Tweet](https://twitter.com/dair_ai/status/1619719293779976193?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 9) **Large language models generate functional protein sequences across diverse families** - an LLM that can generate protein sequences with a predictable function across large protein families. | [Paper](https://www.nature.com/articles/s41587-022-01618-2), [Tweet](https://twitter.com/dair_ai/status/1619719404618645511?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
| 10) **The Impossibility of Parallelizing Boosting** - investigates the possibility of parallelizing boosting. | [Paper](https://arxiv.org/abs/2301.09627), [Tweet](https://twitter.com/dair_ai/status/1619719511867015168?s=20&t=ygX07dsAPDF8_jwrxZIo1Q) |
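
For the watermarking entry above, a compact sketch of the scheme described in the paper: at each decoding step the previous token seeds a pseudo-random split of the vocabulary into a "green" and a "red" list, green-token logits receive a small positive bias, and detection later counts green tokens via a z-test. The `gamma` and `delta` values are illustrative, and a keyed hash would replace the simple seeding used here.

```python
import numpy as np

def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> np.ndarray:
    """Pseudo-randomly pick a fraction gamma of the vocabulary, seeded by the previous token."""
    rng = np.random.default_rng(prev_token)  # stand-in for a keyed hash of prev_token
    return rng.permutation(vocab_size)[: int(gamma * vocab_size)]

def watermarked_logits(logits: np.ndarray, prev_token: int, delta: float = 2.0) -> np.ndarray:
    biased = logits.copy()
    biased[green_list(prev_token, len(logits))] += delta  # softly favor green tokens
    return biased

def detection_z_score(tokens: list, vocab_size: int, gamma: float = 0.5) -> float:
    """Large z-scores indicate far more green tokens than chance would produce."""
    hits = sum(t in set(green_list(p, vocab_size, gamma)) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - gamma * n) / np.sqrt(n * gamma * (1 - gamma))
```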
---
## Top ML Papers of the Week (Jan 16-22)
| **Paper** | **Links** |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Google AI Research Recap (2022 Edition)** - an excellent summary of some notable research Google AI did in 2022. | [Blog](https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html), [Tweet](https://twitter.com/JeffDean/status/1615796030611820545?s=20&t=vUEC8AZmrOJnVxuYIEJs5A) |
| 2) **Dissociating language and thought in large language models: a cognitive perspective** - a review paper on the capabilities of LLMs from a cognitive science perspective. | [Paper](https://arxiv.org/abs/2301.06627), [Tweet](https://twitter.com/neuranna/status/1615737072207400962?s=20&t=5iWUK4z_rp1NWst7JRbnwg) |
| 3) **Human-Timescale Adaptation in an Open-Ended Task Space** - an agent trained at scale that leads to a general in-context learning algorithm able to adapt to open-ended embodied 3D problems. | [Paper](https://arxiv.org/abs/2301.07608), [Tweet](https://twitter.com/FeryalMP/status/1616035293064462338?s=20&t=RN0YZFAXWr-uH2dT2ZTSqQ) |
| 4) **AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation** - an approach to help provide explanations of generative transformer models through memory-efficient attention manipulation. | [Paper](https://arxiv.org/abs/2301.08110), [Tweet](https://twitter.com/JonasAndrulis/status/1616722810608427008?s=20&t=vUEC8AZmrOJnVxuYIEJs5A) |
| 5) **Everything is Connected: Graph Neural Networks** - short overview of key concepts in graph representation learning. | [Paper](https://arxiv.org/abs/2301.08210), [Tweet](https://twitter.com/PetarV_93/status/1616379369953394688?s=20&t=AqTVY30Y7IZCultzwnqBPA) |
| 6) **GLIGEN: Open-Set Grounded Text-to-Image Generation** - an approach that extends the functionality of existing pre-trained text-to-image diffusion models by enabling conditioning on grounding inputs. | [Paper](https://arxiv.org/abs/2301.07093), [Tweet](https://twitter.com/hardmaru/status/1615766551113744384?s=20&t=wx0Y18oSmW0YenXjKRAdnA), [Project](https://gligen.github.io/) |
| 7) **InstructPix2Pix: Learning to Follow Image Editing Instructions** - proposes a method with the capability of editing images from human instructions. | [Paper](https://arxiv.org/abs/2211.09800), [Tweet](https://twitter.com/_akhaliq/status/1615947919286276096?s=20&t=pbRTn8DaPeQFApQ9okkdRg) |
| 8) **Dataset Distillation: A Comprehensive Review** | [Paper](https://arxiv.org/abs/2301.07014), [Tweet](https://twitter.com/omarsar0/status/1615745724473540609?s=20&t=r-pwuB6EhbZLXa5R6mL3NQ) |
| 9) **Learning-Rate-Free Learning by D-Adaptation** - a new method for automatically adjusting the learning rate during training, applicable to more than a dozen diverse ML problems. | [Paper](https://arxiv.org/abs/2301.07733), [Tweet](https://twitter.com/aaron_defazio/status/1616453609956478977?s=20&t=hGWDXu4sT5f1KcH-X1IL9g) |
| 10) **RecolorNeRF: Layer Decomposed Radiance Field for Efficient Color Editing of 3D Scenes** - a user-friendly color editing approach for the neural radiance field to achieve a more efficient view-consistent recoloring. | [Paper](https://arxiv.org/abs/2301.07958), [Tweet](https://twitter.com/_akhaliq/status/1616265465843548160?s=20&t=duiLmtDvxCwkFmw23rYDmQ) |
---
## Top ML Papers of the Week (Jan 9-15)
| **Paper** | **Links** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 1) **Mastering Diverse Domains through World Models** - a general world-model-based algorithm (DreamerV3) that is the first to collect diamonds in Minecraft from scratch without human data or curricula, a long-standing challenge in AI. | [Paper](https://arxiv.org/abs/2301.04104v1), [Tweet](https://twitter.com/dair_ai/status/1614676677757661185?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 2) **Tracr: Compiled Transformers as a Laboratory for Interpretability** - a compiler for converting RASP programs into transformer weights. This way of constructing NN weights enables the development and evaluation of new interpretability tools. | [Paper](https://arxiv.org/abs/2301.05062), [Tweet](https://twitter.com/dair_ai/status/1614676680165187584?s=20&t=3GITA7PeX7pGwrqvt97bYQ), [Code](https://github.com/deepmind/tracr) |
| 3) **Multimodal Deep Learning** - multimodal deep learning is a new book published on ArXiv. | [Book](https://arxiv.org/abs/2301.04856), [Tweet](https://twitter.com/dair_ai/status/1614676682555670528?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 4) **Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk** - new work analyzing how generative LMs could potentially be misused for disinformation and how to mitigate these types of risks. | [Paper](https://openai.com/blog/forecasting-misuse/), [Tweet](https://twitter.com/dair_ai/status/1614676684984156160?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 5) **Why do Nearest Neighbor Language Models Work?** - empirically identifies reasons why retrieval-augmented LMs (specifically k-nearest neighbor LMs) perform better than standard parametric LMs. | [Paper](https://arxiv.org/abs/2301.02828), [Code](https://github.com/frankxu2004/knnlm-why), [Tweet](https://twitter.com/dair_ai/status/1614676687597469696?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 6) **Memory Augmented Large Language Models are Computationally Universal** - investigates the use of existing LMs (e.g., Flan-U-PaLM 540B) combined with associative read-write memory to simulate the execution of a universal Turing machine. | [Paper](https://arxiv.org/abs/2301.04589), [Tweet](https://twitter.com/dair_ai/status/1614676689908277252?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 7) **A Survey on Transformers in Reinforcement Learning** - transformers for RL will be a fascinating research area to track. The same is true for the reverse direction (RL for Transformers)... a notable example: using RLHF to improve LLMs (e.g., ChatGPT). | [Paper](https://arxiv.org/abs/2301.03044), [Tweet](https://twitter.com/dair_ai/status/1614676692538105860?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 8) **Scaling Laws for Generative Mixed-Modal Language Models** - introduces scaling laws for generative mixed-modal language models. | [Paper](https://arxiv.org/abs/2301.03728), [Tweet](https://twitter.com/dair_ai/status/1614676694920531969?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 9) **DeepMatcher: A Deep Transformer-based Network for Robust and Accurate Local Feature Matching** - a transformer-based network showing robust local feature matching, outperforming the state-of-the-art methods on several benchmarks. | [Paper](https://arxiv.org/abs/2301.02993), [Tweet](https://twitter.com/dair_ai/status/1614676697516752898?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
| 10) **Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement** - addresses the time series forecasting problem with generative modeling; involves a bidirectional VAE backbone equipped with diffusion, denoising for prediction accuracy, and disentanglement for model interpretability. | [Paper](https://arxiv.org/abs/2301.03028), [Tweet](https://twitter.com/dair_ai/status/1614676699915980804?s=20&t=3GITA7PeX7pGwrqvt97bYQ) |
---
## Top ML Papers of the Week (Jan 1-8)
| **Paper** | **Links** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) **Muse: Text-To-Image Generation via Masked Generative Transformers** - introduces Muse, a new text-to-image generation model based on masked generative transformers; significantly more efficient than other diffusion models like Imagen and DALLE-2. | [Paper](https://arxiv.org/abs/2301.00704), [Project](https://muse-model.github.io/), [Code](https://github.com/lucidrains/muse-maskgit-pytorch), [Tweet](https://twitter.com/dair_ai/status/1612153095772938241?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 2) **VALL-E: Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers** - introduces VALL-E, a text-to-speech model that achieves state-of-the-art zero-shot performance; the text-to-speech synthesis task is treated as a conditional language modeling task. | [Project](https://valle-demo.github.io/), [Tweet](https://twitter.com/dair_ai/status/1612153097962328067?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 3) **Rethinking with Retrieval: Faithful Large Language Model Inference** - shows the potential of enhancing LLMs by retrieving relevant external knowledge based on decomposed reasoning steps obtained through chain-of-thought prompting. | [Paper](https://arxiv.org/abs/2301.00303), [Tweet](https://twitter.com/dair_ai/status/1612153100114055171?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 4) **SparseGPT: Massive Language Models Can Be Accurately Pruned In One-Shot** - presents a technique for compressing large language models while not sacrificing performance; "pruned to at least 50% sparsity in one-shot, without any retraining" (a toy pruning illustration follows this table). | [Paper](https://arxiv.org/abs/2301.00774), [Tweet](https://twitter.com/dair_ai/status/1612153102513360901?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 5) **ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders** - a performant model based on a fully convolutional masked autoencoder framework and other architectural improvements. CNNs are striking back! | [Paper](https://arxiv.org/abs/2301.00808), [Code](https://github.com/facebookresearch/convnext-v2), [Tweet](https://twitter.com/dair_ai/status/1612153104329281538?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 6) **Large Language Models as Corporate Lobbyists** - with more capabilities, we are starting to see a wider range of applications with LLMs. This paper utilized large language models for conducting corporate lobbying activities. | [Paper](https://arxiv.org/abs/2301.01181) , [Code](https://github.com/JohnNay/llm-lobbyist), [Tweet](https://twitter.com/dair_ai/status/1612153106355130372?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 7) **Superposition, Memorization, and Double Descent** - aims to better understand how deep learning models overfit or memorize examples; interesting phenomena observed; important work toward a mechanistic theory of memorization. | [Paper](https://transformer-circuits.pub/2023/toy-double-descent/index.html), [Tweet](https://twitter.com/dair_ai/status/1612153108460892160?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 8) **StitchNet: Composing Neural Networks from Pre-Trained Fragments** - new idea to create new coherent neural networks by reusing pretrained fragments of existing NNs. Not straightforward but there is potential in terms of efficiently reusing learned knowledge in pre-trained networks for complex tasks. | [Paper](https://arxiv.org/abs/2301.01947), [Tweet](https://twitter.com/dair_ai/status/1612153110452903936?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 9) **Iterated Decomposition: Improving Science Q\&A by Supervising Reasoning Processes** - proposes iterated decomposition, an approach to improve Science Q\&A through a human-in-the-loop workflow for refining compositional LM programs. | [Paper](https://arxiv.org/abs/2301.01751), [Code](https://github.com/oughtinc/ice), [Tweet](https://twitter.com/dair_ai/status/1612153112638402562?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
| 10) **A Succinct Summary of Reinforcement Learning** - a nice overview of some important ideas in RL. | [Paper](https://arxiv.org/abs/2301.01379), [Tweet](https://twitter.com/dair_ai/status/1612153114773053446?s=20&t=ChwZWzSmoRlZKnD54fsV6w) |
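
To make the one-shot pruning claim in the SparseGPT entry above concrete, here is a toy illustration using plain magnitude pruning of a weight matrix to 50% unstructured sparsity. Note that SparseGPT itself relies on a much smarter layer-wise, approximate second-order procedure to choose which weights to drop and how to update the remaining ones; this naive baseline only shows what "50% sparsity" means.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights (a naive baseline, not SparseGPT's method)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.randn(8, 8)
pruned = magnitude_prune(w, 0.5)
print("fraction of zeroed weights:", np.mean(pruned == 0.0))  # roughly 0.5
```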
---
We use a combination of AI-powered tools, analytics, and human curation to build the lists of papers.
[Subscribe to our NLP Newsletter](https://nlpnews.substack.com/) to stay on top of ML research and trends.
Join our [Discord](https://discord.gg/FzNtjEK9dg).
", Assign "at most 3 tags" to the expected json: {"id":"6425","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"