From Word to World: Can Large Language Models be Implicit Text-based World Models?
VFA: Empowering Multilingual MLLMs via Vision-Free Adaptation
SPPO: Sequence-Level PPO for Long-Horizon Reasoning Tasks
Towards Fair and Comprehensive Evaluation of Routers in Collaborative LLM Systems
No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning Systems
Enhancing Large Language Model Reasoning via Selective Critical Token Fine-Tuning
VisCodex: Unified Multimodal Code Generation via Merging Vision and Coding Models
G2: Guided Generation for Enhanced Output Diversity in LLMs
ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs
FANNO: Augmenting High-Quality Instruction Data with Open-Sourced LLMs Only
MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning
LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy
SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation
UniPoll: A Unified Social Media Poll Generation Framework via Multi-Objective Optimization
PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning
Cantonese Natural Language Processing in the Transformers Era, Survey and Challenges
Electric Power Grid Invulnerability Under Intentional Edge-Based Attacks