Table of contents
- (Paper Summary) Apple Intelligence Foundation Language Models
- (Paper Summary) Aya Vision; Expanding the worlds AI can see
- (Paper Summary) DeepSeek-V2; A Strong, Economical, and Efficient Mixture-of-Experts Language Model
- (Paper Summary) DeepSeek-V3 Technical Report
- (Paper Summary) GRANITE 3.0 LANGUAGE MODELS
- (Paper Summary) Gemma 2; Improving Open Language Models at a Practical Size
- (Paper Summary) Gemma 3 Technical Report
- (Paper Summary) Hunyuan-Large; An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
- (Paper Summary) InternLM-XComposer-2.5; A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
- (Paper Summary) OLMoE; Open Mixture-of-Experts Language Models
- (Paper Summary) Ovis; Structural Embedding Alignment for Multimodal Large Language Model
- (Paper Summary) Phi-3 Technical Report; A Highly Capable Language Model Locally on Your Phone
- (Paper Summary) Phi-4 Technical Report
- (Paper Summary) Phi-4-Mini Technical Report; Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs
- (Paper Summary) QWEN2 TECHNICAL REPORT
- (Paper Summary) Qwen2.5-Omni Technical Report
- (Paper Summary) The Llama 3 Herd of Models
- (Paper Summary) olmOCR; Unlocking Trillions of Tokens in PDFs with Vision Language Models
- (Model Summary) LLaMa3
- (Model Summary) NVIDIA Llama Nemotron
- (Blog Summary) Aya Expanse; Connecting Our World
- SigLIP 2; Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features