Table of contents
- (Paper Summary) Byte Latent Transformer: Patches Scale Better Than Tokens
- (Paper Summary) Gecko: Versatile Text Embeddings Distilled from Large Language Models
- (Paper Summary) Large Concept Models: Language Modeling in a Sentence Representation Space
- (Paper Summary) MTEB: Massive Text Embedding Benchmark
- (Paper Summary) Matryoshka Representation Learning
- (Paper Summary) One Embedder, Any Task: Instruction-Finetuned Text Embeddings
- (Paper Summary) RoFormer: Enhanced Transformer with Rotary Position Embedding
- (Paper Summary) Text Embeddings by Weakly-Supervised Contrastive Pre-training
- (Paper Summary) jina-embeddings-v3: Multilingual Embeddings With Task LoRA