Table of contents
- (Paper Summary) Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
- (Paper Summary) LONG-FORM FACTUALITY IN LARGE LANGUAGE MODELS
- (Paper Summary) PROVER-VERIFIER GAMES IMPROVE LEGIBILITY OF LLM OUTPUTS
- (Paper Summary) Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
- (Paper Summary) STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning
- (Paper Summary) Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation