Reinforcement learning may boost LLMs today, but it cannot deliver safe, long-term intelligence. We argue for System 2 learning instead.
Elicit develops LLM-based auto-evals to balance scale, trust, and flexibility, ensuring reliable scientific reasoning at superhuman speed.
We evaluate an automated approach for catching hallucinations in paper abstracts, aiming for consistently more trustworthy results.