
Anthropic is stepping up its game with a new feature for its Claude AI models: the Citations API. This tool lets Claude pull information directly from source documents, cutting down on those pesky “hallucinations” where AI makes stuff up. It’s like giving Claude a built-in fact-checker, and early users are already seeing promising results.
Here’s how it works: developers can now add a simple parameter, citations: {enabled: true}, to any document they send through the API. This tells Claude to cite its sources, making its responses more accurate and trustworthy. According to Anthropic, this capability has been part of Claude’s training all along; the new parameter simply makes it easier for developers to use.
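For a concrete sense of what that looks like, here is a minimal sketch using Anthropic’s Python SDK with a plain-text document block. The document contents, title, and question are placeholders; the overall request shape follows Anthropic’s published Messages API format, but treat it as an illustration rather than a drop-in snippet.

```python
# Sketch: enabling Citations on a document sent through the Messages API.
# Assumes the official "anthropic" Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The source document Claude should ground its answer in.
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "The grass is green. The sky is blue.",
                    },
                    "title": "Sample document",
                    # The parameter described above: ask Claude to cite this document.
                    "citations": {"enabled": True},
                },
                {"type": "text", "text": "What color is the grass?"},
            ],
        }
    ],
)

# Text blocks in the response can carry citations pointing back into the source.
for block in response.content:
    if block.type == "text":
        print(block.text)
        for citation in getattr(block, "citations", None) or []:
            print("  cited:", citation.cited_text)
```

In practice, each cited passage in the response is tied to a specific span of the supplied document, which is what lets downstream applications show readers exactly where a claim came from.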
Early adopters are thrilled. Thomson Reuters, which uses Claude for its CoCounsel legal AI platform, says the feature will help reduce errors and build trust in AI-generated content. Meanwhile, financial tech company Endex reported that Citations slashed their source confabulations from 10% to zero, while boosting the number of references per response by 20%.
But let’s not get too carried away—experts caution that relying on AI for accurate sourcing still comes with risks. The technology needs more real-world testing to prove its reliability. Still, Anthropic is charging ahead, offering the feature for its Claude 3.5 Sonnet and Haiku models through both its own API and Google Cloud’s Vertex AI platform.
As for pricing, Anthropic is keeping it simple. Quoted text in responses won’t count toward output token costs, and sourcing a 100-page document will cost around $0.30 with Claude 3.5 Sonnet or just $0.08 with Claude 3.5 Haiku. Not bad for a smarter, more reliable AI assistant!
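To see roughly where those figures come from, here is a back-of-the-envelope estimate. The per-page token count (about 1,000 tokens) and the per-million-token input rates below are assumptions on our part, not numbers from Anthropic’s announcement; they are chosen simply because they reproduce the quoted $0.30 and $0.08 figures.

```python
# Back-of-the-envelope cost estimate for sourcing a 100-page document.
# Assumed (not from Anthropic): ~1,000 input tokens per page, and input rates of
# $3.00 per million tokens for Claude 3.5 Sonnet and $0.80 for Claude 3.5 Haiku.
PAGES = 100
TOKENS_PER_PAGE = 1_000
INPUT_RATE_PER_MILLION_TOKENS = {
    "claude-3-5-sonnet": 3.00,
    "claude-3-5-haiku": 0.80,
}

input_tokens = PAGES * TOKENS_PER_PAGE
for model, rate in INPUT_RATE_PER_MILLION_TOKENS.items():
    cost = input_tokens * rate / 1_000_000
    print(f"{model}: ~${cost:.2f} to source a {PAGES}-page document")
# -> roughly $0.30 (Sonnet) and $0.08 (Haiku), in line with the quoted figures.
```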