As data folks, we’re comfortable with tabular data…
We can also handle words, JSON, XML feeds, and pictures of cats. But what about a cardboard box stuffed with things like this?

The information on this receipt wants so badly to be in a tabular database somewhere. Wouldn’t it be nice if we could scan all of these, run them through an LLM, and save the results in a table?
Lucky for us, we live in the era of Document AI. Document AI combines OCR with LLMs and lets us build a bridge between the paper world and the digital database world.
All of the major cloud vendors have some version of this…
Here I’ll share my thoughts on Snowflake’s Document AI. Apart from using Snowflake at work, I have no affiliation with Snowflake. They didn’t commission me to write this piece and I’m not part of any ambassador program. All of that is to say I can write an unbiased review of Snowflake’s Document AI.
What is Document AI?
Document AI lets users quickly extract information from digital documents. When we say “documents” we mean images with words. Don’t confuse this with the “documents” in document-oriented NoSQL databases.
The product combines OCR and LLM models so that a user can create a set of prompts and execute those prompts against a large collection of documents all at once.

LLMs and OCR both leave room for error. Snowflake addressed this by (1) banging their heads against OCR until it’s sharp (I see you, Snowflake developers) and (2) letting me fine-tune my LLM.
Fine-tuning the Snowflake LLM feels a lot more like glamping than some rugged outdoor adventure. I review 20+ documents, hit the “train model” button, then rinse and repeat until performance is satisfactory. Am I even a data scientist anymore?
Once the model is trained, I can run my prompts on 1,000 documents at a time. I like to save the results to a table, but you could do whatever you want with the results in real time.
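For a sense of what that looks like, here’s a minimal sketch of batch inference over a stage of scanned documents; the database, schema, stage, and model build names are placeholders, not our actual setup:

```sql
-- Run a published Document AI model build over every file in a stage
-- and save the extracted answers to a table (names are hypothetical).
CREATE OR REPLACE TABLE flu_shot_extractions AS
SELECT
    RELATIVE_PATH AS file_name,
    doc_ai_db.doc_ai_schema.flu_shot_model!PREDICT(
        GET_PRESIGNED_URL(@doc_ai_db.doc_ai_schema.flu_docs, RELATIVE_PATH),
        1  -- model build version
    ) AS extraction  -- JSON with a value + confidence score for each prompt
FROM DIRECTORY(@doc_ai_db.doc_ai_schema.flu_docs);
```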
Why does it matter?
This product is cool for a number of reasons.
- You can build a bridge between the paper and digital worlds. I never thought the big box of paper invoices under my desk would make it into my cloud data warehouse, but now it can. Scan the paper invoice, upload it to Snowflake, run my Document AI model, and wham! I have my desired information parsed into a tidy table.
- It’s frighteningly convenient to invoke a machine-learning model via SQL. Why didn’t we think of this sooner? In times past this was a few hundred lines of code to load the raw data (SQL >> python/spark/etc.), clean it, engineer features, train/test split, train a model, make predictions, and then usually write the predictions back into SQL.
- Building this in-house would be a major undertaking. Yes, OCR has been around a long time, but it can still be finicky. Fine-tuning an LLM obviously hasn’t been around very long, but it’s getting easier by the week. Piecing these together in a way that achieves high accuracy across a variety of documents could take a long time to hack together on your own. Months and months of polish.
Of course, some parts are still built in-house. Once I extract information from the document, I have to decide what to do with that information. That’s relatively quick work, though.
Our Use Case: Bring on Flu Season
I work at a company called IntelyCare. We operate in the healthcare staffing space, which means we help hospitals, nursing homes, and rehab centers find quality clinicians for individual shifts, extended contracts, or full-time/part-time engagements.
Many of our facilities require clinicians to have an up-to-date flu shot. Last year, our clinicians submitted over 10,000 flu shots along with hundreds of thousands of other documents. We reviewed every one of them manually to ensure validity. Part of the joy of working in the healthcare staffing world!
Spoiler alert: Using Document AI, we were able to reduce the number of flu-shot documents needing manual review by ~50%, and all in just a couple of weeks.
To pull this off, we did the following:
- Uploaded a pile of flu-shot documents to Snowflake.
- Massaged the prompts, trained the model, massaged the prompts some more, retrained the model some more…
- Built out the logic to compare the model output against the clinician’s profile (e.g. do the names match?). Definitely some trial and error here with formatting names, dates, etc.
- Built out the “decision logic” to either approve the document or send it back to the humans (there’s a sketch of these two steps right after this list).
- Tested the full pipeline on a bigger pile of manually reviewed documents. Took a close look at any false positives.
- Repeated until our confusion matrix was satisfactory.
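Here’s a minimal sketch of what the matching and decision steps can look like, assuming the hypothetical flu_shot_extractions table from earlier and a clinician_profiles table; the value names, join key, and confidence threshold are all illustrative, and the expiration-date check is covered further down:

```sql
-- Flatten the extracted answers and decide whether each document can be
-- auto-approved or should go back to the human review queue (names illustrative).
CREATE OR REPLACE TABLE flu_shot_decisions AS
SELECT
    e.file_name,
    e.extraction:patient_name[0]:value::STRING AS extracted_name,
    e.extraction:patient_name[0]:score::FLOAT  AS name_confidence,
    CASE
        WHEN UPPER(TRIM(e.extraction:patient_name[0]:value::STRING)) =
             UPPER(TRIM(p.full_name))
         AND e.extraction:patient_name[0]:score::FLOAT >= 0.9
        THEN 'AUTO_APPROVE'
        ELSE 'HUMAN_REVIEW'
    END AS decision
FROM flu_shot_extractions e
JOIN clinician_profiles p
  ON e.file_name = p.submitted_file_name;
```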
For this project, false positives pose a business risk. We don’t want to approve a document that’s expired or missing key information. We kept iterating until the false-positive rate hit zero. We’ll have some false positives eventually, but fewer than we have now with a purely human review process.
False negatives, however, are harmless. If our pipeline doesn’t like a flu shot, it simply routes the document to the human team for review. If they go on to approve the document, it’s business as usual.
The model does well with the clean/simple documents, which account for ~50% of all flu shots. If a document is messy or confusing, it goes back to the humans as before.
Things we learned along the way
- The model does best at reading the document, not making decisions or doing math based on the document.
Initially, our prompts tried to determine the validity of the document.
Bad: Is the document already expired?
We found it far easier to limit our prompts to questions that could be answered by looking at the document. The LLM doesn’t judge anything. It just grabs the relevant data points off the page.
Good: What is the expiration date?
Save the results and do the math downstream.
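For instance, with the raw expiration date saved off the page, the “is it expired?” question becomes a trivial query downstream (again using the hypothetical table and value names from earlier):

```sql
-- Do the date math in SQL instead of asking the model to reason about validity.
SELECT
    file_name,
    TRY_TO_DATE(extraction:expiration_date[0]:value::STRING) AS expiration_date,
    TRY_TO_DATE(extraction:expiration_date[0]:value::STRING) < CURRENT_DATE() AS is_expired
FROM flu_shot_extractions;
```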
- You still have to be thoughtful about training data
We had a few duplicate flu shots from one clinician in our training data. Call this clinician Ben. One of our prompts was, “What is the patient’s name?” Because “Ben” was in the training data multiple times, any remotely unclear document would come back with “Ben” as the patient name.
So overfitting is still a thing. Over/under sampling is still a thing. We tried again with a more thoughtful collection of training documents and things went much better.
Document AI is pretty magical, but not that magical. Fundamentals still matter.
- The model can be fooled by writing on a napkin.
To my knowledge, Snowflake doesn’t have a way to render the document image as an embedding. You can create an embedding from the extracted text, but that won’t tell you whether the text was handwritten or not. As long as the text is valid, the model and downstream logic will give it a green light.
You could fix this pretty easily by comparing image embeddings of submitted documents to the embeddings of accepted documents. Any document with an embedding way out in left field gets sent back for human review. That’s easy work, but you’ll have to do it outside Snowflake for now.
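To be concrete, once the image embeddings exist (generated outside Snowflake with whatever vision model you like) and are loaded back into vector columns, the outlier check itself is short; the table names and the 0.85 similarity threshold below are made up:

```sql
-- Flag submissions whose image embedding isn't close to any approved document.
-- Embeddings are assumed to be generated externally and loaded as VECTOR columns.
SELECT s.file_name
FROM submitted_doc_embeddings s
LEFT JOIN approved_doc_embeddings a
  ON VECTOR_COSINE_SIMILARITY(s.embedding, a.embedding) >= 0.85
GROUP BY s.file_name
HAVING COUNT(a.file_name) = 0;  -- nothing similar enough: route to human review
```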
- Not as expensive as I was expecting
Snowflake has a reputation for being spendy. And for HIPAA compliance reasons we run a higher-tier Snowflake account for this project. I tend to worry about running up a Snowflake tab.
In the end we had to try extra hard to spend more than $100/week while training the model. We ran thousands of documents through the model every few days to measure its accuracy while iterating, but we never managed to break the budget.
Better still, we’re saving money on the manual review process. The cost of the AI reviewing 1,000 documents (and approving ~500 of them) is ~20% of what we spend on humans reviewing the remaining 500. Put differently, if human review of all 1,000 documents used to cost 1,000 units, we now pay ~500 for the humans plus ~100 for the AI. All in, that’s roughly a 40% reduction in the cost of reviewing flu shots.
Summing up
I’ve been impressed with how quickly we could complete a project of this scope using Document AI. We’ve gone from months to days. I give it four stars out of five, and I’m open to giving it a fifth star if Snowflake ever gives us access to image embeddings.
Since flu shots, we’ve deployed similar models for other documents with similar or better results. And with all this prep work, instead of dreading the upcoming flu season, we’re ready to bring it on.