My wife is a doctor at a major university. They are under pressure right now and are looking to increase revenue. Changing the way they document cases can substantially alter the billing outcome. Note that these are not errors; they are omissions in the note of work actually done, which prevent the downstream billing experts from using higher-paying codes.
They have been aware for a few years that many clinicians aren’t documenting their work in the best way for billing. The current solution is to have an annual talk given by the one billing expert in their department pointing out where people often lose revenue due to poor documentation.
Not all the doctors attend this talk. There is no internal process for measuring subsequent improvements quantitatively. There are 85 doctors in her group.
Anyway, this is just to say that something automated to help doctors document their work in a billing-friendly way seems powerful. But for my wife’s group, the issue doesn’t seem to be denied claims or “errors” per se. It’s more omissions and suboptimal documentation due to lack of knowledge, or lack of follow-through on knowledge that is only occasionally communicated.
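Even a crude automated pass could catch these omissions before a note is signed. A toy sketch of the idea, where the required elements and the matching rule are invented for illustration and are not from any real billing guideline:

```python
# Hypothetical sketch: flag notes that never mention elements which
# typically support higher-paying codes. The element list and the
# naive substring matching are illustrative stand-ins only.

REQUIRED_FOR_HIGHER_CODES = {
    "review_of_systems": "Document the systems reviewed, even if negative.",
    "time_spent": "Total time supports time-based billing.",
    "medical_decision_making": "Spell out the complexity of decisions made.",
}

def audit_note(note_text: str) -> list[str]:
    """Return reminders for elements the note never mentions."""
    findings = []
    lowered = note_text.lower()
    for element, tip in REQUIRED_FOR_HIGHER_CODES.items():
        if element.replace("_", " ") not in lowered:
            findings.append(f"Missing '{element}': {tip}")
    return findings

# The note below mentions time spent, so only the other two fire.
print(audit_note("Patient seen for follow-up. Time spent: 25 minutes."))
```

A real system would need clinical NLP rather than substring checks, but the shape (note in, prioritized reminders out, before signing) is the point.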
Congrats on the launch, but no link to your website? What EHRs do you integrate with, and why did you choose those to start with? Do physicians need to leave the EHR to use your app? In my experience, that ends up being a non-starter/huge impediment to usage.
Congrats on the launch! I have a few questions (though I know very little about this space):
1. How often is the cause of a denied insurance claim a documentation error vs an intentional denial from an insurance company (either an automated system or medical reviewer)?
2. This feels very conceptually similar to an AI review bot, but the threshold for false positives feels higher. What does the process look like for double checking a false positive in the agent orchestration layer?
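Not the OP, but one common pattern for the false-positive problem is a second, independent verifier pass over each flagged issue before anything reaches a human. A toy sketch, with stand-in functions where model calls would go:

```python
# Toy two-stage review pipeline: a detector proposes findings, then an
# independent verifier re-checks each one before anything is surfaced.
# Both stages are stand-ins for model or rules-engine calls.

from dataclasses import dataclass

@dataclass
class Finding:
    claim_id: str
    issue: str
    detector_confidence: float

def verify(finding: Finding) -> bool:
    """Second pass: re-examine with a stricter threshold (stand-in)."""
    return finding.detector_confidence >= 0.9

def review(findings: list[Finding]) -> list[Finding]:
    # Only findings that survive the second pass reach a human reviewer.
    return [f for f in findings if verify(f)]

surfaced = review([
    Finding("c1", "missing signature", 0.95),
    Finding("c2", "possible upcoding", 0.60),
])
print([f.claim_id for f in surfaced])  # only high-confidence findings remain
```

Whether the verifier is a second model, a rules engine, or a human sample audit is the interesting design question here.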
Very cool to see an early stage company doing this! I always hear that healthcare has a lot of red tape to handle so it's hard for startups to operate without tons of VC funding.
How'd you guys find your initial users and figure out rehab clinics were a good place to start?
What was integrating with PowerChart and Epic like? Maybe they've improved in the last ten years, but the interfaces for both still seemed pretty awful.
Respect for the work, but recommend a pivot to Epic IT integrators as your target customer...
(1) Don't confuse medical errors with claims errors. Your claims-amplifying customers don't really care about medical errors; they're mainly just optimizing their extraction from government and insurance payment systems. (And the vast majority of medical errors take significant skill to detect - beyond even complicated decision support systems.)
For claims errors, I would rather the system provided feedback to the Epic EHR engineers than tried to block providers. Epic IT should be getting regular reports that prompt them to fix their UI issues.
But then I care more about fixing the Epic UI than claims.
(2) Epic and other EHRs are an epic UI failure (not surprising, since they were driven not by user need but by top-down requirements). They have random, super-complicated form UIs that force users into multi-step workflows to say something trivial.
Today in medicine, the logistics of interfacing with Epic and other EHRs take longer than the actual care. (Just imagine having to use a compiler that took longer than you did to write the code.) It's the scourge of medical care today.
In that context, imagine: you want to build a system that argues with providers when they're done, based on AI logic completely separate from the Epic system logic? It's hard to imagine a better way to make a bad situation worse.
What would be a huge benefit instead is an AI tester for Epic. Something that could generate all the ways users might see the UI and need to use it, and quantify all the confusing and unnecessary visuals and steps, to actually measure usability. Think user modeling and fuzzing, coupled with progressive pruning of workflows, with actual metrics of system and workflow complexity.
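To make the fuzzing-plus-pruning idea concrete, here is a minimal sketch over an invented screen graph: enumerate the click-paths between two screens, prune cycles, and score each workflow by step count as a crude stand-in for the complexity metrics described above:

```python
# Sketch of UI workflow fuzzing: SCREENS is a toy, invented navigation
# graph (screen -> reachable next screens), nothing from a real EHR.

SCREENS = {
    "chart": ["orders", "notes"],
    "orders": ["order_form", "chart"],
    "order_form": ["confirm"],
    "notes": ["note_form"],
    "note_form": ["confirm"],
    "confirm": [],
}

def workflows(start: str, goal: str, path=None):
    """Enumerate all acyclic paths (workflows) from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in SCREENS[start]:
        if nxt not in path:          # prune cycles
            yield from workflows(nxt, goal, path)

def complexity(path: list[str]) -> int:
    return len(path) - 1             # crude metric: number of steps

paths = sorted(workflows("chart", "confirm"), key=complexity)
for p in paths:
    print(complexity(p), " -> ".join(p))
```

A real tester would drive the actual UI and weight steps by things like field count and visual clutter, but even step-count over an extracted screen graph gives a comparable number per workflow.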
That usability testing would probably be useful in other domains, too. But starting with Epic would be good because it has so many UI errors (high signal-to-noise) and saving time for highly-paid, highly-blocked users translates directly to dollars. You could sell it to every Epic integrator in the US. Those customers are easy to find and target, they have strong needs, and they can work with you as your technology evolves (and filter the inaccuracies). By giving them objective measures of usability/complexity, you simplify their design space and give them a clean way to measure system improvement, reducing the level of politics they endure.
Along the way you would build AI models over user interactions (instead of tokens). Then you could build interactive, auto-completing UIs that work from session observation and voice alone.
Unless I'm searching or researching, I don't want AI to replicate what's been said. I want AI to anticipate what I'm doing, and afford me the choices I need to make. That's exactly the model of diagnosis and treatment.
Launch HN: WorkDone (YC X25) – AI Audit of Medical Charts
71 points by digitaltzar 22 May 2025 | 58 comments
I have extensive experience with it and am willing to help. Information is at https://worldvista.org, https://hardhats.org, and https://va.gov/vdl.
How do you get the medical professionals to see a "diff" of the changes to approve? Is there versioning in the EHR tools?
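Not the OP, but for the diff half of the question, here is a minimal sketch using Python's stdlib difflib; the note text is invented, and EHR-side versioning APIs vary by vendor and aren't shown:

```python
# Sketch: show a clinician a unified diff of a note before any
# suggested change is applied. Note contents are made up.

import difflib

before = "Pt seen for knee pain.\nPlan: PT referral.\n"
after = ("Pt seen for knee pain, duration 3 weeks.\n"
         "Time spent: 20 min.\n"
         "Plan: PT referral.\n")

diff_text = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="note_v1", tofile="note_v2",
))
print(diff_text)
```

Whether the EHR itself keeps note versions (so the diff can live inside it) or the vendor has to snapshot before/after text is exactly the integration question being asked.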
Is this even possible in the EU with the GDPR and its stricter rules on medical data?