From scribes to diagnostics and daily task management, more artificial intelligence (AI) tools seem to enter the veterinary market every day. Flashy marketing and lofty promises might catch your attention, but how can you determine which of these tools is worth trying, and which could cause more trouble than they’re worth?
Dr. Karen Bolten, The Business Vet, has spent years assessing and exploring AI in veterinary medicine. She helps practices learn how to separate fact from hype, and how to choose tools that address clinic pain points without exposing them to safety and security risks. Here’s what clinics need to know.
Ask three basic questions about AI
According to Dr. Bolten, any evaluation of AI in veterinary medicine should start with three foundational questions. “First, is it safe? Second, does it do what it says it does? Third, is it usable?” she said. “Those are the most important questions to ask before jumping in.”
Safety: What does the privacy policy say?
In the US, minimal regulations surrounding AI development and use mean that the burden of data protection falls on the user. AI developers don’t have to abide by specific standards, which means some tools might mishandle your data without your knowledge.
Dr. Bolten noted that when you read through the privacy policy, you may be surprised at what you find. “I’ve read policies that basically say, ‘We’re going to use all of your data and we’re going to keep it forever,’” she recalled.
Unfortunately, this scenario is all too common. Some less-than-reputable companies publish lengthy, hard-to-read legalese to obscure their true intentions. Frustrated by this, Dr. Bolten wanted a way to analyze privacy policies and quickly compare them. Fittingly, she used AI to sort through them, and found that companies fall on a broad spectrum from good to bad.
Here are the green flags to look for when evaluating AI tools for data safety and security:
- Voluntary GDPR compliance
- Clearly worded explanations of how your data is handled
- An option to opt out of data collection for training purposes
Efficacy: Does it do what it says it does?
Limited information about how AI tools are developed, what data they were trained on, and how companies measure performance and accuracy makes the efficacy question challenging to answer.
“Companies don’t often disclose how they’re training their AI models,” said Dr. Bolten. “That’s really concerning, because I don’t know what biases might be built into the model I’m using. I need to know what I'm using, just like any other medical service or product.”
Veterinary professionals expect the tests they use on patients to be clinically validated, with established sensitivity and specificity. The same should be true for AI in veterinary medicine, yet most tools are launched without peer-reviewed research or even a basic white paper. Dr. Bolten advises that whenever a tool makes clinical claims, you should ask the company for evidence.
Here are some green flags to look for when it comes to AI tool efficacy:
- Peer-reviewed studies
- Published white papers
- Disclosure of training and validation data
- A discussion of limitations
Conversely, a lack of data is a major red flag. “There's a very fine line between giving up proprietary data versus being honest about what's happening,” said Dr. Bolten. “Companies don’t have to give up all their secrets, but you need to know that it’s safe to use on patients or in practice.”
"Companies don't have to give up all their AI secrets, but you need to know that it's safe to use on patients or in practice."
—Dr. Karen Bolten
Usability: Does it actually help your team?
Even the most secure and effective AI in veterinary medicine is useless if it doesn’t meet your team’s specific needs. Dr. Bolten recommends taking stock of individual or clinic pain points and then finding tools that address them.
“I might develop something for myself, which makes sense to me, but then I give it to someone else, and it makes no sense to them,” she said. “Does it address the user’s pain points? Do the users even understand what their pain points are?”
Some practices adopt AI because they think they should, but lack a clear purpose or plan. Identify your biggest pain point. For example, are your doctors overwhelmed with records? Are your CSRs drowning in calls? Match the tool to the problem, not the other way around.
Other aspects of usability include how well a tool integrates with your current software systems, the tool's learning curve, and whether the company offers a free trial period to help you truly assess compatibility.
Integrated versus native AI tools
Taking advantage of the tools already built into your practice management platform is a convenient option if those tools address your pain points. If they don’t, consider adopting a third-party tool that integrates well with your existing software.
Dr. Bolten notes there are pros and cons to each. Built-in tools often offer a better user experience and reduce the number of companies you’re dealing with, but may offer less functionality. Third-party tools may have advanced capabilities, but they can introduce friction between software programs not originally designed to work together.
Ultimately, the best choice depends on your clinic’s goals and resources. Don’t assume that one type of tool is always better. “Let’s say you have a third-party service that is not integrating with your software,” said Dr. Bolten. “Does the benefit of that third-party service outweigh whatever you have to do to make it work? Sometimes the answer is yes, sometimes it's no.”
Push for AI transparency
AI in veterinary medicine is a fast-moving ecosystem, filled with promise but also riddled with potential problems. As a veterinary professional, your job is to critically evaluate AI, just as you would any other medical test or tool. Dr. Bolten advises asking questions and supporting companies that go above and beyond to provide the transparency and compliance you need to place your trust in them.
Key takeaways
- Ask three critical questions when evaluating AI tools: Is the AI safe? Does it do what it says it does? Is it usable in your clinic?
- Read the privacy policy in full; look for GDPR compliance, opt-out options, and transparency in data handling.
- Don’t trust hype; look for peer-reviewed studies, white papers, and disclosed training data that validate a tool’s claims.
- Choose tools that solve problems in your workflow, rather than selecting the trendiest programs.
- Convenience and security may favor built-in tools, but advanced functionality often comes from specialized third-party solutions.
Secure and transparent AI solutions from Provet
Provet’s built-in AI tools are designed with transparency, data security, and real clinical workflows in mind. With GDPR-compliant practices and clearly documented privacy policies, our thoughtfully developed and implemented AI tools help you work smarter, not harder.
Book a demo with our Provet Cloud team to experience our safe, ethical, and usable AI tools.