
Evaluating Vendors in the AI and Data Space: The Case for a Measurable POC Approach

By Sertis


As the rate of AI adoption across enterprises accelerates, and integrating agentic AI into workflows becomes a competitive necessity, there is more at stake for leaders to get AI- and data-related deployments right the first time.

Sertis has engaged in no fewer than three major rescue projects over the last 14 months, and several more are on the horizon. We believe that even more projects than reported fail to reach their pre-deployment ROI projections and the value unlock that AI can deliver. This happens across industry verticals; none are immune to the root causes.

As reported recently in InformationWeek, the #1 reason that nearly 2 out of 5 AI projects in the USA end in disappointment is poor initial expectations and the decision-making that flows from them into ROI projections. There are clearly better and worse ways of originating and deploying projects, and we see both playing out in the market.

Even within the same enterprise, we notice that experiences and outcomes with AI and data projects can differ substantially, pointing to intra-organizational differences in overall approach and the lack of a unified AI framework that is championed across business units and functions.

There are several reasons why “post-deployment disappointment” is not uncommon; they frequently occur together in the same project, and most gravitate around:

  • poor initial definition of objectives, 

  • lack of understanding of the technology, 

  • archaic decision-making processes, 

  • poor vendor selection.

In this article, we focus on vendor selection and how to avoid pitfalls.   

The AI vendor landscape has recently become crowded. Where Sertis would see an average of 2-3 competitors in a bidding situation 5 years ago, that number has tripled. Parsing through this field is difficult, and many enterprises are stuck with thinking and decision-making processes that are ill-suited to the AI space: leftovers from the hardware-centric IT era, or a lack of fundamental understanding of how data and AI projects differ from web/mobile projects. Both leave buyers susceptible to patently unrealistic proposals.

Moreover, the prohibitively high cost structures of many reputable global brand-name firms push payback periods out further than they need to be. All of this highlights why rightsizing AI initiatives and adopting the right ways to evaluate proposals and vendors are mission critical.

Too often, organizations fall into the trap of selecting vendors based on presentations that are compelling on the surface, or prior relationships, rather than on the actual performance of the AI itself. To avoid this, businesses must adopt a rigorous, measurable Proof of Concept (POC) process that ensures fair evaluation and optimal selection.



Setting Up a Measurable POC Process

A well-structured POC allows organizations both to think clearly through what they want to accomplish with their deployment and to compare AI vendors on a level playing field.

Here’s a step-by-step framework for scaled enterprises to guide the evaluation process:

1. Provide All Vendors with the Same Data Set

To ensure fairness, all vendors should receive the same dataset. It’s also advisable to hold back a portion of the data for blind testing, which will help in assessing the AI’s ability to generalize rather than just memorize patterns. If a vendor does not insist on this, do not use them.
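As a rough illustration, the split can be scripted in a few lines. The sketch below is an assumption-laden example, not part of any specific engagement: it assumes a tabular dataset in a hypothetical poc_dataset.csv with a label column named target, shares one portion with every vendor, and keeps the blind holdout entirely on the enterprise side.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical shared dataset with a label column named "target".
data = pd.read_csv("poc_dataset.csv")

# Stratify on the label so the blind holdout mirrors the class balance.
vendor_set, blind_test = train_test_split(
    data, test_size=0.2, stratify=data["target"], random_state=42
)

vendor_set.to_csv("vendor_facing_data.csv", index=False)  # shared with every vendor
blind_test.to_csv("blind_holdout.csv", index=False)       # retained for blind testing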

2. Define Clear, Measurable Goals

Before vendors start working on the POC, establish key performance indicators (KPIs) for the application, ensuring that they are fully tied to and reflect the business’s real needs. These should include:

  • Accuracy (e.g., precision, recall, F1-score)

  • Processing speed and efficiency

  • Adaptability to edge cases

  • Ease of integration with existing systems

  • Explainability and interpretability of AI decisions

By setting these parameters upfront, organizations can objectively measure each vendor’s AI capabilities, eliminating charlatanism and the tendency of many newer vendors to exploit the asymmetry of information by making outrageous performance claims to win opportunities.
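To keep the comparison concrete, a minimal evaluation sketch along the lines below can score each vendor’s submission against the blind holdout. The file and column names (vendor_predictions.csv, prediction) are assumptions for illustration only, not part of any specific toolchain.

import pandas as pd
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical files: the labels the vendors never saw, and one vendor's
# predictions for the blind holdout, submitted in the same row order.
truth = pd.read_csv("blind_holdout.csv")["target"]
preds = pd.read_csv("vendor_predictions.csv")["prediction"]

# Assumes a binary label; use average="macro" or similar for multi-class KPIs.
print("precision:", precision_score(truth, preds))
print("recall:   ", recall_score(truth, preds))
print("f1-score: ", f1_score(truth, preds))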

3. Set an Initial Submission Deadline

Require all vendors to submit their first round of results within a defined period. This keeps all vendors progressing at a similar pace and provides an early indication of their approach and methodology. Vendors with real experience, the kind that will de-risk your eventual deployment, will shine by delivering measurably better performance in a shorter timeframe.

4. Rank Results and Openly Provide Feedback

Once the initial results are in, rank each vendor against the predefined evaluation criteria. Share the rankings and performance breakdown with the vendors so they understand where they stand and what needs improvement. This lets you double down on what the enterprise wants and make any midcourse corrections that may be needed.
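A simple way to keep the ranking objective is a weighted scorecard. In the sketch below, the criteria, weights, and vendor scores are illustrative assumptions only; in practice the weights should mirror the KPIs agreed upfront in step 2.

# Weights reflect the KPIs agreed upfront; values below are purely illustrative.
weights = {"accuracy": 0.4, "speed": 0.2, "integration": 0.2, "explainability": 0.2}

# Normalised 0-1 scores per vendor on each criterion (hypothetical numbers).
scores = {
    "Vendor A": {"accuracy": 0.91, "speed": 0.70, "integration": 0.80, "explainability": 0.60},
    "Vendor B": {"accuracy": 0.85, "speed": 0.90, "integration": 0.65, "explainability": 0.75},
}

def weighted_total(vendor_scores):
    return sum(weights[c] * vendor_scores[c] for c in weights)

# Rank vendors from highest to lowest weighted score.
for vendor, s in sorted(scores.items(), key=lambda kv: weighted_total(kv[1]), reverse=True):
    print(f"{vendor}: {weighted_total(s):.2f}")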


5. Allow Vendors to Refine and Resubmit

By offering vendors multiple rounds, a “second chance” to refine their models based on feedback, organizations can confirm which vendors are experienced and not exaggerating capabilities, and also ferret out the ones that are truly adaptable and responsive, hallmarks of a good AI and data vendor. It is important to choose a vendor that understands how AI models improve over time and that commits to continuous improvement round over round. This approach simulates real-world AI deployment where, unlike buying web or mobile application software, continuous improvement is necessary and a proper hallmark of long-term success.

6. Assess Collaboration and Customization Capabilities

Throughout the POC, evaluate not just the AI’s performance but also the vendor’s ability to:

  • Work collaboratively with the organization’s team,

  • Incorporate feedback and adjust their solution accordingly,

  • Provide technical support and domain expertise, including commercial understanding,

  • Explain their activity to you in a manner all your teams can understand and that makes sense in the context of your business.

7. Make a Data-Driven Selection

At the end of the Measurable POC period, cross-functional decision-makers will have a comprehensive understanding of:

  • Which AI solution performs best on real-world data, and how and why,

  • Which vendor is easiest to work with and most responsive to feedback,

  • Which vendor can best customize their AI to fit the business’s unique needs,

  • Which vendor embraces the competitive “dance-off” rather than eschewing it,

  • Which vendor behaves more like a partner who ensures that its intelligent deployment gets more intelligent over time.

Why the Measurable POC Approach Works

The AI and data world is not the hardware-centric IT world, nor the fixed-deliverable world of web software. Evaluating directions and deciding among vendors requires an enterprise to think differently. Running a comprehensive, pre-structured, data-driven evaluation process minimizes the risk of choosing a vendor based solely on marketing materials, previous relationships, polished sales pitches, unsubstantiated claims, or even graft.

In the AI and data world, the negative impact of a poor decision can be far greater than in the software world. A measurable POC also ensures that the natural asymmetry of information between enterprise and vendor does not open the door to charlatanism, or even simply to unsubstantiated claims.

It ensures that the selected AI vendor is not only technologically superior but also the best fit for the organization’s operational and strategic needs, approaching its relationship with your enterprise more like a partner than a one-and-done vendor. It also has the benefit of making sure the enterprise does not overspend for results.  

By adopting a measurable POC process, organizations can confidently invest in AI solutions that drive real business value, ensuring a strong foundation for long-term AI success.
