Why B2B companies need more than ChatGPT for AI product strategy
This post was co-authored with Gravitate.ai.
Generative AI has been present in consumer and B2B products for several years now. ChatGPT is taking it to the next level, but that doesn’t mean it’s the right fit for every product use case.
Here’s the reality: ChatGPT alone isn’t going to work for B2B startups building generative AI into their solutions. Why? It’s not a sustainable business model to build your product on a foundation model trained on public consumer data.
Startup founders and their tech leaders are facing a classic build-versus-buy decision, except in this case the “buy” is more of a “borrow”. Any build-vs-buy assessment weighs internal assets and strengths against the cost/benefit of speed to market. We understand ChatGPT gets it done faster, but there are a few things to consider before you build your entire business offering around one trend.
Considerations for product teams developing generative AI product solutions
Many companies are going all in on the borrowing approach and using a publicly trained model for their B2B customer experiences. Here are a few strategic questions for product teams to discuss before incorporating ChatGPT into the product roadmap.
What will make your product unique if you’re using the same dataset everyone else is?
How will you control your AI data strategy when your product and customer experience depend on a training model you can’t control?
How will you manage your brand reputation and customer experience when you can’t calculate confidence levels on the results?
To dig into the viability of using ChatGPT in business solutions, we’ve polled a few experts who’ve been working with successful generative AI products built from proprietary data assets. And for those who like real-world examples, we’ve also rounded up two success stories for putting generative AI into B2B products.
Generative AI product development: 3 experts share what works and what to watch for
Davit Buniatyan, CEO of Activeloop; Eugene Malobrodsky, Partner at One Way Ventures; Gevorg Karapetyn at Zero Systems; and Qiuyan Xu, PhD, founder of Gravitate AI, share their thoughts about the overnight popularity of ChatGPT. As it turns out, while these business leaders come from different industries and product portfolios, they share similar views on how businesses can best develop generative AI product solutions.
Q: What are startups doing with ChatGPT or Google BARD that are NOT working, or you believe are not going to work?
Eugene: The current deployment of ChatGPT or Google BARD is great for answering simple questions or helping you write an email by just describing your thoughts. There are two major issues, though, that companies need to look out for. First, how do we know that the answer we got is true? Dealing with fake news in AI is an even larger problem than the one we are dealing with today. The second problem is inside corporations: answering questions based on data on the intranet. Companies need to train models on their internal information without taking data outside of their infrastructure or paying heavy computing costs to do it.
Some startups are trying to add these technologies to their product without understanding the benefit to their customers or the cost of adding these features. Companies may do this just to increase their valuations because it is the new hot thing. The key is to identify a problem you are trying to solve for your customers and understand whether adding this functionality really solves it for them.
Qiuyan: I can see how a startup leader might ask themselves, “Why build custom machine learning models when there is already an incredible infrastructure with a ready API to experiment with now?” Even though we are all about quick iteration and providing value fast, we have our own battle scars from jumping into a “quick solution” only to find out later that resources were wasted because of unclear requirements, which led to half-baked designs. Many startups have innovation teams facing a fluid environment, where the requirements for a solution can change quickly. Asking “does this really solve the problem?” should be the starting point. ChatGPT tempts all of us with AI instant gratification, which is great, but it is more fruitful to savor it AFTER actually planning the big picture for a solution. Even a little iterative planning and requirements gathering can avoid overhauling the technical infrastructure, code base, or database a few months down the road.
Q: How can startups develop competitive, sustainable generative AI products?
Davit: If a startup is going to develop a Data Moat (not just in the UX or training layer), it is going to need to double down on its infrastructure. Just like a page from an economics textbook: as the barriers to entry into the generative AI product market drop, it all comes down to unit economics and who can squeeze the most out of the data they have, as cost-efficiently as possible. We recently had an event with a speaker from Meta AI who shared that pre-training a foundational model requires burning through $30M in computing costs on experiments before meaningful training even starts.
We discovered that companies can unlock value fast by setting up a Deep Lake (a data lake optimized specifically for deep learning), which is what drove us to offer it to so many companies. Building more robust, custom MLOps (or LLMOps) capabilities to give the generative AI the right context makes it more manageable and definitely less costly.
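To make Davit’s point about context concrete, here is a minimal sketch of the pattern he describes: store internal text chunks and their embeddings in a Deep Lake dataset, then pull the most relevant chunks into the prompt at query time. It assumes Activeloop’s deeplake Python package (v3-style API) plus an embed() function you supply; the dataset path and tensor names are illustrative, not any particular company’s setup.

```python
import numpy as np
import deeplake  # Activeloop's package: pip install deeplake

def build_context_store(chunks, embed, path="./internal_docs_lake"):
    """Store internal text chunks and their embeddings in a Deep Lake dataset."""
    ds = deeplake.empty(path, overwrite=True)
    ds.create_tensor("text", htype="text")
    ds.create_tensor("embedding", htype="embedding")
    with ds:
        for chunk in chunks:
            ds.append({"text": chunk,
                       "embedding": np.asarray(embed(chunk), dtype=np.float32)})
    return ds

def retrieve_context(ds, chunks, question, embed, k=3):
    """Return the k chunks most similar to the question (cosine similarity).
    Text is persisted in the lake; for brevity we index back into the in-memory list."""
    q = np.asarray(embed(question), dtype=np.float32)
    vectors = ds.embedding.numpy()  # shape: (num_chunks, dim)
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def build_prompt(question, context_chunks):
    """Give the model curated internal context instead of asking it cold."""
    context = "\n---\n".join(context_chunks)
    return ("Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```

The same shape of pipeline works whether the store sits on a local path or in cloud storage; the point is that the retrieval layer, not the foundation model, is where the proprietary data earns its keep.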
Qiuyan: We ask our clients how they view their strategic data assets, which can be part of their Data Moat. Putting company data in the public domain is not going to develop into a differentiator, and neither is bringing public data into a customer experience without any additional guardrails or proprietary datasets. Layering your own algorithms and infrastructure on top of unique data is a recipe for success. We’ve seen it work really well when the technical and business sides come together to outline their best opportunities to differentiate with the unique data they have or can gather through their customer experience.
Key takeaway: Build AI products that are sustainable both in their unique differentiators and in how the business delivers the customer experience.
Startup founders and enterprise leaders who understand the key data assets needed to build a defensive moat (read more about building AI technical long-term competitive advantage) and who develop data-driven products will be able to make difficult trade-offs, leading their teams to prioritize product development that strengthens their AI advantage.
Two case studies demonstrating how to get it right with generative AI
There are examples out there where companies are getting it right with their AI products, including how they’ve developed a blueprint to create strategic data assets as part of a larger AI product strategy.
Several organizations are already addressing enterprise-level complexity and deploying artificial intelligence (AI) products without relying on off-the-shelf offerings. These two case studies show how companies can create custom chatbots or generative AI products that integrate human feedback and domain-specific knowledge. Each proves there are ways to keep customer data secure and adhere to data privacy standards without compromising customer trust.
ZERO bridges the gap between generative AI and enterprises while preserving high-security standards
To generate high-quality models for custom enterprise AI solutions, ZERO has applied carefully labeled, high-quality data while maintaining ethical walls and data access rules. This success would not be possible using ChatGPT as a stand-alone solution.
Company profile: ZERO, a cognitive automation company, creates smart solutions that give knowledge workers time back and drive productivity, enabling professionals to spend more time on higher-value work. Professional services teams use ZERO solutions to automate, organize, and advance their timekeeping and data management tools.
Use case: Fortune 100 companies in the professional services industries, such as legal and consulting, can increase billable hours by automating a large portion of their highly manual regular reporting and billing-related tasks.
The majority of data in corporations is unstructured, which makes it unusable for AI applications until it is structured. Since general models like GPT may not suffice for domain-specific tasks or real-world use cases, organizations must employ both general Large Language Models (LLMs) and domain-specific models fine-tuned to their data.
AI application: ZERO has created a solution that simultaneously generates structured data where there is none and maintains enterprise data integrity. For example, financial or project-based reporting can be fully automated by connecting emails, legacy systems such as CRM, and project management software through a prompt-based API and harnessing the power of LLMs.
ZERO’s AI engine, Hercules, operates within the client's security perimeter and automatically labels unstructured data. This process enriches, interconnects, and depersonalizes the data, preparing it for multi-mode. ZERO's solution also automatically inherits ethical walls and data access rules, ensuring sensitive elements are stripped from outgoing data while maintaining role-based internal access.
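As an illustration of that last step, here is a minimal sketch of depersonalizing outgoing data and enforcing role-based access before anything reaches an external model. The patterns, field names, and ethical-wall rules below are hypothetical placeholders, not ZERO’s Hercules implementation, which would use far more robust detection.

```python
import re
from typing import Optional

# Illustrative patterns for sensitive elements; a production engine would rely on
# stronger detection (NER models, client-specific dictionaries, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CLIENT_ID": re.compile(r"\bCL-\d{6}\b"),  # hypothetical internal identifier format
}

def depersonalize(text: str) -> str:
    """Strip sensitive elements from outgoing data, replacing them with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical ethical-wall rules: which roles may see which matters.
ETHICAL_WALLS = {"matter_123": {"partner", "billing"}, "matter_456": {"partner"}}

def can_access(role: str, matter_id: str) -> bool:
    """Role-based access check, inherited from the client's own rules."""
    return role in ETHICAL_WALLS.get(matter_id, set())

def prepare_for_llm(record: dict, role: str) -> Optional[str]:
    """Only depersonalized text from matters the role may access leaves the perimeter."""
    if not can_access(role, record["matter_id"]):
        return None
    return depersonalize(record["text"])
```

For example, prepare_for_llm({"matter_id": "matter_123", "text": "Bill jane.doe@client.com for CL-123456"}, role="billing") returns "Bill [EMAIL] for [CLIENT_ID]", while a role outside the ethical wall gets nothing at all.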
Outcome: This highly secure, hybrid approach to AI for regulated industries and Fortune 100 clients enables the implementation of generative AI and LLMs with high-security standards. As a result, organizations can achieve a 20x improvement in productivity and efficiency in their business processes.
TrustCloud ensures customer trust by customizing AI while building strategic data assets
For companies building a B2B customer experience that includes generative AI, it’s important to make trust a priority. That means data stays within the organization, and the most stringent data governance and privacy practices apply. For industries with a heavy focus on security and compliance, there may be concern about upholding compliance standards with publicly trained data models. Delivering an effective AI solution under these strict constraints is a serious challenge, and TrustCloud has built an AI-based solution that meets it.
Company profile: With the fastest, most cost-effective way to get audit-ready, answer security questionnaires and track risk, TrustCloud turns Governance, Risk & Compliance (GRC) into a profit center. TrustCloud empowers businesses to earn the trust of their partners and customers with a transparent, reliable compliance program. Predictive intelligence and programmatic verification ensure companies meet their customer, audit, and governance commitments so they can stay secure and grow their business.
Use case: Reduce the time and resources required to complete security questionnaires by automating responses based on prior questionnaires and a company's current security posture.
Security questionnaires are sent during the sales process, when an enterprise is determining whether it can work with a specific vendor. Currently, most companies rely on their employees to hunt down answers on an ad-hoc basis or run keyword searches in a static knowledge base to find previous answers, update them, and insert them into a questionnaire. Both paths are very time-consuming and leave a lot of room for error.
AI application: TrustCloud applies natural language processing and deep learning to create the best responses specific to a company’s security and compliance program.
Using generative AI to match security questions with answers pulled from a user’s compliance program and previous questionnaires, TrustCloud applies custom domain knowledge as part of an algorithm tuned to the security and compliance domain, incorporating a specific internal lexicon.
Similar to ChatGPT’s reward model, which uses human labeling to reinforce natural language understanding, TrustCloud has a dedicated compliance expert who frequently reviews the output of the algorithms and provides domain feedback to improve the accuracy of the results. Additionally, customers review the output, and any edits they make are fed back into the model. That human touch, along with the high accuracy, improves the quality of answers over time and builds more trust with customers. In this process, TrustCloud builds its own strategic data asset through smart annotations.
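For a sense of what this pattern can look like in code, here is a minimal sketch of matching a new security question to previously approved answers and feeding reviewer edits back into the answer bank. It assumes the sentence-transformers library with an off-the-shelf model; the example questions, threshold, and structure are hypothetical illustrations, not TrustCloud’s actual implementation.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

# Prior questionnaires: (question, approved answer) pairs from the compliance program.
answer_bank = [
    ("Do you encrypt data at rest?",
     "Yes, all customer data is encrypted at rest with AES-256."),
    ("How often are access reviews performed?",
     "Quarterly access reviews are performed for all systems."),
]
bank_embeddings = model.encode([q for q, _ in answer_bank], convert_to_tensor=True)

def draft_answer(new_question: str, threshold: float = 0.6):
    """Match a new security question to the closest previously answered one."""
    q_emb = model.encode(new_question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, bank_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return None  # low confidence: route to a human instead of guessing
    return answer_bank[best][1]

def record_review(question: str, edited_answer: str):
    """Reviewer and customer edits are fed back in, growing the strategic data asset."""
    global bank_embeddings
    answer_bank.append((question, edited_answer))
    bank_embeddings = model.encode([q for q, _ in answer_bank], convert_to_tensor=True)
```

A similarity threshold like the one above is one way to address the “can’t calculate confidence levels” concern raised earlier: low-confidence questions get routed to a human rather than answered automatically.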
Outcome: TrustCloud completes 60-70% of new questionnaires and 100% of previously-seen questionnaires, saving customers hundreds of hours that would be otherwise spent finding, updating, and inserting answers to questionnaires. The increased accuracy and efficiency ensure these customers never have to jeopardize a sales deal due to security questionnaires.
Companies building generative AI products should think beyond ChatGPT
Startups CAN avoid potential development pitfalls and create success with the right data, the right customization, and the right tooling. The popularity of ChatGPT does not change that. If anything, it puts more pressure on companies to differentiate their experience further so the end user doesn’t go elsewhere once the novelty ChatGPT currently provides wears off.
So what can B2B business leaders do to develop a product roadmap with the right guardrails for including ChatGPT? Be sure to include these three elements in the AI product strategy:
Data Moat - Developing key defensive data assets ensures B2B companies don’t miss valuable opportunities to differentiate. As legal and ethical debates intensify and the viability of training on public data diminishes, access to proprietary, high-quality data has never been more important for verticalizing your generative AI product (e.g., artwork for indie games or architectural design elements).
Customized AI algorithms - Companies will see the most success combining company knowledge with domain-specific data and internally generated company data. Tech leaders and founders should monitor trends among Large Language Model API providers, who, due to cost pressures and performance concerns, are converging towards a single, general LLM (like GPT-4). Domain-specific LLMs have the power to outperform general LLMs and provide a competitive edge (see the sketch after this list).
Infrastructure - B2B startups need to operate in a lean fashion and build their infrastructure to handle scale. Companies utilizing solutions optimized specifically for foundational model training, provided by companies like Activeloop, can improve time-to-market and ship their generative AI products faster.
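As a rough illustration of the “customized AI algorithms” point above, here is a minimal sketch of fine-tuning a small open model on a proprietary, domain-specific corpus with the Hugging Face transformers and datasets libraries. The base model, file path, and training settings are placeholders chosen for brevity, not a recommendation; a real project would weigh model licensing, evaluation, and compute budget first.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # placeholder base model; pick one suited to your domain and license
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Proprietary, domain-specific text (the "Data Moat"); the path is hypothetical.
dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-llm")
```

The same shape of pipeline applies whether the corpus is legal timekeeping notes or compliance answers; the differentiator is the data, not the training loop.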
Even with a sustainable infrastructure and strong differentiator, there are other potential blockers keeping startups from successful AI products. We are considering exploring the risks and legal implications in a second article. Please let us know your thoughts and questions and give it a like if this article was helpful.
For support with AI product strategy and development, connect with Gravitate AI and build your AI blueprint in 28 days.
For more insights into the investment space, follow us on LinkedIn and Twitter!