At the end of each year, I always take some time to reflect on the past year and consider the path ahead. Over the last few weeks of 2024, I did a lot of soul-searching about the future of SteveBizBlog and its role in the evolving digital landscape.
Over the past 11 years, SteveBizBlog has grown into a trusted resource for small business owners, featuring over 1,200 posts and more than 1,000,000 words of expert advice, the equivalent of 17 full-length business books. However, with search habits shifting away from traditional keyword-based searches to conversational AI, I realized the pressing need to evolve. Faced with the “adapt or die” reality of this changing landscape, I began considering what it would take to transform my extensive knowledge base into an intelligent, interactive assistant leveraging AI.
To be more specific, as I pondered the future of years of work, I recognized that the age of stand-alone blogs as a primary resource for small business research was fading. While I have implemented sophisticated search features on SteveBizBlog to keep pace with my growing content repository, the site still fundamentally relies on traditional keyword-based search, much like the way we all interact with search engines such as Google. However, the paradigm is shifting to a conversational interface with semantic understanding.
Personally, I’ve found myself using conversational AI tools like ChatGPT for answers far more often than traditional keyword-based search—and I’m sure I’m not alone. Websites that depend on keyword-based search rely heavily on Search Engine Optimization (SEO) to improve their organic reach and compete for a spot on the highly coveted first page of the Search Engine Results Page (SERP). However, with the exponential growth in the volume of web content, chasing web traffic through SEO increasingly feels like an outdated strategy, rooted in a model that can no longer keep up with this oversaturation of available information.
Given that keyword-based search would soon no longer be the answer and that contextual conversational AI was the future, I began exploring ways to make my content more appealing to Large Language Models (LLMs) like ChatGPT, hoping it might cite my pages as sources to improve SteveBizBlog’s organic reach. However, as I thought more about optimizing my content for LLMs, a significant issue based on my experience with GPTs like ChatGPT and Copilot surfaced.
When using ChatGPT during interactive research sessions with my clients, I occasionally encountered hallucinations—instances where ChatGPT fabricated facts or cited sources that didn’t exist. This highlighted a clear need for a contextual conversational AI tool specifically tailored to the challenges small business owners face, built on vetted content, such as the 1,200+ posts on SteveBizBlog.
The big question, though, was this: how could a self-funded effort like SteveBizBlog make this transition to a highly specialized and trustworthy conversational AI?
To explore ways to leverage the content I had already developed, I turned to ChatGPT. I asked it to deconstruct my business model using first principles and reimagine it within the context of the rise of conversational AI—all while adhering to a very restricted budget, as SteveBizBlog is entirely self-funded.
After an in-depth discussion, ChatGPT and I developed a framework for a potential solution. I then evaluated this framework by applying a series of assessment criteria, including its novelty, feasibility, specificity, impact, and workability. The outcome was a detailed project proposal, complete with an action plan broken down into clear implementation steps.
To quickly develop a Minimum Viable Product (MVP) to test some of my lingering assumptions, I used a WordPress plugin called WP All Export to extract the content from my blog and save it as a spreadsheet. Using this dataset, I created a custom GPT using ChatGPT to gather feedback from my peers as part of a proof of concept.
After seeing the potential and possibilities of a conversational AI bot trained exclusively on my content, I decided to take the next step. I shared my project proposal on Upwork, seeking an expert in AI chat development to help transform my MVP into a Minimum Marketable Product (MMP). I included the detailed project proposal, complete with an action plan, and invited bids to bring the project to life.
Posting this job turned out to be a truly enlightening experience. Many of the responses I received highlighted gaps in my understanding of AI solutions. As I often remind my clients, “You don’t know what you don’t know,” and this project drove that point home more than ever.
The technical details in many of the responses were beyond my expertise, prompting me to dive deeper into areas like structuring data with JSON, understanding how vector databases categorize, map, and index data for faster access, and exploring Retrieval-Augmented Generation (RAG) frameworks. But I’m getting ahead of myself.
What follows is a detailed account of how we transformed my MVP into an MMP, SteveBizBot. The mastermind behind this elegant solution was James Allen, a highly skilled professional I hired on Upwork. Remarkably, James created a proof of concept even before I brought him on board, which made his bid stand out above the rest and convinced me to do whatever it took to secure his expertise. Without his invaluable contributions, the deployment of SteveBizBot would not have been possible.
From Words to Data
The first step in the journey was to scrape the blog content from my WordPress website. Each post, along with its metadata, such as the title, excerpt text, tags, categories, and URL, was parsed and organized into a JSON (JavaScript Object Notation) file—a structured, machine-friendly format that made transferring and processing the data straightforward. With the data organized in the JSON file, the groundwork for building an AI-powered assistant was laid.
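To make this concrete, a single exported post record might look like the sketch below. The field names here are illustrative stand-ins, not the exact schema our export produced.

```python
import json

# One illustrative blog-post record; the field names are hypothetical,
# not the exact schema produced by the WordPress export.
post = {
    "title": "How to Validate a Business Idea",
    "url": "https://www.stevebizblog.com/example-post/",
    "excerpt": "A short summary of the post...",
    "tags": ["validation", "startups"],
    "categories": ["Strategy"],
    "content": "Full post text goes here...",
}

# Serialize the record to a JSON string, one record per post.
record = json.dumps(post)
print(record)
```

Because JSON round-trips cleanly, the same record can be loaded back into any downstream tool, such as an embedding pipeline, without custom parsing.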
Making Sense of It All
The next challenge was to transform this textual data into something an AI could work with. This involved converting the records in the JSON file into vector representations using a pre-trained OpenAI embedding model. Think of a vector embedding as a long list of numbers assigned to a piece of text, arranged so that passages with similar meanings end up numerically close together, which lets the AI identify patterns and relationships. These vectors were then imported into Pinecone’s serverless vector database.
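Here is a toy sketch of what “numerically close” means for embedding vectors. The three-dimensional vectors below are made up for illustration; a real embedding model such as OpenAI’s produces vectors with hundreds or thousands of dimensions, but the comparison works the same way.

```python
import math

def cosine_similarity(a, b):
    """Score how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made toy vectors; a real embedding model would generate these from text.
vec_funding = [0.9, 0.1, 0.0]   # "How do I fund my startup?"
vec_loans   = [0.8, 0.2, 0.1]   # "Small business loan options"
vec_hiring  = [0.1, 0.9, 0.2]   # "Hiring your first employee"

print(cosine_similarity(vec_funding, vec_loans))   # high score: related topics
print(cosine_similarity(vec_funding, vec_hiring))  # low score: unrelated topics
```

A vector database like Pinecone essentially runs this kind of similarity comparison at scale, returning the stored vectors closest to a query vector.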
Why Pinecone? Pinecone offered us a flexible, pay-as-you-go pricing model based on usage, making it ideal for scaling as SteveBizBot gains more traction. With no upfront or fixed costs, I can keep expenses entirely variable, which is always a smart strategy when starting out, and the level of user adoption is still uncertain.
In addition to its cost efficiency, Pinecone excels at efficient indexing, creating a robust “memory” for SteveBizBot. Leveraging semantic connections in my data enables the bot to perform rapid and accurate searches, ensuring users receive relevant, contextual answers to their queries. This made Pinecone an excellent choice for building and maintaining high-performing AI-driven applications.
Building the Chatbot Brain
Since Pinecone does not provide hosting or application development, once the data was indexed and stored in Pinecone, we set up API access so that Vercel, the platform we chose for hosting the backend endpoints, could query the vector database on Pinecone.
Finally, we wrapped it all in a chatbot framework built with Python and OpenAI’s GPT-4o-mini model, another way to keep the rollout costs manageable while delivering reliable performance.
Here’s how it works:
- A user submits a query.
- The query is processed through a framework called Retrieval-Augmented Generation (RAG). This ensures that the chatbot pulls its responses directly from the blog’s data, greatly reducing the risk of AI hallucinations, one of the goals of this project.
- The user’s query, combined with the data retrieved, is fed into ChatGPT, which crafts the response to the user using Natural Language.
- Each response to the user cites the blog post(s) used, giving users a clear source for more in-depth reading.
- Since the chat is retained in memory, the user can continue to interact with the bot by asking follow-up questions until all their immediate questions are answered.
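Stripped of the API calls, the retrieve-then-generate loop above can be sketched as follows. The retrieval scoring and function names are simplified stand-ins: in production, retrieval queries Pinecone’s vector index and generation calls GPT-4o-mini, neither of which is shown here.

```python
# Minimal sketch of the RAG flow, with retrieval stubbed out and the
# final model call omitted. Names and fields are illustrative only.

def retrieve(query, index):
    """Stand-in for a Pinecone similarity search: return the best-matching posts."""
    # Toy scoring: count words shared between the query and each post.
    words = set(query.lower().split())
    scored = sorted(index, key=lambda p: -len(words & set(p["content"].lower().split())))
    return scored[:2]

def build_prompt(query, posts):
    """Combine the user's query with retrieved content so the model answers only from it."""
    context = "\n\n".join(f"Source: {p['url']}\n{p['content']}" for p in posts)
    return (
        "Answer the question using ONLY the sources below, and cite their URLs.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

index = [
    {"url": "https://www.stevebizblog.com/post-a/", "content": "funding a small business with loans"},
    {"url": "https://www.stevebizblog.com/post-b/", "content": "hiring your first employee legally"},
]

query = "How do I get funding for my business?"
prompt = build_prompt(query, retrieve(query, index))
print(prompt)
```

Grounding the prompt in retrieved sources, and instructing the model to cite their URLs, is what lets each response link back to the original blog posts.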
SteveBizBot’s Unique Value
What really sets SteveBizBot apart from a generic GPT backed by a general-purpose LLM is its source knowledge. Every piece of knowledge SteveBizBot draws on comes from my blog, which I personally researched and vetted. This guarantees users access to reliable, well-researched information, unlike generic chatbots that may pull from unvetted sources. Moreover, each response is linked to its original post, giving users a chance to dive deeper into the topics that matter to them.
Conclusion
Navigating a changing landscape requires adaptability and a willingness to embrace new approaches. Here are some key lessons you can take away from my journey:
- Regularly review your offerings: Ensure they remain relevant and aligned with the evolving needs of your audience or market.
- Leverage tools like ChatGPT: Use innovative technologies to reimagine the future and explore fresh opportunities.
- Start with a proof-of-concept MVP: Build a minimum viable product to gather early feedback and refine your ideas.
- Engage with experts: Seek out advice, remain open to constructive criticism, and conduct your own research to validate their input.
- Step outside your comfort zone: Embrace challenges to expand your knowledge and expertise.
- Document your experiences: Writing about your journey—whether in a blog post or procedure—helps solidify your learning and share insights with others.
By keeping these lessons in mind, you’ll be better equipped to adapt, innovate, and grow in a dynamic world.
What can you learn from my journey that you can apply to your business?