But What Can AI REALLY Do For Me?
Image by Sunny Spies
When it comes to new technology—especially innovation that has the power and potential to change the world and all of humanity—the adoption curve is never more apparent. The Innovators and Early Adopters will be vocal advocates and evangelists … and, these days, these groups also tend to be the ones building, funding, controlling, and influencing not just the products and solutions but the market, industry, infrastructure, and policies at large. They are creating the supply and the demand, or at least the perception of demand, at this point.
With news cycles happening at what feels like the blink-of-an-eye and social media ruled by algorithms that reinforce our own beliefs and perceptions, it can be easy to believe that AI is the center of all things for all people—reaching far beyond the borders of the intense and extreme funding levels of VC startup land. But, the truth is most people and business owners don’t really understand how it will help them in their day-to-day lives.
Already, the number of startups and existing companies positioning themselves as AI experts with a solution or product to solve damn near any problem is at a fever pitch. Sure, you’ve got the big, main players like the ChatGPTs, Claudes, Geminis, Copilots, etc. of the world, but you also have companies like Adobe, Apple, and Canva rolling out their versions of AI-enabled tools (some doing arguably better than others at defining what AI means for them and their customers), and then there’s a whole slew of companies solving specific problems (Midjourney, Anduril, Consensus). Wading through the options can feel overwhelming.
I thought I would share some of my personal experiences with navigating AI tools and talk a little about how MTGS is approaching integrating AI into our processes and practices.
Built for the Complex and Whimsical
The general “we” have heard about alllll of the amazing things AI can do; we’ve certainly witnessed how imaginative (and weird!) it can be … especially when hallucinating.
Want a story about your dog as an astronaut complete with visuals? Done!
Want to remove watermarks from images and have the background filled in? Done! (Though should we?!)
Want to enter a 20-part prompt to help you refine your target market? Done! Sort of.
Want an image of a sales funnel without misspellings? FAIL. Even prompting it to correct the spelling and supplying the proper spellings doesn’t fix the issue.
Want an image of a sales funnel that doesn’t look like a literal plastic funnel? FAIL.
Want to take an existing slide deck and turn it into something magical (yet usable)? FAIL.
Want an image of dancers who aren’t conspicuously (and inappropriately for the context) missing limbs, or sporting extra limbs that come from nowhere? FAIL.
Want to be able to ask your phone or other device to set a recurring alarm that goes off every 50 minutes, restarting at the top of each hour? Not a hope in Hades. (What, doesn’t everyone want to take a 10-minute break every hour?)
Each of the fails noted above is one I personally experienced within the last few weeks across ChatGPT, Claude, Canva, Shutterstock, and Gemini, amongst others.
And herein lie two big barriers to broad adoption: 1) practical daily use in the devices people are currently using and 2) trust. Most people don’t yet understand how AI can really help them on the daily (beyond entertainment and text editing). And, because simple (or seemingly simple) requests often result in meme-worthy results, people end up with a general dismissive-shrug, “meh” perspective.
Bias in AI
Bias in AI is so well-known and well-documented that both SAP and IBM have entire sections of their websites dedicated to it (in addition to the universities, non-profits, other public and private businesses, and research studies that directly address it). This is not one of those times you can simply delete all mention of specific terms and pretend (or fervently believe) it doesn’t exist.
In fact, one of the easiest examples we can give of bias in AI is in healthcare and the misrepresentation and/or under-representation of women in data sets, for the simple fact that women were largely excluded from clinical trials until 1993. For the majority of history, women were simply treated as small men. Predictive AI algorithms may produce inaccurate results for any underrepresented group; this can be seen in computer-aided diagnosis systems that return results with lower accuracy for African American persons compared to white persons.
My recent experience with bias in AI is typical and unsurprising, yet disheartening. I am a doctor of clinical nutrition. My husband was recently doing some research on behalf of my private practice. When he prompted ChatGPT for some market data and used my professional title, Dr. Fowler, DCN, the responses came back referring to me solely with male pronouns. This isn’t a pronoun issue. This stems from the deeply held bias that doctor = male.
Another example happened just last week. I was using the image generation tool in Canva; I wanted it to create an image that would be appropriate for welcoming the attendees of a women’s business group meeting. The exact prompt I used: “welcome image for a women’s business group meeting.” I’ll give them props for making the image multicultural, which I did not specify (I usually do specify the inclusion of multicultural individuals across an age range of 30-70 years old, but this time I left it out, out of sheer curiosity to see what it would create). The significant downside? One of the four options had words that were completely unintelligible; the one that had no words on it at all used essentially the same person repeated several times. The most problematic, though, was the one titled “Caucasian Women’s Business Meeting.”
I had included no mention of race in my prompt. Yet, it returned a result using a term that, here in the U.S. (not necessarily reflective of its anthropological roots), is centered on whiteness.
I want to expect more from the cutting-edge innovation that’s supposed to change our world. But without working to correct these biases, which can take significant time, effort, research, and algorithmic adjustments, the data it spits out is only as good as the data that is put in (we all remember the glue-as-a-pizza-topping debacle, which traced back to a joke comment on Reddit). Like I’ve told so many marketing and product teams when we’re designing tests: garbage in, garbage out. Dirty data doesn’t equal stellar results, even if it might appear that way. It’s like using non-randomized, unblinded data and/or not accounting for confounding variables in a clinical research study … you can accept the results, but you should maintain a certain level of awareness (and even skepticism) of the biases that may exist and the incomplete (and potentially inaccurate) picture the results may render. Which is why discernment is still a really good human quality to bring to AI results.
What We’re Doing at MTGS
Like any startup, we’re looking for ways to streamline processes. Naturally, AI was one of the first places we turned. What we found, though, was an abundance of options that only partially fit our needs—a determination we came to after hours and hours of researching alternatives, days and weeks of trying various tools, and a general sense of frustration at the end of it all.
Then, we got connected with an individual who specializes in helping businesses figure out how to make AI work for them. After a great deal of consideration around where to start and what problem to solve or process to optimize first, we decided to tackle what a lot of startups spend a lot of time and resources on: Lead qualification. With our own unique twists and turns, of course.
There’s something to be said for having to go through the exercise of articulating exactly what you need: spending time refining processes, defining parameters, creating scoring systems and taxonomies, and so on. We looked at several out-of-the-box options; some looked incredibly promising. So, what did we end up going with? A custom agent. Yeah, that’s right. Nothing suited the way we actually work and the processes we currently use while also fitting our business size and budget.
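To make the “scoring systems” part of that exercise concrete, here’s a minimal, hypothetical sketch of rule-based lead scoring. Everything in it—the fields, the weights, the target industries, the 50-point threshold—is an illustrative assumption, not MTGS’s actual criteria or agent logic; it just shows the kind of explicit parameters you end up defining when you articulate what “qualified” means.

```python
# Hypothetical lead-qualification scoring sketch.
# All fields, weights, and thresholds below are illustrative
# assumptions, not any company's real criteria.

from dataclasses import dataclass

@dataclass
class Lead:
    industry: str
    company_size: int   # number of employees
    budget_stated: bool # did they mention a budget?
    referral: bool      # did they come in via referral?

# Illustrative point values for each qualifying signal
WEIGHTS = {
    "target_industry": 30,
    "right_size": 25,
    "budget_stated": 25,
    "referral": 20,
}

TARGET_INDUSTRIES = {"healthcare", "wellness", "nutrition"}

def score_lead(lead: Lead) -> int:
    """Return a 0-100 score from simple, explicit rules."""
    score = 0
    if lead.industry.lower() in TARGET_INDUSTRIES:
        score += WEIGHTS["target_industry"]
    if 10 <= lead.company_size <= 500:
        score += WEIGHTS["right_size"]
    if lead.budget_stated:
        score += WEIGHTS["budget_stated"]
    if lead.referral:
        score += WEIGHTS["referral"]
    return score

def qualify(lead: Lead, threshold: int = 50) -> bool:
    """A lead is 'qualified' when it clears the (assumed) threshold."""
    return score_lead(lead) >= threshold
```

The point isn’t the specific numbers; it’s that writing rules like these forces you to spell out your taxonomy and parameters before any AI agent can apply them for you.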
We’re not quite ready to unveil what we have in the hopper. We still have some work to do. But I’ll say this: it’s helping us channel our inner Poindexter and making us feel like downright smarty pants.
Stay tuned. We’ll have more on this in the future.