Case Study · Automation · Technical

Teaching Claude About Custom Manufacturing Specs

A generic chatbot would quote specs that could get someone hurt. I built one that knows exact materials, tolerances, and pricing from real company data.

By Mike Hodgen

Want the full technical deep dive? Read the detailed version

A custom manufacturing client came to me with a problem that was costing them deals every single week.

Their customers — engineers, procurement teams, project managers — would call or email with technical questions. What materials do you work with? What sizes can you do? How tight are your tolerances? How long will it take? How much will it cost?

The sales team was three people. On a good day, they'd respond in a few hours. On a bad day, the next morning. And the real answers to most of these questions? They lived in spreadsheets, buried PDFs, and the heads of two guys who'd been there 15 years.

Every slow response was a lost deal. Every wrong answer about a specification was a liability. This wasn't a problem you solve with an FAQ page. This needed a smart assistant that actually understood the company's products — down to the exact materials, dimensions, and pricing.

Why a Regular Chatbot Would Be Dangerous Here

You might think, "Just put ChatGPT on the website." I tested that idea. Here's why it doesn't work.

ChatGPT and similar AI tools are great at sounding confident. But they don't look up real data — they make educated guesses based on patterns they've seen across the entire internet. Ask it about the strength of a specific type of aluminum in a specific thickness, and it'll give you an answer that sounds right but could be 15% off. In e-commerce, 15% off is a rounding error. In manufacturing, 15% off means a part fails and someone gets hurt.

Even worse than obviously wrong answers are answers that are almost right. Imagine a customer asks about a specific combination of material, thickness, and finish. A generic chatbot would happily describe pricing and timelines for a configuration the shop floor can't actually produce. It doesn't know what's real and what's impossible. It just sounds like it does.

That's not a chatbot problem. That's a lawsuit waiting to happen.

How I Built Something That Actually Works

The AI model — think of it as the "brain" — is only about 20% of the solution. The other 80% is the structure around it.

Here's the plain version of what I built.

First, I organized all the company's real knowledge into a system the AI could search. Product catalogs, material spec sheets, tolerance tables, pricing tiers, lead time estimates. I didn't just dump everything into a folder and hope for the best. I separated it into categories — materials in one layer, pricing in another, lead times in a third. When a customer asks a question that touches multiple areas, the system pulls the right answer from each category and assembles one clear response.
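To make the layering concrete, here's a minimal sketch of the idea in Python. The category names, the keyword router, and the sample entries are all illustrative stand-ins, not the client's real data or the production retrieval code:

```python
# Illustrative sketch: knowledge split into layers, with one answer
# assembled from every layer a question touches. All data is made up.

KNOWLEDGE = {
    "materials": {
        "316 stainless": "316 stainless: 14-20 gauge, brushed or mill finish.",
    },
    "pricing": {
        "316 stainless": "Tier B pricing; volume breaks at 50 and 250 units.",
    },
    "lead_times": {
        "316 stainless": "Standard: 10 business days. Custom widths: +5 days.",
    },
}

def route_categories(question: str) -> list[str]:
    """Decide which knowledge layers a question touches (toy keyword router)."""
    q = question.lower()
    hits = []
    if any(w in q for w in ("price", "cost", "pricing")):
        hits.append("pricing")
    if any(w in q for w in ("lead time", "how long", "when")):
        hits.append("lead_times")
    hits.append("materials")  # spec questions are the default layer
    return hits

def answer(question: str, product: str) -> str:
    """Pull the matching entry from each relevant layer into one reply."""
    parts = []
    for cat in route_categories(question):
        entry = KNOWLEDGE[cat].get(product)
        if entry:
            parts.append(entry)
    return " ".join(parts)

print(answer("How much does 316 stainless cost and how long to ship?",
             "316 stainless"))
```

A real system would use embedding-based retrieval rather than keywords, but the structure is the same: each layer is a separate source of truth, and the response is assembled, not generated from scratch.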

Second, I set hard rules for what the AI is allowed to say. This is the most important part. The AI can only answer using data from the knowledge base. If the answer isn't in there, it doesn't guess. It says, "I don't have that information, but let me connect you with someone who does." Period.

Telling an AI "don't make things up" doesn't actually work — that's like telling someone "don't think about elephants." Instead, I built structural rules. The AI has a defined list of topics it can discuss: specific products, specific materials, specific customization options that actually exist. Anything outside that list gets a pre-defined "I don't know" response.
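The structural rule can be sketched as a gate that sits in front of the model. The topic names and toy classifier below are hypothetical; the point is that an out-of-scope question never reaches the model at all:

```python
# Sketch of the allowlist constraint: the assistant only answers inside a
# defined topic list, and everything else gets one fixed handoff reply.

ALLOWED_TOPICS = {"materials", "tolerances", "pricing", "lead_times"}

FALLBACK = ("I don't have that information, but let me connect you "
            "with someone who does.")

def classify_topic(question: str) -> str:
    """Toy classifier; a real system would use retrieval hits or a model."""
    q = question.lower()
    if "tolerance" in q:
        return "tolerances"
    if "price" in q or "cost" in q:
        return "pricing"
    if "coating" in q:
        return "coatings"  # not in the allowlist
    return "materials"

def constrained_answer(question: str, generate) -> str:
    topic = classify_topic(question)
    if topic not in ALLOWED_TOPICS:
        return FALLBACK  # the model is never even called
    return generate(question, topic)
```

The key design choice: the refusal is enforced by code, not by a polite request in the prompt. The model can't hallucinate an answer to a question it never sees.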

Third, I connected it to the sales workflow. Every chat conversation gets logged automatically. When a customer discusses specific products and specifications, the system creates a lead record with the full context — what they asked about, what specs they need, what custom requirements they mentioned. When the conversation reaches the point where a real quote makes sense, it sends an alert to the sales team with everything they need. The rep picks up the conversation already knowing exactly what the customer wants.
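A stripped-down version of that capture step might look like the following. The field names, the spec detector, and `notify_sales` are illustrative stand-ins for the real CRM integration:

```python
# Sketch of the lead-capture flow: log every turn, detect concrete specs,
# and alert sales once the conversation looks quote-ready.

from dataclasses import dataclass, field

@dataclass
class Lead:
    email: str
    products: list[str] = field(default_factory=list)
    specs: list[str] = field(default_factory=list)
    transcript: list[str] = field(default_factory=list)

def notify_sales(lead: Lead) -> None:
    # Stand-in for an email/CRM/Slack alert with full context attached.
    print(f"Sales alert: {lead.email} asked about {', '.join(lead.products)}")

def log_turn(lead: Lead, message: str) -> None:
    lead.transcript.append(message)
    if "316 stainless" in message:   # toy spec detector
        lead.products.append("316 stainless")
    if "14 gauge" in message:
        lead.specs.append("14 gauge")
    if lead.products and lead.specs:  # quote-ready threshold
        notify_sales(lead)

lead = Lead(email="engineer@example.com")
log_turn(lead, "Do you carry 316 stainless in 14 gauge?")
```

Because the full transcript rides along with the alert, the rep opens the conversation already knowing the material, gauge, and custom requirements.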

I Tried to Break It on Purpose

I fed the system over 200 trick questions designed to make it invent answers. Questions about product configurations that don't exist. Questions that mix real product names with impossible specifications. Questions disguised as assumptions: "Since you offer X in Y configuration, what's the pricing?" — when that configuration has never existed.

On the first pass, it correctly caught 94% of these traps. I tightened the rules, re-tested, and got that above 98%.
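The testing loop above can be sketched as a tiny harness: feed known-impossible configurations and count how often the assistant correctly refuses. The trap questions and refusal string below are examples, not the real test suite:

```python
# Minimal sketch of the trap-question harness: every question describes a
# configuration that does not exist, so the only correct answer is a refusal.

TRAPS = [
    "Since you offer 316 stainless in 2 gauge, what's the pricing?",
    "What's the lead time on the titanium honeycomb panels?",
]

REFUSAL = "I don't have that information"

def run_traps(assistant) -> float:
    """Return the fraction of trap questions the assistant refuses to answer."""
    caught = sum(1 for q in TRAPS if REFUSAL in assistant(q))
    return caught / len(TRAPS)

# A perfectly constrained assistant catches every trap:
rate = run_traps(lambda q: REFUSAL + ", but let me connect you with someone.")
print(f"{rate:.0%}")  # → 100%
```

Tightening the rules then becomes measurable: re-run the same traps after each change and watch the catch rate move.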

The hardest part of building AI isn't making it smarter. It's making it honest about what it doesn't know.

What Customers Actually Experience

Here's a real interaction flow.

A procurement engineer visits the website and opens the chat. Types: "Do you carry 316 stainless in 14 gauge with a brushed finish?"

The AI responds in under two seconds. Yes, here are the available options, standard dimensions, minimum order quantities, and pricing tier.

The engineer asks about a custom width. The AI checks its knowledge, confirms the custom width is available, notes the additional lead time, and provides approximate pricing.

Then the engineer asks about a coating option that isn't in the system. Instead of making something up, the AI says: "I don't have specs for that coating on this product line. Can I grab your email and connect you with our team?"

Email captured. Lead created. Sales notified with the full conversation. The rep follows up within the hour with complete context.

The old version of this? Engineer sends an email, waits 6 hours, gets a partial answer, emails back, waits overnight. What used to be a 2-3 day back-and-forth now produces a qualified lead in the first 5 minutes.

What Surprised Me

Response time went from hours to seconds. That changed the close rate on inbound inquiries because prospects were getting answers while they were still in buying mode — not after they'd already emailed two competitors.

One sales rep told me she was saving over an hour a day just on initial qualification conversations.

The biggest surprise was counterintuitive. The constraint system — the thing that prevents the AI from answering questions it shouldn't — actually built more trust than a system that tried to answer everything. Customers noticed when the AI said "I'm not sure, let me connect you with someone." It made them trust the answers it did give.

One honest caveat: this isn't set-and-forget. When the manufacturer adds new products or changes specs, the knowledge base needs updating. I built the system to make updates straightforward, but someone still has to maintain the source of truth. The AI is only as good as the information you feed it.

This approach works when you have specialized knowledge that generic AI gets wrong, when your sales team is stuck answering the same technical questions over and over, and when response time directly affects whether you win or lose the deal.

It doesn't make sense when a well-organized FAQ handles 90% of questions, or when the knowledge only lives in someone's head and hasn't been written down yet. In that case, organizing the data comes first.

But for the right business, this is the difference between a sales team spending 60% of their time on repetitive questions and one that focuses on closing deals with pre-qualified, context-rich leads.

Want to Explore What AI Could Do for Your Business?

If this sounds like what your team deals with — specialized knowledge, technical buyers, a sales bottleneck on repetitive questions — I'd be happy to walk through how something like this would work for your specific operation.

Book a free 30-minute strategy call. No pitch deck, no sales team on the other end. Just a real conversation about your operations and where AI fits.

Book a Discovery Call
