
Starting an AI Project: 3 Mistakes Every Company Should Avoid

The most common mistakes in AI projects for SMEs — and how to avoid them. From practice, not from textbooks.

Why Do AI Projects Fail in the Mid-Market?

The statistics are sobering: According to Gartner, around 85% of all AI projects fail — they either never go into production, don’t deliver the expected value, or are quietly discontinued after the pilot phase. In the mid-market, the rate is even higher.

This is rarely about the technology. AI models work remarkably well today. It comes down to three recurring mistakes that I have observed across dozens of projects in recent years. In this article, I will show you these mistakes — and how to avoid them from the start.

Key Takeaway: The most common causes of AI project failure are not technical in nature. They are strategic and organizational mistakes that can be easily avoided with the right approach.

Mistake 1: Starting with Technology Instead of the Business Problem

What Goes Wrong

The typical entry point sounds like this: “We need to do something with AI too.” Or: “The competition is already using ChatGPT.” This spawns a project that searches for a technology — not one that solves a problem.

The result: A chatbot nobody uses. A dashboard nobody needs. A proof of concept that works technically but has no measurable business value.

A Real-World Example

A mid-sized wholesale distributor wanted to introduce “AI for sales.” The project team spent three months evaluating various AI platforms, built a prototype for lead scoring, and proudly presented the results. The problem: the sales team didn’t have a lead problem — they had a quoting problem. 80% of their time went into manually creating individual quotes.

The AI platform could do lead scoring perfectly — but nobody needed it. The project was shut down after six months and a €120,000 investment.

How to Do It Right

Start with the pain, not the solution.

  1. Analyze processes: Where do your employees spend the most time on repetitive tasks?
  2. Quantify costs: What does the problem cost you today? (Personnel costs, error costs, opportunity costs)
  3. Only then evaluate technology: Can AI solve this specific problem? Which approach fits?
  4. Calculate ROI upfront: If the expected savings aren’t at least 3x the investment, reconsider the project

Rule of Thumb: A good AI project can be described in one sentence: “We automate [specific process] to achieve [measurable result].” If you need more than one sentence, the project isn’t ready yet.
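The 3x rule above can be sketched as a quick back-of-the-envelope calculation. All figures below are invented for illustration (a quoting-automation scenario like the wholesale example); they are not benchmarks.

```python
# Hypothetical numbers for a quoting-automation use case.
# Every figure here is an assumption for illustration only.

def roi_multiple(annual_savings: float, investment: float) -> float:
    """Expected annual savings expressed as a multiple of the investment."""
    return annual_savings / investment

# Step 2: quantify what the problem costs today.
hours_per_week = 30          # time spent on manual quote creation
hourly_cost = 60.0           # fully loaded personnel cost in EUR
annual_cost = hours_per_week * hourly_cost * 46  # ~46 working weeks/year

# Step 4: compare expected savings against the investment.
automation_rate = 0.5        # assume AI removes half of the manual effort
investment = 40_000.0        # pilot plus first-year running costs
expected_savings = annual_cost * automation_rate

multiple = roi_multiple(expected_savings, investment)
print(f"Annual cost: {annual_cost:.0f} EUR, expected savings: {expected_savings:.0f} EUR")
print(f"ROI multiple: {multiple:.1f}x -> {'go' if multiple >= 3 else 'reconsider'}")
```

With these assumed numbers the project lands at roughly 1x, well below the 3x threshold: exactly the signal to rethink the scope or the use case before spending a euro on technology.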

Checklist: Is Your AI Project Business-Driven?

  • The problem to solve is clearly defined and measurable
  • The affected employees confirm it is a real problem
  • The current costs of the problem are quantified
  • The expected ROI has been calculated
  • There is a business owner (not just an IT owner) for the project

Mistake 2: Trying to Automate Everything at Once

What Goes Wrong

The second mistake often follows directly from a successful initial analysis: “If we’re going to do AI, let’s do it properly.” Instead of a focused pilot, a comprehensive transformation program is launched. Five departments, twelve use cases, an ambitious project plan spanning 18 months.

The result: After six months, nothing is in production. The budget is half spent. The business departments are losing patience. Management is asking for results. And the project team is stuck in endless coordination rounds.

A Real-World Example

An automotive supplier planned the “AI-powered Smart Factory.” The project plan simultaneously included predictive maintenance, automated quality control, AI-based production scheduling, and intelligent logistics. Four external consultants, two internal project managers, a 12-person project team.

After nine months, not a single use case was live. The predictive maintenance solution had 92% accuracy in the lab but didn’t work with the actual sensor data from the machines. Quality control failed due to missing training data. Production scheduling failed due to a missing ERP interface.

The company stopped the project and restarted — this time with predictive maintenance on just one machine. Six weeks later, the first use case was in production.

How to Do It Right

One use case. One machine. One team. Six weeks.

  1. Choose a single use case: The one with the clearest ROI and fewest dependencies
  2. Smallest possible scope: One department, one machine, one process
  3. Fix the timeframe: Maximum 8–12 weeks to pilot results
  4. Small teams: 3–5 people, with at least one from the business department
  5. Only expand after success: Rollout and next use case only after proven ROI

From Practice: The most successful AI projects I have supported all had one thing in common: they started ridiculously small. One process, one problem, one solution. And they were in production within weeks, not months.

The Right Sequence

| Phase | Duration | Scope | Goal |
| --- | --- | --- | --- |
| Pilot | 6–12 weeks | 1 use case, 1 area | Prove feasibility + ROI |
| Rollout | 2–4 months | Same use case, all areas | Scale + stabilize process |
| Expansion | Ongoing | Next use case | Build portfolio |

Mistake 3: Underestimating Data Quality

What Goes Wrong

“We have plenty of data” — I hear this in almost every initial conversation. And it’s usually true: there is data. But when you look more closely, a different picture emerges:

  • Data silos: Sales, production, and accounting each have their own systems. Customer data exists three times, with different spellings
  • Gaps: 30% of records have missing fields. Machine sensors deliver sporadically instead of continuously
  • Inconsistencies: Same values in different formats (dates, units, naming conventions)
  • Legacy issues: Data from old systems whose structure nobody understands anymore

AI models are only as good as their training data. “Garbage in, garbage out” applies even more strongly to AI than to traditional software.

A Real-World Example

A mechanical engineering company wanted to implement AI-based demand forecasting. The order history from the last five years was available in the ERP system — at first glance, a solid data foundation.

Analysis revealed: in the first two years, returns had not been recorded correctly. Special pricing was booked as regular orders. And a system migration three years ago had caused a break in article numbering — the same product had different numbers in the old and new data.

The AI model delivered forecasts with 45% accuracy — worse than the sales director’s gut-feel forecast. Only after four weeks of data cleansing did the model reach 82% accuracy.

How to Do It Right

Invest 60% of pilot time in data, not in models.

  1. Conduct a data audit: Before building a model, understand your data

    • What sources exist?
    • How complete is the data?
    • Are there systematic errors?
    • How current is the data?
  2. Define minimum quality: Not all data problems need to be solved before the pilot — but you need to know which ones matter

    • Which fields are critical for the model?
    • What error rate is acceptable?
    • Can missing values be meaningfully imputed?
  3. Budget for data cleansing: Allocate 30–50% of the project budget for data work

    • Remove duplicates
    • Handle missing values
    • Standardize formats
    • Implement plausibility checks
  4. Build a data pipeline: Don’t just clean once — establish a sustainable process

    • Automated quality checks
    • Monitoring for data anomalies
    • Clear responsibilities for data quality
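The audit and cleansing steps above can be sketched as a small automated check. The field names (`order_id`, `date`, `qty`) and the sample records are invented for illustration; a real pipeline would run checks like these against your actual source systems.

```python
# Minimal sketch of an automated data quality check.
# Field names and records are hypothetical examples.
from collections import Counter

CRITICAL_FIELDS = ["order_id", "date", "qty"]

def audit(records: list[dict]) -> dict:
    """Report completeness per critical field and duplicate order IDs."""
    total = len(records)
    completeness = {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in CRITICAL_FIELDS
    }
    ids = Counter(r.get("order_id") for r in records)
    return {
        "completeness": completeness,
        "duplicates": [k for k, n in ids.items() if n > 1],
    }

def plausible(record: dict) -> bool:
    """Plausibility check: order quantity must be a positive number."""
    try:
        return float(record["qty"]) > 0
    except (KeyError, TypeError, ValueError):
        return False

records = [
    {"order_id": "A1", "date": "2024-01-05", "qty": 3},
    {"order_id": "A1", "date": "2024-01-05", "qty": 3},   # duplicate
    {"order_id": "A2", "date": "", "qty": -1},            # gap + implausible
]

report = audit(records)
# Deduplicate by order_id, then keep only plausible records.
clean = [r for r in {r["order_id"]: r for r in records}.values() if plausible(r)]
print(report["completeness"])   # 'date' is only 2/3 complete
print(report["duplicates"])     # ['A1']
print(len(clean))               # 1 record survives dedupe + checks
```

Run as part of a scheduled pipeline rather than once, checks like this turn data quality from a one-off cleanup into the monitored, ongoing process described in step 4.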

Benchmark: An AI model typically needs 500–5,000 clean, representative data records to get started. That sounds like very little — but “clean and representative” is the critical point.

Data Quality Quick Test

Answer these five questions for your planned AI application:

| Question | Good | Problematic |
| --- | --- | --- |
| How many relevant data records do you have? | > 1,000 | < 500 |
| How complete are the critical fields? | > 90% | < 70% |
| How old is the data? | < 2 years | > 5 years |
| Is there a uniform structure? | Yes, one system | Multiple silos |
| Is data captured continuously? | Automatically | Manually, sporadically |

If you land in the “Problematic” column for two or more questions, plan additional time for data preparation — or choose a use case with a better data situation.
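The quick test can be scored mechanically. The thresholds below mirror the "Problematic" column of the table; the sample answers are invented, and a real assessment would of course also weigh the gray zone between the two columns.

```python
# Sketch scoring of the five-question data quality quick test.
# Thresholds follow the 'Problematic' column; sample answers are invented.

def quick_test(answers: dict) -> int:
    """Count how many answers land in the 'Problematic' column."""
    problems = 0
    problems += answers["record_count"] < 500
    problems += answers["completeness_pct"] < 70
    problems += answers["data_age_years"] > 5
    problems += not answers["single_system"]        # multiple silos
    problems += not answers["automated_capture"]    # manual, sporadic capture
    return problems

answers = {
    "record_count": 1200,
    "completeness_pct": 65,     # below the 70% threshold
    "data_age_years": 1,
    "single_system": False,     # multiple silos
    "automated_capture": True,
}

score = quick_test(answers)
print(f"{score} problematic answers")
if score >= 2:
    print("Plan extra time for data preparation, or pick another use case.")
```

With two problematic answers, this hypothetical project hits exactly the threshold at which extra data preparation time, or a different use case, is warranted.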

Bonus: The Three Mistakes Are Connected

In practice, these mistakes rarely occur in isolation. A company that starts with technology instead of a business problem (Mistake 1) tends to want everything at once (Mistake 2) — and only realizes late that the data doesn’t cooperate (Mistake 3).

The common denominator: Lack of focus. Successful AI projects in the mid-market always share three characteristics:

  1. A clearly defined business problem with measurable ROI
  2. A deliberately small scope with fast time-to-value
  3. An honest assessment of the data situation with a realistic effort for data cleansing

Conclusion: Start Small, Learn Fast, Then Scale

The best strategy for your first AI project is not an 18-month plan with a 500-page requirements document. It is a focused pilot that proves in 8 weeks that AI works for your specific problem.

Invest the first two weeks in problem analysis and data assessment. Build a prototype in the next four weeks. And use the last two weeks to validate the ROI with real numbers.

If the pilot works, you have the best foundation for the next steps. If it doesn't, you will have found out within eight weeks and with a manageable budget, instead of after 18 months and a six-figure failed investment.

Ready for your first AI project — but this time done right? We guide mid-market companies from problem analysis to a productive pilot. No buzzword bingo, just pragmatic solutions with measurable ROI.

Schedule a Free Consultation → | Learn More About Our Process →

Dennis Pfeifer
Founder & IT Consultant
