What Questions to Ask Before Starting an AI Project
Before you spend a dollar on AI development, you need to ask the right questions. Not the technology questions - those come later. The business questions that determine whether the project should exist at all.
We've run dozens of AI engagements across Australian businesses, and the ones that fail almost always skip this step. They jump straight into "what model should we use?" without answering "should we do this at all?"
Here are the questions you need to answer before starting an AI project, organised into the categories that matter most.
Questions About the Business Problem
What specific problem are we solving?
This sounds basic, but vague problem statements are the number one cause of failed AI projects. "We want to use AI to improve efficiency" isn't a problem statement. "Our accounts payable team spends 40 hours per week manually entering data from supplier invoices" is.
Write it down in one sentence. If you can't, the problem isn't well-defined enough for an AI project.
How much is this problem costing us today?
You need a baseline to measure against. Calculate the current cost in terms of:
- Staff hours spent on the task
- Error rates and their downstream costs (rework, customer complaints, compliance issues)
- Opportunity cost (what else could these people be doing?)
- Revenue impact (lost sales, slow response times, customer churn)
If you can't quantify the cost, you can't measure ROI. That doesn't mean the project is bad - it means you need to find a measurable proxy before proceeding.
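The baseline above is simple arithmetic once you have the inputs. A sketch of the calculation follows; every figure in it is an illustrative assumption, not a benchmark, so substitute your own measured numbers.

```python
# Illustrative annual baseline-cost calculation. All inputs are
# hypothetical placeholders - replace them with measured figures.

HOURLY_RATE = 55.0            # fully loaded staff cost per hour (assumed)
HOURS_PER_WEEK = 40           # staff hours spent on the task
WEEKS_PER_YEAR = 48           # working weeks
ITEMS_PER_WEEK = 600          # invoices handled per week
ERROR_RATE = 0.03             # fraction of items needing rework (assumed)
REWORK_COST_PER_ERROR = 25.0  # cost to fix one error (assumed)

# Direct labour cost of the manual process
labour_cost = HOURLY_RATE * HOURS_PER_WEEK * WEEKS_PER_YEAR

# Downstream cost of errors (rework only; complaints and churn are harder
# to price and are left out of this sketch)
rework_cost = ITEMS_PER_WEEK * WEEKS_PER_YEAR * ERROR_RATE * REWORK_COST_PER_ERROR

annual_baseline = labour_cost + rework_cost
print(f"Annual baseline cost: ${annual_baseline:,.0f}")
```

Even a rough model like this gives you a number to beat, which is what the ROI discussion later in this article depends on.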
What does success look like?
Define it before the project starts, not after. Be specific.
Bad: "The AI should process documents faster." Good: "The AI should process 80% of standard invoices without human intervention, reducing average processing time from 6 minutes to under 30 seconds."
Include both the minimum viable outcome (what makes this worth the investment) and the stretch goal (what would make this a clear win). This gives the development team a target to aim for and gives leadership a framework for evaluating results.
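One lightweight way to make the criteria unambiguous is to write them down as structured data that both leadership and the development team sign off on. The metric names and targets below are illustrative examples, not recommendations.

```python
# Hypothetical success criteria for an invoice-processing project.
# Every target here is an example - agree your own numbers up front.
success_criteria = {
    "metric": "invoices processed without human intervention",
    "baseline": "0% automated, 6 minutes average per invoice",
    "minimum_viable": {"automation_rate": 0.60, "avg_seconds": 60},
    "stretch_goal": {"automation_rate": 0.80, "avg_seconds": 30},
}

def meets_minimum(automation_rate: float, avg_seconds: float) -> bool:
    """Check measured pilot results against the agreed minimum viable outcome."""
    mv = success_criteria["minimum_viable"]
    return automation_rate >= mv["automation_rate"] and avg_seconds <= mv["avg_seconds"]

print(meets_minimum(0.72, 45))  # a pilot result that clears the minimum bar
```

Writing the thresholds down this way removes the post-delivery argument about whether the project "worked".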
Is AI the right solution for this problem?
Not every problem needs AI. Sometimes a well-designed spreadsheet, a workflow automation tool, or a process redesign would solve the problem faster and cheaper.
AI is the right tool when:
- The task involves pattern recognition, classification, or prediction
- The volume is too high for manual processing
- The task requires understanding unstructured data (text, images, audio)
- The optimal solution changes based on context and can't be fully captured in rules
AI is probably not the right tool when:
- The process can be captured in simple if/then rules
- The volume is low enough for manual handling
- The data doesn't exist or is fundamentally unreliable
- The real problem is organisational, not technical
Who is asking for this and why?
Understanding the motivation helps you assess whether the project has staying power. If it's driven by a genuine operational pain point, it's more likely to succeed than if it's driven by "everyone else is doing AI."
Check whether:
- The request comes from people who actually experience the problem
- There's executive sponsorship (budget and air cover)
- The motivation is solving a real problem vs. keeping up with trends
- Multiple stakeholders agree on the priority
Questions About the Data
What data do we have?
Before committing to a project, you need to understand your data situation. This means actually looking at the data, not just assuming it exists.
Inventory your data:
- What data is relevant to this problem?
- Where does it live? (databases, spreadsheets, documents, emails, legacy systems)
- How much of it is there? (volume matters for training)
- How old is it? (recent data is usually more relevant)
- Is it structured (database records) or unstructured (documents, emails, images)?
How clean is the data?
"We have the data" and "the data is usable" are two different statements. In our experience, most organisations overestimate their data quality by a wide margin.
Check for:
- Completeness: Are there significant gaps or missing fields?
- Consistency: Is the same thing recorded the same way across records?
- Accuracy: Are the values actually correct?
- Timeliness: Is the data current enough to be useful?
- Labels: If you need to train a supervised model, do you have examples of correct answers?
A common discovery in early AI projects: "We thought our data was clean because it's in a database. It turns out 30% of the records have inconsistencies that matter."
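Much of that checking can be automated with a short script before any modelling starts. The sketch below profiles a handful of records for missing values and inconsistent spellings; the field names ("supplier", "amount", "date") are hypothetical, and the point is to count problems rather than assume the data is clean.

```python
# Minimal data-quality profile for a list of records (e.g. exported invoices).
# Field names and sample values are illustrative assumptions.
records = [
    {"supplier": "Acme Pty Ltd", "amount": 1200.00, "date": "2024-03-01"},
    {"supplier": "ACME", "amount": None, "date": "2024-03-02"},            # missing amount
    {"supplier": "Acme Pty Ltd", "amount": 830.50, "date": "01/03/2024"},  # inconsistent date format
]

required = ["supplier", "amount", "date"]

# Completeness: count required fields that are empty or missing
missing = sum(1 for r in records for f in required if r.get(f) in (None, ""))

# Consistency: how many distinct spellings of each supplier survive
# a basic normalisation? More than one per real supplier is a red flag.
suppliers = {r["supplier"].strip().lower() for r in records if r.get("supplier")}

print(f"Records: {len(records)}, missing required values: {missing}")
print(f"Distinct supplier spellings (normalised): {len(suppliers)}")
```

Running something like this against a real extract is often what surfaces the "30% of records have inconsistencies" discovery early, when it's cheap to deal with.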
Can we access the data?
Knowing the data exists and being able to use it are different things. Common access barriers:
- Legacy systems with no API or export capability
- Data locked behind permissions that take months to obtain
- Privacy or compliance restrictions that limit how the data can be used
- Data spread across multiple systems that have never been connected
- Third-party data that requires licensing agreements
Identify these barriers early. They're often the longest lead-time items in an AI project.
Is the data sensitive?
If the data includes personal information, financial records, health data, or anything else that's regulated, you need to plan for:
- Data handling and storage requirements
- Privacy impact assessments
- Consent and data usage policies
- Where the data can be processed (on-premises vs. cloud, Australian vs. overseas)
- Who can access the data during development and in production
This isn't just a legal checkbox. It fundamentally affects the technical architecture and can rule out certain approaches entirely.
Questions About the Organisation
Who will own this project?
Every successful AI project we've seen has a clear owner - someone who is responsible for the outcome, has the authority to make decisions, and is available to the development team for questions.
The owner should be:
- From the business side, not just IT
- Someone who understands the current process deeply
- Empowered to make decisions without escalating everything
- Available for regular check-ins (not someone who's in back-to-back meetings every day)
If you can't identify an owner, the project isn't ready.
Will the team adopt the solution?
The best AI system in the world is worthless if nobody uses it. Before building, assess whether the team that will use the system is open to it.
Warning signs of adoption risk:
- The team wasn't consulted in the planning
- People fear the AI will replace their jobs
- The current process, while inefficient, is comfortable and familiar
- There's a history of failed technology projects in this area
- The team doesn't trust the accuracy of AI-generated outputs
If adoption risk is high, budget for change management. Involve the end users early. Show them the prototype. Get their feedback. Make them part of the solution, not the target of it.
What's our risk tolerance?
AI systems make mistakes. The question is whether your organisation can tolerate the mistakes this particular system will make.
Consider:
- What happens when the AI gets it wrong? (Financial loss, customer impact, compliance breach, embarrassment)
- What's the current error rate for the manual process? (AI often needs to beat this, not be perfect)
- Is there a human review step for high-risk outputs?
- How visible are errors? (Internal errors are lower risk than customer-facing ones)
- What's the regulatory environment? (Some industries have specific requirements for AI decision-making)
Do we have the internal capability to maintain this?
After the development partner delivers the solution, someone needs to keep it running. Determine upfront:
- Who will monitor the system in production?
- Who will handle issues when they arise?
- Do you have (or plan to hire) people who can update and improve the AI over time?
- If not, what does an ongoing support arrangement look like with your development partner?
The answer to "we have no AI capability in-house" isn't necessarily a blocker. It just means you need a partner who provides ongoing support, or you need a plan to build that capability.
Questions About Budget and Timeline
What's the realistic budget?
AI projects range from around $30,000 for a focused proof of concept to millions for enterprise-wide implementations. Where you sit depends on:
- Complexity of the problem
- State of your data (clean data is cheaper, messy data costs more)
- Integration requirements
- Security and compliance requirements
- Whether you're building custom or configuring off-the-shelf
As a rough guide for Australian mid-market:
- Proof of concept: $30,000-$80,000
- Single use case, production deployment: $100,000-$400,000
- Enterprise AI programme (multiple use cases): $400,000+
Budget for uncertainty. AI projects frequently discover that the data situation is worse than expected or that the problem is more nuanced than initially scoped. A 20-30% contingency is reasonable.
What's the timeline expectation?
AI projects that promise production deployment in four weeks are either very narrow in scope or very optimistic. Realistic timelines for most business AI projects:
- Proof of concept: 4-8 weeks
- Production deployment of a single use case: 3-6 months
- Enterprise integration with multiple systems: 6-12 months
If your business needs results faster, consider whether an off-the-shelf product could meet your needs while a custom solution is developed.
How will we measure ROI?
Connect the project back to the cost baseline you established earlier. Define:
- What metrics will you track?
- How often will you measure them?
- What's the breakeven point?
- Who is responsible for measuring and reporting?
This should be agreed before the project starts, not negotiated after delivery.
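The breakeven question reduces to one line of arithmetic once the baseline exists. The figures below are placeholders for illustration only.

```python
# Illustrative breakeven calculation. All figures are assumed, not benchmarks.
project_cost = 150_000.0        # build cost including contingency (assumed)
annual_saving = 90_000.0        # measured saving against the cost baseline (assumed)
annual_running_cost = 18_000.0  # hosting, monitoring, support (assumed)

net_annual_benefit = annual_saving - annual_running_cost
breakeven_years = project_cost / net_annual_benefit
print(f"Breakeven after {breakeven_years:.1f} years")
```

Note that the running cost belongs in the calculation: an AI system that saves money but costs nearly as much to operate never breaks even.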
Questions About Technology and Architecture
Where will this run?
The deployment environment affects cost, performance, security, and data sovereignty.
Options include:
- Cloud (Azure, AWS, Google Cloud): Most flexible, fastest to deploy, potential data sovereignty considerations
- On-premises: Full control, required by some regulated industries, higher infrastructure cost
- Hybrid: Sensitive data stays on-premises, AI processing happens in the cloud
- Edge: Processing happens at the point of data collection (manufacturing floor, field devices)
For Australian businesses, data sovereignty is often a factor. Make sure you understand where your data will be processed and stored.
How will this integrate with our existing systems?
Most AI solutions need to read from and write to existing business systems. Map out:
- Which systems does the AI need to connect to?
- Do those systems have APIs?
- What's the authentication model?
- What data flows in and out?
- Are there real-time requirements or is batch processing acceptable?
Integration is typically the most underestimated part of an AI project. In our experience, it accounts for 30-50% of the total effort.
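A simple way to make that mapping concrete is to record each connection as structured data during scoping. The system names and attributes below are hypothetical examples; the useful output is an early list of the long-lead-time items.

```python
# Hypothetical integration map for planning purposes. System names and
# details are examples only - replace with your own landscape.
integration_map = [
    {"system": "ERP",       "direction": "read",  "api": True,  "mode": "batch"},
    {"system": "CRM",       "direction": "write", "api": True,  "mode": "real-time"},
    {"system": "Legacy AP", "direction": "read",  "api": False, "mode": "batch"},  # no API: custom export needed
]

# Flag the likely long-lead-time items early: anything without an API.
no_api = [row["system"] for row in integration_map if not row["api"]]
print("Systems needing custom extraction:", no_api)
```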
What's the fallback plan?
If the AI can't handle a particular input, what happens? Every production AI system needs a graceful degradation path.
Options include:
- Route to a human for manual processing
- Flag for review and continue with a default action
- Queue for later processing when more information is available
- Reject and notify the user
The fallback plan should be designed before development starts. It affects the architecture.
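One common pattern for combining those options is confidence-threshold routing: the model's confidence score decides whether an item is processed automatically, processed with a review flag, or handed to a person. The thresholds below are illustrative and should be tuned against your own error costs.

```python
# Sketch of a graceful-degradation path driven by a confidence score.
# Threshold values are illustrative assumptions, not recommendations.
AUTO_THRESHOLD = 0.90    # at or above this: process automatically
REVIEW_THRESHOLD = 0.60  # between the two: continue, but flag for review

def route(confidence: float) -> str:
    """Decide what happens to one item based on model confidence."""
    if confidence >= AUTO_THRESHOLD:
        return "process"
    if confidence >= REVIEW_THRESHOLD:
        return "process_and_flag"
    return "route_to_human"

print(route(0.95), route(0.75), route(0.40))
```

Deciding on a structure like this before development starts is what makes the fallback an architectural feature rather than a patch added after the first production incident.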
The Pre-Project Checklist
Before giving the green light, confirm you have answers to these ten items:
- The problem is defined in one clear sentence
- The current cost is quantified
- Success criteria are written and agreed
- The data has been assessed (volume, quality, access)
- A project owner is identified and committed
- The end users have been consulted
- Budget is approved with contingency
- Timeline expectations are realistic
- Integration requirements are mapped
- A fallback plan is defined
If you can check all ten, you're ready to start. If five or more are unchecked, you're not ready - and starting anyway is how AI projects fail.
Getting Help With the Assessment
If you're considering an AI project but aren't sure whether you're ready, an assessment engagement is a good first step. At Team 400, we run structured readiness assessments that answer all of these questions and produce a clear recommendation on whether to proceed, what to build, and what it will take.
Learn more about our AI consulting services, explore our approach to AI development, or get in touch to discuss your situation.