What to Expect in Your First 90 Days of an AI Project
You have approved the budget. You have signed the contract. The AI project is officially underway. Now what?
The first 90 days of an AI project set the trajectory for everything that follows. Get them right and you build momentum, trust, and early results. Get them wrong and you burn through budget producing slide decks that never turn into working software.
After leading AI projects for Australian businesses ranging from mid-market companies to large enterprises, I have seen both outcomes. Here is an honest, phase-by-phase view of what the first 90 days actually look like.
Days 1-14 - Discovery and Alignment
The first two weeks are not about building anything. They are about making sure everyone agrees on what you are building and why.
What happens:
- Stakeholder interviews - We talk to the people who sponsor the project, the people who will use the system, and the people who maintain the existing process. These are three different groups with three different perspectives, and all of them matter.
- Process mapping - Document the current process in detail. Not how the process manual says it works - how it actually works. Watch people do the work. Ask about the workarounds, the exceptions, the things they do that are not in any documentation.
- Data audit - What data exists? Where does it live? What format is it in? How clean is it? How do you access it? This is the single most important activity in the first two weeks because data problems will derail everything downstream.
- Technical environment assessment - What systems need to integrate? What infrastructure is available? What security and compliance requirements apply?
- Success criteria definition - Write down, specifically, what success looks like. Numbers, not adjectives. "95% accuracy on standard invoice extraction" is a success criterion. "Better efficiency" is not.
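One way to keep success criteria honest is to write them down as machine-checkable thresholds from day one. The sketch below is illustrative only - the metric names, targets, and structure are assumptions, not a prescribed format:

```python
# Hypothetical sketch: success criteria as numeric thresholds rather than
# adjectives. All metric names and target values here are illustrative.

SUCCESS_CRITERIA = {
    "invoice_extraction_accuracy": {"target": 0.95, "direction": "at_least"},
    "avg_processing_seconds":      {"target": 30.0, "direction": "at_most"},
    "human_review_rate":           {"target": 0.20, "direction": "at_most"},
}

def evaluate(metrics: dict) -> dict:
    """Compare measured metrics against each criterion; return pass/fail per metric."""
    results = {}
    for name, rule in SUCCESS_CRITERIA.items():
        value = metrics.get(name)
        if value is None:
            results[name] = "not measured"
        elif rule["direction"] == "at_least":
            results[name] = "pass" if value >= rule["target"] else "fail"
        else:
            results[name] = "pass" if value <= rule["target"] else "fail"
    return results
```

A criteria file like this becomes the single reference point for the go/no-go decisions at day 35 and day 90, rather than a slide that gets reinterpreted later.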
What you should have at the end of day 14:
- A shared understanding of the problem
- A documented current-state process with baseline metrics
- A data availability assessment
- Defined success criteria with buy-in from stakeholders
- A project plan for the next 76 days
What can go wrong:
- Data access delays. In enterprise environments, getting access to production data can take weeks due to security reviews, privacy assessments, and approval chains. Start this process on day 1. Do not wait until you need the data to request it.
- Scope creep. Someone will say "while we're at it, could we also..." in the first stakeholder meeting. Have a parking lot for future ideas and keep the scope locked for this engagement.
- Misaligned expectations. The executive sponsor expects a deployed system in 90 days. The IT team expects a 6-month research project. Get alignment early or you will fight about it for the next 3 months.
Days 15-35 - Proof of Concept
With discovery complete, you move into building a working proof of concept. This is where the project gets tangible.
What happens:
- Architecture design - Select the AI models, design the integration approach, define the data pipeline, and choose the deployment infrastructure. At Team 400, we typically work with Azure AI for enterprise clients, but the architecture decision is driven by your requirements, not our preferences.
- Data preparation - Clean, structure, and prepare the data the AI system needs. This is often the most time-consuming part. Budget at least a week for data work, even if the data looks clean at first glance.
- Core model development - Build the AI capability. This might be prompt engineering for a foundation model, fine-tuning for a specialised use case, or building an agentic workflow that coordinates multiple AI components.
- Initial testing - Run the system against real data and measure performance. Compare results to your baseline.
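The "initial testing" step is usually a small scoring harness rather than anything elaborate: run the PoC over a hand-labelled sample and compare field by field. A minimal sketch, with illustrative field names and exact-match scoring as a simplifying assumption:

```python
# Hypothetical sketch of PoC scoring: per-field exact-match accuracy
# against a hand-labelled sample. Field names and data are illustrative.

def field_accuracy(predictions: list[dict], ground_truth: list[dict]) -> dict:
    """Return per-field exact-match accuracy across a labelled sample."""
    fields = ground_truth[0].keys()
    scores = {}
    for field in fields:
        correct = sum(
            1 for pred, truth in zip(predictions, ground_truth)
            if pred.get(field) == truth[field]
        )
        scores[field] = correct / len(ground_truth)
    return scores
```

Per-field scores matter more than a single overall number at this stage: they show exactly which fields are driving the gap to your success criteria.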
What you should have at the end of day 35:
- A working proof of concept running against real data
- Initial accuracy and performance metrics
- A list of edge cases and limitations
- A clear go/no-go assessment for moving to the next phase
What can go wrong:
- Data quality surprises. The data audit said the data was "mostly clean." Turns out "mostly" means 30% of records have missing fields, inconsistent formats, or outright errors. This is normal. Budget time for it.
- Model performance gaps. The AI handles the common cases well but struggles with the tail - the 15-20% of cases that do not fit the standard pattern. This is expected at the PoC stage. The question is whether the gap is closeable, not whether it exists.
- Integration complications. Connecting to legacy systems is almost always harder than expected. APIs that are supposed to exist do not. Documentation is outdated. Test environments do not match production.
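The data-quality surprise above is easy to surface early with a quick profiling pass over a sample of records. A minimal sketch - the field names are illustrative, and "empty or missing counts as missing" is an assumption about what your data looks like:

```python
# Hypothetical sketch of a quick data-quality profile: the fraction of
# records missing each required field. Field names are illustrative.

def profile_missing(records: list[dict], required: list[str]) -> dict:
    """Fraction of records where each required field is absent or empty."""
    total = len(records)
    report = {}
    for field in required:
        missing = sum(1 for record in records if not record.get(field))
        report[field] = missing / total
    return report
```

Running something like this during the data audit, not the PoC, is what turns "mostly clean" from a surprise into a line item in the plan.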
Days 36-60 - Iteration and Hardening
The PoC works, but it is not production-ready. This phase is about closing the gaps.
What happens:
- Accuracy improvement - Refine prompts, add examples, adjust model parameters, and handle the edge cases identified during the PoC. This is iterative work - expect 3-5 cycles of test, adjust, re-test.
- Error handling - Build the logic for what happens when the AI gets it wrong. When does it flag for human review? When does it retry? When does it escalate? Good error handling is the difference between a demo and a production system.
- User interface development - If the system has a user-facing component, build it now. The interface does not need to be beautiful at this stage, but it needs to be functional and usable.
- Integration development - Connect the AI system to the upstream and downstream systems it needs to work with. APIs, databases, file systems, messaging queues - whatever the architecture requires.
- Security review - Work with your security team to review the system for vulnerabilities, data handling compliance, and access controls.
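The error-handling logic described above often reduces to a small routing decision per output: accept, send for human review, retry, or escalate. A minimal sketch - the confidence thresholds and retry limit are illustrative assumptions, not recommended values:

```python
# Hypothetical sketch of output routing: decide what happens to each AI
# output based on model confidence and how many attempts have been made.
# All thresholds here are illustrative.

def route(confidence: float, attempt: int, max_retries: int = 2) -> str:
    """Route an AI output to one of: accept, human_review, retry, escalate."""
    if confidence >= 0.90:
        return "accept"        # high confidence: pass straight through
    if confidence >= 0.60:
        return "human_review"  # plausible but uncertain: a person checks it
    if attempt < max_retries:
        return "retry"         # low confidence: try again (e.g. a reworded prompt)
    return "escalate"          # retries exhausted: hand off to the fallback process
```

The exact thresholds matter less than the fact that every output has a defined path. A demo can ignore the low-confidence cases; a production system cannot.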
What you should have at the end of day 60:
- A system that meets accuracy thresholds on the full range of expected inputs
- Error handling and escalation workflows in place
- Working integrations with upstream and downstream systems
- A user interface (if applicable) that has been tested with actual users
- Security review completed or in progress
- A pilot plan for the next 30 days
What can go wrong:
- Diminishing returns on accuracy. Going from 80% to 90% accuracy might take one week. Going from 90% to 95% might take three weeks. Going from 95% to 99% might take three months. Know where the threshold of "good enough" is and do not chase perfection at the expense of time.
- User feedback reveals new requirements. When actual users see the system, they will identify workflows and needs that were not captured in discovery. Some of these are important. Some can wait for version 2. Making this distinction quickly is a skill.
- Security review blockers. If your security team has a 4-week review cycle, you needed to submit the review request at the end of the PoC phase. Late security submissions are one of the most common causes of project delays in enterprise AI.
Days 61-90 - Pilot Deployment
The system is ready for real-world testing. Now you deploy it in a controlled production environment with actual users.
What happens:
- Pilot group setup - Identify 5-15 users who will use the AI system as part of their actual work. Train them. Set up support channels. Define the feedback process.
- Staged rollout - Start with the AI in "assisted mode" where every output is reviewed by a human. Over the 30 days, gradually increase autonomy as confidence builds.
- Monitoring and adjustment - Track every metric defined in your success criteria. Make adjustments to the system based on real-world performance data.
- Feedback collection - Weekly check-ins with pilot users. What is working? What is frustrating? What edge cases are they hitting?
- Results reporting - At the end of 90 days, compile the pilot data and present results to stakeholders.
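The staged-rollout idea above can be made explicit as a schedule: the share of outputs released without human review grows over the pilot, gated on measured accuracy. A minimal sketch under assumed numbers - the thresholds, schedule, and cap are all illustrative:

```python
# Hypothetical sketch of staged autonomy during the pilot: the fraction
# of outputs auto-released grows week by week, but only while measured
# accuracy stays above a gate. All numbers are illustrative.

def autonomy_level(pilot_day: int, accuracy_so_far: float) -> float:
    """Return the fraction of outputs to auto-release (0.0 = full human review)."""
    if accuracy_so_far < 0.90:
        return 0.0                          # below the gate: review everything
    schedule = [(7, 0.0), (14, 0.25), (21, 0.50), (30, 0.75)]
    for day, level in schedule:
        if pilot_day <= day:
            return level
    return 0.75                             # cap autonomy during the pilot;
                                            # full autonomy is a post-pilot decision
```

Tying autonomy to measured accuracy rather than the calendar means a bad week automatically pulls the system back to assisted mode instead of shipping errors.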
What you should have at the end of day 90:
- 30 days of real-world performance data
- User feedback from the pilot group
- Quantified performance metrics compared to baseline
- A documented list of issues and improvements needed
- A recommendation on full deployment (go, go with changes, or no-go)
- A rollout plan if proceeding
What the Emotional Journey Looks Like
Nobody talks about this, but the emotional arc of an AI project matters as much as the technical arc. Here is what we have seen consistently:
Days 1-14: Excitement. Everyone is enthusiastic. The possibilities feel endless. Stakeholders are engaged and optimistic.
Days 15-25: Reality check. The data is messier than expected. The integration is harder than expected. The first test results are underwhelming. Doubt creeps in.
Days 25-35: First breakthrough. The PoC produces results that actually work. Not perfectly, but clearly better than random chance and visibly useful. Confidence rebuilds.
Days 36-55: The grind. Incremental improvement is hard work. Progress is real but feels slow. Stakeholders start asking "is it done yet?" This is the phase where projects need strong project management and clear communication.
Days 55-70: Momentum. The system is working well. User testing produces positive feedback. The pilot is coming together. The team starts seeing the finish line.
Days 70-90: Validation. Real users are using the system and it is delivering value. The data proves the business case. The conversation shifts from "does this work?" to "how fast can we roll this out?"
Understanding this pattern helps you prepare. When you hit the reality check at day 20, you will know it is a normal part of the process, not a sign of failure.
Realistic Expectations by Project Type
Not all AI projects follow exactly the same 90-day arc. Here is how it varies:
Document processing and extraction
- Discovery: 1-2 weeks
- PoC: 2 weeks
- Iteration: 3-4 weeks
- Pilot: 4 weeks
- These projects tend to hit milestones earlier because the success criteria are concrete and measurable.
Conversational AI assistants
- Discovery: 2 weeks
- PoC: 2-3 weeks
- Iteration: 4 weeks
- Pilot: 4-6 weeks
- The iteration phase is longer because conversation quality is subjective and edge cases are harder to enumerate.
Multi-step agentic workflows
- Discovery: 2 weeks
- PoC: 3-4 weeks
- Iteration: 4-6 weeks
- Pilot: May extend beyond 90 days
- Complex agent systems often need more than 90 days to reach pilot stage. Set expectations accordingly.
Data analysis and reporting
- Discovery: 2 weeks
- PoC: 2-3 weeks
- Iteration: 3-4 weeks
- Pilot: 3-4 weeks
- Often completes within 90 days because the outputs are easy to validate against known-correct analyses.
Five Things to Do Before Day 1
If you want your 90 days to go as smoothly as possible, get these sorted before the project officially starts:
- Identify your data sources and start the access request process. Do not wait until the project kicks off.
- Assign a business owner who has decision-making authority and can commit 20-30% of their time to the project.
- Brief your IT and security teams so they know the project is coming and can plan their review cycles.
- Set up a project collaboration space (Teams channel, Slack workspace, shared folder) so communication is centralised from day 1.
- Align stakeholders on scope before the contract is signed. "Out of scope" is much easier to say before the project starts than after.
How Team 400 Structures the First 90 Days
At Team 400, we have refined our approach to the first 90 days across dozens of AI development projects for Australian businesses. Our structure is built to produce tangible results at every milestone.
We assign a dedicated team from day 1 - not a rotating cast of consultants. The same engineers who run discovery build the PoC, iterate the system, and support the pilot. This continuity means no knowledge loss between phases and faster decision-making throughout.
Our 90-day engagements include weekly progress updates, fortnightly stakeholder reviews, and a final presentation with quantified results and a clear recommendation. No ambiguity, no open-ended research, no surprises.
If you are planning an AI project and want a realistic view of what the first 90 days will look like for your specific use case, reach out to us. We will give you an honest assessment before you commit.