Project Management for AI

Managing AI projects requires a different approach than traditional IT project management. What are these differences and how can you manage an AI project successfully?


In 2019, around 85% of AI projects ultimately failed, and 96% of organizations reported problems with data quality, data labeling, and model trust. Senior management was also reported to lack an understanding of artificial intelligence and the value it could bring.

Today, AI (and AI projects) are still in the early stages of deployment. Companies that use AI mostly do so through pre-built systems from external vendors; it is the vendors, not their client companies, who developed the AI.

In the future, however, more and more companies will find reasons to develop their own internal AI, which means they will need a project management approach that works for AI.

How is an AI project different from traditional projects?

Traditional project management, even when done with methodologies like Agile, defines project success by the software that is produced and follows a well-understood process. Even when development is not linear, as in Agile, the basic steps are still define, design, develop, test and deploy. The data these applications operate on is almost always structured system-of-record data that has already been quality checked and is mature in form and substance.

Because the data that traditional software development operates on is reliable, and everyone understands the development steps used in the project, there is considerably less uncertainty in traditional software development projects. That makes it easier to attach credible project timelines based on past project history.

Unfortunately, AI projects don’t have that same stability, and it’s not as easy to assign strict deadlines for project completion.


Managing uncertainty in AI projects

There is no absolute “end” to an AI project, unless it’s a project where you pull the plug.

If you’re an AI project manager, you have to live with this “endless” reality, as do your project’s management and sponsors.

Why is there no end?

Because the questions AI asks of the data it analyzes depend on the data it operates on, and that data is constantly changing. As you add new data sources, the results change. The AI itself also contains machine learning (ML) that recognizes patterns in the data and learns from them, and that learning can change the results as well.

Your management and users should understand (and expect) that as the data changes, the results can change too. Accepting that uncertainty is part of the evolution of an AI system.

Define the deliverable of your AI project

At some point, from a project perspective, an AI project should be considered complete.

The goal of most AI projects is for the AI's results to agree at least 95% of the time with what subject matter experts would conclude. Once this 95% threshold has been reached, the system is considered accurate enough to put into production, and it is at this stage that the project should be declared complete.
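To make that threshold concrete, here is a minimal sketch in Python of how a team might measure agreement between the AI's outputs and subject matter expert (SME) conclusions and compare it against the 95% bar. The function name, labels, and data are hypothetical illustrations, not a prescribed implementation.

```python
# Minimal sketch (hypothetical names and made-up data): checking whether an AI
# system's outputs agree with SME conclusions at least 95% of the time.

def agreement_rate(model_outputs, sme_labels):
    """Fraction of cases where the model's conclusion matches the SME's."""
    if len(model_outputs) != len(sme_labels):
        raise ValueError("Each model output needs a matching SME label")
    matches = sum(1 for m, s in zip(model_outputs, sme_labels) if m == s)
    return matches / len(sme_labels)

# Example with invented review results
model_outputs = ["approve", "deny", "approve", "approve", "deny"]
sme_labels    = ["approve", "deny", "approve", "deny",    "deny"]

rate = agreement_rate(model_outputs, sme_labels)
THRESHOLD = 0.95  # the completion bar discussed above

print(f"Agreement with SMEs: {rate:.0%}")
if rate >= THRESHOLD:
    print("Threshold met -- the project can be declared complete.")
else:
    print("Below threshold -- keep iterating before go-live.")
```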

This does not mean that all work on the resulting application or AI system is complete. There will be “drift” over time, which can cause the AI to lose some of its accuracy. At those points the AI will need to be recalibrated to deliver full quality again, but that is software maintenance.
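As an illustration of what that maintenance can look like, the sketch below periodically re-measures agreement on fresh, expert-reviewed samples and flags the system for recalibration when accuracy falls below the go-live bar. The function name, tolerance, and dates are assumptions made for the example, not part of any standard tool.

```python
# Hypothetical sketch: flagging drift by re-checking agreement with SMEs
# on fresh samples after deployment. Names, numbers, and dates are illustrative.

from datetime import date

GO_LIVE_THRESHOLD = 0.95   # accuracy bar the project met at completion
DRIFT_TOLERANCE = 0.02     # how far below the bar we allow before recalibrating

def check_for_drift(recent_agreement: float, check_date: date) -> bool:
    """Return True if the system should be recalibrated."""
    drifted = recent_agreement < (GO_LIVE_THRESHOLD - DRIFT_TOLERANCE)
    status = "RECALIBRATE" if drifted else "OK"
    print(f"{check_date}: agreement={recent_agreement:.0%} -> {status}")
    return drifted

# Example monthly checks with made-up numbers
if check_for_drift(0.96, date(2021, 6, 1)):
    pass  # still within tolerance, no action needed
if check_for_drift(0.91, date(2021, 9, 1)):
    # In practice this would kick off retraining/recalibration as maintenance work
    print("Schedule recalibration with fresh, SME-labeled data.")
```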


Do AI project deliverables always go as planned?

The answer is a resounding no.

First, sometimes the data used by the AI is not properly prepared, especially when new, unknown data sources are introduced. Dirty data will skew AI results.

Second, if your business case changes (along with the value users want from it), the AI will no longer deliver what the business wants. Finally, there are cases where AI projects simply don't work, no matter how hard you try. This possibility should be discussed up front with management, and everyone should be ready to pull the plug as soon as an AI project shows it cannot succeed.