Three Implications of AI for the Enterprise

Learn Aster
Community Manager

From self-driving cars to photo recognition, AI is becoming an increasing presence not just in our headlines, but also in our lives. Yet depending on the business problem and data involved, there are challenges to applying AI in an enterprise context.


Enterprise AI is a different game with different rules. The differences, which I’ll cover below, stem from both the kinds of data available in the enterprise and the complexity of the operations in which AI will be used. Here are three implications to consider for using AI in an enterprise context.




Implication 1: Consider domains where AI has been proven  

AI has had some spectacular successes across a broad range of domains: image recognition, object detection, diagnostic image analysis, autonomous driving, machine translation, sentiment analysis, speech recognition, robotics control, and, of course, Go and chess. Notably, all of these breakthroughs are in domains that humans are quite good at.  This makes sense: deep learning networks are inspired by the architecture of the human brain and, in the case of computer vision, by specific structures within the visual cortex. All of these examples represent problems with a hierarchical structure that is amenable to increasingly abstract representation and understanding of the domain. These domains are also associated with extensive publicly available research, code, and, in many cases, pre-trained models.


On the other hand, the application of AI to domains outside of those listed above is less well developed. Think of recommender systems, fraud detection, or preventative maintenance models. AI has been applied successfully to each of these domains, but the results are more incremental and the research is much less publicly available. In part this reflects the fact that these domains involve closely guarded enterprise data that cannot readily be shared with the broader community; in part it reflects the nature of the data itself.


Now, the good news is that many enterprises have problems that involve vision, language, or robotics control. Whether it’s computer vision on the factory floor or in inventory management systems, or natural language processing (NLP) for compliance reporting or sentiment analysis, companies can directly leverage an enormous body of research and experience. For other domains, those lacking established research, pre-trained models, published papers, or notable public success stories, AI should be viewed as part of a continuum with other machine learning and analytical techniques.

Implication 2: AI isn’t magic

Viewing AI as an extension of traditional analytics and machine learning for domains with unproven track records will help organizations avoid ascribing a kind of magic to AI: just feed in enough data and you will get good results. If you have this kind of magical thinking about AI, then drop a rubber duck in a stream and try to get AI to predict where it will end up. You could train that model for the next thousand years and you still won’t get good results. Without modelling the individual molecules that make up the stream, the process is fundamentally stochastic; there is nothing AI can do.
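The rubber-duck point can be made concrete with a toy simulation. The sketch below (illustrative only; the duck, steps, and seeds are all hypothetical) models the duck’s drift as a simple random walk. Because each step is independent noise, no model trained on past walks can predict a future endpoint better than chance, no matter how much data it sees:

```python
import random

def float_duck(steps=100, seed=None):
    """Simulate the duck's lateral drift as a simple random walk."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        # Each step is independent noise: the "signal" an AI model would
        # need simply isn't present in the observable data.
        position += rng.choice([-1, 1])
    return position

# Identical starting conditions, different outcomes run to run:
endpoints = {float_duck(steps=100, seed=s) for s in range(20)}
```

Running this repeatedly yields a spread of endpoints from the same starting point, which is exactly why the prediction problem is hopeless: the variance is irreducible, not a data-volume problem.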


AI isn’t a blanket solution for all of the problems enterprises want to use it for. Just because you’re able to classify images doesn’t mean you’re going to be able to perfectly forecast the amount of soda consumed in the Northwestern US in November.


In assessing the best use cases for AI in your business, look closely at the problem spaces you have available. Do you have any problems in areas for which there is available research?  Do you have problems where you have already been applying machine learning?  These are good candidates for applying AI.


You have probably also heard that AI is very data hungry.  The AI breakthroughs discussed above involved truly massive data sets: millions of images in the case of computer vision models.  It’s impossible to predict exactly how much data you’ll need to make AI successful, but typically the smaller the data set, the more likely you are to be better served with more traditional analytical techniques. Similarly, you have probably also heard that AI and deep learning cut down on the need for manual feature engineering. This is certainly true in the breakthrough domains: computer vision models just look at pixels, NLP models just look at words (or sometimes just characters).  The case with enterprise data is less clear. The data certainly needs to be clean and integrated, categorical features need to be encoded, and time series data needs to be dealt with (sometimes requiring manual feature engineering).  Overall, take the feature engineering claims of AI with a grain of salt for unproven domains and expect to have fairly complex data pipelines.
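To make the feature-engineering point concrete, here is a minimal sketch of the kind of manual work enterprise data often still needs, even in an AI project: one-hot encoding a categorical column and deriving lag features from a time series. The function names and example values are hypothetical, not from any particular library or dataset:

```python
def one_hot(values):
    """Encode a list of categorical values as one-hot vectors."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    encoded = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1
        encoded.append(row)
    return categories, encoded

def lag_features(series, lags=(1, 2)):
    """Turn a univariate series into (features, target) pairs of lagged values."""
    max_lag = max(lags)
    rows = []
    for t in range(max_lag, len(series)):
        rows.append(([series[t - k] for k in lags], series[t]))
    return rows

# Hypothetical enterprise data: a channel column and a daily sales series.
categories, encoded = one_hot(["retail", "wholesale", "retail"])
pairs = lag_features([10, 12, 13, 15, 14])
```

In a breakthrough domain like vision, the network learns representations straight from pixels; with tabular enterprise data, steps like these still sit in the pipeline before any model, deep or otherwise, sees the data.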

Implication 3: Try multiple AI experiments to find quick wins

Given the number of unknowns involved in assessing whether AI is right for your use cases, it’s better to start by casting a wide net. Apply AI to a larger number of problems (ten, for instance) and see which ones produce the best results. Such an approach means you’re not forcing a square peg into a round hole when AI isn’t the right approach for a particular issue. Additionally, with this strategy, you can ensure that you get some results relatively quickly. You’ll then have the patience to create the right AI-based models for use cases that might take longer (and the latitude to rule out AI where it isn’t the right solution to the problem at hand).


Implementing AI involves many considerations. Many companies have never put machine learning models into production, and so jumping into the deep end with AI will mean they’ll soon find themselves in over their heads. It’s not that AI is more difficult than other machine learning, but deploying, monitoring, versioning, and tracking the performance of models is complicated, and if companies do not have experience with it, their AI implementation will not be smooth. A deliberate approach, in which AI is applied gradually to a number of use cases, smooths the transition.
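One small piece of that production plumbing can be sketched as follows. This is a hypothetical toy, not a real registry product: it simply tags every prediction with the model version that served it, so performance can be tracked and regressions traced back later, one of the bookkeeping tasks the paragraph above alludes to:

```python
import datetime

class ModelRegistry:
    """Keep versioned models and log which version served each request."""

    def __init__(self):
        self.models = {}          # version -> predict function
        self.prediction_log = []  # audit trail for monitoring

    def register(self, version, predict_fn):
        self.models[version] = predict_fn

    def predict(self, version, features):
        result = self.models[version](features)
        # Record version, inputs, output, and timestamp for later analysis.
        self.prediction_log.append({
            "version": version,
            "features": features,
            "result": result,
            "at": datetime.datetime.utcnow().isoformat(),
        })
        return result

registry = ModelRegistry()
registry.register("v1", lambda x: sum(x) / len(x))  # stand-in "model"
score = registry.predict("v1", [1.0, 2.0, 3.0])
```

Real deployments add much more (rollbacks, drift detection, access control), which is exactly why starting gradually, with a few use cases, is the safer path.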




Ben MacKenzie, based in Ottawa, Canada, has been a Principal Architect with Think Big Analytics, a Teradata company, for the last six years, and has served as the Global Engineering Lead for the last two. Ben has been focused on building scalable, open-source-based analytics solutions across a variety of industries, and in his capacity as Engineering Lead has helped align Think Big and its customers around a complex technology landscape. Ben has an extensive background in AI and is excited to be part of the current deep-learning-inspired AI renaissance in his new role as Director of AI Engineering. In addition to strong engineering and analytical skills, Ben has a proven track record of employing cutting-edge research from the deep learning community to build customer solutions.