4 Challenges for Amazon to Address at AWS Re:Invent 2018: Beating Google and Microsoft Azure in the AI Race

With major customers migrating to Google Cloud for "better data analytics and artificial intelligence APIs" and to Microsoft Azure for "better DevOps pipeline integration", Amazon needs to use this year's Re:Invent conference to claim leadership in AI and data analytics. For a deeper dive into artificial intelligence and machine learning, visit www.ematop3.com/ai.

Key AI/ML Challenges to Address at Re:Invent 2018

Compliant Data Accessibility – Data Still Is the Critical Bottleneck

95% of AI/ML projects fail at the proof-of-concept stage, where business analysts and developers cannot pry the structured and unstructured data required for a compelling proof of concept out of the hands of the data owners. These data owners, understandably, want to know why they should risk compliance problems down the road. AWS needs to help data owners, developers, and business analysts eliminate this catch-22 by enabling compliant and continuously tracked use of company data for AI projects.
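What "continuously tracked use" could look like in practice: a hypothetical access layer that records who pulled which dataset, when, and for what purpose, so the data owner always has an audit trail to point to. This is an illustrative sketch, not an AWS service; all names (`TrackedDataAccess`, `fetch`) are invented for the example.

```python
import datetime

class TrackedDataAccess:
    """Hypothetical wrapper that logs every dataset access for compliance review."""

    def __init__(self, datasets):
        self.datasets = datasets      # name -> records
        self.audit_log = []           # append-only usage trail

    def fetch(self, dataset, requester, purpose):
        # Record who accessed which dataset, when, and why,
        # before handing the data to the analyst or developer.
        self.audit_log.append({
            "dataset": dataset,
            "requester": requester,
            "purpose": purpose,
            "timestamp": datetime.datetime.utcnow().isoformat(),
        })
        return self.datasets[dataset]

access = TrackedDataAccess({"churn_labels": [{"customer": 1, "churned": False}]})
records = access.fetch("churn_labels", requester="analyst_a", purpose="churn PoC")
print(len(access.audit_log))  # every access leaves a trail the data owner can review
```

The point is that compliance becomes a property of the platform rather than a per-project negotiation: the data owner grants access once, and the trail answers the "why should I risk it" question after the fact.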

Take a look at IBM Cloud Private for Data (or watch the 4-minute demo below).

WYSIWYG AI – Make AI Accessible to Everyone

AWS already offers great APIs and infrastructure platforms for developers and platform architects. But why are today's full-stack software developers not widely embracing Amazon SageMaker, Rekognition, Lex, or Polly to replace rule-based code with better-performing, more flexible AI/ML models? Answer: it is still far too difficult for developers or business analysts to experiment with real-life data and develop a "feel" for building useful AI/ML models. Over the past 12 months, Amazon has not taken this challenge seriously, but I am hoping this will change at #ReInvent 2018 here in Vegas.
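For context, the API side really is the easy part. A minimal sketch of swapping a rule-based image check for Rekognition labels: the filtering function below is runnable as-is, while the actual `boto3` call is commented out because it needs AWS credentials; the bucket and file names are placeholders, and the sample response only mimics the `DetectLabels` response shape.

```python
# Sketch: replacing a rule-based image check with Amazon Rekognition labels.

def confident_labels(response, min_confidence=80.0):
    """Keep only labels Rekognition reported above the confidence threshold."""
    return [label["Name"]
            for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

# In a real deployment (requires AWS credentials; bucket/key are placeholders):
# import boto3
# rekognition = boto3.client("rekognition")
# response = rekognition.detect_labels(
#     Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#     MaxLabels=10,
#     MinConfidence=70,
# )

# Simulated response for illustration only
sample_response = {"Labels": [
    {"Name": "Forklift", "Confidence": 97.2},
    {"Name": "Person", "Confidence": 55.1},
]}
print(confident_labels(sample_response))  # ['Forklift']
```

The hard part is everything around this call: getting representative data in front of the developer and letting them build that "feel" for what the model actually returns.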

DataRobot offers an interesting WYSIWYG solution (watch the 1:31 video).

Self-Tuning AI – Reduce Cost and Risk

Successfully creating AI/ML models is still as much an art as a science. Modern GPUs let us train models across large numbers of parallel cores, but that addresses only a small part of the problem. Selecting sub-optimal algorithms and model parameters quickly leads to large AWS bills with nothing to show for them. I am curious to see how AWS will enhance its existing, fairly basic automatic hyperparameter tuning to make model training much more accessible to developers and business analysts.
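To make the cost argument concrete, here is a toy random-search tuner. The `validation_loss` function is a stand-in for a real (and expensive) training run, with an assumed optimum near `learning_rate=0.1, depth=6`; in reality each evaluation is a GPU job you pay for, which is exactly why smarter self-tuning matters.

```python
import random

def validation_loss(learning_rate, depth):
    """Stand-in for a real training run; assumes loss bottoms out near lr=0.1, depth=6."""
    return (learning_rate - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

def random_search(trials=50, seed=42):
    """Naive tuning: sample random hyperparameters, keep the best result.
    Every iteration of this loop would be a billable training job."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {"learning_rate": rng.uniform(0.001, 1.0),
                  "depth": rng.randint(2, 12)}
        loss = validation_loss(**params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

best_loss, best_params = random_search()
print(best_params)  # hopefully somewhere near lr=0.1, depth=6
```

Random search is the baseline; self-tuning systems (Bayesian optimization, early stopping of hopeless runs) aim to reach a comparable loss in far fewer paid trials.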

Take a look at H2O.ai to see self-tuning AI in action (watch the 5-minute demo).

DevOps Integration – AI/ML Pipeline and Data Pipeline

Enhancing applications with AI/ML requires significant experimentation, fine-tuning, and testing. Whether this process will yield a usable AI/ML model that actually enhances the release is hard to know at the planning stage. Therefore, a functioning AI model should generally not be a release gate, and data pipelines and AI/ML pipelines should generally stay separate from application release pipelines. At Re:Invent 2018, I expect to see AWS services that bring consistency in tooling and processes for AI/ML engineers and full-stack developers. You could say we need AI/ML models to become first-class citizens of DevOps.
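The decoupling argument above can be sketched in a few lines: the release pipeline deploys the newest *approved* model from a registry, so an in-flight experiment that fails never blocks an application release. The registry structure and function names here are illustrative assumptions, not any AWS API.

```python
# Sketch of decoupled pipelines: the app release pulls the latest approved
# model from a registry; failed or in-flight ML experiments are ignored.

registry = [
    {"version": 1, "status": "approved"},
    {"version": 2, "status": "approved"},
    {"version": 3, "status": "experiment-failed"},  # latest ML run failed
]

def model_for_release(registry):
    """Release-pipeline side: deploy the newest approved model, if any."""
    approved = [m for m in registry if m["status"] == "approved"]
    return max(approved, key=lambda m: m["version"]) if approved else None

print(model_for_release(registry)["version"])  # 2 -- release ships despite the failed run
```

The AI/ML pipeline promotes models into the registry on its own cadence; the release pipeline only ever reads from it. That is what "first-class citizens for DevOps" would mean in practice.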

An excellent presentation on how to integrate AI/ML with DevOps.
