The Artificial Intelligence Workshop: Build your own highly scalable and robust data storage systems that can support a variety of cutting-edge AI applications
Chinmay Arankalle, Gareth Dwyer, Bas Geerdink
- Publisher: Packt Publishing
- Publication date: 2020-08-13
- List price: $2,000
- VIP price: 5% off, $1,900
- Language: English
- Pages: 732
- Binding: Quality Paper - also called trade paper
- ISBN: 1800209843
- ISBN-13: 9781800209848
Related categories: JVM Languages, Artificial Intelligence
Overseas special-order title (must be checked out separately)
Description
Key Features
- Understand how artificial intelligence, machine learning, and deep learning are different from one another
- Discover the data storage requirements of different AI apps using case studies
- Explore popular data solutions such as Hadoop Distributed File System (HDFS) and Amazon Simple Storage Service (S3)
Book Description
Social networking sites see an average of 350 million uploads daily - a quantity impossible for humans to scan and analyze. Only AI can do this job at the required speed, and to leverage an AI application at its full potential, you need an efficient and scalable data storage pipeline. The Artificial Intelligence Infrastructure Workshop will teach you how to build and manage one.
The Artificial Intelligence Infrastructure Workshop begins by taking you through some real-world applications of AI. You'll explore the layers of a data lake and get to grips with security, scalability, and maintainability. With the help of hands-on exercises, you'll learn how to define the requirements for AI applications in your organization. This AI book will show you how to select a database for your system and run common queries on databases such as MySQL, MongoDB, and Cassandra. You'll also design your own AI trading system to get a feel for its pipeline-based architecture. As you learn to implement a deep Q-learning algorithm to play the CartPole game, you'll gain hands-on experience with PyTorch. Finally, you'll explore ways to run machine learning models in production as part of an AI application.
By the end of the book, you'll have learned how to build and deploy your own AI software at scale, using various tools, API frameworks, and serialization methods.
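The deep Q-learning exercise mentioned above revolves around a small Q-network that maps CartPole observations to action values. As a rough illustration only (this is not the book's code; the layer sizes and the epsilon-greedy helper are assumptions), a minimal PyTorch sketch might look like this:

```python
# A minimal sketch of a Q-network for CartPole in PyTorch.
# Layer sizes and the epsilon-greedy helper are illustrative assumptions,
# not the book's own implementation.
import random

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a CartPole observation (4 values) to a Q-value per action (2 actions)."""

    def __init__(self, obs_dim: int = 4, n_actions: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def select_action(q_net: QNetwork, obs: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy policy: explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.randrange(2)  # CartPole has two actions: push left or right
    with torch.no_grad():
        return int(q_net(obs).argmax().item())
```

A complete agent would typically add an experience replay buffer, a target network, and a training loop driven by the environment's reward signal.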
What you will learn
- Get to grips with the fundamentals of artificial intelligence
- Understand the importance of data storage and architecture in AI applications
- Build data storage and workflow management systems with open source tools
- Containerize your AI applications with tools such as Docker
- Discover commonly used data storage solutions and best practices for AI on Amazon Web Services (AWS)
- Use the AWS CLI and AWS SDK to perform common data tasks (see the sketch after this list)
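As a flavour of the AWS SDK tasks in the last bullet, the sketch below uses boto3 to push a local file to S3 and read it back. The bucket and key names are placeholders, and the snippet is an illustrative assumption rather than an excerpt from the book:

```python
# A minimal sketch of a common data task with the AWS SDK for Python (boto3):
# uploading a local file to S3 and reading it back.
# Bucket and key names are placeholders, not values from the book.
import boto3

s3 = boto3.client("s3")

BUCKET = "example-ai-datasets"   # hypothetical bucket name
KEY = "raw/uploads/sample.csv"   # hypothetical object key

# Upload a local file to S3.
# Roughly equivalent AWS CLI command:
#   aws s3 cp sample.csv s3://example-ai-datasets/raw/uploads/sample.csv
s3.upload_file("sample.csv", BUCKET, KEY)

# Download the object back and inspect its size.
response = s3.get_object(Bucket=BUCKET, Key=KEY)
body = response["Body"].read()
print(f"Fetched {len(body)} bytes from s3://{BUCKET}/{KEY}")
```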
Who this book is for
If you are looking to develop the data storage skills needed for machine learning and AI, and want to learn AI best practices in data engineering, this workshop is for you. Experienced programmers can use this book to advance their career in AI. Familiarity with programming, along with knowledge of exploratory data analysis and of reading and writing files with Python, will help you understand the key concepts covered.
About the Authors
Chinmay Arankalle has been working with data since day 1 of his career. In his 7 years in the field, he has designed and built production-grade data systems for telecommunication, pharmaceutical, and life science domains, where new and exciting challenges are always on the horizon. Chinmay started as a software engineer and, over time, has worked extensively on data cleaning, pre-processing, text mining, transforming, and modeling. Production-ready big data systems are his forte.
Gareth Dwyer hails from South Africa but now lives in Europe. He is a software engineer and author and is currently serving as the CTO at the largest coding education provider in Africa. Gareth is passionate about technology, education, and sharing knowledge through mentorship. He holds four university degrees in computer science and machine learning, with a specialization in natural language processing. He has worked with companies such as Amazon Web Services and has published many online tutorials as well as the book Flask by Example.
Bas Geerdink is a programmer, scientist, and IT manager. He works as a technology lead in the AI and big data domain. He has an academic background in artificial intelligence and informatics, and his research on reference architectures for big data solutions was published at the IEEE ICITST 2013 conference. Bas has a background in software development, design, and architecture, with a broad technical view ranging from C++ to Prolog to Scala. He occasionally teaches programming courses and is a regular speaker at conferences and informal meetings, where he presents a mix of market context, his own vision, business cases, architecture, and source code in an interesting way.
Kunal Gera has been involved in delivering solutions with the help of data. He has successfully implemented various projects in the fields of predictive analytics and data analysis, using the analytical skills gained over his professional experience and education.
Kevin Liao has rich experience applying data science across industries, building data science solutions for applications ranging from startup fintech products to web-scale consumer-facing web and mobile pages. Kevin started his career as a statistician/quant in a fintech startup. As data scaled, he honed his data engineering skills and established best practices for web-scale data science solutions. Even after moving to a consumer-facing product company, Kevin has continued to develop data science experience in an online environment, which requires extremely low-latency solutions.
Anand N.S. has more than two decades of technology experience, with a strong hands-on track record of applying artificial intelligence, machine learning, and data science to create measurable business outcomes. He has been granted several US patents in the areas of data science, machine learning, and artificial intelligence. Anand has a B.Tech in Electrical Engineering from IIT Madras and an MBA with a Gold Medal from IIM Kozhikode.
Table of Contents
- Data Storage Fundamentals
- Artificial Intelligence Storage Requirements
- Data Preparation
- Ethics of AI Data Storage
- Data Stores: SQL and NoSQL Databases
- Big Data File Formats
- Introduction to Analytics Engine (Spark) for Big Data
- Data System Design Examples
- Workflow Management for AI
- Introduction to Data Storage on Cloud Services (AWS)
- Building an Artificial Intelligence Algorithm
- Productionizing Your AI Applications