This book is your comprehensive guide to creating powerful, end-to-end deep learning workflows on Amazon Web Services (AWS). It explores how to integrate essential big data tools and technologies, such as PySpark, PyTorch, TensorFlow, Airflow, EC2, and S3, to streamline the development, training, and deployment of deep learning models.

Starting from the importance of scaling advanced machine learning models, the book shows how to leverage AWS's robust infrastructure and comprehensive suite of services. It guides you through the setup and configuration needed to get the most out of deep learning technologies. You will gain in-depth knowledge of building deep learning pipelines, including data preprocessing, feature engineering, model training, evaluation, and deployment.

The book provides insights into setting up an AWS environment, configuring the necessary tools, and using PySpark for distributed data processing (a minimal sketch follows this description). You will also work through hands-on tutorials for PyTorch and TensorFlow, mastering their roles in building and training neural networks. Additionally, you will learn how Apache Airflow can orchestrate complex workflows and how Amazon S3 and EC2 support model deployment at scale.

By the end of this book, you will be equipped to tackle real-world challenges and seize opportunities in the rapidly evolving field of deep learning with AWS, with the insights and skills needed to drive innovation and maintain a competitive edge in today's data-driven landscape.

What You Will Learn
- Maximize AWS services for scalable and high-performance deep learning architectures
- Harness the capabilities of PyTorch and TensorFlow for advanced neural network development
- Utilize PySpark for efficient distributed data processing on AWS
- Orchestrate complex workflows with Apache Airflow for seamless data processing, model training, and deployment
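To make the PySpark-on-S3 workflow described above concrete, here is a minimal, illustrative sketch of distributed preprocessing against Amazon S3. It is not taken from the book: the bucket name, object keys, and column names are hypothetical placeholders, and a real job would additionally need AWS credentials and the hadoop-aws connector on the Spark classpath.

```python
# Minimal sketch: reading a dataset from Amazon S3 with PySpark for
# distributed preprocessing. "example-bucket", the object keys, and the
# "timestamp" column are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-preprocessing").getOrCreate()

# Load a CSV stored in S3 into a distributed DataFrame.
df = spark.read.csv("s3a://example-bucket/raw/events.csv",
                    header=True, inferSchema=True)

# A simple feature engineering step, executed in parallel across the cluster.
df = df.withColumn("event_hour", F.hour(F.col("timestamp")))

# Write the processed data back to S3 in a columnar format for training.
df.write.mode("overwrite").parquet("s3a://example-bucket/processed/events/")
```

Writing the result back as Parquet keeps the processed data columnar and splittable, which suits the distributed training and deployment steps that follow in such a pipeline.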
Migrate from pandas and scikit-learn to PySpark to handle vast amounts of data and achieve faster processing times. This book shows you how to make that transition by adapting your existing skills and leveraging the similarities in syntax, functionality, and interoperability between these tools.

Distributed Machine Learning with PySpark offers a roadmap for data scientists considering a move from small-data libraries (pandas/scikit-learn) to big data processing and machine learning with PySpark. You will learn to translate Python code from pandas and scikit-learn to PySpark (illustrated in the sketch below) to preprocess large volumes of data and to build, train, test, and evaluate popular machine learning algorithms such as linear and logistic regression, decision trees, random forests, support vector machines, Naïve Bayes, and neural networks.

After completing this book, you will understand the foundational concepts of data preparation and machine learning and will have the skills necessary to apply these methods using PySpark, the industry standard for building scalable ML data pipelines.

What You Will Learn
- Master the fundamentals of supervised learning, unsupervised learning, NLP, and recommender systems
- Understand the differences between PySpark, scikit-learn, and pandas
- Perform linear regression, logistic regression, and decision tree regression with pandas, scikit-learn, and PySpark
- Distinguish between the pipelines of PySpark and scikit-learn

Who This Book Is For
Data scientists, data engineers, and machine learning practitioners who have some familiarity with Python but are new to distributed machine learning and the PySpark framework.
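As a flavor of the pandas/scikit-learn-to-PySpark translation described above, here is a minimal sketch of logistic regression in Spark ML, with rough scikit-learn equivalents noted in the comments. The file path and the column names ("age", "income", "label") are hypothetical placeholders, not examples from the book.

```python
# Illustrative sketch of translating a scikit-learn workflow to PySpark,
# using logistic regression. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sklearn-to-pyspark").getOrCreate()

# pandas equivalent: df = pd.read_csv("data.csv")
df = spark.read.csv("data.csv", header=True, inferSchema=True)

# scikit-learn takes a feature matrix X; Spark ML expects a single
# vector column, assembled here from the raw feature columns.
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

# scikit-learn equivalent: LogisticRegression().fit(X_train, y_train)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)

# scikit-learn equivalent: model.predict(X_test)
predictions = model.transform(test)
predictions.select("label", "prediction").show(5)
```

The main conceptual shift is that Spark ML estimators consume a single assembled vector column rather than a separate feature matrix, which is why the VectorAssembler step has no direct pandas or scikit-learn counterpart.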