Pretrain Vision and Large Language Models in Python
(eBook)

Contributors:
Webber, Emily, author.
Published:
[United States] : Packt Publishing, 2023.
Format:
eBook
Content Description:
1 online resource (258 pages)

Description

Master the art of training vision and large language models with conceptual fundamentals and industry-expert guidance. Learn about AWS services and design patterns, with relevant coding examples.

Key Features
- Learn to develop, train, tune, and apply foundation models with optimized end-to-end pipelines
- Explore large-scale distributed training for models and datasets with AWS and SageMaker examples
- Evaluate, deploy, and operationalize your custom models with bias detection and pipeline monitoring

Book Description
Foundation models have forever changed machine learning. From BERT to ChatGPT, CLIP to Stable Diffusion, when billions of parameters are combined with large datasets and hundreds to thousands of GPUs, the result is nothing short of record-breaking. The recommendations, advice, and code samples in this book will help you pretrain and fine-tune your own foundation models from scratch on AWS and Amazon SageMaker, while applying them to hundreds of use cases across your organization.

With advice from seasoned AWS and machine learning expert Emily Webber, this book helps you learn everything you need to go from project ideation to dataset preparation, training, evaluation, and deployment for large language, vision, and multimodal models. With step-by-step explanations of essential concepts and practical examples, you'll go from mastering the concept of pretraining to preparing your dataset and model, configuring your environment, training, fine-tuning, evaluating, deploying, and optimizing your foundation models. You will learn how to apply the scaling laws to distributing your model and dataset over multiple GPUs, remove bias, achieve high throughput, and build deployment pipelines. By the end of this book, you'll be well equipped to embark on your own project to pretrain and fine-tune the foundation models of the future.

What you will learn
- Find the right use cases and datasets for pretraining and fine-tuning
- Prepare for large-scale training with custom accelerators and GPUs
- Configure environments on AWS and SageMaker to maximize performance
- Select hyperparameters based on your model and constraints
- Distribute your model and dataset using many types of parallelism
- Avoid pitfalls with job restarts, intermittent health checks, and more
- Evaluate your model with quantitative and qualitative insights
- Deploy your models with runtime improvements and monitoring pipelines

Who this book is for
If you're a machine learning researcher or enthusiast who wants to start a foundation modelling project, this book is for you. Applied scientists, data scientists, machine learning engineers, solution architects, product managers, and students will all benefit from this book. Intermediate Python is a must, along with introductory concepts of cloud computing. A strong understanding of deep learning fundamentals is needed, while advanced topics will be explained. The content covers advanced machine learning and cloud techniques, explaining them in an actionable, easy-to-understand way.
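As a concrete illustration of two topics the blurb names, scaling laws and distributed training on SageMaker, here is a minimal sketch of how a compute-optimal token budget and a multi-node SageMaker training launch might look in Python. This is not code from the book: the entry-point script, S3 path, and hyperparameter names are hypothetical placeholders, and the roughly 20-tokens-per-parameter rule of thumb follows the Chinchilla scaling-law result (Hoffmann et al., 2022), not anything this record specifies.

```python
# Minimal sketch, not from the book. Script name, S3 path, and
# hyperparameter names are hypothetical; the ~20 tokens/parameter budget
# follows the Chinchilla scaling-law result (Hoffmann et al., 2022).
import sagemaker
from sagemaker.pytorch import PyTorch

# Compute-optimal token budget for a 7B-parameter model: ~140B tokens.
params = 7e9
token_budget = int(20 * params)
print(f"Compute-optimal budget: ~{token_budget / 1e9:.0f}B tokens")

# Launch a two-node distributed pretraining job via torchrun on SageMaker.
estimator = PyTorch(
    entry_point="pretrain.py",            # hypothetical training script
    role=sagemaker.get_execution_role(),  # IAM role the job runs under
    instance_count=2,                     # two GPU nodes
    instance_type="ml.p4d.24xlarge",      # 8x A100 GPUs per node
    framework_version="2.0",
    py_version="py310",
    distribution={"torch_distributed": {"enabled": True}},
    hyperparameters={"max_tokens": token_budget},
)
estimator.fit({"train": "s3://my-bucket/pretraining-data/"})  # hypothetical path
```

The book covers how to make these choices (instance types, parallelism strategies, and hyperparameters) in depth; the sketch only shows the shape of the workflow.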

More Details

Language:
English
ISBN:
9781804612545, 1804612545

Notes

Restrictions on Access
Instant title available through hoopla.
System Details
Mode of access: World Wide Web.

Citations

APA Citation (style guide)

Webber, E. (2023). Pretrain Vision and Large Language Models in Python. Packt Publishing.

Chicago / Turabian - Author Date Citation (style guide)

Webber, Emily. 2023. Pretrain Vision and Large Language Models in Python. Packt Publishing.

Chicago / Turabian - Humanities Citation (style guide)

Webber, Emily. Pretrain Vision and Large Language Models in Python. Packt Publishing, 2023.

MLA Citation (style guide)

Webber, Emily. Pretrain Vision and Large Language Models in Python. Packt Publishing, 2023.

Note! Citation formats are based on standards as of July 2022. Citations contain only title, author, edition, publisher, and year published. Citations should be used as a guideline and should be double checked for accuracy.

Staff View

Grouped Work ID:
4b9f3619-5609-fe11-a426-6b6b2318caab

Hoopla Extract Information

hooplaId: 17518295
title: Pretrain Vision And Large Language Models In Python
language: ENGLISH
kind: EBOOK
series:
season:
publisher: Packt Publishing
price: 1.35
active: 1
pa:
profanity:
children:
demo:
duration:
rating:
abridged:
fiction:
purchaseModel: INSTANT
dateLastUpdated: Nov 20, 2024 06:44:43 PM

Record Information

Last File Modification Time: May 02, 2025 11:09:50 PM
Last Grouped Work Modification Time: Jul 03, 2025 06:11:02 PM

MARC Record

LEADER04625nam a22004335i 4500
001MWT17518295
003MWT
00520250418112354.1
006m     o  d        
007cr cn|||||||||
008250418s2023    xxu    eo     000 0 eng d
020 |a 9781804612545 |q (electronic bk.)
020 |a 1804612545 |q (electronic bk.)
02842 |a MWT17518295
029 |a https://d2snwnmzyr8jue.cloudfront.net/dra_9781804612545_180.jpeg
037 |a 17518295 |b Midwest Tape, LLC |n http://www.midwesttapes.com
040 |a Midwest |e rda
099 |a eBook hoopla
1001 |a Webber, Emily, |e author.
24510 |a Pretrain Vision and Large Language Models in Python |h [electronic resource] / |c Emily Webber.
2641 |a [United States] : |b Packt Publishing, |c 2023.
2642 |b Made available through hoopla
300 |a 1 online resource (258 pages)
336 |a text |b txt |2 rdacontent
337 |a computer |b c |2 rdamedia
338 |a online resource |b cr |2 rdacarrier
347 |a text file |2 rda
506 |a Instant title available through hoopla.
538 |a Mode of access: World Wide Web.
6500 |a Artificial intelligence.
6500 |a Computer vision.
6500 |a Computers.
6500 |a Natural language processing (Computer science).
6500 |a Pattern recognition systems.
6500 |a Electronic books.
7102 |a hoopla digital.
85640 |u https://www.hoopladigital.com/title/17518295?utm_source=MARC&Lid=hh4435 |z Instantly available on hoopla.
85642 |z Cover image |u https://d2snwnmzyr8jue.cloudfront.net/dra_9781804612545_180.jpeg