- Level: Intermediate
- Duration: 17 hours
- Offered by: DeepLearning.AI

About
In Generative AI with Large Language Models (LLMs), you'll learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.
By taking this course, you'll learn to:
- Deeply understand generative AI, describing the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection, to performance evaluation and deployment
- Describe in detail the transformer architecture that powers LLMs, how they're trained, and how fine-tuning enables LLMs to be adapted to a variety of specific use cases
- Use empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements
- Apply state-of-the-art training, tuning, inference, tools, and deployment methods to maximize the performance of models within the specific constraints of your project
- Discuss the challenges and opportunities that generative AI creates for businesses after hearing stories from industry researchers and practitioners
Developers who have a good foundational understanding of how LLMs work, as well as the best practices behind training and deploying them, will be able to make good decisions for their companies and more quickly build working prototypes. This course will support learners in building practical intuition about how to best utilize this exciting new technology.
This is an intermediate course, so you should have some experience coding in Python to get the most out of it. You should also be familiar with the basics of machine learning, such as supervised and unsupervised learning, loss functions, and splitting data into training, validation, and test sets. If you have taken the Machine Learning Specialization or Deep Learning Specialization from DeepLearning.AI, you'll be ready to take this course and dive deeper into the fundamentals of generative AI.
Modules
Introduction to LLMs and the generative AI project lifecycle
2
External Tool
- Intake Survey
- Lab 1 - Generative AI Use Case: Summarize Dialogue
12
Videos
- Course Introduction
- Introduction - Week 1
- Generative AI & LLMs
- LLM use cases and tasks
- Text generation before transformers
- Transformers architecture
- Generating text with transformers
- Prompting and prompt engineering
- Generative configuration
- Generative AI project lifecycle
- Introduction to AWS labs
- Lab 1 walkthrough
4
Readings
- Contributor Acknowledgments
- [IMPORTANT] Have questions, issues or ideas? Join our Forum!
- Transformers: Attention is all you need
- [IMPORTANT] Guidelines before you start the labs in this course
LLM pre-training and scaling laws
1
Assignment
- Week 1 quiz
5
Videos
- Pre-training large language models
- Computational challenges of training LLMs
- Optional video: Efficient multi-GPU compute strategies
- Scaling laws and compute-optimal models
- Pre-training for domain adaptation
2
Readings
- Domain-specific training: BloombergGPT
- Week 1 resources
Lecture Notes (Optional)
1
Readings
- Lecture Notes Week 1
Fine-tuning LLMs with instruction
6
Videos
- Introduction - Week 2
- Instruction fine-tuning
- Fine-tuning on a single task
- Multi-task instruction fine-tuning
- Model evaluation
- Benchmarks
1
Readings
- Scaling instruct models
Parameter efficient fine-tuning
1
Assignment
- Week 2 quiz
1
External Tool
- Lab 2 - Fine-tune a generative AI model for dialogue summarization
4
Videos
- Parameter efficient fine-tuning (PEFT)
- PEFT techniques 1: LoRA
- PEFT techniques 2: Soft prompts
- Lab 2 walkthrough
1
Readings
- Week 2 Resources
Lecture Notes (Optional)
1
Readings
- Lecture Notes Week 2
Reinforcement learning from human feedback
1
External Tool
- Lab 3 - Fine-tune FLAN-T5 with reinforcement learning to generate more-positive summaries
10
Videos
- Introduction - Week 3
- Aligning models with human values
- Reinforcement learning from human feedback (RLHF)
- RLHF: Obtaining feedback from humans
- RLHF: Reward model
- RLHF: Fine-tuning with reinforcement learning
- Optional video: Proximal policy optimization
- RLHF: Reward hacking
- Scaling human feedback
- Lab 3 walkthrough
2
Readings
- KL divergence
- [IMPORTANT] Reminder about end of access to Lab Notebooks
LLM-powered applications
1
Assignment
- Week 3 Quiz
9
Videos
- Model optimizations for deployment
- Generative AI Project Lifecycle Cheat Sheet
- Using the LLM in applications
- Interacting with external applications
- Helping LLMs reason and plan with chain-of-thought
- Program-aided language models (PAL)
- ReAct: Combining reasoning and action
- LLM application architectures
- Optional video: AWS Sagemaker JumpStart
2
Readings
- ReAct: Reasoning and action
- Week 3 resources
Course conclusion and ongoing research
2
Videos
- Responsible AI
- Course conclusion
Lecture Notes (Optional)
1
Readings
- Lecture Notes Week 3
Acknowledgments
2
Readings
- Acknowledgments
- (Optional) Opportunity to Mentor Other Learners
Auto Summary
"Generative AI with Large Language Models" is a Coursera course focused on Data Science & AI, taught by industry experts. It delves into generative AI fundamentals, the LLM lifecycle, transformer architecture, and empirical scaling laws. Ideal for developers with Python and basic machine learning knowledge, this intermediate course spans 17 hours and offers Starter, Paid, and Professional subscription options.

Instructors
- Chris Fregly
- Antje Barth
- Shelbee Eigenbrode
- Mike Chambers