CS489: Generative AI & Large Language Models
Mon-Wed: 2pm to 4pm
CS489 dives deep into the world of Generative AI, focusing on Large Language Models such as GPT and their applications. The course gives students a robust understanding of Generative AI and prepares them for advanced studies and industry challenges.
ID | Lectures | Hands-on Sessions | Readings / Assessments |
---|---|---|---|
NATURAL LANGUAGE PROCESSING AND LARGE LANGUAGE MODELS | | | |
Week 1. Introduction | | | |
1 | Introduction to Generative AI and Large Language Models (ChatGPT) [Slides \| Lecture Notes] | Hands-on: Introduction to Python Programming [Notebook \| Video]; Hands-on: Python Built-in Data Structures [Notebook \| Video]; Hands-on: Introduction to NumPy Arrays [Notebook \| Video]; Hands-on: Introduction to Pandas DataFrames [Overview \| Series \| Dataframes \| Data \| Stats Video] | Gozalo-Brizuela et al., A Survey of Generative AI Applications, arXiv:2306.02781, 2023 |
2 | Generative AI Life Cycle | Hands-on: Introduction to OpenAI; Hands-on: Introduction to HuggingFace [Slides \| Notebook] | |
Week 2. Practical Large Language Models | | | |
3 | Prompt Engineering | Hands-on: Prompt Engineering Basics | |
4 | Instruction Fine-Tuning of Large Language Models | Hands-on: Develop a fine-tuned large language model (https://crfm.stanford.edu/2023/03/13/alpaca.html) | Quiz 1 on LLMs |
Week 3. Conceptual Text Representation | | | |
5 | Word Tokenization | Hands-on: OpenAI Python API | Assignment 1: Fine-Tuning |
6 | Word Embeddings | Hands-on: Introduction to LangChain | |
Week 4. LLM Architectural Design | | | |
7 | Transformers for Sequence-to-Sequence Models | Hands-on: Implementing Word Tokenization | |
8 | Transformer Architectures for Large Language Models: LLaMA, GPT, PaLM, Falcon | | Assignment 3 |
Week 5. LLM Training and its Challenges | | | |
9 | LLM Pre-Training and Scaling Laws | | Quiz 4 |
10 | Parameter-Efficient Fine-Tuning (PEFT): LoRA and Soft Prompts | | Assignment 4 |
Week 6. LLM Alignment to Human Preferences | | | |
11 | Introduction to Reinforcement Learning | | |
12 | Reinforcement Learning from Human Feedback | | |
Week 7. LLM Deployment | | | |
13 | LLM Optimization and Deployment | | |
14 | Building Full-Stack LLM Applications | | |
GENERATIVE AI FOR COMPUTER VISION APPLICATIONS | | | |
Week 8. Introduction to Computer Vision | | | |
15 | Computer Vision Concepts | | |
16 | Deep Learning Principles for Computer Vision | | |
Week 9. Extensive Hands-on Week | | | |
17 | | | |
18 | | | |
Week 10. GANs | | | |
19 | Generative Models I: CNNs, GAN Architectures and Applications | | |
20 | Vision Transformers for Generative AI | Hands-on: Building a GAN | |
Week 11. Diffusion Models | | | |
21 | Generative Models II: Diffusion Models | | Assignment 5 |
22 | Vision Transformers for Generative AI | | |
Week 12. Generative AI Risks and Ethical Considerations | | | |
23 | Ethical and Social Considerations | | |
24 | Responsible AI | | |
Week 13. Advanced Topics | | | |
25 | Research Trends in Generative AI (e.g., Explainable AI, Artificial General Intelligence) | | |
26 | Practical Implementation: Project Development | Hands-on: Project Development | Quiz 11 |
Week 14. Project Presentations | | | |
27 | Project Presentations | | |
28 | Project Presentations | | |
Week 15. Final Exam | | | |
29 | Final Exam | | |
© Robotics and Internet-of-Things Lab.