Check out the video to learn more about this course.
Check out the curriculum and try out the preview videos!
- Section overview (1:35)
- Intro to AI, ML, neural networks, and Gen AI (8:40)
- Neurons, neural & deep learning networks (11:51)
- Exercise: Try out a neural network for solving math equations (9:52)
- A look at a generative AI model as a black box (9:16)
- Quiz: Fundamentals of generative AI models (4:47)
- An overview of generative AI applications (8:28)
- Exercise: Set up access to Google Gemini models (12:20)
- Introduction to Hugging Face (7:24)
- Exercise: Check out the Hugging Face portal (9:57)
- Exercise: Join the community and explore Hugging Face (4:54)
- Quiz: Generative AI and Hugging Face (5:16)
- Intro to natural language processing (NLP, NLU, NLG) (11:46)
- NLP with large language models (10:46)
- Exercise: Try out NLP tasks with Hugging Face models (2:28)
- Quiz: NLP with LLMs (3:08)
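The neural-network lectures build up from a single neuron to deep networks. As a minimal sketch of the core idea (all weights, inputs, and the bias below are made-up illustrative values), a neuron computes a weighted sum of its inputs plus a bias and passes the result through a nonlinear activation:

```python
import math

def sigmoid(x: float) -> float:
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # Weighted sum of inputs plus bias, then a nonlinear activation
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Illustrative values only
output = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2)
print(round(output, 4))
```

Stacking layers of such neurons, each feeding the next, is what turns this into a deep network.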
- Section overview (2:50)
- Introduction to Ollama (7:06)
- Ollama model hosting (9:41)
- Model naming scheme (3:34)
- Instruct, embedding, and chat models (11:30)
- Quiz: Instruct, embedding, and chat models (3:35)
- Next-word prediction by LLMs and the fill-mask task (7:48)
- Model inference control parameters (3:26)
- Randomness control inference parameters (10:26)
- Exercise: Set up a Cohere key and try out randomness control parameters (3:05)
- Diversity control inference parameters (5:13)
- Output length control parameters (7:18)
- Exercise: Try out decoding or inference parameters (2:40)
- Quiz: Decoding hyperparameters (4:43)
- Introduction to in-context learning (12:04)
- Quiz: In-context learning (3:33)
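The randomness-control lectures cover decoding parameters such as temperature and top-k. As a minimal sketch (the logits below are toy values, not output from a real model), temperature rescales the logits before the softmax, and top-k keeps only the most likely tokens:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float = 1.0) -> list[float]:
    # Lower temperature sharpens the distribution; higher temperature flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs: list[float], k: int) -> dict[int, float]:
    # Keep only the k most likely tokens and renormalize their probabilities
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    kept = sum(probs[i] for i in top)
    return {i: probs[i] / kept for i in top}

logits = [2.0, 1.0, 0.5, -1.0]  # toy logits for a 4-token vocabulary
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
kept = top_k(cold, k=2)  # only the 2 most likely tokens survive sampling
print(round(cold[0], 3), round(hot[0], 3), sorted(kept))
```

Note how the top token's probability grows as temperature drops, which is why low temperatures give more deterministic output.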
- Section overview (1:38)
- Exercise: Install & work with the Hugging Face transformers library (5:48)
- Transformers library pipeline classes (9:50)
- Quiz: Hugging Face transformers library (3:35)
- Hugging Face hub library & working with endpoints (11:04)
- Quiz: Hugging Face hub library (3:16)
- Exercise: PoC for a summarization task (7:22)
- Hugging Face CLI tools and model caching (4:31)
- Model input/output and tensors (6:15)
- Hugging Face model configuration classes (5:48)
- Model tokenizers & tokenization classes (10:03)
- Working with logits (7:31)
- Hugging Face model auto classes (5:30)
- Quiz: Hugging Face classes (2:59)
- Exercise: Build a question-answering system (10:45)
- Section overview (2:07)
- Challenges with large language models (11:09)
- Model grounding and conditioning (10:52)
- Exercise: Explore domain-adapted models (3:44)
- Prompt engineering and practices (1 of 2) (4:49)
- Prompt engineering and practices (2 of 2) (6:59)
- Quiz & Exercise: Prompting best practices (3:33)
- Few-shot & zero-shot prompts (4:52)
- Quiz & Exercise: Few-shot prompts (5:36)
- Chain-of-thought prompting technique (7:23)
- Quiz & Exercise: Chain of thought (4:32)
- Self-consistency prompting technique (5:33)
- Tree-of-thoughts prompting technique (8:08)
- Quiz & Exercise: Tree of thoughts (4:05)
- Exercise: Creative writing workbench (v1) (5:09)
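The few-shot lectures show how in-context examples steer a model without any training. As a minimal sketch of assembling a few-shot classification prompt (the task, labels, and reviews are illustrative, not tied to any particular provider's API):

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, labeled examples, then the query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between examples
    # The prompt ends mid-pattern so the model completes the missing label
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Loved every minute of it", "Positive"),
    ("A complete waste of time", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Surprisingly good")
print(prompt)
```

A zero-shot prompt is simply the same template with the examples list empty: instruction plus query only.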
- Section overview (1:28)
- Prompt templates (7:04)
- Few-shot prompt template & example selectors (8:27)
- Prompt model specificity (7:07)
- LLM invoke, streams, batches & Fake LLM (9:11)
- Exercise: Interact with an LLM using LangChain (2:51)
- Exercise: LLM client utility (4:17)
- Quiz: Prompt templates, LLM, and Fake LLM (3:30)
- Introduction to the LangChain Expression Language (LCEL) (10:07)
- Exercise: Create a compound sequential chain (1:42)
- LCEL: Runnable classes (1 of 2) (6:13)
- LCEL: Runnable classes (2 of 2) (6:41)
- Exercise: Try out common LCEL patterns (1:24)
- Exercise: Creative writing workbench (v2) (2:05)
- Quiz: LCEL, chains, and runnables (4:22)
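LCEL composes runnables into chains with the `|` operator. The idea behind that pattern can be sketched in plain Python (this illustrates the pipe-composition concept only; it is not LangChain's actual Runnable implementation, and the fake LLM here is just a lambda):

```python
class Runnable:
    """Wrap a function so processing steps can be chained with |, LCEL-style."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other: "Runnable") -> "Runnable":
        # a | b means: run a, then feed its output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Toy "prompt -> model -> parser" chain
make_prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
fake_llm = Runnable(lambda prompt: f"RESPONSE[{prompt}]")
parse = Runnable(lambda text: text.removeprefix("RESPONSE[").removesuffix("]"))

chain = make_prompt | fake_llm | parse
print(chain.invoke("cats"))  # -> Tell me a joke about cats
```

Swapping the fake LLM for a real model client leaves the chain structure unchanged, which is the main appeal of the pattern.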
- Section overview (1:22)
- Challenges with structured responses (8:06)
- LangChain output parsers (11:20)
- Exercise: Use the EnumOutputParser (3:49)
- Exercise: Use the PydanticOutputParser (3:46)
- Project: Creative writing workbench (4:42)
- Project: Solution walkthrough (1 of 2) (2:40)
- Project: Solution walkthrough (2 of 2) (3:57)
- Handling parsing errors (9:02)
- Quiz & Exercise: Parsers and error handling (3:16)
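The parser lectures deal with coaxing structured output from free-form model text and recovering when parsing fails. A minimal stdlib sketch of that pattern (the extraction and error-handling strategy here is illustrative; LangChain's parsers are considerably richer):

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Extract the first JSON object from an LLM reply, tolerating extra prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start : end + 1])

# Models often wrap JSON in chatter; the parser ignores the surrounding text
reply = 'Sure! Here is the result:\n{"title": "Dune", "rating": 5}\nHope that helps.'
print(parse_llm_json(reply))

# A parse failure is surfaced as an exception, so the caller can re-prompt or retry
try:
    parse_llm_json("I cannot answer that.")
except ValueError as err:
    print("parse failed:", err)
```

Catching the failure and feeding the error back to the model for a corrected reply is the essence of the retry-style error handling covered in the section.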
- What is the meaning of contextual understanding? (8:42)
- Building blocks of the Transformer architecture (7:18)
- Intro to vectors, vector spaces, and embeddings (10:11)
- Measuring semantic similarity (6:42)
- Quiz: Vectors, embeddings, similarity (3:55)
- Sentence transformer models (SBERT) (5:55)
- Working with sentence transformers (8:45)
- Exercise: Work with classification and mining tasks (4:39)
- Creating embeddings with LangChain (10:50)
- Exercise: CacheBackedEmbeddings classes (3:15)
- Lexical, semantic, and kNN search (9:26)
- Search efficiency and search performance metrics (10:53)
- Search algorithms, indexing, ANN, FAISS (11:11)
- Quiz & Exercise: Try out FAISS for similarity search (9:27)
- Search algorithm: Locality-Sensitive Hashing (LSH) (7:51)
- Search algorithm: Inverted File Index (IVF) (7:44)
- Search algorithm: Product Quantization (PQ) (10:41)
- Search algorithm: HNSW (1 of 2) (8:40)
- Search algorithm: HNSW (2 of 2) (11:06)
- Quiz & Exercise: Search algorithms & metrics (6:12)
- Project: Build a movie recommendation engine (6:04)
- Benchmarking ANN algorithms (8:12)
- Exercise: Benchmark the ANN algorithms (3:17)
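The LSH lectures describe hashing nearby vectors into the same bucket so approximate neighbors can be found without comparing against everything. A minimal random-hyperplane sketch in pure Python (tiny dimensions, made-up vectors, and a fixed seed, for illustration only):

```python
import random

def lsh_signature(vector: list[float], hyperplanes: list[list[float]]) -> str:
    # One bit per hyperplane: which side of the plane the vector falls on
    bits = ""
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vector, plane))
        bits += "1" if dot >= 0 else "0"
    return bits

random.seed(42)  # fixed seed so the signatures are reproducible
dim, n_planes = 4, 8
hyperplanes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

a = [1.0, 0.9, 0.1, 0.0]   # similar to b
b = [0.9, 1.0, 0.0, 0.1]
c = [-1.0, 0.0, 0.9, 1.0]  # points in a very different direction

sig_a, sig_b, sig_c = (lsh_signature(v, hyperplanes) for v in (a, b, c))
diff = lambda x, y: sum(i != j for i, j in zip(x, y))
# Similar vectors should disagree on fewer signature bits than dissimilar ones
print(diff(sig_a, sig_b), diff(sig_a, sig_c))
```

The Hamming distance between signatures approximates the angle between vectors, so candidate neighbors can be bucketed by signature instead of scanned exhaustively.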
- Challenges with semantic search libraries (6:13)
- Introduction to vector databases (12:31)
- Exercise: Try out ChromaDB (9:12)
- Exercise: Custom embeddings (1:29)
- Chunking, symmetric & asymmetric searches (9:48)
- LangChain document loaders (7:24)
- LangChain text splitters for chunking (9:45)
- LangChain retrievers & vector stores (10:20)
- Search scores and maximal marginal relevance (MMR) (9:38)
- Project: Pinecone adoption @ company (3:44)
- Quiz: Vector databases, chunking, text splitters (4:36)
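Text splitters chop documents into overlapping chunks so context isn't lost at chunk boundaries before embedding. A minimal character-window sketch (the sizes are illustrative; LangChain's splitters additionally respect separators such as paragraphs and sentences):

```python
def split_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start : start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final chunk already reaches the end of the text
    return chunks

text = "abcdefghijklmnopqrstuvwxyz"
chunks = split_text(text, chunk_size=10, overlap=3)
print(chunks)
```

Because each chunk repeats the tail of the previous one, a sentence that straddles a boundary still appears intact in at least one chunk.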
- Introduction to the Streamlit framework (9:36)
- Exercise: Build a Hugging Face LLM playground (5:09)
- Building conversational user interfaces (7:07)
- Exercise: Build a chatbot with Streamlit (7:44)
- LangChain conversation memory (8:16)
- Quiz & Exercise: Building chatbots with LangChain (5:27)
- Project: PDF document summarizer application (3:57)
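Chatbots need explicit memory because each LLM call is stateless; the memory lectures cover LangChain's buffer classes for this. A minimal windowed chat memory in plain Python (an illustration of the buffer-window idea, not LangChain's actual class):

```python
from collections import deque

class WindowMemory:
    """Keep only the last `k` exchanges, like a buffer-window chat memory."""
    def __init__(self, k: int):
        self.turns = deque(maxlen=k)  # older exchanges fall off automatically

    def save(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def as_prompt(self) -> str:
        # Render remembered turns so they can be prepended to the next request
        lines = []
        for user, assistant in self.turns:
            lines.append(f"Human: {user}")
            lines.append(f"AI: {assistant}")
        return "\n".join(lines)

memory = WindowMemory(k=2)
memory.save("Hi", "Hello!")
memory.save("What is RAG?", "Retrieval augmented generation.")
memory.save("Thanks", "You're welcome!")  # pushes the oldest turn out of the window
print(memory.as_prompt())
```

The window bounds prompt size at the cost of forgetting old turns; summary-style memories trade differently by compressing history instead of dropping it.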
- Introduction to Retrieval Augmented Generation (RAG) (8:49)
- LangChain RAG pipelines (8:59)
- Exercise: Build a smart retriever with LangChain (1:57)
- Quiz: RAG and retrievers (3:25)
- Pattern: Multi-query retriever (MQR) (5:32)
- Pattern: Parent document retriever (PDR) (9:25)
- Pattern: Multi-vector retriever (MVR) (6:22)
- Quiz: MQR, PDR, and MVR (4:36)
- Ranking, sparse, dense & ensemble retrievers (10:35)
- Pattern: Long context reorder (LCR) (6:56)
- Quiz: Ensemble & long-context retrievers (4:03)
- Pattern: Contextual compressor (6:48)
- Pattern: Merger retriever (4:42)
- Quiz: Contextual compressors and merger retrievers (3:13)
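The long-context-reorder pattern counters the "lost in the middle" effect: models attend best to the start and end of a long prompt, so the strongest retrieved documents are moved to the edges and the weakest left in the middle. A minimal sketch of one such reordering (the alternation scheme is illustrative):

```python
def long_context_reorder(docs_by_relevance: list[str]) -> list[str]:
    """Place the most relevant docs at both ends of the list, least relevant in the middle."""
    front, back = [], []
    for rank, doc in enumerate(docs_by_relevance):
        # Alternate: best doc to the front, second-best to the back, and so on
        (front if rank % 2 == 0 else back).append(doc)
    return front + back[::-1]

# Documents sorted most-relevant first, as a retriever would return them
ranked = ["doc1", "doc2", "doc3", "doc4", "doc5"]
print(long_context_reorder(ranked))  # -> ['doc1', 'doc3', 'doc5', 'doc4', 'doc2']
```

The two top-ranked documents end up in the prompt positions the model reads most attentively, while the weakest document lands in the middle where inattention hurts least.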
- Introduction to agents, tools, and agentic RAG (9:18)
- Exercise: Build a single-step agent without LangChain (9:36)
- LangChain tools and toolkits (12:45)
- Quiz: Agents, tools & toolkits (4:31)
- Exercise: Try out the FileManagement toolkit (1:16)
- How do humans & LLMs think? (5:14)
- ReAct framework & multi-step agents (12:09)
- Exercise: Build a question-answering ReAct agent (10:19)
- Exercise: Build a multi-step ReAct agent (6:59)
- LangChain utilities for building agentic RAG solutions (11:09)
- Exercise: Build an agentic RAG solution using LangChain (5:53)
- Quiz: Agentic RAG and ReAct (6:18)
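A ReAct agent loops through thought → action → observation until it can produce a final answer. A minimal sketch with a scripted fake LLM and one calculator tool (the trace format and the `calculator[...]` action syntax are illustrative; a real agent parses genuine model output each turn):

```python
def calculator(expression: str) -> str:
    # Toy tool: evaluates simple arithmetic (never eval untrusted input in real code)
    return str(eval(expression, {"__builtins__": {}}, {}))

def react_loop(question: str, scripted_steps: list[str], max_steps: int = 5) -> str:
    """Run a thought -> action -> observation loop over a scripted fake LLM."""
    llm = iter(scripted_steps)
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = next(llm)  # in a real agent, this is a model call on `transcript`
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action: calculator[" in step:
            expr = step.split("calculator[", 1)[1].split("]", 1)[0]
            transcript += f"\nObservation: {calculator(expr)}"  # feed tool result back
    raise RuntimeError("agent did not produce a final answer")

steps = [
    "Thought: I need to multiply. Action: calculator[6 * 7]",
    "Thought: I have the result. Final Answer: 42",
]
print(react_loop("What is 6 times 7?", steps))  # -> 42
```

The observation appended to the transcript is what lets the next "model call" reason over the tool's result, which is the whole point of the ReAct pattern.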
- Introduction to fine-tuning (4:18)
- Fine-tuning: Reasons (7:14)
- Fine-tuning process (9:03)
- Tools for fine-tuning (9:37)
- Exercise: Fine-tune a Cohere model for toxicity classification (9:00)
- Creating a dataset for fine-tuning (12:04)
- Exercise: Prepare a dataset and fine-tune an OpenAI GPT-4o model (5:41)
- Project: Build a credit card fraud detection dataset (9:19)
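Chat fine-tuning datasets are typically JSONL files with one training conversation per line; OpenAI's chat format wraps each example in a `messages` list. A minimal sketch of preparing such a file for a toxicity classifier (the example texts and labels are made up):

```python
import json

def to_chat_record(user_text: str, label: str, system: str) -> str:
    """Serialize one training example as a JSONL line in chat-message format."""
    record = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": label},  # the target the model learns
        ]
    }
    return json.dumps(record)

system = "Classify the message as 'toxic' or 'not toxic'."
raw_examples = [
    ("Have a great day!", "not toxic"),
    ("You are an idiot.", "toxic"),
]
jsonl = "\n".join(to_chat_record(text, label, system) for text, label in raw_examples)
print(jsonl)
```

Writing `jsonl` to a `.jsonl` file produces the upload artifact; real datasets need many more examples and a held-out validation split.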
- LLM training compute needs (11:03)
- Inferencing compute needs (9:01)
- Quiz: Check your understanding of GPU & CUDA (2:49)
- Introduction to quantization (8:05)
- Exercise: Quantization maths (affine technique) (4:03)
- Applying quantization: Static & dynamic (8:26)
- Exercise: Dynamic quantization with PyTorch (5:02)
- Exercise: Static quantization with AutoGPTQ (4:00)
- Quiz: Check your understanding of quantization (3:24)
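The affine (asymmetric) scheme from the quantization-maths exercise maps a float range [min, max] onto unsigned 8-bit integers via a scale and a zero point. A minimal sketch (the weight values are toy numbers):

```python
def affine_quantize(values: list[float], n_bits: int = 8):
    """Map floats onto unsigned n-bit integers: q = round(x / scale) + zero_point."""
    qmin, qmax = 0, 2 ** n_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(-lo / scale)  # the integer that represents float 0.0
    quantized = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in values]
    return quantized, scale, zero_point

def dequantize(quantized: list[int], scale: float, zero_point: int) -> list[float]:
    # Recover approximate floats; the gap to the originals is the quantization error
    return [(q - zero_point) * scale for q in quantized]

weights = [-1.5, -0.4, 0.0, 0.7, 2.1]  # toy weight values
q, scale, zp = affine_quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(scale, 5), zp, round(max_err, 5))
```

The round trip shows why 8-bit storage is cheap: each weight shrinks from 32 bits to 8, while the worst-case reconstruction error stays below one quantization step.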
Your instructor
As a seasoned Information Technology consultant with over two decades of experience, I bring a wealth of knowledge to the realm of Generative AI application design and development.
My professional journey spans applications development, consulting, infrastructure management, and strategy formulation gained from Fortune 500 environments. Leveraging this extensive background, I ensure that my teaching is not only rooted in industry best practices but also emphasizes the innovative potential of AI in modern technology solutions.
Within academia, I am particularly passionate about unlocking my students' potential through the intersection of theoretical understanding and practical application.
I am 11x AWS Certified and have authored multiple courses in diverse tech areas such as Databases, Blockchain, Domain-Driven Design, and Microservices.
I integrate my industry insights into the classroom to inspire and guide future pioneers of Generative AI.
My goal is to connect each lesson with real-world scenarios, making the complex landscape of AI both accessible and engaging for those seeking to shape the future of technology.