# Contents

### 📘 Part 1: Foundations of Language Models

1. What is a Language Model?
2. History of LLMs: From N-Grams to GPT
3. How Do LLMs Understand Text?
4. Tokens and Tokenization Explained
5. Embeddings: Turning Words into Numbers
6. Neural Networks 101 (with a focus on Transformers)
7. What is a Transformer Architecture?
8. The Role of Attention and Self-Attention
9. How Do LLMs Generate Text?
10. Decoding Strategies: Greedy, Beam Search, Sampling

***

### ⚙️ Part 2: Training and Fine-Tuning

11. What is Pretraining?
12. Common Training Datasets (Wikipedia, BooksCorpus, etc.)
13. Transfer Learning in NLP
14. Fine-Tuning vs Prompt Engineering
15. Introduction to LoRA, PEFT, and QLoRA
16. Training Loss Functions in LLMs
17. What is a Foundation Model?
18. Ethics in Data Collection and Training
19. Costs of Training an LLM
20. Why Do LLMs Hallucinate?

***

### 🔍 Part 3: Prompting and Use

21. What is Prompt Engineering?
22. Few-Shot, Zero-Shot, and One-Shot Learning
23. Prompt Templates for Common Tasks (Summarization, Q&A, etc.)
24. How to Evaluate LLM Responses
25. Chain-of-Thought Prompting
26. Role of Temperature and Top-k Sampling
27. Using LLMs for Code Generation
28. Using LLMs for Text Summarization
29. LLMs as Agents and Tool Users
30. Guardrails and Safety Filters

***

### 🧠 Part 4: Hands-On with LLMs

31. How to Use OpenAI (ChatGPT, GPT-4) APIs
32. Using Hugging Face Transformers
33. Exploring LLMs in LangChain
34. Building a Q&A Bot with LLMs
35. Connecting LLMs to Google Docs / PDFs
36. Running LLMs Locally (GGUF, Ollama, LM Studio)
37. Vector Databases and RAG (Retrieval-Augmented Generation)
38. Streamlit/FastAPI + LLM Demo Projects
39. Integrating LLMs in Web Applications
40. Cost Optimization for LLM Use

***

### 🌐 Part 5: Broader Perspectives

41. The Future of Language Models
42. Multimodal Models: Text + Image + Audio
43. Comparing GPT, Claude, Gemini, Mistral, and LLaMA
44. Open Source vs Closed Source LLMs
45. LLMs in Education and Research
46. Business Use Cases of LLMs
47. Legal, Copyright, and Bias Issues
48. Responsible AI and Safety in LLMs
49. How to Stay Updated in the LLM World
50. Career Paths in LLM and Generative AI


---

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://csp.gitbook.io/llm-for-beginners/contents.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present on the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
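As a minimal sketch, the request URL above can be assembled with Python's standard library; the question text is an illustrative assumption, and the actual fetch is left as a one-liner since it requires network access:

```python
from urllib.parse import urlencode

# Page URL from the documentation above.
BASE = "https://csp.gitbook.io/llm-for-beginners/contents.md"

def build_ask_url(question: str) -> str:
    """Return the GET URL for a natural-language question,
    with the `ask` parameter properly URL-encoded."""
    return f"{BASE}?{urlencode({'ask': question})}"

# Example question (hypothetical):
url = build_ask_url("Which decoding strategies does the book cover?")
print(url)

# To actually send the request:
# import urllib.request
# answer = urllib.request.urlopen(url).read().decode()
```

Encoding the question with `urlencode` ensures spaces and punctuation survive the round trip intact, which matters for longer natural-language questions.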
