Quick Start Guide to Large Language Models: Strategies and Best Practices for ChatGPT, Embeddings, Fine-Tuning, and Multimodal AI

Ozdemir, Sinan

  • Publisher: Addison Wesley
  • Publication date: 2024-10-13
  • List price: $1,970
  • Member price (5% off): $1,872
  • Language: English
  • Pages: 384
  • Binding: Quality Paper - also called trade paper
  • ISBN: 0135346568
  • ISBN-13: 9780135346563
  • Related categories: ChatGPT, LangChain, Artificial Intelligence
  • Overseas purchase (must be checked out separately)

Description

The Practical, Step-by-Step Guide to Using LLMs at Scale in Projects and Products

Large Language Models (LLMs) like Llama 3, Claude 3, and the GPT family are demonstrating breathtaking capabilities, but their size and complexity have deterred many practitioners from applying them. In Quick Start Guide to Large Language Models, Second Edition, pioneering data scientist and AI entrepreneur Sinan Ozdemir clears away those obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems.

Ozdemir brings together all you need to get started, even if you have no direct experience with LLMs: step-by-step instructions, best practices, real-world case studies, and hands-on exercises. Along the way, he shares insights into LLMs' inner workings to help you optimize model choice, data formats, prompting, fine-tuning, performance, and much more. The resources on the companion website include sample datasets and up-to-date code for working with open- and closed-source LLMs such as those from OpenAI (GPT-4 and GPT-3.5), Google (BERT, T5, and Gemini), X (Grok), Anthropic (the Claude family), Cohere (the Command family), and Meta (BART and the LLaMA family).

  • Learn key concepts: pre-training, transfer learning, fine-tuning, attention, embeddings, tokenization, and more
  • Use APIs and Python to fine-tune and customize LLMs for your requirements
  • Build a complete neural/semantic information retrieval system and attach it to conversational LLMs to build retrieval-augmented generation (RAG) chatbots and AI agents
  • Master advanced prompt engineering techniques like output structuring, chain-of-thought prompting, and semantic few-shot prompting
  • Customize LLM embeddings to build a complete recommendation engine from scratch with user data that outperforms out-of-the-box embeddings from OpenAI
  • Construct and fine-tune multimodal Transformer architectures from scratch using open-source LLMs and large visual datasets
  • Align LLMs using Reinforcement Learning from Human and AI Feedback (RLHF/RLAIF) to build conversational agents from open models like Llama 3 and FLAN-T5
  • Deploy prompts and custom fine-tuned LLMs to the cloud with scalability and evaluation pipelines in mind
  • Diagnose and optimize LLMs for speed, memory, and performance with quantization, probing, benchmarking, and evaluation frameworks
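The RAG workflow mentioned above reduces to: embed the documents, embed the query, retrieve the most similar passage, and prepend it to the prompt. A minimal stdlib-only sketch of that loop (not taken from the book; the bag-of-words "embedding" here is a toy stand-in for a real embedding model):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    (A stand-in for a real embedding model.)"""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Fine-tuning adapts a pre-trained model to a specific task.",
    "Quantization reduces model memory by lowering numeric precision.",
    "Attention lets a Transformer weigh tokens against each other.",
]

question = "Which technique reduces model memory?"
context = retrieve(question, docs)[0]
# Ground the LLM's answer in the retrieved passage:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a production system, the `embed` function would call a real embedding model and the assembled prompt would be sent to a conversational LLM, which is the pipeline the book builds out.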

"A refreshing and inspiring resource. Jam-packed with practical guidance and clear explanations that leave you smarter about this incredible new field."
--Pete Huang, author of The Neuron

Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.

About the Author

Sinan Ozdemir is currently the founder and CTO of LoopGenius and an advisor to several AI companies. Sinan is a former lecturer of Data Science at Johns Hopkins University and the author of multiple textbooks on data science and machine learning. Additionally, he is the founder of the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a master's degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco, CA.
