Transformers for Natural Language Processing and Computer Vision, 3/e (Paperback)
Rothman, Denis
- Publisher: Packt Publishing
- Publication date: 2024-02-29
- List price: $2,180
- VIP price: $2,071 (5% off)
- Language: English
- Pages: 728
- Binding: Quality Paper (also called trade paper)
- ISBN: 1805128728
- ISBN-13: 9781805128724
Related categories:
Artificial Intelligence, Computer Vision
In stock, ships immediately (stock = 1)
Customers who bought this item also bought...
- $1,617 Deep Learning (Hardcover)
- $407 PyTorch 自然語言處理入門與實戰
- $505 細說 PyTorch 深度學習:理論、算法、模型與編程實現
- $454 從零開始大模型開發與微調:基於 PyTorch 與 ChatGLM
Product Description
Unleash the full potential of transformers with this comprehensive guide covering architecture, capabilities, risks, and practical implementations on OpenAI, Google Vertex AI, and Hugging Face
Key Features:
- Master NLP and vision transformers, from the architecture to fine-tuning and implementation
- Learn how to apply Retrieval Augmented Generation (RAG) with LLMs using customized texts and embeddings
- Mitigate LLM risks, such as hallucinations, using moderation models and knowledge bases
Book Description:
Transformers for Natural Language Processing and Computer Vision, Third Edition, explores the architectures and applications of Large Language Models (LLMs), along with the various platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV).
The book guides you through different transformer architectures to the latest Foundation Models and Generative AI. You'll pretrain and fine-tune LLMs and work through different use cases, from summarization to implementing question-answering systems with embedding-based search techniques. This book explains the risks of LLMs, from hallucinations and memorization to privacy, and how to mitigate those risks using moderation models with rule and knowledge bases. You'll implement Retrieval Augmented Generation (RAG) with LLMs to improve the accuracy of your models and gain greater control over LLM outputs.
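As a taste of the embedding-based retrieval step behind RAG, here is a minimal sketch, assuming the open-source sentence-transformers package; the corpus, query, and model name are illustrative placeholders, not code from the book.

```python
# Minimal embedding-based retrieval for RAG (illustrative sketch).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder

corpus = [
    "Transformers use self-attention to weigh token relationships.",
    "RAG augments an LLM prompt with retrieved documents.",
    "Moderation models filter unsafe or hallucinated outputs.",
]
corpus_emb = model.encode(corpus, normalize_embeddings=True)

query = "How does retrieval augmented generation work?"
query_emb = model.encode(query, normalize_embeddings=True)

# On normalized vectors, cosine similarity reduces to a dot product.
scores = corpus_emb @ query_emb
best = corpus[int(np.argmax(scores))]

# The retrieved passage is prepended to the prompt sent to the LLM,
# grounding the answer in your own texts.
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```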
Dive into generative vision transformers and multimodal model architectures and build applications, such as image and video-to-text classifiers. Go further by combining different models and platforms and learning about AI agent replication.
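To illustrate the kind of image-to-text classification mentioned above, here is a minimal zero-shot CLIP sketch using the Hugging Face transformers API; the checkpoint name, candidate labels, and blank test image are illustrative assumptions, not examples from the book.

```python
# Zero-shot image classification with CLIP (illustrative sketch).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="gray")  # stand-in for a real photo
labels = ["a photo of a cat", "a photo of a dog", "a diagram"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# CLIP scores each caption against the image; softmax gives probabilities.
probs = outputs.logits_per_image.softmax(dim=1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```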
This book provides you with an understanding of transformer architectures, pretraining, fine-tuning, LLM use cases, and best practices.
What You Will Learn:
- Learn how to pretrain and fine-tune LLMs
- Learn how to work with multiple platforms, such as Hugging Face, OpenAI, and Google Vertex AI
- Learn about different tokenizers and the best practices for preprocessing language data (a small tokenizer sketch follows this list)
- Implement Retrieval Augmented Generation and rule bases to mitigate hallucinations
- Visualize transformer model activity for deeper insights using BertViz, LIME, and SHAP
- Create and implement cross-platform chained models, such as HuggingGPT
- Go in-depth into vision transformers with CLIP, DALL-E 2, DALL-E 3, and GPT-4V
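As a small illustration of the tokenizer comparison mentioned in the list above, here is a sketch assuming the Hugging Face transformers package; the model names and sample sentence are illustrative.

```python
# Compare how two subword tokenizers split the same sentence.
from transformers import AutoTokenizer

text = "Transformers tokenize text into subword units."
for name in ["bert-base-uncased", "gpt2"]:
    tok = AutoTokenizer.from_pretrained(name)
    # WordPiece (BERT) marks continuations with "##"; byte-level BPE (GPT-2)
    # marks leading spaces with "Ġ".
    print(f"{name}: {tok.tokenize(text)}")
```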
Who this book is for:
This book is ideal for NLP and CV engineers, software developers, data scientists, machine learning engineers, and technical leaders looking to advance their LLMs and generative AI skills or explore the latest trends in the field.
Knowledge of Python and machine learning concepts is required to fully understand the use cases and code examples. However, with examples using LLM user interfaces, prompt engineering, and no-code model building, this book is great for anyone curious about the AI revolution.