Mitigating Bias in Machine Learning (減少機器學習中的偏見)

Carlotta A. Berry, Brandeis Hill Marshall

  • Publisher: McGraw-Hill Education
  • Publication date: 2024-10-02
  • List price: $1,800
  • Sale price: $1,710 (5% off)
  • VIP price: $1,620 (10% off)
  • Language: English
  • Pages: 304
  • Binding: Quality Paper - also called trade paper
  • ISBN: 1264922442
  • ISBN-13: 9781264922444
  • Related categories: Machine Learning
  • Ships immediately (stock = 1)


Description

This practical guide shows, step by step, how to use machine learning to make actionable decisions that do not discriminate on the basis of human factors such as ethnicity and gender. The authors examine the many kinds of bias that occur in the field today and provide mitigation strategies that are ready to deploy across a wide range of technologies, applications, and industries.

Edited by engineering and computing experts, Mitigating Bias in Machine Learning includes contributions from recognized scholars and professionals working across different artificial intelligence sectors. Each chapter addresses a different topic, and real-world case studies featured throughout highlight discriminatory machine learning practices and clearly show how they were reduced.

Mitigating Bias in Machine Learning addresses:

  • Ethical and Societal Implications of Machine Learning
  • Social Media and Health Information Dissemination
  • Comparative Case Study of Fairness Toolkits
  • Bias Mitigation in Hate Speech Detection
  • Unintended Systematic Biases in Natural Language Processing
  • Combating Bias in Large Language Models
  • Recognizing Bias in Medical Machine Learning and AI Models
  • Machine Learning Bias in Healthcare
  • Achieving Systemic Equity in Socioecological Systems
  • Community Engagement for Machine Learning