A Primer on Compression in the Memory Hierarchy

Somayeh Sardashti, Angelos Arelakis, Per Stenström

  • Publisher: Morgan & Claypool
  • Publication date: 2015-12-01
  • List price: $1,460
  • VIP price: $1,387 (5% off list)
  • Language: English
  • Pages: 88
  • Binding: Paperback
  • ISBN: 1627054154
  • ISBN-13: 9781627054157
  • Imported title, purchased overseas (must be checked out separately)


Description

This synthesis lecture presents the current state of the art in applying low-latency, lossless hardware compression algorithms to caches, main memory, and the memory/cache link. Several non-trivial challenges must be addressed to make data compression work well in this context. First, since compressed data must be decompressed before it can be accessed, decompression latency ends up on the critical memory access path. This imposes a significant constraint on the choice of compression algorithms. Second, while conventional memory systems store fixed-size entities such as data types, cache blocks, and memory pages, these entities vary in size once the memory system employs compression. Handling variable-size entities has a significant impact on how caches are organized and how resources in main memory are managed. We systematically discuss solutions in the open literature to these problems.

Chapter 2 provides the foundations of data compression by first introducing the fundamental concept of value locality. We then introduce a taxonomy of compression algorithms and show how previously proposed algorithms fit within that logical framework. Chapter 3 discusses the different ways that cache memory systems can employ compression, focusing on the trade-offs between latency, capacity, and complexity of alternative ways to compact compressed cache blocks. Chapter 4 discusses issues in applying data compression to main memory, and Chapter 5 covers techniques for compressing data on the cache-to-memory links. This book should help a skilled memory system designer understand the fundamental challenges in applying compression to the memory hierarchy and introduce them to the state-of-the-art techniques for addressing them.
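The value-locality idea behind such algorithms can be illustrated with a toy base+delta scheme (in the spirit of Base-Delta-Immediate compression; the function name, word sizes, and numbers below are illustrative sketches, not taken from the book): neighboring words in a cache line, such as pointers into the same region, often share their high-order bits, so the line can be stored as one full-width base plus narrow per-word deltas. This is also why compressed blocks end up with variable sizes, the second challenge the book discusses.

```python
def compress_line(words, delta_bytes=1):
    """Sketch of base+delta compression for one cache line.

    `words` models a 64-byte line as eight 8-byte integers. If every
    word differs from the first by a delta that fits in `delta_bytes`
    signed bytes, store one 8-byte base plus narrow deltas; otherwise
    fall back to the raw line. Returns (size_in_bytes, payload).
    """
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)          # signed delta range
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        size = 8 + delta_bytes * len(words)     # base + narrow deltas
        return size, (base, deltas)
    return 8 * len(words), words                # incompressible: raw

# Pointers into one region exhibit value locality: high bits match.
line = [0x7FFF_0000_1000 + 8 * i for i in range(8)]
size, payload = compress_line(line)             # 16 bytes instead of 64
```

Note that a line of unrelated values falls back to its raw 64 bytes, so the cache must track per-block compressed sizes rather than assume a fixed geometry.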
