
  • Hi.

  • I'm Andy Rudoff from Intel.

  • In this video, I'll explain what persistent memory is

  • and why there's so much excitement around it.

  • Don't forget to watch the rest of this playlist

  • on persistent memory programming.

  • Let's start by describing what persistent memory is.

  • Sometimes called storage class memory,

  • persistent memory has only recently

  • become available on modern hardware due to the emergence

  • of new memory technologies such as Intel's 3D XPoint memory.

  • These new technologies allow products

  • with the attributes of both storage and memory.

  • These products are persistent, like storage,

  • meaning they hold their contents even across power cycles.

  • And they're byte addressable, like memory,

  • meaning programs can access data structures in place.
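To make "access data structures in place" concrete, here is a minimal C sketch (not from the video): it assumes a pre-sized file at the hypothetical path /mnt/pmem/example on a DAX-mounted persistent memory filesystem, maps it, and updates a structure field with ordinary loads and stores.

    /* Minimal sketch: byte-addressable, in-place access to a structure
     * stored in persistent memory. Assumes a pre-sized file at the
     * hypothetical path /mnt/pmem/example on a DAX-mounted filesystem. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct record {            /* an application data structure kept in pmem */
        long update_count;
        char name[56];
    };

    int main(void)
    {
        int fd = open("/mnt/pmem/example", O_RDWR);   /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }

        /* Map the file; loads and stores now reach the media directly. */
        struct record *r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (r == MAP_FAILED) { perror("mmap"); return 1; }

        r->update_count++;     /* update in place, no read()/write() calls */
        printf("count is now %ld\n", r->update_count);

        /* Making the store durable (flushing CPU caches) is covered
         * later in this series. */
        munmap(r, sizeof(*r));
        close(fd);
        return 0;
    }

The point is only the access model: there is no block transfer anywhere in the update.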

  • But what really makes persistent memory stand out

  • is that it's fast enough to access directly

  • from the processor without stopping

  • to do the block I/O required for traditional storage.

  • Performance is why there's so much

  • recent excitement in the industry

  • around persistent memory.

  • If you compare a modern NAND-based SSD which

  • plugs into the PCIe bus and communicates using the NVM Express

  • protocol, you can see the time it

  • takes to read a block is over 80 microseconds.

  • Notice how most of the time is spent accessing the media,

  • indicated by the blue area.

  • The software stack is a small percentage

  • of the overall access time.

  • We could work on making the device driver faster

  • and the difference would be hardly noticeable.

  • The Intel Optane SSD also plugs into the PCIe bus,

  • but uses 3D XPoint, so the time spent accessing

  • the media shrinks way down.

  • Now notice that the overhead of the software stack and the PCIe

  • protocol is a significant portion of the overall latency.

  • To get the most out of 3D XPoint technology,

  • it now makes sense to tackle the overhead of both software

  • and the interconnect.

  • That's where persistent memory comes in.

  • By connecting the media to the memory bus,

  • the CPU can access the data directly

  • without any driver or PCIe overhead.

  • And since memory is accessed in 64-byte cache lines,

  • the CPU reads only what it needs to read

  • instead of rounding every access up

  • to a block size like storage.

  • You can see how low latency a 64-byte read is here,

  • although I also show a 4K read for an apples-to-apples

  • comparison with the SSDs.
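As a rough illustration of that difference in granularity (my own sketch, with illustrative file paths), the storage path below must transfer a whole 4 KiB block to inspect 8 bytes, while the mapped persistent memory path lets the CPU load just the cache line that holds the value.

    /* Sketch contrasting block-granularity I/O with cache-line-granularity
     * loads. Both file paths are illustrative assumptions. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        /* Storage: even to examine 8 bytes, a full block is transferred. */
        unsigned char block[BLOCK_SIZE];
        int sfd = open("/data/on_ssd.bin", O_RDONLY);        /* hypothetical */
        if (sfd >= 0 && pread(sfd, block, BLOCK_SIZE, 0) == BLOCK_SIZE)
            printf("block I/O moved %d bytes to use 8 of them\n", BLOCK_SIZE);

        /* Persistent memory: the CPU loads only the cache line it needs. */
        int pfd = open("/mnt/pmem/example", O_RDONLY);       /* hypothetical */
        uint64_t *p = mmap(NULL, BLOCK_SIZE, PROT_READ, MAP_SHARED, pfd, 0);
        if (p != MAP_FAILED)
            printf("direct load of one 8-byte value: %llu\n",
                   (unsigned long long)p[0]);

        if (sfd >= 0) close(sfd);
        if (p != MAP_FAILED) munmap(p, BLOCK_SIZE);
        if (pfd >= 0) close(pfd);
        return 0;
    }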

  • With persistent memory, applications

  • have a new tier available for data placement.

  • In addition to the memory and storage tiers,

  • the persistent memory tier offers

  • capacities larger than DRAM and performance significantly

  • higher than storage.

  • Applications can access persistent memory resident data

  • structures in place like they do with traditional memory.

  • This eliminates the need to page blocks of data

  • back and forth between memory and storage.

  • To get this low latency direct access,

  • we need a software architecture that

  • allows applications to connect up with ranges

  • of persistent memory.

  • That's the topic of the rest of this video series.
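As a small preview of what that looks like, here is a sketch using PMDK's libpmem, the interface the later videos cover in depth; the path and size are my own illustrative choices, and the program links with -lpmem.

    /* Sketch: connect to a range of persistent memory with PMDK's libpmem. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Create (if needed) and map a 4 KiB persistent memory file. */
        char *addr = pmem_map_file("/mnt/pmem/hello", 4096, PMEM_FILE_CREATE,
                                   0666, &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Store directly into the mapped range, then make it durable. */
        strcpy(addr, "hello, persistent memory");
        if (is_pmem)
            pmem_persist(addr, mapped_len);   /* flush CPU caches to the media */
        else
            pmem_msync(addr, mapped_len);     /* fall back to msync on non-pmem */

        pmem_unmap(addr, mapped_len);
        return 0;
    }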

  • Re-architecting your software will take some time.

  • Get started right away by watching this persistent memory

  • programming playlist.

  • And don't forget to check out the links in the description

  • below.

  • [INTEL THEME MUSIC]
