High-Speed Caching

Speed up your applications by using DRAM as an extremely fast performance accelerator.

DataCore SANsymphony leverages DRAM for high-speed caching, enabling rapid access to frequently used data and significantly boosting application responsiveness. By integrating DRAM, SANsymphony achieves unmatched speed and efficiency, ensuring instant data availability and optimal performance in all operational scenarios.

Feature Highlights

  • Best SPC-1 Price-Performance in the industry at $0.08 / SPC-1 IOPS™
  • Leverages x86-64 CPUs and server RAM as powerful, inexpensive “mega caches”
  • Does not degrade or wear out over time like Flash technology
  • Accelerates I/O response from existing storage resources
  • Pre-fetches anticipated blocks into read cache
  • Coalesces random writes into larger sequential transfers to avoid waiting on disks (sketched, along with pre-fetching, after this list)
  • Acts as a speed-matching buffer to avoid bottlenecks in the I/O path
  • Delivers the best benefit-to-cost ratio of any storage medium
  • Protects against excessive thrashing on traditional spinning disk
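
The pre-fetch and write-coalescing items above follow well-known caching patterns. The sketch below is a minimal Python illustration of those two ideas; it is not DataCore's proprietary algorithm, and the function names, block-addressing scheme, and read-ahead depth are assumptions chosen for clarity.

    # Illustrative sketches only -- not DataCore's proprietary algorithms.

    def blocks_to_prefetch(requested_block, last_block, read_ahead=4):
        """If access looks sequential, return the next blocks to pull into read cache."""
        if last_block is not None and requested_block == last_block + 1:
            return list(range(requested_block + 1, requested_block + 1 + read_ahead))
        return []                                  # random access: nothing to pre-fetch

    def coalesce_writes(pending):
        """Merge adjacent block writes (block_id -> data) into sequential runs."""
        runs = []
        for block_id, data in sorted(pending.items()):
            if runs and block_id == runs[-1][0] + len(runs[-1][1]):
                runs[-1][1].append(data)           # extend the current sequential run
            else:
                runs.append((block_id, [data]))    # start a new run
        return runs                                # [(start_block, [data, ...]), ...]

For example, coalesce_writes({7: b'a', 8: b'b', 20: b'c'}) returns [(7, [b'a', b'b']), (20, [b'c'])]: two sequential transfers to the back-end instead of three scattered ones.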

High-Speed Caching Diagram

The read cache reduces requests to the slower back-end disks, while the write cache accepts inbound data as fast as possible.

Application performance degrades over time.

Eliminate this problem with High-Speed Caching.

High-Speed Caching is a proprietary caching algorithm that accelerates I/O by leveraging RAM as a read and write cache. DataCore supports up to 8 TB of high-speed cache per node, creating a true “mega-cache” to turbocharge application performance.
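
As a rough illustration of the read-cache half of that idea (the actual algorithm is proprietary and far more sophisticated), the sketch below keeps a bounded pool of blocks in RAM, serves hits from memory, and evicts the least recently used block when the pool is full. The class name, capacity parameter, and back-end callback are assumptions for the example.

    # Illustrative only: a bounded least-recently-used (LRU) read cache held in RAM.
    from collections import OrderedDict

    class RamReadCache:
        def __init__(self, backend_read, capacity_blocks):
            self.backend_read = backend_read        # callable: block_id -> bytes (slow disk)
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()             # block_id -> bytes, kept in LRU order

        def read(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)   # cache hit: served straight from RAM
                return self.blocks[block_id]
            data = self.backend_read(block_id)      # cache miss: fetch from back-end disk
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)     # evict the least recently used block
            return data

With 8 TB of cache per node and an assumed 4 KB block size, such a pool could hold on the order of two billion blocks; the essential point is simply that repeat reads are answered from RAM instead of the slower back-end.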

RAM is a commodity component and therefore relatively inexpensive, yet it offers outstanding performance without the wear-related shortcomings of Flash devices. That combination makes it an excellent technology for accelerating I/O and gives it a very high benefit-to-cost ratio across the entire storage architecture.

High-Speed Caching is critical for maintaining application performance because RAM is orders of magnitude faster than even the fastest Flash technologies and resides as close to the CPU as possible.

It is the fastest storage component in the architecture, delivering a 3-5x performance boost to applications and freeing up application servers to perform other tasks. It also extends the life of traditional storage components by minimizing the stress experienced from disk thrashing.
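
To put the latency gap in rough numbers: a DRAM access takes on the order of 100 nanoseconds, while a NAND Flash read takes on the order of 100 microseconds. These are ballpark, assumed figures rather than numbers from the benchmark report, but they show why the gap is measured in orders of magnitude.

    # Ballpark comparison using assumed, order-of-magnitude latency figures.
    dram_latency_s = 100e-9     # ~100 ns per DRAM access (assumption)
    flash_latency_s = 100e-6    # ~100 us per NAND Flash read (assumption)

    print(f"DRAM is roughly {flash_latency_s / dram_latency_s:,.0f}x faster")   # ~1,000x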

Record-Breaking Price-Performance

DataCore shattered the world record for price-performance in the Storage Performance Council’s prestigious SPC-1 benchmark [1], thanks in large part to the caching algorithms perfected over 20+ years and the software’s unique parallel I/O architecture.

[1] DataCore SANsymphony 10.0 SPC-1 Full Disclosure Report

Metric                        Reported Result
SPC-1 Price-Performance       $0.08 / SPC-1 IOPS™
Response Time at 100% Load    0.32 ms
SPC-1 IOPS                    459,290.87
Total Price                   USD 38,400.29
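
The price-performance figure in the table follows (to rounding) from the other two rows, since SPC-1 Price-Performance is the total system price divided by the SPC-1 IOPS result:

    # Reproducing the reported SPC-1 Price-Performance figure from the table above.
    total_price_usd = 38_400.29
    spc1_iops = 459_290.87

    print(f"${total_price_usd / spc1_iops:.3f} per SPC-1 IOPS")   # ~$0.084, reported as $0.08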

Overcoming Disk Thrashing in Traditional Hard Drives

Disk thrashing occurs when a hard drive is overworked by excessive reading and writing actions, leading to performance degradation. Traditional hard drives, which rely on mechanical parts like spinning disks and moving heads, can become overwhelmed with too many simultaneous or intensive operations.

This overactivity forces the drive to seek constantly across different locations, slowing response times and reducing overall system performance. High-Speed Caching mitigates the problem by absorbing reads and writes in RAM and presenting the back-end disks with fewer, larger, more sequential transfers, preventing the excessive seeking that leads to thrashing.

Get Started with SANsymphony, Software-Defined Block Storage