Frequently Asked Questions (FAQ)
The traditional Linux RAM disk can only be statically loaded at boot time, with a fixed size for all volumes of no more than 16 MB each. This megabyte value can be adjusted, but the result is still the same: low capacities that are not very manageable. You can reference the brd code in the Linux kernel for more details. I was primarily inspired by the Solaris implementation of its ramdisk module and the userland tool that accompanies it, ramdiskadm. I was also partly inspired by the FreeBSD implementation called md, or Memory Disk. My Linux module allows for the dynamic creation and removal of RAM-based block devices for high performance computing, at sizes varying from 16 MB to 1 TB and larger. It also incorporates more up-to-date features and functionality.
Also, all other Linux implementations of RAM disks exist as file systems in memory, such as tmpfs and ramfs. These file systems are not always the ideal solution for achieving high performance.
Achieving high performance played a big role, but the module also grew out of a project Petros Koutoupis started while writing an article for a Linux publication about the Linux RAM disk and the Solaris ramdisk module.
Aside from increasing productivity with little to no bottleneck in accessing data, there is also an opportunity to save on cooling costs. For instance, if a lot of frequently accessed data is moved to a stable and redundant RAM disk, there is less need to access mechanical Hard Disk Drives, which contain many moving parts and in turn use much more power while generating significant amounts of heat. Data centers around the world invest a lot of money in cooling to keep their equipment operating at moderately cool and stable temperatures.
As much as you see fit to do the job, so long as you do not exceed the system’s limitations. We are not responsible for misuse or miscalculations, and we do not manage that for the user. As the user, you will need to determine how much memory the operating system needs to manage everything else and how much it can afford to dedicate to our RAM disks.
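For example, you can check how much memory is currently available before sizing a RAM disk; this is standard procps tooling, nothing RapidDisk-specific:
$ free -m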
In your /etc/lvm/lvm.conf file do a global search for “types =” and add the following line:
types = [ "rd", 1 ]
Then retry the pvcreate command.
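For example, assuming the RAM drive was attached as /dev/rd0 (the device name and the volume group name below are illustrative; adjust them to your configuration), retrying would look something like this:
$ sudo pvcreate /dev/rd0
$ sudo vgcreate vg_rapiddisk /dev/rd0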
The best part of this solution is that it is free; it will only cost you a bit of time to learn, test, and configure.
Aside from the fact that DRAM performs much better than NAND flash memory, DRAM’s performance is also consistent. DRAM also does not have a limited cell life that caps its write operations. Flash memory will hit a breaking point once its cells have been written to and it enters its Program/Erase (P/E) cycles. This significantly affects write performance (by more than half at times), and to combat this, vendors resort to tricks such as over-provisioning the NAND memory, wear leveling, write coalescing, etc. These methods also have an expiration date; that is, at some point, you will start to hit those P/E cycles.
There is an even scarier concern with NAND technologies, one that the vendors of these NAND chips, and in turn the flash drives built on them, are not divulging to their consumers: the read disturb. It can create data loss or data corruption, and not all NAND controllers are equipped to avoid it.
The method used to read NAND flash memory can cause nearby cells in the same memory block to change over time (become programmed). This is known as read disturb. The threshold is generally in the hundreds of thousands of reads between intervening erase operations. If you read continually from one cell, that cell will not fail, but rather one of the surrounding cells will on a subsequent read.
One thing the RapidDisk technologies can do is help reduce read operations against flash by redirecting those reads to DRAM instead.
Linux does not have a built-in block I/O cache. All block I/O resides in a temporary buffer until the scheduler dispatches the request to the block device. This is similar to running Direct I/O on a file over a file system, in which all I/O is immediately dispatched regardless of the scheduler. A file system will cache data in the VFS layer, but this cache is somewhat small and limited. With RapidDisk / RapidDisk-Cache, you can easily attach 1 GB or even 1 TB of cache to a slower block device, thus enabling a block-based cache.
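As a minimal sketch, assuming /dev/sdb is the slower backing device you want to accelerate (both the device name and the 1 GB size are illustrative), this would look like:
$ sudo rapiddisk --attach 1024
$ sudo rapiddisk --cache-map rd0 /dev/sdb
All subsequent I/O would then go through the resulting device-mapper node (e.g. /dev/mapper/rc_sdb, following the rc_ naming shown later in this FAQ) rather than /dev/sdb directly.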
To load the RapidDisk and RapidDisk-Cache modules at boot time, you can run the following commands:
Red Hat / Fedora
$ echo -ne "\nmodprobe rapiddisk-cache 2>&1 >/dev/null" >> /etc/sysconfig/modules/rapiddisk.modules
$ echo -ne "#!/bin/sh\nmodprobe dm-crypt 2>&1 >/dev/null" > /etc/sysconfig/modules/dm-crypt.modules
$ chmod +x /etc/sysconfig/modules/{rapiddisk,dm-crypt}.module
Ubuntu / Debian
$ echo "rapiddisk max_sectors=2048 nr_requests=1024" >> /etc/modules
$ echo "rapiddisk-cache" >> /etc/modules
$ echo "dm_mod" >> /etc/modules
$ echo "dm_crypt" >> /etc/module
SUSE / openSUSE
$ echo rapiddisk > /etc/modules-load.d/rapiddisk.conf
$ echo rapiddisk-cache >> /etc/modules-load.d/rapiddisk.conf
$ echo dm-crypt >> /etc/modules-load.d/rapiddisk.conf
$ chmod 755 /etc/modules-load.d/rapiddisk.conf
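After a reboot, you can verify that the modules were loaded; this is just standard lsmod output filtering, nothing RapidDisk-specific (loaded module names show up with underscores):
$ lsmod | grep rapiddisk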
Remember, RapidDisk-Cache is a block-level caching module; it will only work on a block device. That means that with ZFS it must be mapped directly to a ZVOL, and in turn you need to mount the mapping of that ZVOL. First, you must have a ZPOOL and then create a ZVOL on that ZPOOL:
$ sudo zpool create -f -O compression=lz4 rpool /dev/sdb /dev/sdc
$ sudo zfs create -V 100G rpool/vol1
Next create the RapidDisk RAM drive of your desired size and then map it to the ZVOL:
$ sudo rapiddisk --attach 1024
$ sudo rapiddisk --cache-map rd0 /dev/zvol/rpool/vol1
You can also use the short /dev mapping of the same volume. In this case it would be /dev/zd0.
Finally, all future writes to the volume will have to go through the new caching node name:
$ sudo mke2fs -F /dev/mapper/rc_vol1
$ sudo mount /dev/mapper/rc_vol1 /mnt/
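If you want to confirm that the cache mapping is active, standard device-mapper tooling can report on the target (rc_vol1 being the mapping created above):
$ sudo dmsetup status rc_vol1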
When I use RapidDisk devices and I create ZPOOL with the ZFS on Linux project, why am I seeing an invalid ioctl 0x2285 in the kernel logs?
This may be alarming at first, but note that it does not affect the functionality of ZFS over RapidDisk. During the creation of the ZPOOL, the ZFS utilities send an SG_IO ioctl() request (0x2285) to issue a SCSI Inquiry command to the underlying devices. This is to determine the type of disk device. The message you will see in the logs will look something like this:
Aug 23 13:49:22 unknown0800275CBC7A kernel: [12516.633704] rapiddisk: 0x2285 invalid ioctl.
Again, this is not a bug or a real issue. All operations will continue as expected. If you wish to further dive into the ZFS code to observe this call, it can be found in the cmd/zpool/zpool_vdev.c file and in the function check_sector_size_database().
While it isn't advised, due to the volatile nature of RAM, to enable write-back caching on a system drive or any drive where the data matters, there are simple instructions preserved here.
Why is my data not flushing to the backing store on shutdown when I enable RapidDisk-Cache in write-back mode?
The "unmap" command with the rapiddisk
utility forces this flush before unmapping the volatile RAM drive from the persistent backing store but if this is configured for your OS drive, that will likely not be possible. You may need to write and configure a shutdown script such as an rc or systemd service file to run a command like this:
dmsetup message /dev/mapper/<device name> 0 flush
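A minimal sketch of such a systemd service is shown below. The unit name, the rc_sdb mapping name, and the dmsetup path are illustrative assumptions; adjust them to match your configuration. The unit does nothing at boot and runs the flush when it is stopped during shutdown:
# /etc/systemd/system/rapiddisk-flush.service (illustrative name and mapping)
[Unit]
Description=Flush RapidDisk-Cache to its backing store at shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Nothing to do at boot; the flush runs when the unit is stopped at shutdown
ExecStart=/bin/true
ExecStop=/sbin/dmsetup message /dev/mapper/rc_sdb 0 flush

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable rapiddisk-flush.service. For an OS drive, you may need to adjust the ordering so the flush runs as late as possible in the shutdown sequence.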