I love food with healthy and simple ingredients. I am a recipe developer and food blogger who is inspired by many cultures around the world. Today, I could be in Thailand having Tom Yum soup, and tomorrow I could be in France eating bouillabaisse!
Rogue Foodies is for everyone who loves to travel through the food they eat without borders, labels, or travel restrictions!
2016: 4TB + 512GB = 5.6TB of data disk space (always as RAID1)
2017: 4TB + 512GB = 6.12TB of data disk space
2018: 4TB + 512GB = 7.6TB of data disk space
2019: 6TB + 512GB = 12.5TB of data disk space
2020: 4TB + 512GB = 8.15TB of data disk space
2021: 6TB + 512GB = 14TB of data disk space
In the above chart, I have roughly 1.5 years of uptime (2016 was the last year I gave it some attention, obviously) and about 1.5 years of changes in disk space. I would notice drops in performance that affected the regular work of my server, such as:
- Plugging/unplugging HDDs (no boot or shutdown)
- Regular backups taking too long and sometimes failing (see below)
- Regular work in general, such as testing with fio (a latency probe sketch follows this list)
- Regular SSH logins via public-key or password authentication
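To put numbers on "drops in performance", a simple probe helps: repeatedly write and fsync a small file and record how long each round trip takes. This is a minimal sketch, not the tooling I used; the file path, block size, and round count are arbitrary assumptions.

```python
import os
import time

def probe_write_latency(path="/tmp/latency_probe.bin", block_size=64 * 1024, rounds=100):
    """Time synchronous writes to spot I/O latency spikes.

    path, block_size, and rounds are illustrative choices, not tuned values.
    """
    block = os.urandom(block_size)
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(rounds):
            start = time.monotonic()
            os.write(fd, block)
            os.fsync(fd)  # force the write to the disk, not just the page cache
            samples.append(time.monotonic() - start)
    finally:
        os.close(fd)
        os.unlink(path)
    samples.sort()
    return {
        "median_ms": samples[len(samples) // 2] * 1000,
        "p99_ms": samples[int(len(samples) * 0.99)] * 1000,
    }

if __name__ == "__main__":
    print(probe_write_latency())
```

Run it periodically (say, from cron) and a healthy disk shows stable medians; a degrading one shows the p99 creeping up long before the median moves.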
When the I/O latency started getting worse, there were almost no changes in I/O performance otherwise, so I had to start looking at problems other than raw performance. The machine became useless to me as a general-purpose server, so I turned it into a NAS; somebody else is using it for storage now.
After a while, as I was working on restoring the unrepairable drive, I noticed that any P/B, L/B, or S/B command took a long time (~3 minutes) and the expected output never appeared in the log (OK, so I was doing something wrong).
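The write-up doesn't spell out what the P/B, L/B, and S/B commands are, so the sketch below only shows the generic diagnosis step: run a suspect command under a timeout, measure its wall-clock time, and capture whatever it prints. The `some-command` argv is a hypothetical stand-in.

```python
import subprocess
import time

def time_command(argv, timeout=300):
    """Run argv, report wall-clock time, exit code, and captured output.

    timeout (seconds) is an arbitrary ceiling so a hung command does not
    block the shell forever.
    """
    start = time.monotonic()
    try:
        result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
        return time.monotonic() - start, result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return time.monotonic() - start, None, "", "timed out"

if __name__ == "__main__":
    # "some-command" is a hypothetical placeholder for the slow command.
    elapsed, rc, out, err = time_command(["some-command", "--flag"])
    print(f"{elapsed:.1f}s, rc={rc}, stdout={out!r}, stderr={err!r}")
```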
I ran a ZFS file system on four 2TB SSDs, mirrored, with the ZIL disabled. The file system started out with 64K compression and a 1K read/write cache. I changed some settings in /etc/zfs/config, rebooted, generated an I/O workload of file writes and reads with the fio tool, and computed the I/O throughput. fio didn't run any background operations; I just ran it to test.
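As a sketch of that throughput measurement, here is one way to drive fio from Python and pull the aggregate bandwidth out of its JSON report. The job parameters (block size, file size, runtime, target directory) are illustrative assumptions, not the exact job I ran.

```python
import json
import subprocess

def fio_throughput(directory="/tank/fio-test", runtime_s=60):
    """Run a mixed random read/write fio job and return MiB/s per direction.

    directory and the job parameters below are illustrative placeholders.
    """
    cmd = [
        "fio",
        "--name=throughput-probe",
        "--directory=" + directory,
        "--rw=randrw",
        "--bs=64k",
        "--size=1g",
        "--time_based",
        "--runtime=" + str(runtime_s),
        "--output-format=json",
    ]
    report = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    )
    job = report["jobs"][0]
    # fio reports per-direction bandwidth in KiB/s under "bw"
    return {
        "read_MiB_s": job["read"]["bw"] / 1024,
        "write_MiB_s": job["write"]["bw"] / 1024,
    }

if __name__ == "__main__":
    print(fio_throughput())
```

Running the same job before and after a settings change gives a like-for-like comparison, since fio pins the block size, access pattern, and duration.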
I have a machine running Kubuntu 20.04 on an HP Z420, with 4 hard drives in a software RAID configuration. It was working fine with 5.4.0 kernels, but after an upgrade to 5.8.0-43-generic it would fail to boot, seeing only 2 HDDs out of 4. The ones it no longer sees are attached to the Intel Corporation C602 chipset 4-Port SATA Storage Control Unit (8086:1d6b, isci kernel module). When I boot back into 5.4.0-65-generic, things are back to normal (after a 2-hour RAID re-sync and fsck :) ). Here is how things look with 5.4.0 (not brave enough to try booting into 5.8 again just yet): d2c66b5586
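To confirm from a booted system whether the kernel bound a driver to that controller, one can walk sysfs. This is a minimal sketch assuming the 8086:1d6b vendor/device ID from above; it reports whether the PCI device is present and which driver claimed it.

```python
from pathlib import Path

def find_pci_device(vendor="0x8086", device="0x1d6b"):
    """Look up a PCI device by vendor/device ID in sysfs and report its driver.

    Defaults match the C602 4-Port SATA Storage Control Unit mentioned above.
    """
    for dev in Path("/sys/bus/pci/devices").iterdir():
        try:
            v = (dev / "vendor").read_text().strip()
            d = (dev / "device").read_text().strip()
        except OSError:
            continue
        if (v, d) == (vendor, device):
            driver = dev / "driver"
            # "driver" is a symlink to the bound driver; absent if none claimed it
            bound = driver.resolve().name if driver.exists() else None
            return dev.name, bound
    return None, None

if __name__ == "__main__":
    addr, driver = find_pci_device()
    if addr is None:
        print("controller not visible on the PCI bus")
    else:
        print(f"{addr}: driver={driver or 'none bound (its disks will be missing)'}")
```

On the working 5.4.0 kernel this should report the isci driver bound to the device; on the failing 5.8 kernel, no bound driver (or no device at all) would explain the two missing HDDs.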