When I built my Ceph cluster in March 2020, I used old SATA HDDs I still had on the shelf. It was always clear that at some point these old consumables would die and need replacing. That time has come, and I went with 3x 2TB SATA SSD OSDs per node.
Just a short write-up on the state of my TerraMaster F5-422 Ceph cluster now that it has been in use for about 3 years.
This is my braindump on installing cephadm on CentOS Stream 9.
While I use a QNAP TS-473A for my tests plus 2 VMs, the below applies to any machine or VM running CentOS Stream 9. FWIW, my installation and initial configuration is described in the post QNAP TS-473A with CentOS Stream 9.
So I cleanly took my QNAP TS-473A out of my existing Ceph Nautilus cluster again (because I have enough combined capacity on my F5-422 nodes to be able to remove the OSDs in the TS-473A) and installed Fedora Server 35 plus cephadm from upstream.
While for my virtualisation needs I am firmly in the Red Hat camp, an article in iX 9/2021 piqued my interest.
Since I was on a week of ‘staycation’ and my Dell Precision T7910 was not in use, I decided to test Proxmox VE 7.0 on it, tying in my existing Ceph Nautilus storage for both RBD and CephFS use.
```
user@workstation tmp $ s3cmd --access_key=FOO --secret_key=BAR ls s3://gitlab-backup/
2020-09-09 19:28 686305280  s3://gitlab-backup/1599679709_2020_09_09_13.3.5-ee_gitlab_backup.tar
```
This is my braindump on setting up NFS-Ganesha to serve 3 separate directories on my CephFS using 3 separate cephx users.
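To give an idea of the shape of such a setup, an export for one of the directories might look like this in ganesha.conf, using the Ceph FSAL. The export path, pseudo path, and cephx user name below are illustrative placeholders, not the actual values from my cluster:

```
EXPORT {
    Export_ID = 1;
    Path = "/GitLab_backup";      # directory inside CephFS (placeholder)
    Pseudo = "/GitLab_backup";    # path the NFS clients mount (placeholder)
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "nfs-gitlab";   # cephx user, without the "client." prefix (placeholder)
    }
}
```

One such EXPORT block per directory, each with its own Export_ID and its own cephx user, keeps the three exports isolated from one another.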
```
[root@gitlab ~]# df -h /mnt/cephfs-GitLab_backup/
Filesystem  Size  Used  Avail  Use%  Mounted on
ceph-fuse   5.3T  175G  5.1T     4%  /mnt/cephfs-GitLab_backup
```
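For reference, one way to make a ceph-fuse mount like the one above persistent is an fstab entry using the fuse.ceph mount helper. The client id and CephFS subdirectory below are illustrative placeholders:

```
# /etc/fstab — ceph-fuse mount of a CephFS subdirectory (names are placeholders)
none  /mnt/cephfs-GitLab_backup  fuse.ceph  ceph.id=gitlab-backup,ceph.client_mountpoint=/GitLab_backup,_netdev  0 0
```

The ceph.client_mountpoint option restricts the mount to a subdirectory of the CephFS, matching a cephx user that is only authorised for that path.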
I made some hardware changes to my Ceph Nautilus cluster.
While previously I used one SSD shared between the operating system and Ceph, I wanted a clean setup with the OS and Ceph on separate devices.