https://www.youtube.com/watch?v=nVjYVmqNClM :Ceph Code Walkthroughs: Librbd Part 2
The other meetings (the performance weekly, Crimson weekly, orchestration weekly, and science team biweekly) are not covered separately here; nothing interesting in them this time.
Mailing list
ceph-users
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RM5LOAWLWSRMDPZHOUK2UED3EJZ4ZOI6/:Metadata for LibRADOS. The poster wants to skip RGW and upload objects directly through librados, but then object metadata cannot be synced to an external system such as Elasticsearch. They asked whether librados has a hook or notify mechanism for this; currently it does not, although similar plugins exist in the industry (details in the thread; an application-side sketch follows these threads).
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZNOPX3K53GDUBO7VZC4VPYS3JHYHJB3H/:Best practices for OSD on bcache. The thread offers some bcache configuration advice, but the root problem (bcache-backed OSDs sometimes perform very poorly) reaches no conclusion; a setup sketch follows these threads.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/K3OXZUVTYVFOXRJ2FG3YWLCAO5M7SNQG/:Ceph User Survey Now Available
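Since librados itself offers no write hook, one obvious workaround is application-side dual-writing: store the object, then index its metadata yourself. A minimal sketch using the rados CLI and curl; the pool, object, xattr, and Elasticsearch index names are all made up:

    # hypothetical pool/object; an Elasticsearch node at localhost:9200 is an assumption
    rados -p mypool put myobject ./payload.bin
    rados -p mypool setxattr myobject owner alice
    curl -s -XPUT 'http://localhost:9200/objects/_doc/myobject' \
         -H 'Content-Type: application/json' \
         -d '{"pool": "mypool", "owner": "alice"}'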
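For context on the bcache thread, a bcache-backed OSD is typically assembled as below; the device names are hypothetical, and writeback caching is exactly the part whose performance the thread questions:

    # /dev/sdb = slow backing disk, /dev/nvme0n1p1 = flash cache (both hypothetical)
    make-bcache -C /dev/nvme0n1p1 -B /dev/sdb
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # hand the composite device to Ceph as an ordinary OSD
    ceph-volume lvm create --data /dev/bcache0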
Pacific release notes
Dashboard:
Alert triggered on MTU mismatches in the cluster network.
Host management: maintenance mode, labels.
Services: display placement specification.
OSD: disk replacement, display status of ongoing deletion, and improved health/SMART diagnostics reporting.
RADOS:
Pacific introduces RocksDB sharding, which reduces disk space requirements.
Ceph now provides QoS between client I/O and background operations via the mclock scheduler. (nice! see the sketch after this list)
The balancer is now on by default in upmap mode to improve distribution of PGs across OSDs.
The output of ceph -s has been improved to show recovery progress in one progress bar. More detailed progress bars are visible via the ceph progress command.
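Switching to mclock is a configuration change plus an OSD restart; a minimal sketch (the profile names below follow the upstream docs and are worth verifying against your release):

    # replace the default op queue with the mclock scheduler
    ceph config set osd osd_op_queue mclock_scheduler
    # pick a built-in profile: high_client_ops, balanced, or high_recovery_ops
    ceph config set osd osd_mclock_profile high_client_ops
    # restart the OSDs for the queue change to take effect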
RBD:
A new persistent write-back cache is available. The cache operates in a log-structured manner, providing full point-in-time consistency for the backing image. It should be particularly suitable for PMEM devices. (nice! see the config sketch after this list)
A Windows client is now available in the form of librbd.dll and rbd-wnbd (Windows Network Block Device) daemon. It allows mapping, unmapping and manipulating images similar to rbd-nbd (mapping sketch after this list).
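The write-back cache is configured on the client side; a sketch of enabling the PMEM-oriented mode with a hypothetical mount path (option names follow the upstream persistent write-back cache docs; double-check them for your release):

    # load the write-log plugin and point it at a PMEM mount
    ceph config set client rbd_plugins pwl_cache
    ceph config set client rbd_persistent_cache_mode rwl
    ceph config set client rbd_persistent_cache_path /mnt/pmem
    ceph config set client rbd_persistent_cache_size 1G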
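Mapping on Windows mirrors the rbd-nbd workflow; a sketch with a hypothetical image name:

    # run from a Windows shell with the WNBD driver installed
    rbd-wnbd map mypool/myimage
    rbd-wnbd unmap mypool/myimage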
CephFS:
Multiple file systems in a single Ceph cluster is now stable. New Ceph clusters enable support for multiple file systems by default. Existing clusters must still set the enable_multiple flag on the FS (sketch after this list).
cephfs-top is a new utility for looking at performance metrics from CephFS clients. It is development preview quality and will have bugs. (nice! see the sketch after this list)
First class NFS gateway support in Ceph is here! It’s now possible to create scale-out (“active-active”) NFS gateway clusters that export CephFS using a few commands (sketch after this list). The gateways are deployed via cephadm (or Rook, in the future).
Multiple active MDS file system scrub is now stable. It is no longer necessary to set max_mds to 1 and wait for non-zero ranks to stop. Scrub commands can only be sent to rank 0: ceph tell mds.<fs_name>:0 scrub start /path (nice!)
A new cephfs-mirror daemon is available to mirror CephFS file systems to a remote Ceph cluster (setup sketch after this list).
A Windows client is now available for connecting to CephFS. This is offered through a new ceph-dokan utility which operates via the Dokan userspace API, similar to FUSE (mount sketch below).
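For the multiple-file-systems item, a minimal sketch (the file system name is hypothetical; older releases also demanded --yes-i-really-mean-it on the flag):

    # existing clusters must opt in first
    ceph fs flag set enable_multiple true
    # the volumes interface creates the data/metadata pools for you
    ceph fs volume create second_fs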
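cephfs-top reads from the mgr stats module, so enable that first:

    # start collecting per-client performance metrics, then run the viewer
    ceph mgr module enable stats
    cephfs-top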
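For the NFS item, a rough sketch; the exact signature of the create command shifted across Pacific point releases (early builds also took a leading cluster type argument), so treat the names and argument order as assumptions:

    # deploy an active-active pair of NFS-Ganesha daemons on two hosts
    ceph nfs cluster create mynfs "2 hostA,hostB"
    # export file system 'myfs' under the pseudo-path /cephfs
    ceph nfs export create cephfs myfs mynfs /cephfs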
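Mirroring involves both clusters; a hedged sketch with hypothetical file system, user, and site names:

    # source cluster: deploy the daemon and enable mirroring for one fs
    ceph orch apply cephfs-mirror
    ceph mgr module enable mirroring
    ceph fs snapshot mirror enable myfs
    # target cluster (with the mirroring module enabled there too): mint a bootstrap token
    ceph fs snapshot mirror peer_bootstrap create myfs client.mirror_remote remote-site
    # source cluster: import the token, then choose directories to mirror
    ceph fs snapshot mirror peer_bootstrap import myfs <token>
    ceph fs snapshot mirror add myfs /some/path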
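And mounting CephFS on Windows with the new utility (the drive letter is arbitrary):

    # map the file system to drive x: via the Dokan driver
    ceph-dokan.exe -l x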