Ceph Community Tracking (2021-03-01 ~ 2021-04-01)


  • https://www.youtube.com/watch?v=_mccEB35KG0 : Ceph Code Walkthroughs: RADOS Snapshots
  • https://www.youtube.com/watch?v=nVjYVmqNClM : Ceph Code Walkthroughs: Librbd Part 2
  • The other recurring meetings (performance weekly, Crimson weekly, orchestration weekly, and the science user group biweekly) had nothing notable this month, so they are not covered individually



  • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RM5LOAWLWSRMDPZHOUK2UED3EJZ4ZOI6/: Metadata for LibRADOS — the poster wants to bypass rgw and write objects directly with librados, but then has no way to propagate object metadata to an external system such as Elasticsearch. They ask whether librados provides a hook or notify mechanism for this; it currently does not, though similar third-party plugins exist (details in the thread)
  • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZNOPX3K53GDUBO7VZC4VPYS3JHYHJB3H/: Best practices for OSD on bcache — the thread collects some bcache configuration suggestions, but the underlying question (why bcache sometimes performs very poorly) reaches no conclusion
  • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/K3OXZUVTYVFOXRJ2FG3YWLCAO5M7SNQG/:Ceph User Survey Now Available
    • Survey results: https://survey.zohopublic.com/public/thankyou.do?uid=XZCNKQ&responseid=559647000000359001&responseekey=Rbzums2q
  • The remaining threads were not substantial enough to list here
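For reference on the bcache thread: the tunables that usually come up in these discussions look like the following. The sysfs knobs are standard bcache interfaces, but the values are illustrative, not a recommendation taken from the thread, and `/dev/bcache0` plus the cache-set UUID must be adjusted to your layout.

```shell
# Cache all I/O, including sequential writes (default cutoff is 4 MiB):
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
# Use writeback rather than the default writethrough mode:
echo writeback > /sys/block/bcache0/bcache/cache_mode
# Disable the congestion thresholds that make bcache bypass the cache
# under load (replace <cache-set-uuid> with your cache set's UUID):
echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us
```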
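Since librados has no built-in hook or notify mechanism for this use case (the thread's conclusion), the practical workaround is to do the indexing in the client application itself. Below is a minimal sketch of that write-through pattern; `IndexedStore`, `index_fn`, and the in-memory stand-ins are illustrative only, not a Ceph or Elasticsearch API — real code would call `ioctx.write_full()` on a rados I/O context and post to an Elasticsearch client instead.

```python
# Application-level "hook" for librados writes: librados itself will not
# notify an external indexer, so the client wraps every write and forwards
# the metadata to the index itself.

from typing import Callable, Dict, List


class IndexedStore:
    """Write-through wrapper: store the object, then index its metadata."""

    def __init__(self, index_fn: Callable[[dict], None]):
        self._objects: Dict[str, bytes] = {}   # stand-in for a RADOS pool
        self._index = index_fn                 # stand-in for an ES client

    def put(self, name: str, data: bytes, meta: dict) -> None:
        # 1) write the object (real code: ioctx.write_full(name, data))
        self._objects[name] = data
        # 2) synchronously index the metadata; if this step fails the
        #    caller must retry or queue it, since RADOS gives no callback
        self._index({"object": name, "size": len(data), **meta})


# Usage: collect indexed documents in a list instead of posting to ES.
docs: List[dict] = []
store = IndexedStore(docs.append)
store.put("img-001", b"\x89PNG...", {"owner": "alice"})
print(docs[0]["object"], docs[0]["size"])
```

The key design point is that the write and the index update form one client-side operation, so consistency between RADOS and the external index is the application's responsibility.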


  • https://docs.ceph.com/en/latest/rbd/rbd-persistent-write-back-cache/: the latest Ceph release (v16) adds an RBD persistent write-back cache that stages write I/O on a local SSD; the trade-off is availability, since acknowledged writes exist only on the client host until they are flushed to the cluster
  • https://ceph.io/contribute/google-summer-of-code/
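To make the write-back cache item above concrete, here is a ceph.conf client sketch. The option names follow the Pacific documentation; the path and size values are examples only.

```ini
[client]
rbd_plugins = pwl_cache
rbd_persistent_cache_mode = ssd                 ; "rwl" for PMEM devices
rbd_persistent_cache_path = /mnt/nvme/rbd-pwl   ; must be on the local SSD
rbd_persistent_cache_size = 1073741824          ; cache size in bytes
```

Keep the availability trade-off in mind: writes acknowledged from this local cache are lost if the client host fails before they are flushed back to the cluster.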


  • Quincy Developer Summit (April 6-9): https://ceph.io/cds/ceph-developer-summit-quincy/
  • v16.2.0 Pacific released: https://ceph.io/releases/v16-2-0-pacific-released/ — highlights relative to the Octopus release notes (see the link for details):
    • cephadm:
      • Upgrade from Octopus to Pacific with a single command
      • rgw: single-site and multi-site deployment supported; NFS and iSCSI deployment newly added
    • dashboard:
      • RBD: per-RBD image Grafana dashboards.
      • CephFS: Dirs and Caps displayed.
      • Alert triggered on MTU mismatches in the cluster network.
      • Host management: maintenance mode, labels.
      • Services: display placement specification.
      • OSD: disk replacement, display status of ongoing deletion, and improved health/SMART diagnostics reporting.
    • RADOS:
      • Pacific introduces RocksDB sharding, which reduces disk space requirements.
      • Ceph now provides QoS between client I/O and background operations via the mclock scheduler. (nice!)
      • The balancer is now on by default in upmap mode to improve distribution of PGs across OSDs.
      • The output of ceph -s has been improved to show recovery progress in one progress bar. More detailed progress bars are visible via the ceph progress command.
    • RBD:
      • A new persistent write-back cache is available. The cache operates in a log-structured manner, providing full point-in-time consistency for the backing image. It should be particularly suitable for PMEM devices. (nice!)
      • A Windows client is now available in the form of librbd.dll and rbd-wnbd (Windows Network Block Device) daemon. It allows mapping, unmapping and manipulating images similar to rbd-nbd.
    • CephFS:
      • Multiple file systems in a single Ceph cluster is now stable. New Ceph clusters enable support for multiple file systems by default. Existing clusters must still set the enable_multiple flag on the FS.
      • cephfs-top is a new utility for looking at performance metrics from CephFS clients. It is development preview quality and will have bugs. (nice!)
      • First class NFS gateway support in Ceph is here! It’s now possible to create scale-out (“active-active”) NFS gateway clusters that export CephFS using a few commands. The gateways are deployed via cephadm (or Rook, in the future).
      • Multiple active MDS file system scrub is now stable. It is no longer necessary to set max_mds to 1 and wait for non-zero ranks to stop. Scrub commands need only be directed to rank 0: ceph tell mds.<fs_name>:0 scrub start /path (nice!)
      • A new cephfs-mirror daemon is available to mirror CephFS file systems to a remote Ceph cluster.
      • A Windows client is now available for connecting to CephFS. This is offered through a new ceph-dokan utility which operates via the Dokan userspace API, similar to FUSE.
    • Our teammate 艾弗松 (@hzwuhongsong) landed several PRs in this release (nice!)
  • https://ceph.io/releases/v14-2-19-nautilus-released/: minor point release, not covered in detail
  • https://ceph.io/releases/v15-2-10-octopus-released/: minor point release, not covered in detail
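The single-command Octopus-to-Pacific upgrade mentioned under cephadm is driven by the orchestrator. A sketch against a cephadm-managed cluster (the target version string is an example):

```shell
# Kick off a rolling upgrade of the whole cluster to Pacific:
ceph orch upgrade start --ceph-version 16.2.0
# Monitor progress and overall cluster health:
ceph orch upgrade status
ceph -s
```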
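And for the mclock-based QoS item under RADOS: the scheduler is selected per OSD. A ceph.conf sketch with option names from the Pacific documentation; the profile choice here is just one of the built-in options.

```ini
[osd]
osd_op_queue = mclock_scheduler
; built-in profiles: high_client_ops, balanced, high_recovery_ops
osd_mclock_profile = high_client_ops
```

The profile trades client I/O priority against recovery/background work, which is the QoS split the release note refers to.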


  • cephfs-top related:
    • https://github.com/ceph/ceph/pull/40403
    • https://github.com/ceph/ceph/pull/40327
  • A large number of bug-fix PRs targeting the Pacific release