Ceph Community Tracking (2020-10-16 ~ 2020-11-25)




Author: myself

YouTube channel

Ceph Orchestrator Meeting 2020-10-26

  • Mainly discussed the alpha release process (branching and PR workflow) and documentation improvements

Ceph Docubetter Meeting 2020-10-29

  • The docs need to be updated to match the ceph df output format change: https://github.com/ceph/ceph/pull/37539
  • Someone from DigitalOcean presented a slide deck on making the RADOS message protocol documentation clearer ("un-black-box RADOS"). The focus seemed to be on librados-style client bindings (Go and other languages) that talk to RADOS directly via the message protocol, much like a network protocol, instead of wrapping the C/C++ librados API, and on letting other developers share the documentation workload with the core developers
  • The installation docs cannot keep up with the release cadence of the deployment tools (ceph-deploy, cephadm, ceph-ansible, docker, rook, ...), and adapting the docs for each distribution is also a big problem
  • Some documentation URLs are broken and need fixing; whether to fix the docs for older releases is still under discussion
  • The discussion mostly revolved around how to let developers contribute documentation faster, more efficiently and with higher quality, but no concrete conclusion seems to have been reached

Ceph Code Walkthrough 2020-10-27

  • LibRBD I/O Flow Pt. 1 (the screen recording quality is poor; barely watchable even in HD)
  • Full list of walkthrough recordings: https://tracker.ceph.com/projects/ceph/wiki/Code_Walkthroughs

Ceph Crimson/SeaStore OSD 2020-10-28

  • PR review: https://github.com/ceph/ceph/pull/37870
  • Deleting a large batch of KV entries from the extent mapping after inserting them (more than about a thousand; the problem reproduces reliably) causes memory usage to spike; an encoding issue is suspected and the analysis is still ongoing
  • Preparing a performance test harness for SeaStore: an NBD server turned out to be the simplest approach and has already been implemented, so transactions can be sent directly to the SeaStore backend: https://github.com/ceph/ceph/pull/38290 (https://github.com/ceph/ceph/pull/38290/commits/82013fa0160fa08db9d3221ec9d40de055ea02e0)
  • With smart NICs, the OSD/client interaction model may need a complete redesign: data could be written to disk without waking the CPU, with no thread queues at all; each OSD would expose many ports and PGs would be grouped (sharded) by port. The benefits of NVMe over Fabrics were also discussed (a CPU-driven I/O path can never match NVMe-oF on CPU utilization, which is effectively zero), and NVMe-oF vendor compatibility has to be considered rather than supporting only specific vendors
  • Discussed the client-to-PG communication model (each client/TCP connection is bound to one CPU core on the backend, which then forwards requests to the owning PG; mapping PGs to cores is a bit ugly, but until the message protocol is changed it is the only option)
  • Discussion on improving test coverage

Ceph Performance Meeting 2020-10-29

  • New PR discussion: https://github.com/ceph/ceph/pull/37762 and https://github.com/ceph/ceph/pull/37636
  • PG deletion speedup: roughly a 10x improvement, apparently in heavy-omap scenarios (https://github.com/ceph/ceph/pull/37314, https://github.com/ceph/ceph/pull/37496); performance data: https://docs.google.com/spreadsheets/d/17V2mXUDEMAFVmSC67o1rQtrWNnAm_itBgzdxh1vRJMY/edit#gid=0
  • PG log capacity usage: either reduce the number of PGs per pool or shorten the per-PG log; with too few PGs concurrency suffers, so the only way out is creating more pools. Do the PG logs of cold-data pools need to be cached in memory at all? (See the PG-log sketch after this list.)
  • Open question: for a cluster that starts out very small, how many PGs and how many pools should be allocated, while keeping data migration to a minimum during later expansion?
  • RocksDB KV onode cache not being released: very old KV entries stay in memory even though only new, hot data is actually used, which wastes memory (the data cache gets squeezed out by it, yet the data cache is not allowed to squeeze out the metadata cache, even when much of the latter is old, unused data)
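
A minimal CLI sketch of the PG-count and PG-log knobs touched on in the PG log item above, assuming a Nautilus/Octopus-era cluster; the pool name and the numeric values are placeholders, not recommendations:

    # check how many PGs a pool currently has ("mypool" is a placeholder)
    ceph osd pool get mypool pg_num

    # cap the per-PG log length cluster-wide (illustrative values only)
    ceph config set osd osd_min_pg_log_entries 500
    ceph config set osd osd_max_pg_log_entries 2000

    # confirm what the OSDs will pick up
    ceph config get osd osd_max_pg_log_entries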

Ceph Crimson/SeaStore OSD Weekly 2020-11-04

  • How to implement the object context lock under the SeaStore architecture (both BlueStore and FileStore have this lock)
  • EIO handling logic is under development; the earlier design turned out to be flawed and needs changes (others asked the author to write the new design up as a document before further discussion, but it is not written yet): https://github.com/AmnonHanuhov/ceph/tree/wip-ObjectStore_EIO_Handling
  • PRs discussed: https://github.com/ceph/ceph/pull/37925, https://github.com/ceph/ceph/pull/37907

Ceph Developer Monthly 2020-11-04

  • Three topics were discussed:
    • Manager scalability: MGR performance problems (whether to add metrics to help pinpoint them) and module debugging; it ran long and I did not listen to the whole thing
    • Replicated writeback cache in librbd (with slides): Intel presented progress on its PMEM/SSD-based RBD client writeback cache; the librbd cache can now be replicated (master/replica), so if the master client dies, a replica node can still flush the data to the OSDs, and if a replica dies a new one is chosen: https://github.com/pmem/rpma
    • A suggestion to set up a board for tracking development plans and progress, since time-zone differences make it hard for some people to join the weekly meetings live

Ceph User Survey Working Group 2020-11-05/2020-11-12/2020-11-25

  • Discussed the design of the 2020 user survey; some complained that it is a bit complicated and has too many irrelevant questions. Current draft: https://tracker.ceph.com/attachments/download/5260/Ceph%20User%20Survey%202020.pdf
  • https://tracker.ceph.com/projects/ceph/wiki/User_Survey_Working_Group
  • Not following this one closely; will look again when the survey results come out

Ceph Crimson/SeaStore OSD 2020-11-11

  • Routine sync-up, mainly submitting and reviewing PRs

Ceph Performance Meeting 2020-11-12

  • Discussed database support in cbt, so that test results can be stored and fed into automated analysis later
  • Discussed adding admin socket commands or logging to the OSD to collect performance statistics during benchmark runs; some already exist, so how should new ones be added?
  • New PR discussion (the ones carrying the performance label): https://github.com/ceph/ceph/pull/38025, https://github.com/ceph/ceph/pull/38477, ...
  • Reviewed and discussed benchmark results comparing the wpq and dmclock schedulers with osd_op_num_shards=2 and osd_op_num_threads_per_shard=8 (see the sharding sketch after this list); dmclock looks better, improving both latency and throughput, and it can also throttle recovery. Worth a deeper look once the final PR lands
  • Discussed this benchmark tool: https://www.vi4io.org/tools/benchmarks/md-workbench
  • Discussion of RocksDB compaction issues
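
For reference, a sketch of how the shard layout used in that wpq/dmclock benchmark could be reproduced on a test cluster; the settings need an OSD restart to take effect, and whether the dmclock-based scheduler is selectable via osd_op_queue in a given release is not something these notes confirm:

    # shard layout from the meeting notes: 2 shards x 8 threads per shard
    ceph config set osd osd_op_num_shards 2
    ceph config set osd osd_op_num_threads_per_shard 8

    # check which op queue scheduler the OSDs are configured with (wpq is the usual default)
    ceph config get osd osd_op_queue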

Ceph Science Working Group 2020-11-25

  • Mainly discussed the problems Ceph runs into in scientific/research deployments, such as scalability and upgradability

Mailing lists

https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/

  • Module ‘cephadm’ has failed: cephadm exited with an error code: 2, stderr:usage: rm-daemon [-h] –name NAME –fsid FSID [–force] [–force-delete-data]: no replies
  • RGW with HAProxy: “invalid response” errors, no replies
  • Recommended settings for PostgreSQL (tuning a database workload on RBD): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PLYPZOTPWEPEH5BPKTIFVRSSMU3TDLFH/
  • OSD host count affecting available pool size? (a beginner-level question)
  • Mon DB compaction MON_DISK_BIG (the mon DB grew beyond 15 GB and triggered the warning; we run into this regularly too. He is on 12.2.8; online compaction had little effect and offline compaction is worth a try as well, but the root cause was an OSD that had been down for a long time without being removed from the cluster. See the mon compaction sketch after this list): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6EHBAIJG3BBYQ7XCMDQAVJKDVQGYXUKV/
  • Problems with ceph command – Octupus – Ubuntu 16.04 (beginner issue: the mon quorum had not come up, so ceph commands timed out; a restart fixed it): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/IO6FCOS3ZIZFQ5HRJN64LPNEEJAYKLJZ/
  • pool pgp_num not updated (pgp_num showed no change after being adjusted, most likely because the cluster was too full for backfill to complete): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KWUUXNCHUR5E5MURKHEY3LECJCNFTINT/
  • Difference between node exporter and ceph exporter data: no replies; appears to be about the Prometheus module, also a beginner question
  • Someone reported a memory leak in 15.2.3 that keeps OSDs from completing recovery; still unresolved: https://tracker.ceph.com/issues/47929
  • 6 PG’s stuck not-active, remapped (most likely CRUSH could not find enough OSDs to satisfy min_size within the maximum number of attempts, with too many disks down at the same time): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/F34WC6YSFBMDCT5S7YNXLRKEFTROSYXO/
  • Need help integrating radosgw with keystone for openstack swift: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RI3J5UCRGE6ELNZYX2UISEPCXDE3MQY5/
  • RFC: Seeking your input on some design documents related to cephadm / Dashboard: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/3BN2UCD3N7XGIXJSTWWOL3MMCMNLYGRX/
  • ceph octopus centos7, containers, cephadm: (how to deploy Octopus on CentOS 7; answer: use cephadm, which implies containers, or install packages manually to stay container-free) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AIB72SZFS53E6SWZKBKGJJ7I4QPLZA4P/
  • TOO_FEW_PGS warning and pg_autoscale: (how to handle the warning raised after shrinking a pool left too few PGs; no replies yet) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PRHZLOJWCDO7NLRWAQ4HAQM5C3YX73AU/
  • Ceph Octopus: a cephadm deployment question; the host initially had a single NIC and another was added later, so how should the configuration be changed? https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QL5UUZ3LKLLA3L6U3TOBKDVHHQIIR5IF/
  • OSD Failures after pg_num increase on one of the pools: (a 14.2.7 cluster fell over after pg_num was increased; unless norecover is set, the OSDs keep hitting asserts and dying, which looks like a bug. One person replied but there is no further conclusion yet) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EPAA6MZW3HVG66F3DR2N3OGTTET5VST7/
  • desaster recovery Ceph Storage , urgent help needed: (another self-inflicted broken cluster, caused by changing the machines' IP addresses; after the mon quorum was recovered, ceph -s showed no OSDs at all, and there is no conclusion yet) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EEHQ4WLJOBTNXBQBLTDQH6UK5KSYLIJ4/
  • Large map object found: (an RGW warning triggered by an oversized omap; someone suggested resharding the bucket, which fixed it. See the bucket reshard sketch after this list): https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PVP6F5FTEJNDUDHQEAUXIC5V2BFSQUG5/
  • Ceph and ram limits: OSDs and mons using so much memory that the nodes hit OOM; no conclusion yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/MMC22TPSP6VKXI6RNX4O224UBV5IHCCA/
  • Urgent help needed please – MDS offline: this one is on 12.2.10 and is a useful reference and warning for us. The cluster suddenly filled up, and during journal replay after an MDS restart one oversized dentry consumed so much memory that the MDS node (128 GB of RAM) went OOM. Following advice from the list, he added an 800 GB SSD as swap and the cluster recovered after 3-4 hours of replay: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FZE4RGXJMKMWVIBEQDDVVGBMTOWYE7KE/
  • Ceph cluster recovering status: yet another broken cluster, no replies yet
  • The feasibility of mixed SSD and HDD replicated pool: an unconventional but genuinely interesting question: can SSDs and HDDs share one replicated pool, with SSDs as primaries and HDDs as replicas, for write-once data (slow writes are acceptable) that is read-only afterwards? Since Ceph reads from the primary by default, the SSDs' read performance would be fully exploited, and primary affinity can be used to pin all primaries onto the SSDs. The idea seems theoretically workable to me (see the primary-affinity sketch after this list), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/UZG3OKIEBH5VHP3U2B7R42NJP6MVG6Y6/
  • Ceph docker containers all stopped: the problem description is unclear; someone guessed it is a cephadm-deployed cluster, no further conclusion yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TUUYDRO47FZJ6COWKHQMY4JXWZDFKGIU/
  • Question about expansion existing Ceph cluster – adding OSDs: problems caused by cluster expansion not keeping up with usage growth; the usual steps for adding OSDs were also discussed, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FGOODUAH6XVXXKF73SGCY6ECD77L33RE/
  • Hardware for new OSD nodes.: how to configure nodes that mix NVMe and SAS disks; a useful reference for us https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KYHLBPYYPFSW2E6W7TNYKSW3UF5QXWIC/
  • Ceph not showing full capacity: usable cluster capacity lower than expected because of data imbalance, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FGC3AV6J7DYPOQR5TTIKL45ARFBQXY44/
  • Cephadm: module not found: a cephadm issue, fixed in newer releases, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/B3EE45F5TKA6VQCHZOP7RQ7IDJUVPSPZ/
  • Switching to a private repository: a cephadm question, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/OM6PIRHRB6Y7EYXX6LJPBDWA2HL3XEC5/
  • OSD disk usage: a question about usable capacity on a Nautilus cluster, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZWSXKCEUVUS6JVQV5O5U2KA3IJV3T6XT/
  • Ceph Octopus and Snapshot Schedules: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FX3AFHYQXRRU3636WTQWI66PQL2KLKB7/
    • The feature he was referring to: "Mirroring now supports a new snapshot-based mode that no longer requires the journaling feature and its related impacts in exchange for the loss of point-in-time consistency (it remains crash consistent)." https://docs.ceph.com/en/latest/releases/octopus/#rbd-block-storage
    • Another reply mentioned the CephFS snapshot scheduling feature (periodic snapshots and the like); the backport has not been merged into Octopus yet: https://github.com/ceph/ceph/pull/37142
  • Strange USED size: confusion about the USED value, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FYCXRYJKWRCWAARK4MZEEFQWISVRLSMG/
  • Fix PGs states: a pile of PGs stuck in down, incomplete, stale and unknown states with no idea how to handle them, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Y4QLZPMCBCL6FV6NPI3XZRQWZ2NSN3OJ/
  • pgs stuck backfill_toofull: full OSDs on an RGW cluster; someone helped work through it, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZRVWWECTFQNWUURG7PDGHIWHZ2LTDAIE/
  • RGW seems to not clean up after some requests: RGW 503 errors, not resolved yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/I7DBUXTMUA5ONO7IXLWKKMZD6VFLG3QS/
  • cephfs cannot write: a 15.2.5 cluster is healthy and all PGs are active+clean, yet the cluster cannot be written to; a reply suspects the permissions on the mounted directory https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CMBICTKTMJZQ4BBXJLYNVKIHEP2OGR44/
  • read latency: how to reduce read latency, and the difference between small sequential and random reads (readahead) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LA5BKFZTLCWJE4ZKKFIYYLJ7RQSTTQDD/
  • Monitor persistently out-of-quorum: the mons could not form quorum; the cause was a network problem (mismatched MTUs across several switches) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QEMRZJWKQ4R6CGOFEF62UVM3BRI7ZJ6S/
  • How to recover from active+clean+inconsistent+failed_repair?: an inconsistent PG could not be repaired after the whole cluster lost power. Someone guessed that a disk with bad sectors had left all three replicas inconsistent and suggested using smartctl to find the faulty disk, trying ddrescue/badblocks to recover the data from it, and then restarting the OSD. In the end the problem was not solved, and the object could not be marked lost either https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TI6RM5GUAX5PGERCQVZZ6JMZXQQ3O7OZ/
  • Does it make sense to have separate HDD based DB/WAL partition: should an all-HDD OSD put its DB/WAL on a separate partition? A reply says it is not worth it and brings no performance gain, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Q7M425C7TBBQKWJGVHGPOAB3USGD5LFP/
  • Updating client caps online: how to update auth caps online; someone posted steps that work (see the auth caps sketch after this list) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/MVMHMAWMLENMOJRJAP7X235AWXTDAAPX/
  • Restart Error: osd.47 already exists in network host: an OSD restart error on a cephadm deployment, not resolved yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LRMMYVQQMWDU3BZJ7MXZ3PO2FDPOEXRM/
  • Inconsistent Space Usage reporting: df on an NBD-backed disk reports usage that does not match the cluster's; someone suggested reclaiming space with fstrim (see the fstrim sketch after this list), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TV3DY2OB3ENL2PPOMIN3OA6RSKXPJLEQ/
  • How to reset Log Levels: after turning log levels down to 0 because the log files grew too large, how do you get the defaults back? A reply suggests ceph config dump (see the log-level sketch after this list), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YMEJR3I4I52LHHH7R5TYHG2YS24GGDP6/
  • Ceph 14.2 – stuck peering. : PGs stuck in the peering state, no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KW7B5B42AYPNA63W5LVNIXMYKREPYD5D/
  • bluefs_buffered_io: asking what this option is for, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/HBGRMUUUHWBZ3VMHGUM5EZFM4XDQ63AI/
  • RBD image stuck and no erros on logs: the cluster is healthy but an RBD image cannot be read or written and the user does not know what to do; no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TAAGMSVVJ2V7UICT5WVA2IMRSLMENVHL/
  • Ceph flash deployment: tuning an all-flash deployment; https://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments is a bit dated, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/3Y6DEJCF7ZMXJL2NRLXUUEX76W7PPYXK/
  • File read are not completing and IO shows in bytes able to not reading from cephfs: failure testing on a CephFS cluster mounted via the kernel client; after setting nodown, noout, norecover and nobackfill and stopping the whole cluster, existing files could no longer be read or written once the cluster was back to healthy, while new writes worked fine. No replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GY6ZGNIRGJQVWZCSWG3VKISACYVP2QYM/
  • cephadm POC deployment with two networks, can’t mount cephfs: as the subject says; a reply suggested going through the documented steps again, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/3UBRRNYV52SO5OGTWKV6DPRSZLEGBZ6E/
  • RGW pubsub deprecation: which of the two bucket-change notification mechanisms below is preferable?
    • [1] https://docs.ceph.com/en/latest/radosgw/notifications/ (push mode; should this be the preferred one?)
    • [2] https://docs.ceph.com/en/latest/radosgw/pubsub-module/ (pull mode)
    • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YQEW7DE4SOWERD7KH64MCRIVHSXWMUCZ/
  • high latency after maintenance: one rack (5 nodes) was shut down for 2 hours of maintenance with noout and norebalance set beforehand; after power-on, cluster latency rose above 20 seconds and only returned to normal 20 minutes later, cause unknown. My guess is that recovery saturated the HDDs
    • We’re running nautilus 14.2.11. The storage nodes run bluestore and have 9x 8T HDD’s and 3 x SSD for rocksdb. Each with 3 x 123G LV
    • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/R2OEG6RE4EKKGOAVG2H4LQ5MNYABKX7X/
  • Problem with checking mon for new map after upgrade: after upgrading from Nautilus to Octopus, OSDs get stuck at the "1234 tick checking mon for new map" stage on startup
    • A known issue (running ceph osd require-osd-release mimic resolves it): https://www.mail-archive.com/search?l=ubuntu-bugs%40lists.ubuntu.com&q=subject%3A%22%5C%5BBug+1874939%5C%5D+Re%5C%3A+ceph%5C-osd+can%27t+connect+after+upgrade+to+focal%22&o=newest&f=1
    • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EABVI566VYJ4NA4UMPCOLE2SGNGA5MCB/
  • msgr-v2 log flooding on OSD proceses: too much logging, unsure which setting to adjust https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VYEIS6MPIQR4FG5HIRLAUFEL6YIH2J5G/
  • NoSuchKey on key that is visible in s3 list/radosgw bk: on 15.2.3 (Luminous behaves correctly), the object is visible in both s3cmd ls and radosgw-admin bi list, but downloading it with curl returns 404. https://tracker.ceph.com/issues/47866, https://tracker.ceph.com/issues/48331, https://github.com/ceph/ceph/pull/38249, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WQ2F2GWI2WRDAGLVRDA7PAAGBJTNN4PI/
  • Low Memory Nodes: how to cope with OSD nodes that are short on memory? (A reply suggested lowering the memory target: ceph config set osd osd_memory_target 1500000000) https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ERK3OCTPE4AR7HXM254M4D3PURALGNC5/
  • ceph command on cephadm install stuck: a cephadm issue, no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CDIWUO2T2VFBOU4P5SZMKU56CX6VXTO4/
  • Dovecot and fnctl locks: running Dovecot on top of CephFS, no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WBU2BEWUA3HEQPP62SZ44VNADGQM6U4D/
  • pg xyz is stuck undersized for long time: after adding disks another OSD failed and the PGs never returned to clean; it looks like backfill_toofull, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/POIYKF4DP4H7L2ROTBK3OD2DYFULFOPL/
  • Multisite sync not working – permission denied: this RGW feature worked on earlier releases but broke after upgrading to 15.2.4, not resolved yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RXRF54FFJIUTQDQXLZ3YIJZTYXOPLEML/
  • Mon went down and won’t come back: the problem was solved, but the root cause remains unknown, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/4V5X6M26VUNXAUDHDEMVN6P3YIHXUA2D/
  • Ceph 15.2.3 on Ubuntu 20.04 with odroid xu4 / python thread Problem: ceph commands time out, no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/7RLG5KJ5FAOIIFIJNUOVUBMQVDFBSIBE/
  • move rgw bucket to different pool: wants to move RGW buckets to a different pool, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AZGFB32O4HWLTDK2GTPIQXZJSMEVMQYV/
  • cephfs forward scrubbing docs: complains that the documentation is unclear,
    • ceph tell mds.cephflax-mds-xxx scrub start /volumes/_nogroup/xxx recursive repair conflicts with multi-MDS setups, which the docs do not mention; the feature itself also still looks incomplete
    • https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ME6NOEIM4UXC5F5RX3EMNQ23ICL5H3Q6/
  • The feasibility of mixed SSD and HDD replicated pool: the SSD-primary / HDD-replica idea again, this time for improving read performance in read-mostly workloads (using a CRUSH rule to place the primary on SSDs rather than adjusting primary affinity); someone replied that they have been running exactly this setup for a long time, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/IGLHDCD2EFS3CHIFWFZ4NZM5IAHXWPFX/
  • disable / remove multisite sync RGW (Ceph Nautilus): used multisite sync originally, later switched to rclone, and now wants to remove multisite sync without putting the data at risk; no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/BTLWEMDR3AAC2QT4EGLWRNGCUV7I7ZWU/
  • cephfs – blacklisted client coming back?: reconnecting after an evict on 14.2, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ROYV6RUUT5VHQFNL2MPABL36EN4GZ3FZ/
  • Slow ops and “stuck peering”: PGs stuck in peering and not recovering, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/4VHI4LJOF7B5LHVEVL4BVI6H5EXH5PEJ/
  • RGW multisite sync and latencies problem: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/UDG5VC5VV3W7LJZF546UU3LNBHSCPXKB/
  • Ceph RBD – High IOWait during the Writes: RBD on an EC pool; one reply: “EC pools + small random writes + performance: pick two of the three.”; someone else suggested using CephFS instead, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/25CCEUNNUXTRCGGKM4H7LYU6GTEF6RRG/
  • safest way to re-crush a pool: how to set up CRUSH rules for a pool, a beginner question, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/MSMBLWB4KICHNP5K6SR474MZEM6FKHBG/
  • Is there a way to make Cephfs kernel client to write data to ceph osd smoothly with buffer io: once the page cache fills up, flushing a large amount of dirty data at once puts pressure on the OSDs; a reply suggested tuning vm.dirty_bytes and vm.dirty_background_bytes (see the writeback sysctl sketch after this list), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ROFF5IK7MCHM6BVHT2UVZKPJKMCSDIJX/
  • _get_class not permitted to load rgw_gc: caused by mixing releases; recovered after downgrading from Octopus, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TDDH6YRK2M6EFKDH4TYMB4ZEFJ2XEE5R/
  • Nautilus – osdmap not trimming: osdmaps are not being trimmed; a reply points to a bug that is triggered when an OSD in the cluster is down, and notes that "v14.2.13 has an important fix in this area": https://tracker.ceph.com/issues/47290 , https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ROINIUNPI36Z24YWGAYF6ZZB7LMQ6EZE/
  • newbie question: direct objects of different sizes to different pools?: wants to route objects to different pools by size, e.g. small objects to a replicated pool and large objects to an EC pool, and asks whether the S3 client or RGW can pick the pool automatically based on object size, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FN4NF2M35MYCLPDULBVGPSS622UWPLOC/
  • How to use ceph-volume to create multiple OSDs per NVMe disk, and with fixed WAL/DB partition on another device?: a reply suggested ceph-volume lvm batch –osds-per-device 4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 –db-devices /dev/nvme0n1, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SVHR3XAPAIPSXDKIWEIFYVGX2EZAHQAB/
  • How to run ceph_osd_dump: a Nautilus cluster reports OSD_SLOW_PING_TIME_BACK and the user wants to check the network with dump_osd_network; the official docs say to run it against the mgr, but it should actually be the osd, so the docs are wrong (see the dump_osd_network sketch after this list) https://docs.ceph.com/en/latest/rados/operations/monitoring/#network-performance-checks , https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/IGEXQLDAMLIZQBBA6DYBQTK4EWQBBRAL/
  • Rados Crashing: four RGW daemons crashed while objects were being deleted on Nautilus; a reply suspects this bug: https://tracker.ceph.com/issues/22951, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QVIMMFKQ7GA3POXUX5QLLSFQVHTHOEKR/
  • Autoscale – enable or not on main pool?: as the subject says, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PDQILIX6RCGOSPFNAKD2NSSBH3SFUMWQ/
  • Cephfs Kernel client not working properly without ceph cluster IP: an IP configuration problem, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/3XSSJ745CPCM2L25TOLN2DPX4KYTVBYL/
  • question about rgw delete speed: no clear conclusion, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/HAFHHL5VL2KZBVU7WQRL2TOB7XD4Y6AM/
  • OverlayFS with Cephfs to mount a snapshot read/write: I did not fully follow this one; apparently CephFS serves as the read-only OverlayFS layer with a local directory as the writable layer, a kernel version problem got in the way, and upgrading the kernel solved it, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/BIOBKO67OFYMZ245RIH4XG6BANW7HCAY/
  • How to Improve RGW Bucket Stats Performance: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WTO2K5FP77DKA2DV4HBCYATACDWCYXOZ/
  • monitor sst files continue growing: the mon DB kept growing after disks were added and compaction did not help; a reply suggested getting the cluster back to healthy first and moving the mon DB onto an SSD (compaction was simply too slow on the old disk), after which things were fine (see the mon compaction sketch after this list), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GTV6YD7V2GLWKFX4ROPRMS52YJOOP6AA/
  • which of cpu frequency and number of threads servers osd better?: as the subject says; this kind of question rarely has a clean answer, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EXWC2YJYFC6RUVSAUYLQWN2DHH4A2GG3/
  • Problem in MGR deamon: MGR crashes on Octopus, no replies yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/H3UKEJAXATLF5GPIF5ALC5D2R3NBFD4D/
  • Beginner’s installation questions about network: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2U23VVLSN653CZ6JH3C2NTUGSGKYR3XP/
  • build nautilus 14.2.13 packages and container: a packaging question, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QSSPPXBOP3AZFIUGGSYQWZGZLASU6XWD/
  • Mimic updated to Nautilus – pg’s ‘update_creating_pgs’ in log, but they exist and cluster is healthy.: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/OZRAFZ7KSOYWB6RO6S77KVD4T52YHTNP/
  • OSD memory leak?: a suspected memory leak on Mimic where OSD memory grows slowly over time; no conclusion yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TPIFMPQ6YHEK4GYH5LA6NWGRFXVW44MB/
  • How to configure restful cert/key under nautilus: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WQZE23MHTVZ6GPXPA36XU7UCTWTHRWL5/
  • CephFS: Recovering from broken Mount: a 14.x client against a 15.x cluster could not recover after a network failure and the physical machine had to be rebooted; no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NBGPVFYDA3MBEQF4DEA4FUYMGPAVOSYC/
  • Bucket notification is working strange: an Octopus issue, https://tracker.ceph.com/issues/47904, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XNEMWNL7ZKW6Y5MWZ5FQWO7JCOWGADKB/
  • Reclassify crush map: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/46HQ375OQDQJ4AZ4CIT7MKCVJSONYUXK/
  • Ceph RBD – High IOWait during the Writes: again the kind of question with no clear-cut answer, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/57IQ5HWXTOK77BYI6DS5VY5EETSEKCPY/
  • (Ceph Octopus) Repairing a neglected Ceph cluster – Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time: a 2+1 EC pool built on only 4 OSDs, after which a large number of PGs ended up degraded, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WDT475WDZXU4VTYAG6BZDSYDA6TTHABU/
  • Ceph EC PG calculation: how PG counts are calculated for EC pools, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XHYKS7KK7SJNQWTHLHPFZDGWWUZQNG7M/
  • Unable to clarify error using vfs_ceph (Samba gateway for CephFS): a Samba-gateway-for-CephFS issue, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/MM7DPKL6IKIZEDWQNIREMPCHFHO5SHOH/
  • Module ‘dashboard’ has failed: ‘_cffi_backend.CDataGCP’ object has no attribute ‘type’: not a Ceph problem but a pyOpenSSL version issue, solved, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/53ZKGBQLNOX2N4RQAXHUDXKT4VO2ZXAT/
  • Not all OSDs in rack marked as down when the rack fails: presumably there were not enough OSDs left to report the failed rack's OSDs as down, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/HZXM2DQTNKE55CS35IJMJKYVEO7237VQ/
  • MONs unresponsive for excessive amount of time: a mon that had been down for 45 minutes rejoined the cluster and the mon quorum then became unresponsive for about 10 minutes; the MDS could not reach the mons and I/O hung for those 10 minutes before recovering, cause unknown, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RLZUWD4JNWUJY77U6SYOKU25VOZHITEK/
  • Using rbd-nbd tool in Ceph development cluster: how to use rbd-nbd in a vstart development environment, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AN3WQOBHWHUWZQOHA4GYELCYHSWBDEVY/
  • CentOS 8, Ceph Octopus, ssh private key: a cephadm installation issue, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/BEEYGGDDY6LXJNZKM5WONUO5APUNLU6H/
  • MGR restart loop: on Mimic, stopping one mon node sent the MGR into a restart loop that only recovered a day later; suspected to be related to the heartbeat timeout, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/P6K4MNSGXHNLCXOEP2U4MO2YS4EYVQRZ/
  • Weird ceph use case, is there any unknown bucket limitation?: can Ceph handle 50,000 buckets? A reply says yes, but the metadata pool should be on NVMe, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CJFJ2XFAPCWQ3GBJ7GAAKIXTZT7K7JZ5/
  • EC overwrite: is this feature usable and does it have problems? A reply says it works but is of limited use, https://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwrites, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QRK2D73TNWPHPXSITHLIAS5WBDJA4ZUQ/
  • Slow OSDs: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/MUO3WPXQZNYMEKORIGHJIJ2DVWGJVC4U/
  • BLUEFS_SPILLOVER BlueFS spillover detected: 14.2.8 keeps raising this warning; a manual compaction clears it but has to be repeated. A reply notes this has been discussed many times and was later fixed in 14.2.12: https://github.com/ceph/ceph/pull/33889, https://github.com/ceph/ceph/pull/37091, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Q4QP3SFHGZUTPGNWVJ4YI4DCJESKTW5H/
  • newbie Cephfs auth permissions issues: a reply says the cephfs application has to be enabled on the pools first, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QLYDHKWPAYZAQHPMJX4CS3J45LZEFHXR/
  • Mon’s falling out of quorum, require rebuilding. Rebuilt with only V2 address.: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5X4HXXKHHVLFAQCCEZWX4BZJVRWZ455R/
  • EC cluster cascade failures and performance problems: on 14.2.11, spillover warnings plus one OSD going down and dragging others down with it, and performance worse than on 14.2.8, so they had to roll back. A reply asked whether they delete very aggressively; they confirmed they cycle through deleting 400 TB of data every 1-2 weeks. No conclusion yet https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YZRC3SUXGW3GSHDCQEN5BDCOAEBD74X7/
  • one osd down / rgw damoen wont start: on 14.2.13, a single OSD going down left RGW stuck during startup, no conclusion yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/BPESDVBVRYNJLFPJM7XAUM2CETNIKOOR/
  • CephFS error: currently failed to rdlock, waiting. clients crashing and evicted: a Samba + CephFS issue, no conclusion yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/D2RTMXELQT2RX66ALTUCNGTKEQAK3YFZ/
  • The serious side-effect of rbd cache setting: with the rbd cache enabled, rbd bench results fluctuate badly and become smooth once the cache is disabled; we have run into this before as well, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VI4ZOXQN4UADOWZK6MQXUEZLHSD6DOXW/
  • Multisite design details: looking for design material, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EJQTR3ZV74LDHDEGRYJTPL7XWW2CA7HU/
  • Problems with mon: the mons could not form quorum; rebuilding the broken mon fixed it, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/3UOD4U46YS5GVEUGGFJI4YOJ5W4H66V3/
  • Manual bucket resharding problem: a bucket reshard was interrupted with Ctrl-C and now the bucket cannot be resharded again, no replies yet (the bucket reshard sketch after this list shows the normal workflow), https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/C2JSWSZOZO42GM22K3NUKJMA4IAPJP7G/
  • question about rgw index pool: hardware sizing for the index pool, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/POJATK2SLJYDWR2VG6LJKBSXNWB6XQ4S/
  • using fio tool in ceph development cluster (vstart.sh): a beginner question, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2S7T2XCYLDJY6QZIBB4P3KOLILOQA5WT/
  • multiple OSD crash, unfound objects: on 15.2.4; discussed at length but apparently without a real conclusion, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QHFOGEKXK7VDNNSKR74BA6IIMGGIXBXA/
  • Sizing radosgw and monitor: what is a sensible ratio of RGW daemons to OSDs? No replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KBZCLLQL72RSI5SVZSTFHVMM66YIRIPR/
  • HA_proxy setup: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/HGELLTZZCRSTFSZWHRVGHW3PX7JOVWO7/
  • PGs undersized for no reason?: a problem after a minor version upgrade; the cause was OSDs ending up under a different CRUSH root, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VN5B5LUL37YZSQZNY4O6FXWFMO4XQFBL/
  • ssd suggestion: caveats for QLC SSDs; a reply says the main concern is endurance, they wear out easily, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/33AR2JWCVMOGPZUYI7XIZAI6ANAEQKZY/
  • Prometheus monitoring: confusion about some of the fields, no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NWZ23WESBA4J4NEUE4VQX6F4WJ6XIAYY/
  • Cephfs snapshots and previous version: the Samba CephFS snapshot module is missing from the CentOS 8 packages, is it stable? A reply says the module is stable, just rebuild the package yourself, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AWGAV7YSM7EMYSQZHBXZU2OJMCA4Z4PO/
  • Documentation of older Ceph version not accessible anymore on docs.ceph.com: where to find the documentation for older releases, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FZYE2BEDU5Y4EVJB6RY4X7Z2WDG4O5VG/
  • Unable to find further optimization, or distribution is already perfect: uneven utilization in a cluster mixing 8 TB and 16 TB disks, no conclusion yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Q6GUF5UIXHY5D4SAP5RAYVYJURS6LU7Z/
  • Certificate for Dashboard / Grafana: no replies yet, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AGCDM46OXOU4RMW7JVZKXJWU4GP6I4SF/
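
Mon compaction sketch (for the MON_DISK_BIG and "monitor sst files continue growing" threads above); the mon ID and data path are placeholders, and the offline variant assumes the mon daemon has been stopped first:

    # online compaction of a single mon's store
    ceph tell mon.node1 compact

    # ask mons to compact their store on startup
    ceph config set mon mon_compact_on_start true

    # offline compaction with the mon stopped (path is illustrative)
    ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-node1/store.db compact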
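
Bucket reshard sketch (for the "Large map object found" and "Manual bucket resharding problem" threads); the bucket name and shard count are placeholders:

    # see which buckets are close to the per-shard object limit
    radosgw-admin bucket limit check

    # reshard a bucket to a larger number of index shards
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=64

    # inspect pending/ongoing reshard activity
    radosgw-admin reshard list
    radosgw-admin reshard status --bucket=mybucket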
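
Primary-affinity sketch for the mixed SSD/HDD replicated pool idea above: keep HDD OSDs from being chosen as primary so reads land on the SSD copies. The OSD IDs are placeholders, and depending on the release the cluster may first need to be told to allow primary affinity:

    # stop osd.12 (an HDD in this example) from ever being picked as primary
    ceph osd primary-affinity osd.12 0

    # SSD OSDs keep the default affinity of 1
    ceph osd primary-affinity osd.3 1

    # non-default affinities should show up in the osd dump output
    ceph osd dump | grep -i primary_affinity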
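
Auth caps sketch (for the "Updating client caps online" thread); the client name, pool and caps are placeholders, and the key itself stays unchanged:

    # show the current key and caps
    ceph auth get client.myapp

    # replace the caps in place, without re-creating the key
    ceph auth caps client.myapp mon 'allow r' osd 'allow rw pool=mypool'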
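
fstrim sketch (for the "Inconsistent Space Usage reporting" thread): handing deleted blocks on a filesystem that sits on an RBD/NBD device back to the cluster. The device and mount point are placeholders:

    # discard unused blocks on the mounted filesystem
    fstrim -v /mnt/rbd0

    # alternatively, mount with continuous discard (which has its own performance trade-offs)
    mount -o discard /dev/nbd0 /mnt/rbd0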
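
Log-level sketch (for the "How to reset Log Levels" thread), assuming the levels were changed through the centralized config; the option names shown are only examples:

    # list everything that has been overridden, debug settings included
    ceph config dump | grep debug

    # drop an override so the built-in default applies again
    ceph config rm osd debug_osd

    # or set an explicit level at runtime on all OSDs
    ceph tell osd.* config set debug_osd 1/5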
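
dump_osd_network sketch (for the "How to run ceph_osd_dump" / OSD_SLOW_PING_TIME_BACK thread); run it on the host where the OSD lives, the OSD ID and threshold are placeholders:

    # dump heartbeat ping times above the given threshold in ms (0 = show everything)
    ceph daemon osd.3 dump_osd_network 0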
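
Writeback sysctl sketch (for the CephFS kernel-client buffered I/O thread); the byte values are purely illustrative and need tuning per node:

    # start background writeback earlier and cap the total amount of dirty data
    sysctl -w vm.dirty_background_bytes=268435456   # 256 MiB
    sysctl -w vm.dirty_bytes=1073741824             # 1 GiB

    # persist across reboots by adding the same keys to /etc/sysctl.conf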

https://lists.ceph.io/hyperkitty/list/dev@ceph.io/

  • can we remove the fmt submodule in master?: Kefu Chai replied no: seastar depends on libfmt and the packages shipped with xenial are too old, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/YLFSJNRDVH327DEYCHYLRMBHRERZPMZB/
  • Bucket chown: asked why https://github.com/ceph/ceph/pull/28813 is not backported to Nautilus; the answer is that it has too many dependencies and commits and the demand is weak, so upgrading to Octopus is recommended instead, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/GJMVENAQLJ4AK3ORKZXFZ7XUYBQX6HMT/
  • Bug #47586 – Able to circumvent S3 Object Lock using deleteobjects command: the reporter filed this bug, which hits his service hard, and wants to fix it himself, so he first asks whether anyone is already working on it. https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/3JKGTQOBXTJFXM4YU2X4RZRPORKZGEN5/
  • RGW vstart.sh failed to start: the MGR count was not set to 0, so the script kept waiting for an mgr to come up; since the build did not enable the mgr, startup failed, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/HIOSNVN3KKPKQHZNDBU7OLR7NHUQJH37/
  • RGW starting issue: RGW fails to start in a development environment, no conclusion yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/TKLNMGCTOQII3MSVVAK3FTN3BAB5KR3A/
  • RFC: Seeking your input on some design documents related to cephadm / Dashboard: a call for feedback on improving the cephadm and Dashboard design documents, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/3BN2UCD3N7XGIXJSTWWOL3MMCMNLYGRX/
  • Testing Roles S3 APIs: (a PR was submitted for syncing roles in a multisite env: https://github.com/ceph/ceph/pull/37679) how should test cases be added for it? No replies yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/TBNOABDBRN6QQTCW7WWLTWGIFJ5MWGK4/
  • make-dist hangs: no conclusion yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/7TGWYZ6XB4BY6QK5VKYSGVMUVOQUZOT7/
  • ceph_crc32 null buffer: why compute a CRC over a null buffer at all? No replies yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/BQZGM5BH67MCTOQO6F53A5WI7W72BGLL/
  • nautilus cluster one osd start up failed when osd superblock crc fail: as the subject says, no replies yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/YXRMHFJGYL6AERX7GWYU7I7DN7IIEE6B/
  • error executing ./do_cmake.sh: a build problem caused by the Python 3 dependency; “you could either use -DWITH_PYTHON3=3.6 or just ignore this option.”, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/PTQXEFXPZJOCS23PKZICYSBKRXBNJKK6/
  • v14.2.12 broken due to mon_host parsing. .13 release?: https://tracker.ceph.com/issues/47951, https://github.com/ceph/ceph/pull/37816; the request was to backport this PR into 14.2.13, and the reply is that it is already included. https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/PR77FQUGQZXWZWDE4BYOAE5JPPJIXDZZ/
  • cephfs inode size handling and inode_drop field in struct MetaRequest: I did not fully follow this one; it is about kcephfs, and Yan, Zheng replied, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JJUFZHDWY3KASGAOJNGWHIBFBGD7HVTF/
  • “ceph osd crush set|reweight-subtree” behavior: the command does not take effect; a bug was filed, https://tracker.ceph.com/issues/48065, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/UMXYYCHEQZUHB5IFUBE6QDFSCPPQLJO6/
  • radosgw keyring configuration parameter: filed a bug against the ceph config generate-minimal-conf command, https://tracker.ceph.com/issues/48122, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/NVPSANZHCRXNCVABQCAH7Q3UC6SYFYF2/
  • building RPMs locally: a packaging question, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/YWLPAAPQVMTHEV4LOCGSJNY4TTCQYVU7/
  • make check failure: a failing test case; a fix has been submitted, https://github.com/ceph/ceph/pull/38021, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/BF36L2SPYVDGSCFACWRDIWICQMZQI55A/
  • Pull Request Auto-labeling: automatically labeling PRs, apparently based on which code paths a PR touches, https://github.com/ceph/ceph/pull/38049 and https://github.com/ceph/ceph/pull/38060, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/RCCK7XLAHV6EP46MPY3K2KXP6EBX2POF/
  • Design discussion about replicated persistent write-back cache in librbd.: as the subject says; quite detailed and worth a read, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/C6BC3SPPDIZJAJ45RHAGKUQK43BIV2EL/
  • RGW Lua Compilation error: a reply says a newer commit has already fixed it, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/PI6LOPVQIOVZXL65CE3HYHVVCWEPLE47/
  • Do rados objects have a version or epoch?: no replies yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/3Z27XMVFXSVDK65WA3C2RXJFXE3Q4WUQ/
  • using fio tool in ceph development cluster (vstart.sh): can fio be used against a vstart environment? A reply pointed to https://github.com/ceph/ceph/pull/34622#issuecomment-625893545, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/2S7T2XCYLDJY6QZIBB4P3KOLILOQA5WT/
  • Ceph containers images: docker image maintenance; the coverage is incomplete, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/W4APLWWM6GDMZXNDJZUVQR7A7NXZWJR2/
  • PR test failure: the tests for https://github.com/ceph/ceph/pull/37933 fail and the author does not know why, since they pass locally; no replies yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/T632DRI5USSVGQRPBJ4GX2I3HDTXLZYS/
  • Best way to contribute bunch of miscellaneous fixes?: porting 15.2.4 to the arm platform turned up a lot of bugs; what is the best way to submit the fixes, one big batch of PRs or one at a time (which would be many)? Kefu Chai replied that Ceph has never officially supported arm, and whether to start testing it is still being discussed, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/4IRKQIIZO4XKGQTVI5LCIHX3FRPIYDA6/
  • run run-cli-tests in development environment: unsure how to run this; Kefu Chai pointed to https://github.com/ceph/ceph#running-unit-tests, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/24AJ4UIVCSMHBASXLL3DMTMH3YHFJE3F/
  • Compiling master with Clang….: as the subject says, no conclusion yet, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/3DFVJN7CVKC7L4VH7RYSZR75GKA42HM7/
  • A Query about rados read op API behavior: “can we feed the same ‘rados_read_op_t’ instance to ‘rados_read_op_operate()’ multiple times with different oids?”; I think the answer is no, https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/77SKXKCYR2GGA2D7HAZYKCCEWG4BKIB2/

Community blog

  • Welcoming Ernesto Puerta as the new Ceph Dashboard Component Lead:https://ceph.io/community/welcoming-ernesto-puerta/
  • https://ceph.io/releases/v14-2-14-nautilus-released/
  • https://ceph.io/community/v15-2-6-octopus-released/
  • https://ceph.io/releases/v14-2-15-nautilus-released/
  • https://ceph.io/community/v15-2-7-octopus-released/

Recently merged into master

  • Quite a lot; not analyzed one by one here: https://github.com/ceph/ceph/pulls?q=is%3Apr++is%3Amerged