The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • June 22, 2021
    Ceph Foundation Announces the Formation of the Ceph Market Development Group

    San Francisco, California – June 21, 2021 – The Ceph Foundation, dedicated to enabling industry members to collaborate and pool resources to support the Ceph community, today announced the formation of the Ceph Market Development Group. The new group, composed of leading industry organizations, including Canonical, Red Hat, and SoftIron, will collaborate to raise awareness …Read more

  • June 4, 2021
    Ceph Community Newsletter, June 2021

    Announcements: Ceph Month June. This week kicks off our June 2021 Ceph Month, full of Ceph presentations, lightning talks, and unconference sessions such as BoFs (birds of a feather). There is no registration or cost to attend this event. Join the Ceph community as we discuss how Ceph, the massively scalable, open-source, software-defined storage system, can radically improve the economics …Read more

  • May 14, 2021
    The Red Hat Ceph Storage life cycle: upgrade scenarios and long-lived deployments

    Different industries have varying requirements for the software systems on which their respective businesses rely. Some operators choose to quickly embrace the latest and greatest release when facing change and integration updates. Others defer upgrades for as long as possible, continuing with a tried-and-true combination of software components until end-of-support (or security patching) …Read more

  • May 13, 2021
    2021 Ceph User Survey Results

    For the third year, we surveyed the population of Ceph users and published the data for the benefit of all interested. Thank you to the 245 respondents who shared their usage information and opinions this year. The purpose of this survey is to better understand how Ceph technologies have been adopted and to understand our …Read more

  • May 7, 2021
    New in Pacific: SQL on Ceph

    A new libcephsqlite library is available in Pacific that provides a Ceph SQLite3 Virtual File System (VFS). The VFS implements backend support for SQLite3 to store and manipulate a database file on Ceph’s distributed object store, RADOS. Normal, unmodified applications using SQLite can switch to this new VFS with trivial reconfiguration. SQLite was chosen as …Read more
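    As an illustration of the "trivial reconfiguration" the post describes, a database on RADOS might be opened from the sqlite3 shell roughly as follows. This is a sketch only: it assumes a running Ceph cluster, a pool named `mypool`, and a `libcephsqlite.so` installed where SQLite's extension loader can find it; the exact URI form should be checked against the libcephsqlite documentation for your release.

    ```shell
    # Load the ceph VFS extension, then open a database stored in RADOS.
    # "mypool" and "mydb.db" are placeholder names, not values from the post.
    sqlite3 -cmd '.load libcephsqlite.so' \
            -cmd '.open file:///mypool:/mydb.db?vfs=ceph' \
            'CREATE TABLE IF NOT EXISTS kv (k TEXT, v TEXT);'
    ```

    The point of the VFS design is that everything after the `.open` line is ordinary SQLite: the application's SQL is unchanged, only the database URI (and extension loading) differs.
    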

  • April 23, 2021
    New in Pacific: CephFS Updates

    The Ceph file system (CephFS) is the file storage solution of Ceph. Pacific brings many exciting changes to CephFS with a strong focus on usability, performance, and integration with other platforms, like Kubernetes CSI. Let’s talk about some of those enhancements. Multiple File System Support CephFS has had experimental support for multiple file systems for …Read more
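    The multiple-file-system support mentioned above can be sketched with the standard CephFS admin commands below. This assumes an existing cluster; the file system and pool names are placeholders, and on releases before Pacific a cluster flag had to be set first to allow more than one file system.

    ```shell
    # Sketch: add a second CephFS file system alongside an existing one.
    # "fs2" and the pool names are placeholders.
    ceph osd pool create fs2_meta
    ceph osd pool create fs2_data
    ceph fs new fs2 fs2_meta fs2_data
    ceph fs ls    # list all file systems on the cluster
    ```

    On pre-Pacific releases, where this feature was experimental, enabling it required explicitly setting the cluster's multiple-file-systems flag before `ceph fs new` would succeed.
    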

  • April 16, 2021
    QoS Study with mClock and WPQ Schedulers

    Introduction: Ceph’s use of mClock was primarily experimental and approached with an exploratory mindset. This remains true, with other organizations and individuals continuing either to use the code base or to modify it according to their needs. DmClock exists in its own repository. Prior to the Ceph Pacific release, mClock could be enabled by setting …Read more
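    The truncated sentence refers to a configuration option without naming it here. Purely as an illustration drawn from general Ceph documentation (an assumption, not taken from this post), selecting the OSD operation-queue scheduler looks roughly like:

    ```shell
    # Assumption: the mClock scheduler is selected via the osd_op_queue option.
    # Consult the release documentation for the exact option name and values.
    ceph config set osd osd_op_queue mclock_scheduler
    ceph config get osd osd_op_queue   # verify the active scheduler setting
    ```

    A restart of the OSDs is typically needed for a scheduler change to take effect; the linked post compares the behavior of the mClock and WPQ schedulers under load.
    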

  • April 6, 2021
    Diving Deeper

    In our first post we introduced the joint development project among Penguin Computing, Seagate, and Red Hat culminating in the Penguin Computing DeepData solution. In this post, we’ll start with first principles, and then discuss some of the hardware and software technical details that drove our configuration changes. In particular, we will demonstrate how to …Read more

  • March 30, 2021
    Diving into the Deep

    The IDC DataAge Whitepaper assesses where data is being generated, where it is being stored, byte shipments by media type, and a wealth of other information. The following chart from the whitepaper illustrates how data flows toward, and reaches its highest concentration at, the core. Where there is data, there will be …Read more

  • November 30, 2020
    v15.2.7 Octopus released

    This is the 7th backport release in the Octopus series. This release fixes a serious bug in RGW that has been shown to cause data loss when a read of a large RGW object (i.e., one with at least one tail segment) takes longer than half of the time specified in the configuration option rgw_gc_obj_min_wait. …Read more
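    Operators assessing their exposure to this bug may want to inspect the threshold in question. A minimal sketch, assuming the centralized config database is in use (`client.rgw` is a placeholder for your RGW daemon's config section):

    ```shell
    # Sketch: inspect the RGW garbage-collection wait threshold named above.
    # "client.rgw" is a placeholder; match it to your RGW daemon's name.
    ceph config get client.rgw rgw_gc_obj_min_wait
    ```

    Reads of large objects that exceed half this value are the case the release notes flag; upgrading to v15.2.7 is the fix, not tuning the option.
    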