These resources can help you learn about Ceph, experiment, contribute, or kick off your new storage project.
The dev list is for developers, while ceph-users is for end-users and operators.
For more information, visit the Mailing Lists & IRC page.
Through the efforts of a number of dedicated community members, Ceph performance has grown by leaps and bounds over the years, and it continues to improve with the help of people like you. To aid in that effort, we have aggregated performance efforts and resources in a single location to help new and experienced users alike.
Get the latest version of Ceph!
Documentation is available online and is managed on GitHub.
Check out use cases and reference architectures to help you get started with Ceph.
A monthly online presentation to help raise the level of technical awareness around Ceph.
The Ceph community is available to answer questions and provide guidance, and professional support options are also available.
The following publications are directly related to the current design of Ceph:
Ceph: Reliable, Scalable, and High-Performance Distributed Storage
Sage A. Weil.
Ph.D. thesis, University of California, Santa Cruz.
RADOS: A Fast, Scalable, and Reliable Storage Service for Petabyte-scale Storage Clusters (slides)
Sage A. Weil, Andrew W. Leung, Scott A. Brandt, Carlos Maltzahn.
Proceedings of the Petascale Data Storage Workshop (PDSW ’07), held in conjunction with SC07
Ceph: A Scalable, High-Performance Distributed File System
Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long, Carlos Maltzahn.
Proceedings of the 7th Symposium on Operating Systems Design and Implementation (OSDI ’06)
CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data
Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Carlos Maltzahn.
Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC ’06)
Dynamic Metadata Management for Petabyte-Scale File Systems
Sage A. Weil, Kristal Pollack, Scott A. Brandt, Ethan L. Miller.
Proceedings of the 2004 ACM/IEEE Conference on Supercomputing (SC ’04)