Ceph PGs per Pool Calculator
Instructions
1. Confirm your understanding of the fields by reading through the Key below.
2. Select a "Ceph Use Case" from the drop-down menu.
3. Adjust the values in the "Green" shaded fields below.
   Tip: Headers can be clicked to change the value throughout the table.
4. You will see the Suggested PG Count update based on your inputs.
5. Click the "Add Pool" button to create a new line for a new pool.
6. Click the delete icon to remove a specific pool.
7. For more details on the logic used and some important details, see the area below the table.
8. Once all values have been adjusted, click the "Generate Commands" button to get the pool creation commands (illustrated below).
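The generated commands are of the standard ceph osd pool create form. As a rough illustration only (the pool name, PG counts, and replication size below are placeholder values, not actual calculator output):

ceph osd pool create <pool-name> <pg-num> <pgp-num>
# e.g. a pool named "rbd" with a suggested PG count of 512 and size 3:
ceph osd pool create rbd 512 512
ceph osd pool set rbd size 3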
Logic behind Suggested PG Count
Suggested PG Count = ( Target PGs per OSD ) x ( OSD # ) x ( %Data ) / ( Size )

If the value of the above calculation is less than ( OSD # ) / ( Size ), then the value is updated to ( OSD # ) / ( Size ). This is to ensure even load / data distribution by allocating at least one primary or secondary PG to every OSD for every pool.

The output value is then rounded to the nearest power of 2.
Tip: The nearest power of 2 provides a marginal improvement in the efficiency of the CRUSH algorithm. If the nearest power of 2 is more than 25% below the original value, the next higher power of 2 is used.
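To make the steps above concrete, here is a minimal sketch of the calculation in Python. It assumes %Data is entered as a percentage (0-100); the function and variable names are illustrative and are not taken from the calculator's own implementation.

import math

def suggested_pg_count(target_pgs_per_osd, osd_count, percent_data, size):
    # Raw value: (Target PGs per OSD) x (OSD #) x (%Data) / (Size)
    value = target_pgs_per_osd * osd_count * (percent_data / 100.0) / size

    # Floor: at least one primary or secondary PG per OSD for this pool.
    value = max(value, osd_count / size)

    # Round to the nearest power of 2 ...
    exponent = math.log2(value)
    lower, upper = 2 ** math.floor(exponent), 2 ** math.ceil(exponent)
    nearest = lower if (value - lower) <= (upper - value) else upper

    # ... but if that power of 2 is more than 25% below the computed value,
    # use the next higher power of 2 instead.
    if nearest < value and (value - nearest) / value > 0.25:
        nearest = upper
    return int(nearest)

# Example: 100 target PGs per OSD, 10 OSDs, 100% of the data, size 3:
# the raw value is ~333; the nearest power of 2 is 256, which is ~23%
# below 333 (within the 25% tolerance), so 256 is suggested.
print(suggested_pg_count(100, 10, 100, 3))  # 256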
Objective
The objective of this calculation and the target ranges noted in the "Key" section above is to ensure that there are sufficient Placement Groups for even data distribution throughout the cluster, while keeping the PG-per-OSD ratio low enough to avoid problems during Recovery and/or Backfill operations.
Effects of empty or non-active pools:
Empty or otherwise inactive pools do not contribute to even data distribution across the cluster. However, the PGs associated with these empty / inactive pools still consume memory and CPU overhead.