Parallel Processing minimum bucket size effects

Can I get confirmation of the minimum bucket size when using Parallel Processing? I’ve a feeling it’s 50, and that it’s configurable somewhere?

I’m reviewing a GL Integration parallel processing scenario with 35,000 rows of source data split into 113 distinct group levels, an average of roughly 309 records per group level (these end up being some large GLs). The group levels are specified as the Distinct field when the parallel processing call is made, and the effect is that the work gets distributed into only 2 parallel buckets, each of which takes more than 30 minutes to process.

This seems inefficient given there are unused buckets and each parcel of work is a heavy lift. There is no DB pressure while this runs, just a lot of ‘compute’ to work through each group level.
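If the suspected minimum bucket size of 50 applies to the count of distinct group values rather than to row count (an assumption about the platform’s internals, not confirmed behaviour), the arithmetic would explain the 2 buckets: 113 distinct group levels divided by a minimum of 50 per bucket can only ever produce 2 buckets, regardless of how many buckets are available. A minimal sketch of that model, where `bucket_count` and `max_buckets` are hypothetical names, not platform API:

```python
def bucket_count(distinct_groups: int, min_bucket_size: int, max_buckets: int = 8) -> int:
    """Hypothetical model: the number of parallel buckets is capped so that
    each bucket holds at least `min_bucket_size` distinct group values."""
    return max(1, min(max_buckets, distinct_groups // min_bucket_size))

# 113 distinct group levels with a minimum bucket size of 50
# caps parallelism at 2 buckets, matching the observed behaviour:
print(bucket_count(113, 50))   # -> 2

# Under this model, 8-way parallelism would need 400+ distinct values:
print(bucket_count(400, 50))   # -> 8
```

If this model is right, the lever is the minimum bucket size (or the number of distinct values fed to the Distinct field), not the number of available buckets.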

Is it possible to distribute this workload more evenly across more buckets, reducing the overall runtime from 30+ minutes across 2 buckets to something like 8 minutes across 8 buckets?

It was confirmed that the minimum bucket size is 50, and that this is configured at the stack level, so it cannot be changed customer-by-customer.