At work, I had to generate random data to run logistic regressions on. In one unusual case, the slice sampler was performing far worse than expected. The code was simple and contained no mistakes; we started to suspect something had gone badly wrong in the whole testing framework.
What ended up happening was that our data matrix was generated from a uniform distribution on [0, 1], but the reference runs were generated from [-0.5, 0.5]. That offset alone was enough to change the sampler's behavior. Maybe a small proof will come later? But this seems to have to do with the eigenvalues of the sum of two random matrices (the non-centered matrix is the centered one plus a constant matrix), which is not entirely trivial.
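The effect is easy to see numerically. The following sketch (not from the original experiments; the dimensions are made up for illustration) compares the conditioning of X^T X for the two data distributions. The constant 0.5 offset adds a large, nearly rank-one component, which inflates the top eigenvalue and makes the Gram matrix far worse conditioned:

```python
import numpy as np

# Illustrative dimensions, chosen arbitrarily.
rng = np.random.default_rng(0)
n, p = 1000, 10

# The buggy data: entries uniform on [0, 1] (non-centered).
X_uncentered = rng.uniform(0.0, 1.0, size=(n, p))

# The reference data: entries uniform on [-0.5, 0.5] (centered).
X_centered = rng.uniform(-0.5, 0.5, size=(n, p))

# The conditioning of X^T X governs how elongated the log-posterior
# of a logistic regression is, and hence how a sampler behaves.
cond_uncentered = np.linalg.cond(X_uncentered.T @ X_uncentered)
cond_centered = np.linalg.cond(X_centered.T @ X_centered)

print(f"condition number, uncentered: {cond_uncentered:.1f}")
print(f"condition number, centered:   {cond_centered:.1f}")
```

The non-centered matrix is the centered one plus 0.5 times the all-ones matrix, a rank-one term whose norm grows with the matrix size, so its Gram matrix picks up one dominant eigenvalue while the rest stay comparable to the centered case.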