To properly explain this issue we have to recall what it means for a quantum computer or a probabilistic Turing machine to compute something: we say that such a device computes a function $~$f$~$ if for every input $~$x$~$ the probability of the device outputting $~$f(x)$~$ is greater than or equal to some fixed constant strictly greater than $~$1/2$~$ (for example, $~$2/3$~$). Thus we can compute $~$f$~$ on a classical machine as well, since it is always possible to deterministically simulate every possible outcome of the randomness, compute the probability distribution over outputs, and return the outcome with the greatest probability.
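The brute-force derandomization described above can be sketched in Python. This is a toy illustration: the bounded-error algorithm `randomized_alg` and its particular error pattern are invented for the example, and the simulation takes time exponential in the number of random bits, which is exactly why this argument shows computability but not efficiency.

```python
import itertools

def randomized_alg(x, random_bits):
    # Toy bounded-error algorithm: computes the parity of x's bits,
    # but answers wrongly when all three random bits are 1,
    # so it errs with probability 1/8 < 1/2.
    correct = bin(x).count("1") % 2
    if all(random_bits):
        return 1 - correct
    return correct

def derandomize(alg, x, n_random_bits):
    # Deterministic simulation: enumerate every possible random
    # string, tally the outputs, and return the majority outcome.
    counts = {}
    for bits in itertools.product([0, 1], repeat=n_random_bits):
        out = alg(x, bits)
        counts[out] = counts.get(out, 0) + 1
    return max(counts, key=counts.get)
```

Since the algorithm answers correctly with probability above $~$1/2$~$, the majority outcome over all random strings is guaranteed to be $~$f(x)$~$, so `derandomize` always returns the right answer.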
Huh… Not sure I understand this. I have a BS in CS, but don't remember running across this. Would love to read more.