"Actually, there should be diagonal matrices ins..."

https://arbital.com/p/8jx

by Nate Windwood Jul 21 2017


Let $~$\mathbf H$~$ be a vector of hypotheses $~$H_1, H_2, \ldots$~$ Because Bayes' rule holds between every pair of hypotheses in $~$\mathbf H,$~$ we can simply multiply an odds vector by a likelihood vector in order to get the correct posterior vector:

Actually, there should be diagonal matrices instead of vectors. Neither standard vector product does what the text describes: the cross product is only defined in three dimensions (and yields a vector orthogonal to both factors), and the dot product collapses the coordinatewise products into a single scalar sum, so we can't keep updating our probabilities with further evidence (or make any sense of the result). Diagonal matrices, on the other hand, do exactly what we need, because the product of two diagonal matrices is again diagonal with entrywise products on the diagonal: $~$C = AB; \quad c_{ii} = a_{ii}\, b_{ii}; \quad c_{ij} = 0 \text{ for all } i \neq j.$~$
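A minimal numeric sketch of this point (the prior odds and likelihoods below are made up purely for illustration), showing that the diagonal-matrix product and the coordinatewise product of the two vectors give the same posterior odds:

```python
import numpy as np

# Hypothetical numbers: three hypotheses with prior odds 1 : 2 : 4,
# and a piece of evidence with likelihoods 0.4, 0.2, 0.05 under each
# hypothesis respectively.
prior_odds = np.array([1.0, 2.0, 4.0])
likelihoods = np.array([0.4, 0.2, 0.05])

# Diagonal-matrix version: the product of two diagonal matrices is
# again diagonal, with c_ii = a_ii * b_ii, so the posterior odds sit
# on the diagonal and can be multiplied by the next evidence matrix.
posterior = np.diag(prior_odds) @ np.diag(likelihoods)
posterior_odds = np.diagonal(posterior)

# The coordinatewise (Hadamard) product of the two vectors yields the
# same numbers directly, without the detour through matrices.
assert np.allclose(posterior_odds, prior_odds * likelihoods)

print(posterior_odds)  # posterior odds 0.4 : 0.4 : 0.2
print(posterior_odds / posterior_odds.sum())  # normalized to probabilities
```

Note that the dot product `prior_odds @ likelihoods` of the same two vectors would return the single number 1.0 (the sum of the three coordinatewise products), which is exactly the collapse described above.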