We often encounter the question whether a proposition, P, is true or false. Call the probability that it is true "p". Various arguments - logical inference - may exist to determine our subjective value of "p". In particular, Bayesian inference multiplies the probabilities "p" and "1-p" by the probability that the respective hypothesis gives the result that agrees with the newest observation.
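The one-step update described above can be sketched in a few lines of Python. This is my own illustration, not a formula from the text; the function name and the example likelihoods (0.9 and 0.3) are made up for the demonstration.

```python
# Sketch of one Bayesian update step: the prior probabilities p and 1-p
# are multiplied by the probability that each hypothesis assigns to the
# newest observation, then renormalized.

def bayes_update(p, likelihood_P, likelihood_notP):
    """Return the posterior probability of P after one observation."""
    numerator = p * likelihood_P
    return numerator / (numerator + (1.0 - p) * likelihood_notP)

# Example: a 50:50 prior, and an observation that P predicts with
# probability 0.9 while non-P predicts it only with probability 0.3.
print(bayes_update(0.5, 0.9, 0.3))  # 0.75
```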

Some of the arguments may be K-sigma deviations of the measurements from the prediction of a null hypothesis. The value of "K" may be translated to "p" through the conventional error function: for example, a 3-sigma deviation translates to a roughly 0.3% probability that the null hypothesis P is true (99.7% that it is false).
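The conventional translation from "K" to "p" can be written down explicitly; here is a small sketch using Python's error function. I am assuming the usual two-sided convention, which is what reproduces the 0.3% figure for K=3.

```python
import math

def sigma_to_p(K):
    """Two-sided probability of a deviation of K sigma or more occurring
    by chance under the null hypothesis, via the error function."""
    return 1.0 - math.erf(K / math.sqrt(2.0))

print(round(sigma_to_p(3.0), 4))  # 0.0027, i.e. the 0.3% quoted above
```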

It could be helpful to define another function of "p" or "K", called "AE", that interpolates, in a sense, between "p" and "K". The letters "AE" stand for "amount of evidence". It is a dimensionless quantity but you may still use the term "unit of evidence" or "UE" for its unit. "AE" is defined as

AE = ln(p/(1-p))

For your convenience, I have also written down the formula for "AE=AE(p)" where "p" only appears once, as well as the inverse relationship where "AE" appears once. If "AE" is positive, the evidence supporting the proposition P is stronger than the evidence going in the opposite direction.

AE = ln(1/(1-p)-1)

p = 1/(1+exp(-AE))
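The pair of formulas above (the definition and its inverse) is easy to check numerically; here is a minimal sketch in Python, with function names of my own choosing.

```python
import math

def AE(p):
    """Amount of evidence: the log-odds ln(p/(1-p))."""
    return math.log(p / (1.0 - p))

def p_from_AE(ae):
    """Inverse relationship: p = 1/(1+exp(-AE))."""
    return 1.0 / (1.0 + math.exp(-ae))

print(AE(0.5))              # 0.0 -- no net evidence either way
print(p_from_AE(AE(0.9)))   # recovers approximately 0.9 (round trip)
```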

I constructed "AE" as a simple function of the odds - the ratio of probabilities of "P" and "non P", i.e. as a simple function of "p/(1-p)". The precise definition of "AE" has the obvious property that if you negate P, i.e. exchange "p" and "1-p", the value of "AE" simply switches its sign.
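This sign-flip property follows from ln(p/(1-p)) = -ln((1-p)/p), and it can be verified numerically with a short sketch:

```python
import math

def AE(p):
    """Amount of evidence: ln of the odds p/(1-p)."""
    return math.log(p / (1.0 - p))

# Negating P exchanges p and 1-p, and AE merely flips its sign:
for p in (0.01, 0.25, 0.5, 0.9):
    assert abs(AE(p) + AE(1.0 - p)) < 1e-12
print("sign-flip property holds")
```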