When I am posting this message, it's the birthday of Albert Einstein, also known as \(\pi\) day. More precisely, as you may check, this blog post was written on

3/14/15 at 9:26:53.589793238... am Pilsner winter time, which contains as many digits of \(\pi\) as you want – you may find such a moment once in a century, if you trust Stephen Wolfram. I am not cheating: I was really writing this blog post after 9 am although the precision indicated above is exaggerated LOL.

And as the screenshot above (click to zoom in) shows, the year 2015 has finally started with everything that defines it – the Run II of the Large Hadron Collider in particular. You may get the current version of the screen above at this CERN page; I clicked on "Luminosity" (the second option) to get to the energy and luminosity chart above.

You may get to the screen at any time if you find the LHC section of the right sidebar on this blog (the full dark green template) and find the hidden URL beneath the words "highest luminosity".

At any rate, the energy of each proton in the beam is now \(6.5\TeV\), as the graph shows. If two such protons collide from opposite directions, the momenta cancel but the energies add: the center-of-mass energy of the proton pair at the LHC has increased to \(13\TeV\). It was just \(8\TeV\) in the 2012 run – and it is only planned to increase to \(14\TeV\) later, probably this year.
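For readers who want to see the kinematics spelled out, here is a tiny Python sketch (the beam energy is the one quoted above; the proton rest energy of about \(0.938\GeV\) is the standard value):

```python
import math

# Center-of-mass energy of two equal-energy protons colliding head-on.
# The momenta are opposite and cancel, so sqrt(s) = 2 * E_beam.
E_beam = 6.5        # TeV per proton, as on the LHC status page
m_p = 0.000938272   # proton rest energy in TeV (~0.938 GeV)

p = math.sqrt(E_beam**2 - m_p**2)      # beam momentum in c = 1 units
s = (2 * E_beam)**2 - (p - p)**2       # total momentum is p - p = 0
print(math.sqrt(s))                    # -> 13.0 TeV
```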

For a proton to increase its energy from the latent energy \(m_0c^2 = 0.94\GeV\) to \(6,500\GeV\), it must move very quickly. Back in December, I told you that the energy would reach \(6.5\TeV\) in March – it has done so now, so that resource is trustworthy – and calculated the Lorentz \(\gamma\) factor and other relativistic effects connected with these really huge speeds.
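The December calculation can be redone in a few lines (a sketch, again using the standard proton rest energy of about \(0.938\GeV\)):

```python
import math

# Lorentz factor and speed of a 6.5 TeV proton.
E = 6500.0       # total energy in GeV
m = 0.938272     # proton rest energy in GeV

gamma = E / m                          # time dilation factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(gamma)        # ~6,930
print(1.0 - beta)   # ~1.0e-8, i.e. about 3 m/s short of the speed of light
```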

Moreover, if you look at the screenshot above, the luminosity chart on the left side seems to show a curve with a nonzero value. It is a blue curve which should mean that it's the luminosity measured by ATLAS – see the legend. And because the luminosity is nonzero, it should mean that there are actually collisions taking place! If that interpretation is wrong, and it probably is (because the red/green picture in the lower right corner says that zero luminosity was delivered, and the nonzero number is just the "target"), it's the graph that should be blamed, not me, because a nonzero luminosity should mean collisions of the relevant beams.

Anyway, let us *assume* that the collisions of proton pairs with the energy \(13\TeV\) are taking place. The Earth hasn't been devoured by a black hole yet. What a surprise! ;-)

It may be interesting to re-learn the units of luminosity. The chart shows that the luminosity is reset to the maximum value, \(0.37\) units, once an hour, and it gradually drops to \(0.11\) units or so. On average, the luminosity is close to \(0.25\) units, the graph tells us.

*Even in the third world, in this case the land of the Apaches and Siouxes, they are interested in the European experiment. Dr Lincoln is affiliated with the Fermilab, an Illinois fan club of CERN. Continuation about detectors. It would be totally inappropriate for Lincoln to choose a favorite experiment, he stresses, especially if it were the best one among all, the CMS, that kicks ass.*

The units are \((\mu {\rm b}\cdot s)^{-1}\), i.e. inverse microbarns per second. The right sidebar of this blog claims – and I have no reason not to trust it – that the highest luminosity achieved at the LHC so far was \(7.8/{\rm nb}/{\rm s}\). If my calculations are right, the numerical part is about \(20\) times greater than the recent one; and the unit was \(1,000\) times greater (a nanobarn is smaller than a microbarn, but the inverse nanobarn is larger than the inverse microbarn – the reversal is what the word "inverse" does for you).

So the luminosity right now is about \(20,000\) times smaller than the maximum one achieved in 2012. The LHC isn't running at full speed yet – but it is probably running already. Has the Run 2 of the LHC already produced some new particles? I think that even if the collisions were already allowed, and they are probably not, it is unlikely because the number of collisions was very small. But if it were some really special heavy particle that was out of reach in 2012 and became possible now, I think that it couldn't even be excluded that the tiny number of collisions has already produced something new.
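To check the factor of 20,000, one may simply divide the 2012 record by the current peak – a two-line sanity check in Python (note that \(7.8/{\rm nb}/{\rm s}\) equals \(7,800/{\rm \mu b}/{\rm s}\)):

```python
# Ratio of the 2012 record luminosity to the current peak on the chart.
peak_2012 = 7800.0   # /µb/s  (= 7.8 /nb/s, from the sidebar)
peak_now = 0.37      # /µb/s  (top of the current sawtooth)
print(peak_2012 / peak_now)   # ~21,000, i.e. roughly 20,000 times
```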

*The word "seven" in the theme song should become "thirteen" (teraelectronvolts) now. Among the objects enumerated at the end of this 2010 song, only the Higgs boson has been found so far.*

Let me also convert the "average" luminosity in recent hours, \(0.25/{\rm \mu b}/{\rm s}\), to other units. If you multiply it by \(86400\times 365.25\), you get 7.9 million of the same units with the second replaced by the year, which is \(7.9/{\rm pb}/{\rm year}\). Over a year, we're used to expecting dozens of inverse femtobarns, and an inverse picobarn is 1,000 times smaller than an inverse femtobarn. So the average luminosity was something like \(0.008/{\rm fb}/{\rm year}\) so far. We want many inverse femtobarns so the luminosity will have to increase thousands of times to get there, as I have already pointed out.
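The conversion chain, as a sketch:

```python
# Convert the recent average luminosity to per-year units.
L_avg = 0.25                       # /µb/s
sec_per_year = 86400 * 365.25      # seconds in a (Julian) year

per_year_mub = L_avg * sec_per_year   # ~7.9 million /µb/year
per_year_pb = per_year_mub / 1e6      # 1 /pb = 1e6 /µb
per_year_fb = per_year_pb / 1e3       # 1 /fb = 1e3 /pb
print(per_year_pb)   # ~7.9 /pb/year
print(per_year_fb)   # ~0.008 /fb/year
```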

You may also convert the luminosity to the number of collisions per second. I will use the same conversion coefficient that we had in 2012 although the energy does change it slightly. The total delivered luminosity to one detector was about \(27/{\rm fb}\) in 2012 (the recorded one was about 92% of this figure) which corresponded to 1.9 quadrillion collisions.

Now, we were getting \(0.25\) inverse microbarns per second. The inverse microbarn is one billion times smaller than the inverse femtobarn (the exponent is \(15-6=9\)). And we get two more orders of magnitude from \(27\) vs \(0.25\). In total, the LHC was producing 11 orders of magnitude fewer collisions per second than 1.9 quadrillion (which is \(1.9\times 10^{15}\)), i.e. about \(1.9\times 10^{4}\). In other words, unless I made a mistake, we were getting about 20,000 collisions in ATLAS per second in recent hours.
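The same estimate in code, scaling from the 2012 figures quoted above:

```python
# Collisions per second at the recent luminosity, scaled from 2012:
# 27 /fb delivered in 2012 corresponded to ~1.9 quadrillion collisions.
collisions_per_inverse_fb = 1.9e15 / 27.0

L_now = 0.25e-9    # 0.25 /µb/s written in /fb/s (1 /µb = 1e-9 /fb)
rate = L_now * collisions_per_inverse_fb
print(rate)        # ~1.8e4 collisions per second
```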

Please feel free to verify it and correct me.

I guess that they will be very careful (perhaps unnecessarily careful), so the gradual increase of the luminosity will occupy the following month, and we will only get to reasonably competitive luminosities sometime in May 2015 when the "truly new physics research" will begin again. If Nature has prepared some beyond-the-Standard-Model objects and phenomena, the first trillions of collisions will have the highest chance to reveal their signs. Discoveries may take place quickly – but the LHC may also decide to tell us (Nature made the decision behind the scenes) that the Standard Model continues to be OK at these energies, too.

## snail feedback (65) :

I wonder whether the Higgs boson would still be there at 14 TeV or whether LHC should stand for Lost Higgs Contact.

If the Higgs disappears in the 2015 data, it would be surprising and cool, indeed LOL. I am eager to bet 10-to-1 that it won't disappear. Ready to lose $1,000 and win $10,000?

Go supersymmetry! (As a non-physicist I'm allowed to cheer for my preferred outcome.)

Oops, what would that mean if the Higgs disappeared, Lubos?

People are calling today super pi day, because the first 5 digits of pi are 3.1415. But they are wrong. The next digit is 9, and the best 5 digit estimator of pi is 3.1416. So 3/14/16, next year, is super pi day.

By the way, before pocket calculators, people often used 22/7 to estimate pi: 3.142857142857… That is an error of only +0.04025%.
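Both claims in the two comments above are easy to verify in a couple of lines of Python:

```python
import math

# The best 5-digit decimal estimate of pi, and the error of 22/7.
print(round(math.pi, 4))                       # 3.1416, not 3.1415
error_pct = (22 / 7 - math.pi) / math.pi * 100
print(error_pct)                               # ~+0.0402 percent
```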

In Wolfram's picture reproduced here, the digit 9 is stored in the counter of hours and it makes vastly better sense than your picture.

Everyone knows that pi is about 22/7. Our math teacher at the basic school was even convinced that it was an accurate formula. She got pregnant soon and we got a better one, before the pregnant one returned as our high school teacher later LOL.

It would mean that all the measurements that showed that there was a Higgs would suddenly show flat curves in 2015, and Peter Higgs would suddenly disappear from the Earth, with speculations that he is living on a moon of Jupiter. In that case, Albert would win our bet.

Dear Lubos,

I got the Wolfram thingy. It's cute. I was referring to the use of the date 3/14/15.

Not everyone knows about 22/7. In fact, I doubt any American kid does. American schools stopped teaching stuff like that a few decades ago.

By thingy, you mean his interpretation of the pi-day numbers, or some software? Or Wolfram Language? Or a pie? Or a Wolfram T-shirt. I am seeing lots of thingies on that Wolfram blog post. ;-)

Do you know the fraction 355/113? I found it when I was about 8 - and even though it has just a few more digits, it's vastly more accurate. 3.1415929

I get "over" 7 digits right - even though there are only 6 digits in 355/113. ;-)

Yes, I do and I completely agree with your reasoning.

Thank you for the clarification.

I admit that I was too lazy (okay, I thought it an unnecessary complication) to consider the Markov chain formulation. Given this clarification, I was clearly wrong when I added your probabilities as 1/2+1/2+1/2.

What I don't see, however, is how my breakup of the probability space into 1/2 + 1/2 for head and tail respectively, and then into 1/2 + 1/2(1/2+1/2) = 1/2 + 1/4 + 1/4 for (M,h), (M,t) and (T,t), is illegal.

If I were arbitrarily splitting probability space events into sub-events, say, to be different this time, by dividing the tail event (but not the head event) into AM and PM of the day, without assigning the split its proper measure, now that would be illegal.

Yeah Liam, I think we've beaten this beast from many angles ;)

Seems strange that they have beams at 6.5 TeV, yet as of last week the latest schedule has beams in mid-April:

https://indico.cern.ch/event/373417/

I got my first scientific calculator, an HP 35, in 1972, shortly after its introduction. The instruction booklet pointed out that dividing 355 by 113 yielded pi with an accuracy of better than 0.1 ppm so I guess I knew this before you were born. The old HP still works perfectly, by the way.

Lumo, At long last...that was an excruciating two years waiting for the LHC to come back online. Jubilant!

Tony - Remember that the random variables Monday and Tuesday are identical. It suffices to consider only one. Imagine the probability space to be a square. Either Mon or Tue will partition the square in the same way, e.g. by a line through the middle. Say the upper (lower) region maps to heads (tails). The procedure you describe creates a distinction within the region which is mapped to tails. You label the two sub-regions of tails by the labels Mon and Tue. But Mon and Tue are identical and the distinction is vacuous. To further partition the probability space you must either add random variables which are not identical to Mon or enlarge the target set; but given the problem as stated there is no reason to do either.

As far as I understand it, no protons have circulated yet. The displayed energy would be the beam energy if protons were really circulating. They are still doing power testing (and a quick look at the Cryo status page shows they are performing magnet quenches -- probably training the magnets).

I remember that too. I bought it for 400 dollars, which was the price of a good used car in those days. It works on Reverse Polish notation.

I pawned it for 50 dollars when I was broke. I remember the lady tossed it around, unable to figure out what it was. But since it was heavy and she knew it was valuable to me, she gave me the money. I was 19.

The probability of tails and being awakened on Tues is 50%, tails and being awakened on Mon is 50%, and heads and being awakened on Mon is 50%.

Sorry, as for now, even after re-reading Walters' article, I am still not convinced that this is the best and simplest way to approach and explain the problem.

I still feel that we are, in a way, beating around the bush, and forgetting the fact that we are seeing the demonstration of a blatant oversampling.

Yup - I agree with both of you - any which way you cut it the errors come basically from overcounting "independent/orthogonal" variables/degrees of freedom. We're all saying the same thing in slightly different ways.

Ironically enough, the problem of correct counting of degrees of freedom has also cropped up recently as a problem in the AdS/CFT context with this whole firewall/AMPS paradox business, though on a whole other level of subtlety/complexity of course. (Beyond my level to be honest!)

I've been googling around, and apparently our position is known as the "double-halfer" position.

It's the correct one. :)

Tony - How disappointing. The only thing I took from Walters' article was the diagram. It properly represents the problem. The blather about weighted averages and oversampling is both wrong and irrelevant.

Guys as far as I can tell you both agree with each other.

You can legitimately discuss the problem in terms of oversampling in a probability space, or incorrect identification of degrees of freedom/random variables and markov chains. It's basically the same thing.

Neither is more "fundamental" and both are correct ways of reasoning about the situation - so whatever works for you as long as it's correct, eh?

You're right. I have been trying to persuade people of this for a while now (start here: http://motls.blogspot.co.uk/2015/03/sleeping-beauty-and-beast-named-brad.html#comment-1906008872 and keep scrolling).

It ain't easy!

Yes, if I may speak for both, we only disagree about what is the best model/representation of the problem. We don't question the results, naturally, because they are the same.

Oh well. Anyway, you would do me a favor if you explained to me why the probability of passage from state 1 to state 2 in Walters' diagram is 1. In the problem as stated it can never happen, so shouldn't it be 0?

Tony - In the problem as stated it *does* happen! I tried to explain this a few comments up when I discussed the behavior of the system. It is the same as for the passage from 3 to 4, but without an awakening.

I shall try to recapitulate clearly and even add a bit more. I won't use the diagram but it might help to refer to it later as all calculations proceed just as they do with the diagram.

We, and Sleeping Beauty, are given - a probability space, the set {H,T}, the set {Y,N}, a random variable (call it Flip) from the probability space to {H,T}, and two maps from {H,T} to {Y,N} - the first (call it U) takes both H and T to Y, the second (call it V) takes H to N and T to Y.

We now construct two new random variables (call them Amon and Atue) by composing the maps U and V with Flip: Amon = U@Flip and Atue = V@Flip. These are random variables on the set {Y,N}.

Again, all of this is known by Sleeping Beauty in the problem as stated.

The result given by Flip, Amon, and Atue on the sets is known only to the experimenters.

The experimenters wake her on Monday if Amon = Y, which of course is always the case.

Her best, and the correct, estimate of the result of Flip is P(H) = P(T) = 1/2, whatever the actual result.

The experimenters wake her on Tuesday if Atue = Y, which only happens if Flip = T.

Again, her best, and correct, estimate is P(H) = P(T) = 1/2.

If she is told the day on Monday, knowing what the problem states, her best, and correct, estimate is P(H) = P(T) = 1/2 since Amon always gives Y.

If she is told the day on Tuesday, knowing what the problem states, she can infer that P(T) = 1.

That's it! Awakenings, weighted average, and oversampling are all irrelevant diversions.
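The construction above can be transcribed almost literally into code (my paraphrase, not RAF III's; Y/N mark the awakenings):

```python
import random

# Flip is a fair random variable on {H, T}; U and V are the fixed maps
# from the comment above; Amon = U(Flip), Atue = V(Flip).
U = {'H': 'Y', 'T': 'Y'}   # Monday: wake her regardless of the toss
V = {'H': 'N', 'T': 'Y'}   # Tuesday: wake her only on tails

random.seed(0)
trials = 100_000
mon_wakes = tue_wakes = 0
for _ in range(trials):
    flip = random.choice('HT')
    mon_wakes += (U[flip] == 'Y')
    tue_wakes += (V[flip] == 'Y')

print(mon_wakes / trials)   # 1.0  -- Amon always gives Y
print(tue_wakes / trials)   # ~0.5 -- Atue gives Y exactly when Flip = T
```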

I had thought this 'problem' to be so simple that it could only be misconceived by philosophers, much like the 'paradoxes' of quantum mechanics. Perhaps I should wait a bit longer before coming to a conclusion.

I'm now beginning to think that it's a practical joke and that someone somewhere has been laughing his ass off for years.

And thanks again for forcing me to make myself clear. I think it worked because you're so damned polite.

Does the problem specify whether the coin being flipped is colored blue and black, or is it gold and white?

Mmm... I still wouldn't follow Bob Walters' diagram as RAF III would.

Here is a very bad thing that can happen:

You are in the state 0, in a pub, with a trash girl and a decent girl on your sides. You still don't know which is which from their looks.

So you buy them both a drink, start the conversation, transitioning to state 1 and state 3 with equal probability of 1/2.

Then you form an opinion about each, identifying correctly a stupid whore in state 1 and a decent girl, educated in sciences, in state 3. But then you make a cardinal mistake!

You buy them both a second drink! You show them both that it doesn't matter who they are, you would buy them a second drink anyway, and with probability 1 you transition to state 2 and also to state 4, state 2 representing the whore having a second drink, while state 4 is the same for a nice one.

Well, now a nice girl realizes you are a clueless moron and goes away, leaving you only with the option of going back to state 1 from state 4, that is, buying a whore yet another drink. State 2 also goes back to state 1 and you end up eternally buying another drink for a whore! That's exactly what follows from Walters diagram.

Now, you wouldn't want that to happen to you, would you?

Well, I'm in the pub now and before digesting your answer, consider that I was not so polite in my answer to MikeNov above.

Sometimes I get lucky. Hope you do as well.

Give me the Markovian transition matrix, please. The state is H or T, we agree. How do we construct X0, X1, ...? Maybe we can't. Are you sure that Walters' diagram has no errors or at least typos?

Now I am really hopelessly confused, after reading more about Markov chains.

I think reading made me lose the capability to understand the difference and relationships between the random variable, the state, the transition probability or any term in the theory.

What causes what, what happens after what, what is n+1 in relation to n, do I throw a coin to get from n to n+1, or do I just order Mondays and Tuesdays as X1, X2, X3, X4, like in M,T,M, T? Why is Walters ordering Mondays as X1 and X3?

I can't even formulate and describe precise Markov theory rules to visualize the sequence of ordinary coin tosses (forget the Beauty and Mondays and Tuesdays). I don't know how to write down the transition matrix; this is a total mess.

Pi day

Clicking "Live event display" collisions are displayed. With the current time, and refreshing one per minute approximately.

They are probably doing the same as ATLAS (see http://atlas-live.cern.ch/, especially the line at the bottom of the screen, which says "No live events available, showing events recorded earlier"). Also check the "Page 1" vistar at http://op-webtools.web.cern.ch/op-webtools/vistar/vistars.php?usr=LHC1

You should get them both to buy you drinks... ;-)

It says right now: "Live event..." Data recorded: Sun Mar 15 15:43:13 2015 CEST Run/Event...

Note the LHC status "No Beam". Oh and maybe the comment "CMS Cosmic run" may be telling us what they are recording.

Yes. I was editing it now. That may be one explanation.

I should have included that if she wakes on Tuesday and remembers waking on Monday then she can infer that P(H) =1.

Hey guys, hope you had good Saturday nights ;)

I have a couple of final (to me interesting) remarks about the thirder position that occurred to me last night, that sharpen just how extreme the position is and how special the circumstances have to be for them to set up the independent events in the way they want to.

I'll consider the P(Tails) small N-awakenings large case.

If we don't like to talk about the "consciousness injection genie" we could instead grant the thirders that if the coin comes up heads we will write "H" on a folded piece of paper and put it in a bag, and if it comes up tails we will write "T" on N pieces of folded paper and put them in the bag. We will then draw a piece at random from the bag and ask them to guess the letter, (so that we are in fact realistically selecting a randomly sampled independently recorded "awakening" node).

It's clear that it actually makes a difference whether we are considering a single run of the experiment, in which case with P(Tails) small they should still guess heads, or a sufficiently large ensemble, in which case they should indeed guess tails!

This is slightly counter-intuitive as we naively wouldn't expect the existence of a sufficiently large ensemble to make any difference.

The key here is that the actual existence of a sufficiently large ensemble is required to remove uncertainty in the total number of nodes (pieces of paper/awakenings). In a single run there is most probably only one node/awakening/piece of paper in the bag. With k-runs sufficiently large we are sure that there are a very large number of pieces of paper in the bag, (we could calculate the precise number N_total with near certainty if we wanted), the majority of which will be tails.

It should be strongly emphasised that once we do this, we have genuinely de-coupled the identities of the beauties in the ensemble and made them independent, (effectively turned them into independent "pieces of paper") so that the induced amnesia is now completely epistemically irrelevant!

The situation is now isomorphic to a "mass-kidnap" game where will kidnap N_total independent persons, and proceed by partitioning them into interview-groups of 1 or N persons according to the outcome of repeated coin-tosses. Each independent person is then informed of the protocol and interviewed on their own separately once, with no sleeping or induced amnesia involved.

The epistemic uncertainty now has nothing to do with any identification or correlation of beauties in the tail-groups who are now truly independent, and the "induced amnesia" is now simply equivalent to lack of knowledge of the size of the group to which one has been assigned.
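The single-run vs. pooled-ensemble difference can be seen numerically – a sketch with illustrative values P(tails) = 0.1 and N = 100, both my own choices:

```python
import random

# Paper-in-the-bag game: heads -> one 'H' slip, tails -> N 'T' slips.
random.seed(2)
p_tails, N = 0.1, 100   # illustrative values, not from the problem

def bag(runs):
    slips = []
    for _ in range(runs):
        slips += ['T'] * N if random.random() < p_tails else ['H']
    return slips

# Single run, repeated many times: how often is the drawn slip a 'T'?
single = sum(random.choice(bag(1)) == 'T' for _ in range(10_000)) / 10_000
print(single)   # ~0.10 -- with one run, still guess heads

# One big bag pooling 10,000 independent runs:
pooled = bag(10_000)
print(pooled.count('T') / len(pooled))   # ~0.92 -- now guess tails
```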

I think this could be construed as strong evidence against the validity of SIA (self-indication assumption) and in favour of SSA (self-sampling assumption) in anthropic reasoning scenarios.

I'm tempted to write all this up nicely in TeX (with equations and pictures :-) ) in a short review "paper", just for fun, I wouldn't expect anyone to take it "seriously".

Working title: "Generalised Sleeping Beauty and the Random Consciousness Injection Genie" :D :D

I could vanity-upload it to ArXiv, have it rejected from a real grown-up journal, and watch as it gathers zero citations! ;-)

God knows when I'd find the time though.

Tony - There is only one coin toss. I suggest you take a break and come back to it in a couple of days.

Up-vote!

Nice way to see it.

Aw, how dumb of me. But of course that P(T,M)(H,H) is 1!

If we toss the coin on Sunday and it is heads, it will be heads until the next toss on Sunday, whether Beauty is awake or not.

We are encoding transition probabilities given the states H or T, not given the states of wakefulness.

So the transition matrix is the unit matrix. P(T,M)(T,H) and P(T,M)(H,T) are 0 because the state never changes from Monday to Tuesday.

P(M,S)(H,H), P(M,S)(H,T), P(M,S)(T,H) and P(M,S)(T,T) are all 1/2 because it doesn't matter what is the state of coin on Sunday before we toss.
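In matrix form (a quick sketch of what this amounts to), one full weekly cycle multiplies out to the tossing matrix itself:

```python
# Transition matrices over the coin states (H, T) for the chain above.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

identity = [[1.0, 0.0], [0.0, 1.0]]   # Mon->Tue, Tue->Sun: coin unchanged
toss = [[0.5, 0.5], [0.5, 0.5]]       # Sun->Mon: a fresh fair flip

week = matmul(matmul(toss, identity), identity)   # one S->M->T->S cycle
print(week)   # [[0.5, 0.5], [0.5, 0.5]] -- the toss erases all history
```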

If learning that it is not Tuesday makes tails less likely, then shouldn't being uninformed about this make tails more likely?

Physics proceeds by orders of magnitude (one decimal place at a time). log(13 TeV/8 TeV) = 0.2 decimal place. meh

Come on now. A Markov chain model of the physical process like:

SMTSMTSMT... with transition probabilities being all unit matrices, except

P(M,S)(H,H), P(M,S)(H,T), P(M,S)(T,H) and P(M,S)(T,T) all 1/2

is, IMHO, a perfectly valid, by-the-textbook Markov chain model. I have the transition matrices right. I have all the textbook definitions satisfied.

The fact that we can identify a repeating cycle in the chain, and look only at it, is a separate issue.

I'm afraid you just don't have a particular talent for teaching RAF III.

Tony - I was referring to the Markov chain of the Walters diagram. I was not attempting to divine the diagrams you're viewing.

I do agree that you could be living proof of my poor teaching abilities. On the other hand, it could be that you are just a poor, and impolite, student.

'And now, I bid you toodles.'

Person in crowd: You suck!

Karl Wolfschtagg: Hey, that stings. Come on who said that?

Feng: Sports, members, bandits alike.

No, really, I am grateful that you brought up this aspect of representing the problem, even if Lubos really did it first; you just insisted it is the only proper one, and in the process made me derive the Markov process from first principles, following textbook definitions. That's the only way I can learn and be confident that I understand things. If the theory has fundamental mathematical objects, I need to be able to calculate them.

So thanks again, now I know something more.

Tony - You're quite welcome, but I must point out that I did not say that Walters' Markov chain was the *only* proper representation (I even provided one in terms of random variables and another in terms of elementary probability theory).

My basic point has always been that all the disputants in this argument share the same misconception, which is that the awakenings can be weighted, and are unable or unwilling to understand the simple facts that P(Mon,H) = P(Mon,T) = P(Tue,T) = 1/2 and P((Tue,T)|(Mon,T)) = 1 (which even a novice can see); if you have understood this then no one can question the abilities of either the student or the teacher.

Dear RAF, with some interpretation of the symbols, your points are good, but the notation is ambiguous, especially with claims like the controversial

P((Tue,T)|(Mon,T)) = 1

The problem with oversimplified equations like this is that you are ignoring the fact that subjective probabilities are functions of time.

Many people interpret P(Tue,T) as the claim that "at the given moment, the sleeping beauty thinks that it is Tuesday now and the coin landed tails".

But note that my sentence, which is more accurate, at least says "at the given moment" (after a particular awakening). Now, the whole problem with the different answers is that we don't really know "what the given moment is" i.e. when it occurred.

So all these probabilities should have a subscript "t" which either measures some objective time or her subjective time and one must be careful about the dependence of the probabilities on this subscript.

So

P((Tue,T)|(Mon,T)) = 1

is true if you interpret it as "conditional probability that there exists an awakening on Tuesday with tails assuming that there exists an awakening on Monday with tails". Then it's a certainty, 100%.

But it's still true that at a fixed moment - even at a fixed time measured subjectively, after a specific fixed awakening that we keep the same, the claims "it is Monday" and "it is Tuesday now" are actually mutually exclusive, so the conditional probability interpreted in this way is zero.

Lubos - All the probabilities I gave presumed an initial state. I believe I mentioned this somewhere in my comments above, and also said that they could be read off from the diagram ( which of course has an initial state).

The expression P(Tue,T) means 'the probability, on Tuesday, that a fair coin tossed on Sunday landed tails'.

The expression P((Tue,T)|(Mon,T)) means 'the probability on Tuesday that a fair coin tossed on Sunday landed tails given that on Monday a fair coin tossed on Sunday landed tails'; it also means 'the probability, on Monday and Tuesday, that a fair coin tossed on Sunday landed tails'/'the probability on Monday that a fair coin tossed on Sunday landed tails'.

Basic stuff!

It should be clear that Mon and Tue, though distinct, serve only as labels for the nodes - they *are* the subscripts that you demanded should be used. I also mentioned this in the comments above.

Sleeping Beauty is presumed to know all this and the details of the experiment, and nothing else, each time she wakes. She is also presumed to be rational, so will give the estimates (and updates) I detailed here: http://motls.blogspot.co.uk/2015/03/sleeping-beauty-and-beast-named-brad.html#comment-1907761529

The awakenings are irrelevant and the weighted average is spurious.

Everything I wrote on this thread is correct.

Cheers!!!

When I checked a moment ago the cryo status was all green and the Page 1 vistar comments are announcing the imminent start of Proton Physics mode.

Sorry, RAF, but the probability that the coin landed tails is - at any objectively defined time, Monday, Tuesday, anything - always 1/2. Everyone agrees with that, I think. The only problems begin when one evaluates the probabilities at moments that are defined subjectively - "after her awakening". That's where the difference between halfers and thirders emerges, and you haven't started to solve the problem yet at all.

Lubos - The problem is how she should evaluate the probabilities when she awakes. The moments "after her awakening" have been defined objectively and *she knows it* ! And *she knows* that they all have probability 1/2. This *is* her subjective experience.

If Walters' diagram represents objective evaluations can you produce a diagram that represents subjective evaluations? Can you consistently relate the two?

If my representation in terms of a random variable is objective can you produce another which is subjective? Can you consistently relate the two?

Finally, if this were simulated a gazillion times and she were to win or lose a fixed amount depending on whether or not she correctly guessed the toss (which is objectively fair), which probabilities should the programmer use (not Sleeping Beauty, who of course has no clue), (1/2, 1/2, 1/2) or (1/2, 1/4, 1/4), to ensure that she loses the least amount possible?
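For what it's worth, a tiny simulation can at least display the raw counts the two camps appeal to (with tails giving two awakenings, as in this thread):

```python
import random

# Tally (a) the fraction of tosses that land tails and (b) the fraction
# of awakenings that occur under tails, with tails -> two awakenings.
random.seed(1)
tosses = 100_000
tails_tosses = awakenings = tails_awakenings = 0
for _ in range(tosses):
    tails = random.random() < 0.5
    wakes = 2 if tails else 1
    tails_tosses += tails
    awakenings += wakes
    tails_awakenings += wakes if tails else 0

print(tails_tosses / tosses)           # ~0.50, per toss: the halfer count
print(tails_awakenings / awakenings)   # ~0.67, per awakening: the thirder count
```

Which of the two counts answers the question as posed is, of course, exactly what this thread is arguing about.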

The real problem here is that halfers and thirders both think that weighting the awakenings is sensible.

No, RAF, the "moment after she was woken up" is an ambiguous term that may mean one of the 1 or 2 (or, in total, 3) possible awakenings, and the decision "which of them is occurring now" is a matter of psychological credence.

The sleeping beauty problem makes no sense whatsoever with purely "objective" notions of probability and time. That's why all meaningful formulations of the problem talk about the Bayesian probability or credence or confidence, and if they only talk about "probability", they mean "Bayesian probability" - a refinement of the definition of probability that depends on subjective knowledge.

Hmm - I'm reading Lewis's paper now (it's only 5 pages) and he actually considers this possibility and still concludes that SB's credence on being told it's Monday should be 2/3 for heads.

Ok - I'm going to have to refresh my knowledge of Bayesian inference and Markov chains now, obviously. I'm still cautiously skeptical but remain open-minded.

I read Lewis' paper. Apparently the man was a notorious "modal realist" (all possible worlds exist and so on).

I strongly suspect that in his argument for increasing the subjective credence in heads on being informed that it's monday, he's happily relying on the controversial "Self-Indication Assumption" (self-location amongst possibly existing observers) as opposed to the "Self-Sampling Assumption", (self-location amongst concretely existing observers), an assumption on which it seems both he and Elgin agree.

He doesn't state so explicitly in the paper, but it certainly reads that way to me.

Even from a Bayesian/subjective self-location perspective, I'm unconvinced that such a raised credence is either useful or meaningful... I replied to MikeNov on it here:

http://motls.blogspot.com/2015/03/sleeping-beauty-and-beast-named-brad.html#comment-1910763091

I'm biased to be somewhat mistrustful of analytic philosophy, so maybe i'm not being fair/missing some piece of the puzzle :)

So far I'm still leaning towards the "double-halfer" position, but I have to admit this stuff is waaay more subtle than I initially thought!

Lubos - I really am aware of all the schools of thought (objective Bayesians, subjective Bayesians, frequentists, propensity, logical probability, QBists,...). They differ with respect to what probability *is*, but they all agree on the rules, or laws or equations, of probability theory.

You defined the Sleeping Beauty problem here: http://motls.blogspot.co.uk/2014/07/the-sleeping-beauty-problem.html?m=1 as follows:

On Sunday, a blonde is told that they would flip a coin once (and never again). Depending on what the coin shows, they will play with her in one way or another.

The games will include no sex but she will be interviewed on Monday, and maybe – if the coin came up heads – also on Tuesday. In the case of "heads", she will be made to forget the Monday interview before they interview her on Tuesday.

In advance, they tell her that it is a fair coin that comes up tails with probability 50%.

During the interview(s), she is asked: “What is the probability you would assign that the coin came up tails?”

What should the babe say?

I have told you what the babe should say, and why she should say it. My answer is correct no matter how the probabilities were arrived at or defined, no matter what they really *are*, and no matter whether probability 'exists' or not (de Finetti contends it does not). The rules of probability theory tell everyone how to answer, including Sleeping Beauty.

Elga, Lewis, and you have introduced a *new* problem by replacing the question 'What is the probability you would assign that the coin came up tails?' with the question 'what time is it now?'. You demand that the answer be given in terms of the original problem by assigning probabilities to the 'epistemic possibilities' (mon, heads), (mon, tails), and (tue, tails).

I consider this demand to be incoherent. This new problem is mis-conceived, wrongheaded, and ill-posed.

I am not trying to 'solve' this new problem. I am trying to show you that it should be dismissed *as a problem* !

It is not even a problem.

Cheers!!!

Dear RAF, your opinion about the relationship between the "schools of probability" is naive to the extent that it is completely silly.

When the tasks are sufficiently standardized and ordinary, all interpretations are OK. But in some cases, only some of them are really meaningful. It is surely not the case that the different "schools of probability" are completely and universally quantitatively equivalent and only differ by emotions or philosophical talk.

In this Sleeping Beauty case, you need a Bayesian understanding - there is no frequentist one.

It's even more silly for you to include "QBism" among the "schools of probability". It is an approach to foundations of quantum mechanics and has nothing whatever to do with solving basic problems such as the sleeping beauty.

Beams circulating:

http://home.web.cern.ch/about/updates/2015/04/proton-beams-are-back-lhc

Yes, now with both beams circulating simultaneously:

Yey, let the smashing begin!
