## Monday, March 30, 2015

### David Gross' NYU lecture

I think that this 97-minute-long public lecture by David Gross at New York University hasn't been embedded on this blog yet:

It is not just another copy of a talk you have heard five times.

He talked about the Standard Model as nano-nanophysics, QCD, the Higgs, possible signs of SUSY (and perhaps unification) at the TeV scale that we may already be seeing, future colliders (probably in China), and Schrödinger's dogs, among other things.

There were some questions at the end, too.

#### snail feedback (23):

Lubos - The halfer is told that she will not learn the day before having to answer the question.
If she does not update in this circumstance from the probabilities 1/2, 1/4, 1/4, then how does she update from these same probabilities when told the day?
When does she assign these probabilities to the 'epistemic possibilities' (mon,heads), (mon,tails), and (tue,tails)?

Sorry, I don't understand what you are saying.

One is not told new information - one does not update the probabilities.

One is told new information - one does update probabilities.

Lubos - You wrote:
'If one has to determine the probabilities of the heads-Monday, tails-Monday, and tails-Tuesday combinations (possible arrangements of the state of the coin and the day when she is woken up), the two tails possibilities have to share the 50%, so if Monday and Tuesday are equally likely for tails, the most sensible arrangement is 25% for tails-Monday and 25% for tails-Tuesday.
The probabilities are 50-25-25 for the three arrangements which also allows you to say that the probability is 50/(50+25) = 2/3 that it's "heads" if she's told that it's Monday, 25/(50+25) = 1/3 for "tails" if she's told it's Monday, and 25/25 = 1 = 100% for "tails" if she is told that it is Tuesday.'
and
'...the same kind of correct thinking that yields 1/2 is saying 2/3 for the Monty Hall problem.'
If the same kind of thinking as in the Monty Hall problem yields 1/2 it can only be from updating the assignments of 1/2, 1/4, 1/4 to 1/2, 1/2.
Are you saying that she makes these assignments only after she is told the day? She can then update to 2/3, 1/3 as you indicated above.
But when does she make these assignments? Why not immediately upon awakening?

Before she is told the day, the odds are 50-25-25 for MonHeads, MonTails, TueTails.

Once she is told it's Monday, it's 67-33-0. Alternatively, once she is told it's Tuesday, it's 0-0-100. (Note that 50-25-25 is 2/3 times the first distribution plus 1/3 times the second one.) I can't believe there is room for extra confusing long talkative comments like yours.

Lubos - Would it be fair to say that P(H) = P(T) = 1/2 entails the assignments of 1/2, 1/4, 1/4 to the epistemic possibilities, and that this is shown to be consistent with P(H) = P(T) = 1/2 by reasoning which is similar to updating but without any *real* update taking place?

RAF, won't you agree that words like "update without a real update" are plain nonsense?

There either is an update or not.

It is good that he also highlights that reaching higher energies with a new collider is more important (and more fun) than achieving higher precision ...

Lubos - I am saying that the assignment is shown to be consistent by reasoning which is *similar* to updating.
Is this the case or not?

It's too vague for me to understand what you might be trying to say.

Numbers are *similar* to each other. Sometimes, incorrect solutions yield the *same* numbers as *correct* ones. So some incorrect arguments may be *similar* to the step of Bayesian inference.

But they're still incorrect if they assume that one may "update" and "not update" at the same moment, because that's an oxymoron. Correct logical/Bayesian inference carefully distinguishes "updating the probabilities" from "not updating the probabilities", and if you don't distinguish those, you may only inject extra chaos into this discussion.

Lubos - So just what did you mean when you wrote -'Do you agree, Michael, that the same logic that gives 1/2 for the sleeping beauty is the logic that gives 2/3 for the Monty Hall problem? This numerical difference may be confusing - because the results are kind of reverted in the two problems.'?

Which word do you misunderstand?

Lubos - Thanks for your time. I really do appreciate it.

Thanks for your time and interest, RAF.

Lubos - I am nearly finished writing a paper on this. If you would care to see it, for review or comment, before I send it out into the world, I would be happy to send you a copy.

Lubos - I just noticed that you have added to this comment. I understand that these are two different problems with different results. My question was about ' the same kind of correct thinking that yields 1/2 is saying 2/3 for the Monty Hall problem'.
Just what is 'this same kind of correct thinking' in the case of Sleeping Beauty?

I think that I have already written about 6 blog posts answering "what I think is the correct thinking in the case of the sleeping beauty". Do you need a 7th one?

Lubos - I need to know what you meant when you compared it to the Monty Hall problem. You did not cover this in any post, but simply mentioned it in a comment.

We have already wasted about 20 totally worthless comments with this nonsensical question of yours. WTF do you want?

I just compared two problems and said that they are different, their answers are different, and the numerical results are interchanged, which may be confusing to some. That's it. What could possibly be unclear about these statements?

I am not interested in reading any other comment or texts of yours about these topics.
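The Monty Hall figure invoked throughout this exchange (2/3 for switching) is easy to verify by simulation. A quick sketch, assuming the standard rules (the host always opens a non-chosen door with a goat behind it); the function name is mine:

```python
import random

def monty_hall_trial(switch):
    """One round of the standard Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and was not picked.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
wins = sum(monty_hall_trial(switch=True) for _ in range(n))
print(wins / n)  # close to 2/3
```

Running the same loop with `switch=False` gives a win rate close to 1/3, which is the "interchanged" numerical pattern relative to the Sleeping Beauty answers discussed above.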

Lubos - I wanted a simple and polite answer to a simple and polite question. I have no other questions for you on this topic, and no intention of imposing my views upon you. Thanks again for your time.

Talking about new high-energy colliders: have you ever seen a Nima/Juan collision? No? Now you can: http://arxiv.org/pdf/1503.08043.pdf