Basic maths question
Templeton Peck
09 Aug 17 19:16
I have two models (A and B) and each model predicts the following odds for what turns out to be the winning outcome in six different markets:

1.1232 (A) v 1.617 (B)
17.8695 v 2.20171
3.0622 v 2.71006
16.5113 v 9.73252
1.1035 v 1.16802
1.365 v 1.1809

I don't know what the actual market odds were.  How much better/worse is model A than model B for each market?  What calculation should I use to determine how much better/worse model A is?
aye robot August 10, 2017 12:15 PM BST
I don't think this is such a basic question.

First off - you couldn't meaningfully say anything about your models from the information you've given alone, and even with the full information your sample size would likely be FAR too small to be meaningful.

With that caveat: the assessment of a probability model is often called "the Weatherman's problem". You'd be best off reading up on that online and getting a first-hand expert explanation.
longbridge August 10, 2017 4:33 PM BST
No way to tell.  No way at all.  Definitely not enough information there.  Entirely possible - from the data you've given, in the absence of any other - that model A is exactly right every time or equally that model B is.  And as aye robot says, far too small a sample size.

I can't tell if this is a problem to which you think you already know the answer and want forumites to prove they can find it too, or if you're looking for help with a problem you need to solve and haven't yet.
pxb August 12, 2017 12:34 AM BST
If by 'better' you mean the shortest predicted odds for the correct outcome, then sum the predictions and divide by the number of predictions; the lower average is the better predictor.

I assume you are not discarding incorrect predictions as that would completely change things.

As noted the small sample size would mean a low confidence that the model that best predicted in this sample would always be the best.
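A minimal sketch of that rule in Python, assuming 'predictions' means the odds quoted for the winning outcome (data taken from the opening post):

# Odds each model gave to the eventual winner in the six markets above.
odds_a = [1.1232, 17.8695, 3.0622, 16.5113, 1.1035, 1.365]
odds_b = [1.617, 2.20171, 2.71006, 9.73252, 1.16802, 1.1809]

# Mean predicted odds for the winner; lower = 'better' under this rule.
mean_a = sum(odds_a) / len(odds_a)  # ~6.84
mean_b = sum(odds_b) / len(odds_b)  # ~3.10
print(mean_a, mean_b)               # so B wins on these six markets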
dave1357 August 13, 2017 10:22 AM BST
But if he means better as in more accurate, then that doesn't work: the actual odds of the first event could be 2, in which case the higher odds are the more accurate.
Templeton Peck August 14, 2017 9:35 AM BST
Glad to hear it's not a basic problem, I assumed it was and I was just being dim!

Ignore the sample size, I will be testing hundreds of outcomes but just included six here for brevity.  This is a football model and I'm testing it on every match over the last couple of seasons.  I'm not discarding any matches, and all I know is the outcome of each match, not the odds (it's not match odds, hence I've not got the odds to hand, although I might have to try Betfair's new historical data service).

I'm also not interested in whether either model is good or not, just which one is better, i.e. which one do I need to use to maximise my profit.  A is my current model, which does ok; B is a new model and I want to test whether it's better than A.  As I assumed, not having market odds available is what makes this tricky.

I don't have the answer; I've tried a few methods but I've not been happy with the results.  A couple of people have told me to simply sum the difference in implied probability between the models for each event, so 1.1232 is 89.03% and 1.617 is 61.80%, meaning model A is better by 89.03% - 61.80% = 27.23% on that market; then do the same for each event and total the differences.  It seems too basic, but it's giving me results closer to what I'd expect to see, so I'm currently using this method.

Had a quick look for the weatherman problem but haven't found anything; will have another look.
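For concreteness, a sketch of that difference-summing method in Python on the six markets from the opening post (implied probability = 1/odds; nothing here beyond the numbers already quoted):

odds_a = [1.1232, 17.8695, 3.0622, 16.5113, 1.1035, 1.365]
odds_b = [1.617, 2.20171, 2.71006, 9.73252, 1.16802, 1.1809]

p_a = [1 / o for o in odds_a]  # implied probabilities, model A
p_b = [1 / o for o in odds_b]  # implied probabilities, model B

# Per-market difference in implied probability, then the total.
diffs = [pa - pb for pa, pb in zip(p_a, p_b)]
print([round(d, 4) for d in diffs])  # first market: 0.2719, i.e. ~27.2% to A
print(round(sum(diffs), 4))          # ~ -0.2751, i.e. B ahead over all six

Note that totalling the per-market differences is the same as comparing the two sums (or means) of implied probability, so this is essentially the averaging approach suggested above; on these six markets it comes out slightly in B's favour.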
aye robot August 14, 2017 5:24 PM BST
A quick and dirty way to do it and get a rough result is simply to convert each price to a probability using 1/odds - so odds of 2 is probability 0.5 - then encode the results as 0s and 1s (1 = win). Then you average your results (9 losses, 1 win = average 0.1) and average your probabilities. The closer the two averages are to each other, the better your model; the numbers themselves don't matter - it's how close together they are that counts. Just to be clear though - whilst this will get you an approximate result, it is NOT a proper answer to the problem. A better answer is to use a function of the probabilities - but I can't remember off the top of my head what the function is.

Even with that there are some pitfalls - for example, you have to remember that you're dealing with contingent probabilities, i.e. if team A wins then team B can't win. That affects the problem.

One good thing is that you're trying to assess the relative performance of 2 models rather than the absolute quality of one. That's the right way to do it.

Like I said though - I'd recommend that you find a really good exposition of the problem from a bona fide expert as it's not really my bag and I doubt that you'll get a good answer on here.
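A minimal Python sketch of that quick-and-dirty check, with made-up numbers (run it once per model and prefer the smaller gap):

probs = [0.5, 0.2, 0.8, 0.1, 0.6]  # predicted win probabilities (1/odds)
results = [1, 0, 1, 0, 1]          # realised outcomes, 1 = win

avg_prob = sum(probs) / len(probs)        # 0.44
avg_result = sum(results) / len(results)  # 0.60

# The closer the two averages, the better calibrated the model.
print(abs(avg_prob - avg_result))

For what it's worth, the "function of the probabilities" alluded to is most likely a proper scoring rule such as the log score or Brier score, which score each prediction individually rather than only on average.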
pxb August 14, 2017 10:27 PM BST
IMO, the odds are irrelevant if you are betting on one of the three possible results in a football match. It's a simple binary outcome. The bet either wins or loses.

Thus, the model that assigns the highest probability (on average) to the actual outcome is the best model.

So the advice you got about summing (or averaging) the probability × actual outcome (1 or 0) is correct.

However, maximizing profit is a different issue that does involve odds.
aye robot August 14, 2017 11:25 PM BST
Quoting pxb: "IMO, the odds are irrelevant if you are betting on one of the three possible results in a football match. It's a simple binary outcome. The bet either wins or loses."

If you were considering only one outcome per event then you could say it was binary but what kind of model does that? Given that a football match has three outcomes and any model has to account for all of them (it'd be a pretty deficient model otherwise) I'm pretty sure you can't then just evaluate the probabilities as if they were independent.

Like I said above - the dirty approach that I outlined isn't the best answer, it's a shoddy approximation that's better than nothing but that's all. This is the subject of proper maths by proper mathematicians and it's not fair to the person asking the question to label something as "correct" when it's not. I can't explain why it's wrong or what the best answer is because my maths isn't that good, but I've looked into it enough to know that it's not that simple.
pxb August 15, 2017 12:15 AM BST
You could evaluate it as trinary, but that makes hardly any difference.

Obviously, the probabilities of the 3 outcomes must add up to 100%. But that's not relevant to predictive accuracy.
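To make the trinary point concrete, here is one standard way to score a three-outcome forecast - the mean log loss - sketched in Python with made-up probabilities (nothing from the thread):

import math

# Each row: a model's probabilities for (home, draw, away), summing to 1.
forecasts = [(0.50, 0.30, 0.20),
             (0.25, 0.30, 0.45)]
outcomes = [0, 2]  # index of the result that actually happened

# Mean log loss: lower is better. Only the probability given to the realised
# outcome enters the score, but the three-way sum-to-1 constraint still bites,
# because inflating one outcome's probability deflates the other two.
score = -sum(math.log(f[o]) for f, o in zip(forecasts, outcomes)) / len(outcomes)
print(round(score, 4))  # 0.7458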
Get me a drink August 18, 2017 10:35 PM BST
What I do is put them in a spreadsheet: column A is a reference, B is P(win model A), C is P(win model B), D is outcome (0 or 1).

Put a running total of cols B, C, D into F, G, H, using a formula such as F(row) = F(row-1) + B(row), except for the first data row, which is just = B2.

Select rows A to D and sort by P(win model A) - ascending or descending, it doesn't matter.

Create a line chart of cols F, G, H and see which curve is the closest match to the 'Outcome' line.

Repeat with sorting by model B.

If you have the odds, put these in the mix the same way as a model. It's interesting to see if your model(s) outperform the Odds curve.
Get me a drink August 18, 2017 10:54 PM BST
* Should say select COLUMNS A to D, not rows, sorry. And don't forget to let the spreadsheet know that the first row is column names and shouldn't be sorted.
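The same recipe sketched in Python/pandas rather than a spreadsheet (column names and data are illustrative, not from the thread):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "p_model_a": [0.9, 0.1, 0.3, 0.1, 0.9, 0.7],  # col B
    "p_model_b": [0.6, 0.5, 0.4, 0.1, 0.9, 0.8],  # col C
    "outcome":   [1, 1, 1, 1, 1, 0],              # col D
})

# Sort by one model's probabilities, then take running totals of the
# predictions and the outcomes (the F/G/H columns in the recipe above).
for col in ("p_model_a", "p_model_b"):
    df.sort_values(col)[["p_model_a", "p_model_b", "outcome"]].cumsum().plot(
        title=f"sorted by {col}")
plt.show()

# The cumulative prediction curve that hugs the cumulative outcome curve
# most closely belongs to the better-calibrated model.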