Pool C

Started by Pat Coleman, January 20, 2006, 02:35:54 PM


Titan Q

Quote from: kiko on February 26, 2019, 09:17:40 AM
Titan Q is arguing this from a fan standpoint,
Well, more from an amateur D3 bracketologist standpoint.  I just want to get 20/20 Pool Cs right one of these years.  But close enough.

wally_wabash

Quote from: KnightSlappy on February 26, 2019, 11:22:56 AM
Quote from: AO on February 26, 2019, 09:55:30 AM
Quote from: kiko on February 26, 2019, 09:17:40 AM
Should the Ron Rose-era Titans be trying to schedule +.500 teams on the road to boost their SOS?  The published criteria seem to suggest this is smart.  But if there is another listing that is used which tiers teams into top-50 or top-100, it would be challenging to tilt their schedule toward a greater number of these matchups without knowing what the profile of a team on this list looks like.  And we have no visibility into how much marginal gain there is from playing a top-50 team versus a top-100 team.  You can't always control how good your opponents actually are, but if you schedule out-of-conference games to a certain philosophy, it works out more often than not.  But you have to know what the underpinnings of that philosophy should be, and the NCAA owes it to its members to let them know what criteria it is using.

I know that transparency is not the NCAA's first instinct, but there is absolutely no reason for whatever source the committee is relying upon beyond the published criteria to be opaque.
Don't forget about the broken home/away multiplier.  If you can't find enough great teams to schedule, make sure you only schedule the weak teams for home games.  Playing at Alma is an SOS killer.  IWU probably doesn't have to worry about getting enough top 50 or 100 wins in the non-conference season since they play in the CCIW, so they should schedule easier SOS-boosting games against .600+ teams from bad conferences.

Right. I failed to keep making a big deal about this over the course of the season, but THE SOS AS CURRENTLY CONSTRUCTED DOES NOT DO WHAT THE NCAA THINKS IT DOES. That is to say, the home/away multiplier doesn't so much alter how "difficult" a game looks on the schedule as how much "space" it takes up.

Think of building a schedule like packing the station wagon for a family trip. Each game is a suitcase, and the final SOS is how much total mass you end up packing in your trunk.

The way the multiplier is currently constructed, an AWAY game means you're using a large suitcase (1.25 multiplier). It doesn't speak to how much mass you've put in it, but it's going to take up more space than a home game, which is a small suitcase (0.75 multiplier). The mass inside the suitcase is still only determined by the opponent's record (and their opponents' record, i.e. OWP and OOWP).

This *sort of* makes sense when you're talking about quality opponents. Playing a good opponent on the road is like packing a large suitcase and filling it with bricks. It takes up a lot of space in your trunk with solid, heavy objects. It should have a larger impact on your overall schedule strength. Playing them at home is like packing bricks in a smaller suitcase -- it's still quality mass, but it's not going to play as big of a role once all of the other suitcases are packed in.

But this breaks down tremendously when viewing poor opponents. If you play an away game against a poor opponent you're packing a large suitcase and filling it with feathers. It takes up a lot of space in your trunk yet adds little to nothing in terms of how much stuff you're actually packing. You'd rather play that poor team at home -- sticking those feathers in a small suitcase which leaves you plenty of room for heavier suitcases.

What this means in the end is that teams should do everything they can to avoid scheduling their weaker opponents on the road as it will drag their SOS down more than playing the same team at home. THIS IS BONKERS.
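(To put rough numbers on the suitcase analogy: below is a minimal Python sketch of the weighted-average reading described above. The 1.25 away / 0.75 home multipliers come from the post; the weighted-average form, the sample schedule, and the .200 opponent are illustrative assumptions, not the NCAA's actual calculation.)

```python
# Hypothetical sketch: the multiplier sets a game's weight ("suitcase size"),
# while the opponent's winning percentage supplies the "mass." Only the
# 1.25/0.75 figures come from the post; everything else is made up for
# illustration, and the OWP/OOWP details are ignored.

AWAY, HOME = 1.25, 0.75

def sos(games):
    """games: list of (opponent_winning_pct, multiplier) pairs."""
    return sum(wp * m for wp, m in games) / sum(m for _, m in games)

# A fixed base schedule of decent opponents...
base = [(0.700, AWAY), (0.650, HOME), (0.600, AWAY), (0.550, HOME)]

# ...plus one game against a weak (.200) opponent, at home vs. on the road:
print(f"Weak opponent at home: {sos(base + [(0.200, HOME)]):.3f}")  # ~0.563
print(f"Weak opponent on road: {sos(base + [(0.200, AWAY)]):.3f}")  # ~0.529
# The road date with the weak team yields the LOWER SOS -- the big suitcase
# full of feathers crowds out the heavier ones.
```

Under this reading the multiplier changes how much a game counts, not how good the opponent looks, which is the whole complaint.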

This post is fantastic and should be linked in the playoff FAQs as a reference. 
"Nothing in the world is more expensive than free."- The Deacon of HBO's The Wire

Greek Tragedy

Quote from: Titan Q on February 26, 2019, 11:05:41 AM
Quote from: Greek Tragedy on February 26, 2019, 08:47:22 AM
Ok, so there's a "Final regional rankings" that's released to the public and then an actual 5th regional ranking they use to determine Pool C bids?


No.  The final regional rankings posted here are the final ones they used for the selection/seeding/bracketing processes.

Ok, thanks. So there's just four, all publicized. So in terms of vRRO, results against Platteville aren't included since they didn't become regionally ranked until the end.
Pointers
Breed of a Champion
2004, 2005, 2010 and 2015 National Champions

Fantasy Leagues Commissioner

TGHIJGSTO!!!

Titan Q

Quote from: Ryan Scott (Hoops Fan) on February 25, 2019, 11:35:17 PM
I believe those Top 50 and Top 100 are the ways in which they're evaluating results vs regionally ranked opponents.  We're just not sure what metric they're using to rank those teams.

I interpret this differently, Ryan.

On Hoopsville yesterday, Sam, in talking about Ramapo, mentioned them being 8-5 RRO. Then said they were 3-3 vs the top 50, and 7-6 top 100.  So at first I was thinking maybe they are just looking at the RRO relative to whatever this top 50/top 100 is. (He mentioned 13 games vs the top 100...which is the same total as RRO.)

But in making a case for UW-La Crosse he said, "The big thing with La Crosse is the quality of their wins. They are 7-4 vs Top 50 teams. Their best win was against Oshkosh.  They are at 5 division ranked opponent wins, and overall they are 9-4 vs the top 100."  La Crosse didn't have 13 RRO games.

They seem to be using this top 50/top 100 metric for the whole resume.  Seems to me they have introduced a new criterion that is not in the handbook? 

And by the way - I love this concept of looking at results vs the top 50 and 100.  But shouldn't it be public and in the listed criteria?
---------
PRIMARY SELECTION CRITERIA
The primary criteria emphasize competition leading up to NCAA championships; all criteria listed will be evaluated (not listed in priority order).
● Won-lost percentage against Division III opponents;
● Division III head-to-head competition;
● Results versus common Division III opponents;
● Results versus ranked Division III teams as established by the final ranking and the ranking preceding the final ranking. Conference postseason contests are included.
● Division III strength of schedule;
- Opponents' Average Winning Percentage (OWP).
- Opponents' Opponents' Average Winning Percentage (OOWP).

SECONDARY SELECTION CRITERIA
If the evaluation of the primary criteria does not result in a decision, the secondary criteria will be reviewed. All the criteria listed will be evaluated (not listed in priority order). The secondary criteria introduce results against all other opponents from other classifications (i.e., provisionals, NAIA, NCAA Divisions I and II).
● Non-Division III won-lost percentage;
● Results versus common non-Division III opponents;
● Division III non-conference strength-of-schedule.
Additionally, input is provided by regional advisory committees for consideration by the Men's Basketball Committee.

https://ncaaorg.s3.amazonaws.com/championships/sports/basketball/d3/men/2018-19DIIIMBB_PreChampManual.pdf
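(Side note on the SOS bullet above: the excerpt names OWP and OOWP but does not say how they are combined, so the toy sketch below just computes the two components with made-up data and asserts no weighting. Details such as excluding games against the team being rated are also simplified away.)

```python
# Toy illustration of the two SOS components named in the manual excerpt:
# OWP (opponents' average winning percentage) and OOWP (opponents' opponents'
# average winning percentage). Hypothetical schedules and records; how the
# committee weights and combines the two numbers is not stated above.

schedules = {                 # who each team played (made-up round robin)
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}
win_pct = {"A": 0.750, "B": 0.600, "C": 0.500, "D": 0.250}  # season records

def owp(team):
    """Average winning percentage of a team's opponents."""
    opps = schedules[team]
    return sum(win_pct[o] for o in opps) / len(opps)

def oowp(team):
    """Average OWP of a team's opponents."""
    opps = schedules[team]
    return sum(owp(o) for o in opps) / len(opps)

print(f"Team A: OWP = {owp('A'):.3f}, OOWP = {oowp('A'):.3f}")
# Team A: OWP = 0.450, OOWP = 0.550
```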

Titan Q

Quote from: Greek Tragedy on February 26, 2019, 12:11:50 PM
Quote from: Titan Q on February 26, 2019, 11:05:41 AM
Quote from: Greek Tragedy on February 26, 2019, 08:47:22 AM
Ok, so there's a "Final regional rankings" that's released to the public and then an actual 5th regional ranking they use to determine Pool C bids?


No.  The final regional rankings posted here are the final ones they used for the selection/seeding/bracketing processes.

Ok, thanks. So there's just four, all publicized. So in terms of vRRO, results against Platteville aren't included since they didn't become regionally ranked until the end.

Results vs any team in that final ranking count as RRO.  So yes, La Crosse's results vs Platteville would count.

Ryan Scott (Hoops Fan)

Welp, we'll see Sam in Ft. Wayne and we can figure it out.  I wonder if the NCAA data doesn't get broken down in ways that align with the d3 criteria?  I could see them using metrics that work for d1 and just applying it to d3.  It's an interesting idea.

I'm not sure, though, it's more or less reliable than looking at those games and just deciding some of them are more important than others.  I wrote off ECSU basically on my own judgement on their vRRO; I can't trust my intuition more than a Top50 or Top100 list - both are imperfect and less than ideal.  You gotta use something, though.
Lead Columnist for D3hoops.com
@ryanalanscott just about anywhere

Titan Q

Quote from: Ryan Scott (Hoops Fan) on February 26, 2019, 12:20:11 PM
Welp, we'll see Sam in Ft. Wayne and we can figure it out.  I wonder if the NCAA data doesn't get broken down in ways that align with the d3 criteria?  I could see them using metrics that work for d1 and just applying it to d3.  It's an interesting idea.

I'm not sure, though, it's more or less reliable than looking at those games and just deciding some of them are more important than others.  I wrote off ECSU basically on my own judgement on their vRRO; I can't trust my intuition more than a Top50 or Top100 list - both are imperfect and less than ideal.  You gotta use something, though.

I agree - I love the concept.

Just seems like they are using something they haven't told anyone about.

sac

Vs top 50 in D1 is different than vs top 50 in D3.

14%  vs 11%

Vs top 100 in D1 is different than vs top 100 in D3

28% vs 23%

I view that as a problem.
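(Back-of-the-envelope arithmetic behind those percentages, using ballpark team counts of roughly 350 D1 programs and roughly 440 D3 programs -- assumptions for illustration, not figures from this thread.)

```python
# Rough math behind the 14%/11% and 28%/23% splits above. Team counts are
# ballpark assumptions, not figures quoted in the thread.
D1_TEAMS, D3_TEAMS = 350, 440

for cutoff in (50, 100):
    print(f"Top {cutoff}: {cutoff / D1_TEAMS:.1%} of D1 vs {cutoff / D3_TEAMS:.1%} of D3")
# Top 50: 14.3% of D1 vs 11.4% of D3
# Top 100: 28.6% of D1 vs 22.7% of D3
```

The same 50 or 100 teams cover a noticeably smaller slice of D3 than of D1, which is the mismatch sac is pointing at.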


Gregory Sager

Quote from: Dave 'd-mac' McHugh on February 25, 2019, 11:22:12 PM
Quote from: Gregory Sager on February 25, 2019, 10:02:43 PM
Quote from: Dave 'd-mac' McHugh on February 25, 2019, 08:06:21 PM
Quote from: Gregory Sager on February 25, 2019, 07:59:22 PM
Yes, they say that, but it's patently obvious that at certain points they have to favor one criterion over another. You guys amply displayed this last night on Hoopsville in what Ryan called the "apples versus oranges" debate between Ramapo and La Roche for your group pick.

I think there are different opinions... but I don't think they prioritize. In the past, with the SOS metric I think they had gotten into some prioritization ... but I don't think they are there as much any more.

I will say this ... Ramapo had a strong resume in all but one point compared to La Roche. Ramapo - as I put it - had a meatier resume. That looks better than just winning games according to the committee ... but I am not sure you could put your finger on which part.

Well, whether it's SOS or vRRO that they're leaning towards, or even if it's both, they're prioritizing them over WP ... which is my point. At some point the committee has to pick a lane and stay in it in terms of which criterion trumps another criterion in a given comparison.

The whole point ... is that the committee is open minded enough NOT to pick a lane. The whole point of removing the SOS metric was that they were getting themselves into a lane.

I think that we're discussing two different things, Dave. You're talking macro, I'm talking micro. In other words, you're focusing upon an overall committee methodology, and I'm focusing upon whatever specific comparisons are made in a particular round when two or more teams with highly contrasting résumés are on the table and being seriously considered for the next pick.
"To see what is in front of one's nose is a constant struggle." -- George Orwell

Gregory Sager

Quote from: Greek Tragedy on February 26, 2019, 08:33:27 AM
Quote from: Gregory Sager on February 25, 2019, 05:47:41 PM
Quote from: Titan Q on February 25, 2019, 05:25:24 PM
Final regional rankings...

https://www.d3hoops.com/notables/2019/02/men-regional-rankings-final

La Roche was 3 teams away from ever seeing the table -- behind Mount Union, Wilmington, Wabash.

This committee seems to be even more zealously committed to SOS being the primariest of the primary criteria than the VandeStreek committee was.

Quote from: Titan Q on February 25, 2019, 05:25:52 PM
Central region was crafty in building that resume for UW-La Crosse.  Perfectly done.

We can't complain, that's for sure. The Central had more Pool C selections than any other region, even the Northeast.

Well, as many, at least.

I think the Central had Augie, Oshkosh, Wheaton and La Crosse.

The NE had Hamilton, Williams, MIT and Middlebury.

You're right. Forgot that MIT was Pool C. Nice catch, Tom.
"To see what is in front of one's nose is a constant struggle." -- George Orwell

Gregory Sager

Quote from: Titan Q on February 26, 2019, 12:14:59 PM
Quote from: Ryan Scott (Hoops Fan) on February 25, 2019, 11:35:17 PM
I believe those Top 50 and Top 100 are the ways in which they're evaluating results vs regionally ranked opponents.  We're just not sure what metric they're using to rank those teams.

I interpret this differently, Ryan.

On Hoopsville yesterday, Sam, in talking about Ramapo, mentioned them being 8-5 RRO. Then said they were 3-3 vs the top 50, and 7-6 top 100.

Well, we know for a fact then that there's variance between their mystery ratings and Drew's. He had Ramapo at 2-3, 6-5.

BTW, nice detective work, Bob. I'm right there with you and kiko and ronk that this mystery rating that they're using is an indication of a lack of transparency on the part of the committee.

Quote from: sac on February 26, 2019, 12:34:12 PM
Vs top 50 in D1 is different than vs top 50 in D3.

14%  vs 11%

Vs top 100 in D1 is different than vs top 100 in D3

28% vs 23%

I view that as a problem.

... not to mention the fact that you've got entirely different scheduling philosophies between the two divisions. The D1 guys get on planes and fly willy-nilly around the country on weekdays as part of a nationally-based approach to scheduling. The D3 guys get on buses and stick as close to home as possible on weekdays in order to make sure that they don't miss class the next morning. The corresponding lack of crossover as compared to D1 makes the construction of a top 50 list or a top 100 list in D3 very different than what works for the big boys.
"To see what is in front of one's nose is a constant struggle." -- George Orwell

Dave 'd-mac' McHugh

Quote from: Rofrog on February 26, 2019, 12:27:53 AM
Dave, you didn't even want to put Ramapo in last night! What changed today?

We aren't the official committee. We were doing mock selections. When you get down to the final selections, things can go in different directions. We have only gotten every team right once.

I'm not losing any sleep over it (I'm losing sleep for a lot of other reasons unrelated to any of this).
Host of Hoopsville. USBWA Executive Board member. Broadcast Director for D3sports.com. Broadcaster for NCAA.com & several colleges. PA Announcer for Gophers & Brigade. Follow me on Twitter: @davemchugh or @d3hoopsville.

wally_wabash

Quote from: Titan Q on February 26, 2019, 12:14:59 PM
Quote from: Ryan Scott (Hoops Fan) on February 25, 2019, 11:35:17 PM
I believe those Top 50 and Top 100 are the ways in which they're evaluating results vs regionally ranked opponents.  We're just not sure what metric they're using to rank those teams.

I interpret this differently, Ryan.

On Hoopsville yesterday, Sam, in talking about Ramapo, mentioned them being 8-5 RRO. Then said they were 3-3 vs the top 50, and 7-6 top 100.  So at first I was thinking maybe they are just looking at the RRO relative to whatever this top 50/top 100 is. (He mentioned 13 games vs the top 100...which is the same total as RRO.)

But in making a case for UW-La Crosse he said, "The big thing with La Crosse is the quality of their wins. They are 7-4 vs Top 50 teams. Their best win was against Oshkosh.  They are at 5 division ranked opponent wins, and overall they are 9-4 vs the top 100."  La Crosse didn't have 13 RRO games.

They seem to be using this top 50/top 100 metric for the whole resume.  Seems to me they have introduced a new criterion that is not in the handbook? 

And by the way - I love this concept of looking at results vs the top 50 and 100.  But shouldn't it be public and in the listed criteria?

I'm all for this sort of thing being part of the official criteria (relevant side discussions about how to create that ranking and how it compares across divisions aside), but Titan Q is right -- it should be listed.  If it isn't, it shouldn't be part of the discussion. 

When we talked to the football chair, he was pretty clear that their conversations were pretty rigidly contained within the criteria.  Every single talking point needs to be able to answer "yes" to the question "Is this part of the criteria?"  I'm not saying that football does this right or better, but I think at the very least that one rule has to apply across the board. 
"Nothing in the world is more expensive than free."- The Deacon of HBO's The Wire

Dave 'd-mac' McHugh

One thing to keep in mind -- something we have heard from committee chairs for ... five or more years now:

The term "results versus regionally ranked opponents" has always opened the door, to some degree, to what they can look at. They dive into which "results" against "regionally ranked opponents" they are actually looking at.

There are some items that aren't expressly written out, to try and give the committees a little more latitude or flexibility or ... pick your adjective. That is why I know this topic of Top 50 and Top 100 has been brought up before. And I have almost always asked where this comes from and what the justification is.

One thing to keep in mind: there is an NCAA liaison overseeing all of this. If the committee is straying into an area that they shouldn't, the liaison shuts it down. Well, they should. One could argue the past liaison allowed the SOS metric to be used when maybe it shouldn't have been. That said, I was also told that NCAA stats had approved and backed up the metric -- something that may not be true now. (Basically, the former liaison may have been allowing it because she was told it was okay, whereas now the NCAA has backed away from that support.)

I will continue to dive in best I can.
Host of Hoopsville. USBWA Executive Board member. Broadcast Director for D3sports.com. Broadcaster for NCAA.com & several colleges. PA Announcer for Gophers & Brigade. Follow me on Twitter: @davemchugh or @d3hoopsville.

Ryan Scott (Hoops Fan)

Quote from: wally_wabash on February 26, 2019, 01:33:31 PM
Quote from: Titan Q on February 26, 2019, 12:14:59 PM
Quote from: Ryan Scott (Hoops Fan) on February 25, 2019, 11:35:17 PM
I believe those Top 50 and Top 100 are the ways in which they're evaluating results vs regionally ranked opponents.  We're just not sure what metric they're using to rank those teams.

I interpret this differently, Ryan.

On Hoopsville yesterday, Sam, in talking about Ramapo, mentioned them being 8-5 RRO. Then said they were 3-3 vs the top 50, and 7-6 top 100.  So at first I was thinking maybe they are just looking at the RRO relative to whatever this top 50/top 100 is. (He mentioned 13 games vs the top 100...which is the same total as RRO.)

But in making a case for UW-La Crosse he said, "The big thing with La Crosse is the quality of their wins. They are 7-4 vs Top 50 teams. Their best win was against Oshkosh.  They are at 5 division ranked opponent wins, and overall they are 9-4 vs the top 100."  La Crosse didn't have 13 RRO games.

They seem to be using this top 50/top 100 metric for the whole resume.  Seems to me they have introduced a new criterion that is not in the handbook? 

And by the way - I love this concept of looking at results vs the top 50 and 100.  But shouldn't it be public and in the listed criteria?

I'm all for this sort of thing being part of the official criteria (relevant side discussions about how to create that ranking and how it compares across divisions aside), but Titan Q is right -- it should be listed.  If it isn't, it shouldn't be part of the discussion. 

When we talked to the football chair, he was pretty clear that their conversations were pretty rigidly contained within the criteria.  Every single talking point needs to be able to answer "yes" to the question "Is this part of the criteria?"  I'm not saying that football does this right or better, but I think at the very least that one rule has to apply across the board.

I think the point people are trying to make is that it is listed, just perhaps lacking specificity.  "Results vs Regionally Ranked Opponents" implies some metric for gauging those results.  I'd agree that maybe that means of gauging needs to be spelled out better than it is, but as the criteria are currently outlined, the committee has carte blanche to evaluate results vRRO as they see fit.
Lead Columnist for D3hoops.com
@ryanalanscott just about anywhere