

Posts posted by Superman2006

  1. On 6/20/2023 at 4:38 PM, Dark Knight said:
    On 6/20/2023 at 2:51 AM, Primetime said:

    [attached image]

    I guess I'm color blind :facepalm:. All jokes aside, I think if this same book were to be graded now, it would likely receive a green label.

    From what I understand, if you have something that qualifies for a Green label, you can ask CGC to give it a Green label with a higher grade, or a Blue label with a lower grade. In the case of this book, without seeing the back cover and interior, I don't think a Blue label grade of 2.0 is unreasonable. Alternatively, if it were given a Green label, I could see it getting a 4.0 (or higher). 2c

  2. On 5/19/2023 at 12:54 AM, Squirrel Guy said:

    spoon, was there another contest I missed? I appreciate the hard work. I definitely need to do the next one.

    Ha ha, yeah, be sure to join the next one, so you can retain your MVP title!

    I'll probably increase the minimum number of contests needed to qualify for the updated rankings through contest #7 to 3.25 contests. (Not everyone could participate in the 4th round of the 1st contest, so the maximum number of contests most participants could have entered through contest #7 will be 6.75, and 6.75 / 2 = 3.375, which rounds down to 3.25 when rounding down to the nearest quarter contest.)
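    For anyone who wants to sanity-check that rounding step, here's a minimal Python sketch (the 6.75 figure is from the paragraph above; the function name is just for illustration):

```python
import math

def min_contests_to_qualify(max_possible_contests: float) -> float:
    """Half the maximum possible contests, rounded down to the nearest quarter contest."""
    half = max_possible_contests / 2       # 6.75 / 2 = 3.375
    return math.floor(half * 4) / 4        # 3.375 -> 3.25

print(min_contests_to_qualify(6.75))  # 3.25
```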

  3. Just a reminder that by only listing the top 50 ranked graders according to the requirements outlined in my posts, my intent isn't to exclude anyone from the rankings. I just don't want anyone near the bottom of the rankings to feel bad about their average scores, so I cut off the rankings pretty high up (i.e. just the top 50 out of the 99 participants that have competed in 2.75 contests or more out of the first 6 contests).

    For anyone else that's interested in their scores outside of the top 50 (whether you've competed in 2.75 contests or not), don't hesitate to shoot me a PM, and I'll share your scores, ranking, etc. with you.

  4. Here is my last planned ranking summary through contest #6 (I may post some other analysis at some point, such as most improved).

    This post just provides a consolidated view of the various rankings that I analyzed, sorted in order of the CGC ranking from my first post in this thread.

    The first two screenshots below show the same top 50 as in my first post. The 3rd screenshot below shows some additional contestants that are in the top 50 based on one of the two alternative "wisdom of the crowd" ranking systems that I described in my posts above.

    Congrats to @spidermanbeyond @Black_Adam @Rumler @jcjames @frozentundraguy @PeterPark @Ron C. @Stefan_W and @Gman65NJ for top 50 grader finishes based on the wisdom of the crowd ranking systems shown in the last 2 columns below (and described in more detail in my posts above)! 

    [Screenshot: Combined Ranking Summary, ranks 1-25]

    [Screenshot: Combined Ranking Summary, ranks 26-50]

    [Screenshot: Combined Ranking Summary, additional top-50 finishers under the alternative rankings]

  5. While preparing the summary above, it hit me that an even more accurate way (at least in my mind) to measure outstanding boardie grading performance might be to take the average of the grades assigned by some set of top graders, selected by some preliminary criteria (I used the 50 award winners from my first post in this thread), and use that as the "average top ranked boardies grade" for each book.

    I then recalculated all boardies' scores against the "average top ranked boardies grade" across all contests (see the sketch at the end of this post for the general idea) and came up with the following results:

    [Screenshot: "Experts Champ" rankings 1-25]

    [Screenshot: "Experts Champ" rankings 26-50]

    Using this approach, jbpez once again ranks first overall with an average score of just 12.5!

    TheGeneral and Electron follow fairly close behind with average scores of just 13.8 and 14.8 respectively!

    Although not shown in the summary above, just for kicks, I came up with a score for the CGC grader(s) across all books graded over the 6 contests, and the CGC grader(s) had a very impressive average of 14.0, which is just a bit behind the average scores for jbpez and TheGeneral when measured against the "average top ranked boardies grade".
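    In case anyone wants to see how the benchmark itself could be computed, here is a rough sketch (this is not my actual spreadsheet; the grader names and grades below are made up) of averaging a preselected set of top graders' grades for each book to get the "average top ranked boardies grade":

```python
# Hypothetical data: each grader's grades for the same ordered list of books.
grades_by_boardie = {
    "GraderA": [6.0, 9.2, 4.5],
    "GraderB": [6.5, 9.0, 4.5],
    "GraderC": [6.0, 9.2, 5.0],
    "GraderD": [5.5, 9.4, 4.5],  # not in the preselected top-grader set below
}
top_graders = {"GraderA", "GraderB", "GraderC"}  # e.g. the 50 award winners

num_books = len(next(iter(grades_by_boardie.values())))
benchmark = [
    sum(grades_by_boardie[g][book] for g in top_graders) / len(top_graders)
    for book in range(num_books)
]
print([round(g, 2) for g in benchmark])  # [6.17, 9.13, 4.67] -- the per-book benchmark grades
```

    Everyone's grades would then be scored against these benchmark grades in the same way they are normally scored against the CGC grades.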

  6. After each contest, CGC posts the grade distribution from all graders. zzutak then analyzes the CGC distribution, provides various grade distribution statistics (including the mean, median, and mode), and shares a bunch of other interesting commentary. Based on the grade distribution info from CGC and the statistics from zzutak, there are some books where the average board grade is off by one or two increments from the CGC grade. Which is the more accurate grade in those cases: the grade assigned by 1 (or 2 or 3?) CGC graders, or the average grade assigned by a hundred or so boardies? Perhaps the CGC grade would approach the boardie average grade if CGC averaged the grades from a higher number of graders, but due to economics, one can't really fault CGC for not doing so. Also, the more CGC graders physically handle a book (rather than simply grading from scans like we're doing), the greater the risk of handling damage to the book.

    With that said, I used my spreadsheet to calculate the mean grade for every book in the contest, and then rounded that grade to the nearest actual CGC grade (e.g. if the average boardie grade was 1.6, I rounded that to a grade of 1.5, since 1.6 is closer to 1.5 than to 1.8; see the sketch at the end of this post). I then measured everyone's performance against the average boardie grade and came up with the following results:

    [Screenshot: Rankings using the average boardie grade, 1-25]

    [Screenshot: Rankings using the average boardie grade, 26-50]

    Under this ranking approach, jbpez is the overall champ by a wide margin, with an average score of just 12.8!!!

    Congrats @jbpez !!!
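    As an aside, the "round to the nearest actual CGC grade" step described above can be illustrated with a tiny sketch (again, just an illustration, not my actual spreadsheet; the scale below is the standard CGC grading scale as I understand it):

```python
# Snap a raw average boardie grade to the nearest grade on the CGC scale.
CGC_SCALE = [0.5, 1.0, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5,
             6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.2, 9.4, 9.6, 9.8, 9.9, 10.0]

def nearest_cgc_grade(avg_grade: float) -> float:
    return min(CGC_SCALE, key=lambda g: abs(g - avg_grade))

print(nearest_cgc_grade(1.6))  # 1.5 (1.6 is closer to 1.5 than to 1.8)
```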

  7. On 5/15/2023 at 9:47 AM, DougC said:
    On 2/25/2023 at 8:18 PM, Tony "T" said:

    I mean, every signed book's slab label says who signed it and what was written on the cover of the book. But CGC is saying they don't keep a register of that.

    I think a better response from them is that they do not keep an active, customer-searchable register of what has been annotated on Signature Series books. This is quite close to something I asked about 15 years ago, when I was curious about finding the serial numbers of books on the census to see which copy was graded first. CGC has this information and can pull it up, but chooses not to share it on the client side, as it does not hold enough value for them to implement.

    On the other hand, they might not be able to implement this type of search feature due to just how old the framework of their database is. As it has aged, it has become less and less functionally intuitive, and they do not seem interested in upgrading its infrastructure in any customer-beneficial way. As more and more pictures of books are added, can you imagine the following:

     

    Census Data Base:

    Search: X-Men #1 1963 9.8

    Result: X-Men 1963 (Parent directory)

    1. X-Men #1 9.8 (2) (result)

    Select the number in parentheses and you are directed to the following serial numbers:

    1. 0631963001
    2. 0000000000 (I don't have the # for the second one)

    You can then click on the serial number and go right to the certification lookup with all of the book's information. A nice, clean, searchable database would be great and something no other potential competitor currently has, though Beckett currently has feelers out (surveys) to select customers, asking about more integrated (searchable) database options for all of their products.

    That's good stuff.

    CGC could even keep all of their existing search functionality, where you can search on X-Men 1 and then click on the link for X-Men 1 from 1963 to see all of the census results for that issue. CGC would then just need to do everything else you described: allow the user to click on the census count for a given grading category and grade (such as the count of 2 for Universal 9.8), which would pop up another screen showing the serial numbers for the 2 Universal 9.8 copies, and then allow the user to click on a serial number to pull up all of the available certification info for that book. The OP could then check out the other 41 CGC 5.0 SS X-Men #1 books to see if any others were noted as being signed & Excelsior by Stan Lee.
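    Just to make the drill-down concrete, here's a purely illustrative sketch (this is NOT CGC's actual database or API; the structures are made up, and the second serial number is the placeholder from the quoted post):

```python
from dataclasses import dataclass

@dataclass
class CertEntry:
    serial: str       # certification number printed on the label
    grade: float
    label: str        # "Universal", "Signature Series", etc.
    notes: str = ""   # e.g. signature / inscription annotations

# census: (issue, year) -> (label, grade) -> list of certified copies
census = {
    ("X-Men #1", 1963): {
        ("Universal", 9.8): [
            CertEntry("0631963001", 9.8, "Universal"),
            CertEntry("0000000000", 9.8, "Universal"),  # placeholder for the second copy
        ],
    },
}

def copies_for(issue: str, year: int, label: str, grade: float) -> list:
    """Clicking the census count would resolve to the individual serial numbers."""
    return census.get((issue, year), {}).get((label, grade), [])

for copy in copies_for("X-Men #1", 1963, "Universal", 9.8):
    print(copy.serial)  # each serial would link to the full certification lookup
```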

     

  8. On 5/13/2023 at 11:29 PM, TheGeneral said:

    Awesome recap, thanks for the analysis! @Superman2006

    I thought there have only been 5 contests? Was the very first one a different format or limited entry or something? Never mind, I took a look back. Missed the first contest. Also got confused because I think I've done 5, but it says 4 on your score breakdown.

    Thanks General!

    I took a look back and you are correct; you participated in contests 2 through 6. For contest #3, the results showed you as "The General" rather than "TheGeneral", and when I sorted by name before finalizing my spreadsheet, there were a couple of other board names between those two, so I didn't catch it (I did catch 6 other such variances between contests, so hopefully I was 6 for 7 overall in catching these typos). I will fix that in my spreadsheet, but it won't really impact your ranking or average score, as you averaged 17.8 in the 4 contests where you were listed as "TheGeneral" vs. 18.0 in the 1 contest (contest #3) where you were listed as "The General".
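    For what it's worth, a simple normalization pass like the following sketch could flag spacing/casing variants like that automatically (this isn't what my spreadsheet actually does; it's just an illustration):

```python
from collections import defaultdict

def normalize(name: str) -> str:
    """Lowercase and drop spaces/underscores so variants like "The General" / "TheGeneral" collide."""
    return "".join(ch for ch in name.lower() if ch not in " _")

def find_variants(names):
    """Group names by normalized form and return any groups with more than one spelling."""
    groups = defaultdict(set)
    for name in names:
        groups[normalize(name)].add(name)
    return [variants for variants in groups.values() if len(variants) > 1]

print(find_variants(["TheGeneral", "The General", "jbpez", "Electron"]))
# e.g. [{'The General', 'TheGeneral'}]
```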

  9. On 5/13/2023 at 9:11 PM, DrMitchJ said:

    I don't understand anything that is going on here, can someone explain the contest?? 

    Hey DrMitchJ,

    See this thread for CGC Mike's explanation of contest #6 that just wrapped up a week or so ago:

    As Mike explains in the thread above, the top 3 winners of each contest win prize money from CGC.

    This thread that you're reading now just provides some additional stats for those that have partaken in multiple contests to date.

    I believe CGC Mike starts threads in the Comics General forum and in this "Hey buddy, can you spare a grade?" forum a few weeks or so before the start of each contest, so keep an eye out for Mike's advance notice thread(s), and join in the fun in the next contest!

     

     

  10. On 5/13/2023 at 8:05 PM, zzutak said:
    On 5/13/2023 at 4:03 PM, Superman2006 said:

    There are different ways that one could slice and dice the grading contest results over the last 6 grading contests to come up with cumulative rankings. The approach I took was to rank all participants based on their average [contest] score.

    Nice effort!  (thumbs up)

    Food for thought (for anyone reading this thread): Which performance is "better" -- an average score of 18 in Contests #3 and 4 (where the top scores were 18 and 17), or an average score of 16 in Contests #1, 2, 5, and 6 (where the top scores were 11, 15, 11, and 13)?  hm  hm  (shrug)

    Thanks, zzutak. :smile:

    In my opinion the answer to your question is somewhat subjective, as there are many factors in play. Here is how I would view it:

    Following are the overall average scores per round for contests 1 through 6:

    34, 31, 30, 32, 29, 29

    I believe Mike advertised contest 1 outside of the boards, so it isn't too surprising that the average score for contest 1 was a bit higher than the average for contests 2 through 6. The average scores for contests 2 through 6 are all 30 +/- 2, which to me indicates that any given contest isn't really that much "easier" or "harder" than another. As such, I would argue that the lower average score in any given contest is the "better" performance (although if we're talking scores of 16 and 18, the performances are nearly the same). 2c

  11. On 5/13/2023 at 6:18 PM, Point Five said:

    So I'm #36 in zzutak's ranking system, but I'm #11 in yours?  hm


    Man! This Superman2006 cat REALLY knows his stuff.  :applause:

    In zzutak's ranking system you just got 3 points for an 8th to 12th place finish, but you regularly finished close to the top 10, so under my ranking system you do much better than someone who might have finished, say, 6th in one contest and collected 5 points, but finished much lower in the other contests. You are on the cusp of joining the illustrious Diamond Club ranking. lol

  12. On 5/10/2023 at 12:37 PM, zzutak said:

    Here are a couple of my opinions on what constitutes a "good" grader.

    Point #1:
    My experience with CGC dates back to the early 2000s, when three different staff members independently assigned a numerical grade to each book.  Any collector could easily learn what those three grades were with a simple (and very short) phone call to CGC.  I made several of those calls to assess whether a CGC certified/encapsulated book I was considering purchasing was, for example, "a weak 8.0" (7.5, 8.0, 8.0), "a solid 8.0" (8.0, 8.0, 8.0), or "a strong 8.0" (8.0, 8.0, 8.5).  My experience with CGC convinced me that CGC's "system" (the precision of its condition grading rubric, and the skill of the individuals who are tasked with properly interpreting and applying that rubric) is only reproducible to + or - one grade increment, at best.  Hence, any player with a total score of 20 or less in any single CGC Contest is almost certainly an outstanding grader -- as good or better than most of the individuals CGC currently has on staff.

    Point #2:
    Back in the mid-2000s, Ed Jaster (who had recently joined Heritage Auctions) spent a few days at my home in San Luis Obispo.  He told me the "trick" to having your books certified by CGC was to max out your invoices rather than submit only a few books.  "Sometimes you'll be higher than CGC, and sometimes you'll be lower; but if you're a good grader, your average grade should come pretty close to CGC's."

    CGC's proprietary (and still unpublished) grading rubric necessarily allows for judgment, and different graders may interpret/apply a given standard differently.  Blemishes/defects can be overlooked or discounted for any number of reasons.  The bottom line is that a given book may not always receive the same CGC grade if it's submitted several times (although the various grades assigned will almost certainly be within an increment or two of each other).  This fact (not opinion) will be obvious to anyone who's done an initial prescreen and then had the rejected books accepted on a later, identical, prescreen.  This is not meant to be a slam against CGC; I know of no collector who can/will assign the exact same grade to every one of his/her books every time he/she looks at 'em.

    So, just for kicks, recalculate your total contest score using "signed deltas": -2 for your guess being 2 grade increments too low, -1 for your guess being 1 grade increment too low, 0 for a bullseye, +1 for your guess being 1 grade increment too high, +2 for your guess being 2 grade increments too high, and so on.  What's your cumulative 20-book tally now?  Did the errors to the low side and high side more or less balance out?  Or is your tally way negative or way positive (in which case you're consistently being too strict or too lenient with your condition assessments)?

    Chances are your score using signed deltas will be much smaller than your score using absolute/unsigned deltas.  If that's the case, your errors are balancing out (which is exactly what Ed predicted would happen for a decent grader).  Here's an example using my own Contest #3, Round 1+2+3+4 guesses (20 books):

    • 5 Bullseyes = 0
    • 3 times my guess was one grade increment too low = -3
    • 7 times my guess was one grade increment too high = +7
    • 3 times my guess was two grade increments too low = -6
    • 2 times my guess was two grade increments too high = +4

    My total signed delta score would be 0 - 3 + 7 - 6 + 4 = +2 for the 20-book challenge.  That's an average error of one-tenth of one grade increment!  :whatthe:  :whatthe:  :whatthe:  Compare that to my total absolute/unsigned delta score (my actual contest score) of 20, or an average error of one grade increment per book.  This is exactly what Ed Jaster predicted would happen for a person whose grading standards are essentially in line with CGC's.

    I think it's pretty safe to say that any player with a total score of 20 or less in any single CGC contest is a good grader. That said, with "just" 20 books, it's possible the stars simply aligned for them in a single contest; but if they repeat that performance (or don't deviate too far from it) over multiple contests, then I think it's pretty safe to say they're an outstanding grader.

    Regarding your 2nd point, a low total signed delta score on its own doesn't necessarily mean that someone is a good grader, as even a terrible grader could end up with a total signed delta score that is close to zero (I'm not suggesting that you implied otherwise). I agree that the total signed delta calculation is a good exercise for anyone to perform to see if they are consistently over- or under-grading compared to CGC, and something that could help them improve their grading skills in the future.

    e.g. a not-so-great grader with a total score of 56 could still end up with a total signed delta score of zero if they threw darts to come up with their grades, similar to the following (see the sketch after the list for how these tallies work):

    • 0 Bullseyes = 0
    • 0 times a guess was 1 grade increment too low = 0
    • 3 times a guess was 1 grade increment too high = +3
    • 3 times a guess was 2 grade increments too low = -6
    • 2 times a guess was 2 grade increments too high = +4
    • 3 times a guess was 3 grade increments too low = -9
    • 1 times a guess was 3 grade increments too high = +3
    • 2 times a guess was 4 grade increments too low = -8
    • 3 times a guess was 4 grade increments too high = +12
    • 1 times a guess was 5 grade increments too low = -5
    • 0 times a guess was 5 grade increments too high = 0
    • 0 times a guess was 6 grade increments too low = 0
    • 1 times a guess was 6 grade increments too high = +6
    • Total signed delta score = 0
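    To tie the two tallies together, here's a quick sketch of the signed vs. absolute (contest) delta calculation, using the 20-book example from the quoted post above (5 bullseyes, 3 guesses one increment low, 7 one high, 3 two low, 2 two high):

```python
# Errors are in CGC grade increments: negative = guessed too low, positive = too high.
errors = [0] * 5 + [-1] * 3 + [+1] * 7 + [-2] * 3 + [+2] * 2

signed_total = sum(errors)                    # +2 -> low-side and high-side misses mostly cancel
absolute_total = sum(abs(e) for e in errors)  # 20 -> the actual contest-style score

print(signed_total, absolute_total)  # 2 20
print(signed_total / len(errors))    # 0.1 -> average bias of one-tenth of an increment per book
```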

     

  13. I was curious to see (a) how my grading contest scores have changed from contest to contest and (b) how my average grading contest skills stack up against others.

    I mocked up the following Summary template that could be used to meet both of those objectives.

    Since not all boardies will have competed in all 6 contests to date, the "Total Contests" column shows how many contests each boardie competed in, to give some sense as to the credibility of a boardie's average score. 

    The "Contest Average" is calculated as the Total of the "Scores by Contest" / Total of the "Rounds Completed by Contest" * Total rounds per contest , e.g. in my case 96 / 20 * 4 = 19.20

    If for whatever reason I had only partaken in the first 2 rounds of contest #6, and my score for those 2 rounds was 10, then my total would have been 89 (instead of 96), and my rounds completed would have been 18 (instead of 20), giving me a "Contest Average" of 89 / 18 * 4 = 19.78. In other words, the approach I have proposed for calculating "Contest Average" works even if someone didn't complete all of the rounds in a given contest.
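    Here's a small sketch of that "Contest Average" calculation (just an illustration of the formula, not the actual spreadsheet):

```python
ROUNDS_PER_CONTEST = 4

def contest_average(total_score: int, rounds_completed: int) -> float:
    """Total of "Scores by Contest" / total of "Rounds Completed by Contest" * rounds per contest."""
    return total_score / rounds_completed * ROUNDS_PER_CONTEST

print(round(contest_average(96, 20), 2))  # 19.2  -- all 20 rounds completed
print(round(contest_average(89, 18), 2))  # 19.78 -- missed 2 rounds in one contest
```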

    I figured I'd check to see if anyone else was interested in seeing this type of grading score summary / ranking before trying to pull the data together into Excel and populating the summary below any further.

    If someone was interested in becoming a grader for CGC and they had a good grading contest average score and decent number of total contests (or improved performance over time), that might not be a bad thing to mention in their cover letter / resume.

      Boardie        Contest   Total      | Scores by Contest              | Rounds Completed by Contest
                     Average   Contests   | #1  #2  #3  #4  #5  #6  Total  | #1  #2  #3  #4  #5  #6  Total
      Superman2006   19.20     5.0        |  0  18  20  21  20  17     96  |  0   4   4   4   4   4     20