

Everything posted by Superman2006

  1. Thanks for another great contest, Mike! After a roller coaster round 2 (my all-time worst: 10 points) and round 3 (my all-time best: 2 points), I was back to my usual 5-point round, with the Horrific costing me 3 points (I went with a 1.8).
  2. I plan to provide some updated cumulative grading contest rankings soon after the current contest ends : )
  3. Thanks, Mike! If possible, please do the same after round 4. That will make it really easy for me to copy and paste the results across all 4 rounds into the spreadsheet that I use to determine cumulative boardie grading contest performance rankings through the current contest.
  4. Good job on the 1 point! Similar results for me. In round 2 I had my worst ever round with 10 points, and in round 3 I had my best ever round with 2 points.
  5. I had my worst round ever with a 10. Ouch! I undergraded the first 3 books, and overgraded the last 2 books, which might have something to do with the graders' notes Michael shared. Oh well, I'm with zzutak, and think we should just move on to the next round, as everyone had the same scans / graders' notes...
  6. Regarding the new label not having the tape note, that could just be the result of CGC's recent change to include fewer non-restoration related notes on the label (and from what I understand tape isn't typically considered restoration by CGC). I didn't check to see if the new label number is visible in the video, but if it is, and you looked it up on CGC's Verify CGC Certification page, the grader's notes should (at least in theory; although no guarantees, I guess) point it out if it's still there.
  7. Yeah you're right. Would definitely be 4.0 if in a green label. I think from a value standpoint, leaving as is in the blue label would be worth more than if it were in a green label. Yup, I think so too.
  8. I guess I'm color blind. All jokes aside, I think if this same book were to be graded now, it would likely receive a green label. From what I understand, if you have something that qualifies for a green label, you can ask CGC to give it a green label with a higher grade, or a blue label with a lower grade. In the case of this book, without seeing the back cover and interior, I don't think a blue label grade of 2.0 is unreasonable. Alternatively, if it was given a green label, I could see it getting a 4.0 (or higher).
  9. Ha ha, yeah, be sure to join the next one, so you can retain your MVP title! I'll probably increase the minimum number of contests to qualify for the updated rankings through contest #7 to 3.25 contests. (Not everyone could participate in the 4th round of the 1st contest, so the max contests most participants could have participated in through contest #7 will be 6.75, and 6.75 / 2 = 3.375, which rounds down to the nearest quarter-contest, 3.25.)
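The quarter-contest arithmetic above can be sketched in a few lines (a minimal illustration of the author's threshold rule, restated by me, not anything official):

```python
import math

def floor_to_quarter(x: float) -> float:
    """Round down to the nearest quarter, since participation is
    counted in quarter-contests (one round = 0.25 of a contest)."""
    return math.floor(x * 4) / 4

# Max contests most participants could have entered through contest #7
# is 6.75 (round 4 of contest #1 wasn't open to everyone); half of that,
# floored to the nearest quarter, gives the qualifying threshold.
print(floor_to_quarter(6.75 / 2))  # 3.25
```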
  10. Just a reminder that by only listing the top 50 ranked graders according to the requirements outlined in my posts, my intent isn't to exclude anyone from the rankings. I just don't want anyone near the bottom of the rankings to feel bad about their average scores, so I cut off the rankings pretty high up (i.e. just the top 50 out of the 99 participants that have competed in 2.75 contests or more out of the first 6 contests). For anyone else that's interested in their scores outside of the top 50 (whether you've competed in 2.75 contests or not), don't hesitate to shoot me a PM, and I'll share your scores, ranking, etc. with you.
  11. Here is my last planned ranking summary through contest #6 (I may post some other analysis at some point, such as most improved). This post just provides a consolidated view of the various rankings that I analyzed, sorted in order of the CGC ranking from my first post in this thread. The first two screenshots below show the same top 50 as in my first post. The 3rd screenshot below shows some additional contestants that are in the top 50 based on one of the two alternative "wisdom of the crowd" ranking systems that I described in my posts above. Congrats to @spidermanbeyond @Black_Adam @Rumler @jcjames @frozentundraguy @PeterPark @Ron C. @Stefan_W and @Gman65NJ for top 50 grader finishes based on the wisdom of the crowd ranking systems shown in the last 2 columns below (and described in more detail in my posts above)!
  12. The rankings based on collective averages in my posts above follow Aristotle's "wisdom of the crowd" concept, whereby "large groups of people are collectively smarter than individual experts..."
  13. While preparing the summary above, it hit me that maybe an even more accurate way to measure outstanding boardie grading performance (at least in my mind) might be to look at the average scores assigned by some number of top graders as measured by some preliminary criteria (I used the 50 award winners from my first post of this thread) in order to assign the "average top ranked boardies grade" for each book. I then recalculated all boardies' scores against the "average top ranked boardies grade" across all contests in order to come up with the following results: Using this approach, jbpez once again ranks first overall with an average score of just 12.5! TheGeneral and Electron follow fairly close behind with average scores of just 13.8 and 14.8 respectively! Although not shown in the summary above, just for kicks, I came up with a score for the CGC grader(s) across all books graded over the 6 contests, and the CGC grader(s) had a very impressive average of 14.0, which is just a bit behind the average scores for jbpez and TheGeneral, when measured against the "average top ranked boardies grade".
  14. After each contest CGC posts the grade distribution from all graders. zzutak then analyzes the CGC distribution and provides various grade distribution statistics (including the mean, median, and mode), and shares a bunch of other interesting commentary. Based on the grade distribution info from CGC and grade distribution statistics from zzutak, there are some books where the average board grade is off by one or two increments from the CGC grade. Which is the more accurate grade in those cases, the grade assigned by 1 (or 2 or 3?) CGC graders, or the average grade assigned by a hundred or so boardies? Perhaps the CGC grade would approach the boardie average grade if CGC took that average grade from a higher number of graders. Due to economics, one can't really fault CGC for not doing so. Also, the more CGC graders physically handling a book, rather than simply grading from scans like we're doing, the greater the risk of handling damage to the book. With that said, I used my spreadsheet to calculate the mean grade for every book in the contest, and then rounded that grade to the nearest actual CGC grade (e.g. if the average boardie grade was 1.6, I rounded that to a grade of 1.5, since 1.6 is closer to 1.5 than 1.8). I then measured everyone's performance against the average boardie grade and came up with the following results: Under this ranking approach, jbpez is the overall champ by a wide margin, with an average score of just 12.8!!! Congrats @jbpez !!!
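The mean-then-round step described above can be sketched in Python. The grade list below is my assumption of the standard CGC grading scale; it isn't taken from the thread:

```python
# Assumed standard CGC grading scale (not from the thread).
CGC_GRADES = [0.5, 1.0, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5,
              5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.2,
              9.4, 9.6, 9.8, 10.0]

def nearest_cgc_grade(mean_grade: float) -> float:
    """Snap a raw mean of boardie grades to the closest actual CGC grade."""
    return min(CGC_GRADES, key=lambda g: abs(g - mean_grade))

print(nearest_cgc_grade(1.6))  # 1.5 (closer to 1.5 than to 1.8)
```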
  15. I think a better response from them is that they do not keep an active, customer-searchable register of what has been annotated on Signature Series books. This is quite close to something I asked about 15 years ago, when I was curious about finding the serial numbers of books on the census to see which copy was graded first. CGC has and can pull up this information, but chooses not to share it on the client side, as it does not hold enough value for them to implement. On the other hand, they might not be able to implement this type of search feature due to just how old the framework of their database is. As it has aged, it has become less and less intuitive functionally, and they do not seem interested in upgrading its infrastructure in any customer-beneficial way. As more and more pictures of books are added, can you imagine the following:

Census database:
Search: X-Men #1 1963 9.8
Result: X-Men 1963 (parent directory)
X-Men #1 9.8 (2) (result)

Select the number in parentheses and you are directed to the following serial numbers: 0631963001, 0000000000 (I don't have the # for the second one). You can then click on the serial number and go right to the certification lookup with all the book's information. A nice, clean, searchable database would be great, and something no other potential competitor currently has, though Beckett currently has feelers out (surveys) to select customers asking about more integrated (searchable) database options for all of their products. That's good stuff. CGC could even keep all of their existing search functionality, where you can search on X-Men 1, then click on the link for X-Men 1 from 1963, which then shows all of the census results for X-Men 1 from 1963. CGC would then just need to do everything else you described, i.e. allow the user to click on the census count for a given grading category and grade (such as a count of 2 for Universal / 9.8), which would then pop up another screen showing the serial numbers for the 2 Universal 9.8 copies, and then allow the user to click on a serial number to pull up all of the available certification info for that book. The OP could then check out the other 41 CGC 5.0 SS X-Men #1 books to see if any others were noted as being signed & Excelsior by Stan Lee.
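The proposed drill-down amounts to nothing more than nested lookups. Here is a hypothetical sketch of that data flow; the keys, the `certs_for` helper, and the placeholder cert data are all my invention, not CGC's actual schema:

```python
# Hypothetical census model: (title, year) -> (label, grade) -> cert numbers.
census = {
    ("X-Men #1", 1963): {
        ("Universal", 9.8): ["0631963001"],  # second cert number unknown
    },
}
# Hypothetical certification lookup keyed by cert number.
certs = {"0631963001": {"label": "Universal", "grade": 9.8}}

def certs_for(title, year, label, grade):
    """Drill down from a census count to the individual cert numbers."""
    return census.get((title, year), {}).get((label, grade), [])

# Clicking the census count of 2 would list the cert numbers; clicking a
# cert number would pull up its certification record.
for cert_no in certs_for("X-Men #1", 1963, "Universal", 9.8):
    print(cert_no, certs.get(cert_no))
```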
  16. If someone has experience successfully copying and pasting from Excel into a boards message, please let me know; the formatting gets all messed up on me when I try to do it (hence the screenshots that I've been including in my earlier posts). Thanks : )
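One possible workaround (a sketch only; I haven't tested it against the board software): pad each Excel column to a fixed width so the table survives as plain text, then paste the result inside a code block to preserve the spacing. The sample cells below are made up for illustration:

```python
def tsv_to_text_table(tsv: str) -> str:
    """Convert tab-separated cells (as pasted from Excel) into a
    space-padded plain-text table with aligned columns."""
    rows = [line.split("\t") for line in tsv.strip().splitlines()]
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return "\n".join(
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths)).rstrip()
        for row in rows
    )

print(tsv_to_text_table("Boardie\tAvg\njbpez\t12.8\nTheGeneral\t17.8"))
```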
  17. One tool that can be used to improve one's CGC grading contest results is to analyze your scores to see if you consistently undergrade or overgrade compared to CGC. zzutak has mentioned this concept in some of his past posts: "So, just for kicks, recalculate your total contest score using "signed deltas": -2 for your guess being 2 grade increments too low, -1 for your guess being 1 grade increment too low, 0 for a bullseye, +1 for your guess being 1 grade increment too high, +2 for your guess being 2 grade increments too high, and so on. What's your cumulative 20-book tally now?" In other words, if you graded a book an 8.0, and CGC graded it a 9.0, you would have a signed delta score of -2 for that book. If you graded another book a 4.0, and CGC graded it a 3.0, you would have a signed delta score of +2 for that book. If you summed up your results across those 2 books, you would have a total signed delta score of -2 + +2 = 0, meaning that on average, you didn't undergrade or overgrade books relative to CGC. Here are the signed delta results for the award winners listed in my first post: Amongst the top 10 ranked participants: EastEnd1 has an average signed delta score of -0.2, meaning on average he just slightly undergrades compared to CGC. jbpez has an average signed delta score of -7.2, meaning he tends to undergrade compared to CGC. This means that you'll want to buy raw books from jbpez, lol. It also means that jbpez could possibly improve his already impressive #3 ranking if he loosened up on his grades a tad in future CGC grading contests. Get Marwood & I has an average signed delta score of 4.0, meaning that he tends to overgrade a bit compared to CGC during grading contests.
Keep in mind that this is an average signed delta score per 20-book contest, so it's really not that high; it translates to just a 0.2 average signed delta score per book, and with most grading increments equal to 0.5 (or less when you get into grades in the 9's), this has very little impact on the average grade assigned. So I'd caution anyone against overcompensating when assigning grades in future grading contests if you have a relatively small average signed delta score. There are average signed delta scores close to zero amongst those in the top 10% and in the bottom 10% of average scores, so an average signed delta score close to zero doesn't necessarily translate to being an accurate overall grader, but if your average signed delta score is a big positive or a big negative number, it might be worth adjusting your future scores a bit. I'll probably post everyone's signed delta score at some point (sorted alphabetically, without overall score) in case anyone else wants to see how much they tend to overgrade or undergrade on average.
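The signed-delta idea can be sketched by indexing into the grade scale, so each step on the scale counts as one increment. The grade list below is my assumption of the standard CGC scale, not something stated in the thread:

```python
# Assumed standard CGC grading scale (not from the thread).
CGC_GRADES = [0.5, 1.0, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5,
              5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.2,
              9.4, 9.6, 9.8, 10.0]

def signed_delta(guess: float, cgc: float) -> int:
    """Distance in grade increments; negative means you undergraded
    relative to CGC, positive means you overgraded."""
    return CGC_GRADES.index(guess) - CGC_GRADES.index(cgc)

print(signed_delta(8.0, 9.0))  # -2 (two increments low)
print(signed_delta(4.0, 3.0))  # 2 (two increments high)
print(signed_delta(8.0, 9.0) + signed_delta(4.0, 3.0))  # 0: no net bias
```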
  18. @TheGeneral I fixed it in my spreadsheet (your average rounded to one decimal place is still 17.8), so it should be good to go for any further analysis / future contest updates.
  19. Thanks General! I took a look back and you are correct; you participated in contests 2 through 6. For contest #3 the results showed you as "The General" rather than "TheGeneral", and when I sorted by name before finalizing my spreadsheet, there were a couple of other board names between those two names, so I didn't catch that (I did catch 6 other such variances between contests; hopefully I was 6 for 7 overall in catching such typos). I will fix that in my spreadsheet, but it won't really impact your ranking or average score, as you averaged 17.8 in the 4 contests where you were listed as "TheGeneral" vs. 18.0 in the 1 contest (contest #3) where you were listed as "The General".
  20. Hey DrMitchJ, See this thread for CGC Mike's explanation of contest #6 that just wrapped up a week or so ago: As Mike explains in the thread above, the top 3 winners of each contest win prize money from CGC. This thread that you're reading now just provides some additional stats for those that have partaken in multiple contests to date. I believe CGC Mike starts threads in the Comics General forum and in this "Hey buddy, can you spare a grade?" forum a few weeks or so before the start of each contest, so keep an eye out for Mike's advance notice thread(s), and join in the fun in the next contest!
  21. at least you made "a list" haha I'll see myself out You're in some lists, just not the ones published in my first post. You are in the group of 99 participants that qualified for the lists in my first post above by completing 2.75+ contests. In your case, as you mentioned, you have participated in all the contests. In fact, you are one of just 14 graders that have submitted grades for every round so far (ignoring the 4th round of the first contest for those that didn't qualify for that round). I missed the signup deadline for the first contest by a day or two, or I'd be in that club too. Perfect Attendance Group of 14: jbpez, Motor City Rob, WilliamLunt, Point Five, zzutak, grendelbo, pastandpresentcomics, silverseeker, Axelrod, davidtere, comicginger1789, jas1vans, musicmeta, ADAMANTIUM
  22. Nice effort! Food for thought (for anyone reading this thread): Which performance is "better" -- an average score of 18 in Contests #3 and 4 (where the top scores were 18 and 17), or an average score of 16 in Contests #1, 2, 5, and 6 (where the top scores were 11, 15, 11, and 13). Thanks, zzutak. In my opinion the answer to your question is somewhat subjective, as there are many factors in play. Here is how I would view it: Following are the overall average scores per round for contests 1 through 6: 34, 31, 30, 32, 29, 29 I believe Mike advertised contest 1 outside of the boards if I recall correctly, so it isn't too surprising that the average score for contest 1 was a bit higher than the average for contests 2 through 6. The average scores for contests 2 through 6 are all 30 +/- 2, which to me indicates that any given contest isn't really that much "easier" or "harder" than another contest. As such, I would argue that the lower average score in any given contest is the "better" performance (although if we're talking scores of 16 and 18, the performance is nearly the same).
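One way to formalize the "which performance is better" comparison above is to score each contest result relative to that contest's field average. The per-contest averages are the ones quoted in the post; the normalization itself is just my suggestion, not something anyone in the thread computed:

```python
# Overall average score per contest (1 through 6), as quoted above.
contest_avgs = {1: 34, 2: 31, 3: 30, 4: 32, 5: 29, 6: 29}

def relative_score(score: float, contest: int) -> float:
    """Points below the field average for that contest (lower = better)."""
    return score - contest_avgs[contest]

# The two performances from zzutak's question:
a = sum(relative_score(18, c) for c in (3, 4)) / 2         # -13.0
b = sum(relative_score(16, c) for c in (1, 2, 5, 6)) / 4   # -14.75
print(a, b)  # the 16-point average beats its fields by slightly more
```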