
Archived

This topic is now archived and is closed to further replies.

CGC Audit

71 posts in this topic

Grading is subjective only to a limited degree. The observational part of grading is objective...does the book have water damage, does it have a coupon clipped, does it have a tear on the bottom edge, does it have the tip of a corner missing, is the cover detached from one staple, is there a subscription crease running through the book?

 

Those flaws should limit and loosely define the grade range and the subjective element is introduced when deciding how to accurately pigeon-hole your observations. Sometimes people are lacking in knowledge...there is an extremely handy tool in the Overstreet Grading Guide, but not everybody knows of it or uses it...or sometimes they lack experience in assessing the cumulative effect of various defects.

 

However, an experienced grader who is aware of the OS guidelines and accepted conventions should not differ greatly from another experienced grader who is aware of the OS guidelines and accepted conventions.

 

I'd like to say a one-point variance either way, but would reluctantly accept a two-point variance.

 

Beyond that, it's nothing to do with subjectivity and all about the $$$

 

But even with the Overstreet Grading Guide - it's your interpretation of a flaw versus mine - and you and I, standing next to each other with the same book at the same time, could disagree on a flaw and then also on the extent to which the flaw affects the grade.

 

We couldn't disagree that the book had a corner crease - that's the objective part.

 

How that crease impacts the grade is where the subjectivity comes in, but that should be limited to a certain range. As an extreme example, a book is flawless apart from a tiny tear on the bottom edge. I'm saying VF/NM, you're saying VG.

 

Not going to happen, is it?

 

However, I could say VF/NM and you'd be saying somewhere between VF+ and NM-.

 

To my mind, that's within the acceptable variance of subjectivity.


The whole basis of CGC is to remove subjective and biased grading, and use a more “scientific method” approach to grading.

 

Just curious - where do you get this notion from? I don't see how grading a comic can be anything but subjective, frankly.

 

By having people complete the task of grading - it is by nature subjective - people see things differently.

 

Brian, I think you're agreeing with me aren't you?

 

Grading is always going to be subjective. I was asking where the idea that CGC are trying to remove the subjective element came from. It was presented as a fact and I don't think it's part of their mission statement!

 

We are on the SAME page (thumbs up)

 

I agree that people can see things differently when it comes to grading. One person might not deduct as much as another person would on any given defect. CGC has come up with their own “standards” as far as arriving at a numerical grade.

 

Restoration detection should lend itself to a more scientific approach. Either a book has been restored or it has not.

 

But they are still relying on a person (and we all have flaws) deciding if there is restoration or not.

 

You and I may grade a book differently, but if we were both graders at CGC we would be using the same equipment to detect restoration and following the same procedures and standards that CGC has set in place to arrive at a grade. We should come out with the same results within an allowable degree of accuracy.

 

Define 'an allowable degree of accuracy'? (shrug)

 

Most testing has a degree of accuracy. This means that I can run a test on something 10 times, the same way, and get a slightly different result each time. If my results are all within the degree of accuracy for that test, the results are correct.
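That repeatability idea can be sketched in a few lines of Python; the grades and the 0.2 tolerance below are invented for illustration, not any grading company's actual spec:

```python
# Hypothetical sketch: do repeated measurements of the same item all fall
# within an allowed tolerance (the "degree of accuracy")?
def within_tolerance(readings, tolerance):
    """True if every reading lies within `tolerance` of the mean reading."""
    mean = sum(readings) / len(readings)
    return all(abs(r - mean) <= tolerance for r in readings)

# Ten gradings of the same book on a 0.0-10.0 scale (made-up numbers):
grades = [9.2, 9.4, 9.2, 9.2, 9.4, 9.2, 9.4, 9.2, 9.4, 9.2]
print(within_tolerance(grades, 0.2))  # True: results agree within tolerance
```

Run the same check with a tighter tolerance and the same spread of results would fail, which is the whole point of defining the allowable degree of accuracy up front.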


I don't think CGC will tell us their business practices.

But my guess is lots of people try to sneak one by them all the time; we just don't hear about the ones that got by - we just see the purples on eBay.



Define 'an allowable degree of accuracy' for CGC?



You'd be wrong and Nick would be right :baiting:

 

Too right :roflmao:



Agreed (thumbs up)



I would think that CGC would train their graders to grade according to their standards and probably test them before allowing them to grade on their own. What their allowable degree of accuracy is, I don't know.



[attached image: reliability_true_scores.jpg]

 

You could start with something like that, I guess.

 

 

 



But as the consumer, surely it's your opinion as to an 'allowable degree of accuracy' that's important?

 

So what is your opinion as to an 'allowable degree of accuracy'?



As far as the restoration check, I would expect them to get the correct color label every time. ;)

 

 


The whole basis of CGC is to remove subjective and biased grading, and use a more “scientific method” approach to grading. This will (in theory) ensure that the same book will grade the same whether it is submitted one time or 100 times.

 

This is done by removing (or minimizing) all variables as much as possible.

 

The two largest variables in the scientific method are:

 

1) The equipment being used.

 

2) The people using the equipment.

 

The equipment being used in the testing should be calibrated on a regular basis to ensure the correct results are being achieved each and every time. This is usually done by testing a known “standard” (a sample whose results are already known) to ensure that the results on the “standard” adhere to a very tight specification.

 

How does this pertain to CGC?

 

If CGC is using a light source to detect color touch (I think they are), then that light source should be calibrated periodically. This light source emits a very specific wavelength of light to get color touch to fluoresce (or glow). The bulb in the light source may not be the same throughout its entire life. It may be stronger when new, and over time may “shift” in power, causing a different result over time. By calibrating the equipment with a known standard, the equipment can be verified as working correctly.
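A calibration check against a known standard could look something like this; the nominal wavelength and spec window are made-up numbers, not CGC's actual figures:

```python
# Hypothetical calibration check: read a known "standard" and verify the
# instrument stays inside a tight spec window around the expected value.
STANDARD_VALUE = 365.0  # nominal UV wavelength in nm (illustrative only)
SPEC_WINDOW = 2.0       # allowed deviation from the standard (assumed)

def calibration_ok(measured):
    """True if the instrument reads the standard within spec."""
    return abs(measured - STANDARD_VALUE) <= SPEC_WINDOW

print(calibration_ok(365.8))  # True: within spec
print(calibration_ok(370.1))  # False: the bulb has drifted; recalibrate
```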

 

To ensure that people are using the equipment and following procedures correctly a “Round Robin” approach is used. This is where all of the people performing the tests are given the same samples to test and all of the results are compared to see if there are any discrepancies in the results.
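The round-robin comparison might be sketched like so; the grader names, grades, and allowed spread are all hypothetical:

```python
# Hypothetical round robin: every grader grades the same sample set, and
# any sample where the spread between graders exceeds the allowed
# variance is flagged for review.
graders = {
    "grader_a": [9.4, 8.0, 6.5],
    "grader_b": [9.2, 8.0, 7.5],
    "grader_c": [9.4, 8.5, 6.5],
}
ALLOWED_SPREAD = 0.5  # max disagreement on a single sample (assumed)

def discrepancies(results, allowed):
    """Indices of samples where graders disagree by more than `allowed`."""
    flagged = []
    for i, sample in enumerate(zip(*results.values())):
        if max(sample) - min(sample) > allowed:
            flagged.append(i)
    return flagged

print(discrepancies(graders, ALLOWED_SPREAD))  # [2]: sample 2 needs review
```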

 

I would be very surprised if CGC is not already using these techniques in their company.

 

And yes, my background is in Research and Quality Control.

 

Resto is not always detected with gadgets and lights. The majority is likely caught with the human eye.

 

The gadgets they use are used consistently, whether at shows or at home. I know because I asked. There are things you can't control though. For example, they might bring lamps for each grader but you can only do so much to control the ambient lighting in a room at a convention center.

 

It's the human element that creates the majority of inconsistency, and the only way for the method to be more consistent is for humans to be more consistent.

 

More attentive, more rested, more observant, more patient, more experienced, more eyes / people.

 

Like any business, a grading company is going to find a balance between cost effectiveness and profit.

 

Sure, they could probably become slightly more consistent by throwing an extra grader on every book or extra time on every book, but how does that affect the big picture (turnaround times, grading fees, profitability)?

 

The mistakes businesses (and individuals) make are usually on that fine line between too much and too little effort. You spend too little time and mistakes happen more often. You spend too much time and profitability drops. That balance on that fine line is what makes or breaks a company, and it's usually a small percentage of error, but it's the part of business that gets the big public magnifying glass.

 

I had some concerns and talked to Litch about their consistency about a year ago and, as I said, they already do internal quality control audits. I think the sample size would probably equal a few hundred books a year.

 

I agree that an independent, impartial (surprise, surprise! :D ) third-party audit would be the way to go. I'm not sure whether they use such a company or if the audits were self-regulated. In my opinion, where most businesses seem to suffer the most is when they try to save money in areas that need money spent. PR, internal care, stuff like that. Basically nickel-and-diming themselves to death. A healthy business will have a balance of cash flow and some significant money spent on itself as well.



I agree with you.

 

We've seen time and time again that as personnel changed, so did the standards.

 

CGC's standards didn't change internally - the people viewing the defects changed.

 

From Borock to Haspel, Haspel to Litch, and even the addition of new graders, there have been changes in the assigned grades flowing out of CGC. That's the human element. They are all subjective relative to each other.

 

I would agree with Nick that a one-point variance might be expected (on a smaller percentage of books - those that might be tweeners - not on all books) and might accept a two-point variance on an even smaller percentage of books in special situations.

 

 


There's absolutely nothing to stop you, or a group of boardies, from doing this. I can't see why CGC would pay an external auditor when they already do an internal audit.

 

Your plan doesn't factor in damage to books in between gradings, by the way (either through shipping or some other way). I suppose your tolerable variance idea is supposed to cover that, but it might not in all situations.

 

Also, CGC would probably lose business if they graded the same every single time - the speculative resub market would disappear. The possibility of grade bumps is an important money-maker for them.

 

 

I've actually thought about doing it on my own.

 

After I submitted the Harlan books and the first 298 came back with 11 label errors (That's 1 in 27) I began wondering about human error in the process. And CGC fixed these right away. No complaint there.

 

I wanted to randomly resub 50 books just to track the variance before I ever do a big collection sub again. Just morbid, cynical, personal curiosity I guess.

 

I mentioned this to a couple boardies in confidence and they advised me that if I ever did this to show the results to CGC first -- that a poor report might disrupt a lot of things in the collecting community.

 

I always figured the variance would be in the single digits, 2-3%. But Dan's JIM83 fiasco makes me think it's much higher, like 15%, which is why the resub game is so popular.

 



I would hope CGC is using these approaches, but Dan's JIM83 problem was ongoing. Makes me wonder how consistent (and even scientific) their methods are.

 

As I've always said, I can't get my brain around assigning a quantitative grade to a comic using what appears to be a qualitative measure. Any researcher would laugh at this.


 


I have a question regarding the label errors. Is it possible that you wrote down the wrong info on the submission invoice, or did you write down the correct info and CGC messed it up?

 

The only time I have ever noticed a label error on my personal submissions, out of literally thousands of books, is when I have written the wrong information on the actual submission invoice.

 

For example, I have ASM #119 in my head and write down #119 but it's actually #120 being submitted.

 

The admin staff likely misses it when books are unpacked and logged in the system, as they are more worried that all the books are undamaged and present (i.e. 15 books on the invoice = 15 books in the shipment), and then the book travels through the system with a barcode attaching the wrong issue number to it...or something along those lines.

 

Not trying to defend CGC on this point as a mistake is a mistake but in my experience I was the one that started the error on every occasion.

 

 

 



One grade on either side of the original? No label color changes? There probably won't be a consensus, but once the parameters are set and things are set up, one could use the data for multiple null hypotheses, using my example or using the exact grade as your standard.

 

Grant the original grade and label color a “zero.” Every bump up scores positive the number of grades it went up, and every bump down scores negative. The sum of those numbers should be close to zero.

 

Same with resto checks.

 

You can use old data to obtain the variances, and like I said earlier, you can add variables like year the audit was done, period the books were originally graded, etc.
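That scoring scheme is easy to sketch; the grade scale slice and the audit pairs below are invented for illustration:

```python
# Hypothetical audit scoring: each regrade scores the signed number of
# grade steps it moved from the original; an unbiased process should
# average out near zero.
GRADE_SCALE = [8.0, 8.5, 9.0, 9.2, 9.4, 9.6, 9.8]  # illustrative slice

def bump(original, regrade):
    """Signed grade-step difference between original and regrade."""
    return GRADE_SCALE.index(regrade) - GRADE_SCALE.index(original)

# (original grade, regrade) pairs from a made-up audit:
audit = [(9.4, 9.4), (9.2, 9.4), (9.6, 9.4), (9.8, 9.8), (9.0, 9.2)]
scores = [bump(o, r) for o, r in audit]
print(scores)                     # [0, 1, -1, 0, 1]
print(sum(scores) / len(scores))  # 0.2: mean drift, should sit near zero
```

A mean well above zero would suggest systematic loosening (or a resub effect); well below zero, tightening. Resto checks could be scored the same way with a simple 0/1 label match.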



lol

 

Don't whatever you do attempt to derail the gravy train. :baiting:


 


It was their error, and merely clerical, I'm sure. Not a grading thing. But the books with errors weren't in numerical order or even the same titles, so it wasn't as though one person goofed 11 in a row. Just errors in handling a ton of books, I guess. It wasn't a big deal in the big scheme of things, but it did remind me that there are error-prone humans in the mix.

 

Damn humans!

 

 



Yep. That was the implication. That the profit margin between a 9.6 and a 9.8 is so fat that some people don't want any kind of change. They're happy to roll the dice on the resub game because sometimes it works in their financial favor.

 

hm

 

 



^^ Now sit down and be quiet.
