
RhialtoTheMarvellous

Member
  • Posts: 132
  • Joined
  • Last visited


  1. What are people thinking of sending in? I'm thinking my Infinity Gauntlet #1 and Crisis #1, at the very least.
  2. With the spine roll it's got, it's an 8.0. Fix that and it could be a 9.0, as it has a bunch of non-color-breaking stress lines and a few color-breaking stress lines on the spine. There is also a small indent on the front cover.
  3. 4.0, that corner isn't getting fixed by a press.
  4. Yes, I've seen it. It's a good overall tool for breaking down the grading system, and your point is well taken. The classifications on that page, if regarded as accurate, are what we would want to automate. Then the problem just becomes finding enough examples of each to train the machine on.
  5. I'm only now getting a sense of the difference between machine learning and deep learning, and understanding why deep learning requires so much data to become functional. I'm also realizing why those of you with more experience suggested training against much narrower criteria before going down this road. What prompted the realization: I actually wrote a program to use TensorFlow (rather than the model-builder tool I was using before), and once I got into the nitty-gritty I saw that the basics of consumer-level image recognition are all oriented around transfer learning. The existing deep-learning model you choose has already been pre-trained on millions of images that it can classify into thousands of categories, and you build out a model that reuses that learned representation. That obviously isn't going to work for scoring a book under the standard comic rating system, since the criteria separating the scoring levels aren't known to the existing model. Hmmm, interesting. This does give me some impetus to start breaking things down into those smaller classes.
  6. I'm not sure why we're assuming that CGC takes detailed scans of books. Is there somewhere they indicate this is part of their process? Regarding interior defects, my assumption was the same as yours: these are outliers. Any cursory examination will show missing or damaged interior content, and if something there is damaged, that usually becomes the major factor in downgrading the book. Such books are usually excluded from grading by default unless they are much older and rarer, and those aren't the ones graded in volume. The front- and back-cover details are what define the grade for comics sent through in volume. Is that the case for a general learner? A spine-wrap issue generally moves the book into a different scoring category, which you'd want to account for in the model, but at the same time some books just come with the spine wrapped differently in different eras. Or are you referring to placement of the book in the image, i.e. a book placed incorrectly could be improperly classified?
  7. Not at all. I just wish I had something more interesting to share at this point than the number of scans that CGC has on their website.
  8. It's interesting that you say that. At one point I decided to test out my new scanner and took a 1200 dpi scan of one of my books. The image was huge, of course. I opened it up and started zooming in on various areas, looking closely for defects, and what I found is that zooming in on a 1200 dpi image is like putting the book under a microscope. There are so many scratches and marks, completely invisible to the naked eye, that you can detect at that resolution. That's partly why I wanted to try plain 2D scans first: it occurred to me that even at a lower scanning resolution there are probably patterns a computer can detect that a human can't.
  9. 1. If there is one thing I've gotten out of this thread (and I've gotten more than that), it's the idea of initially running an experiment of this sort on binary characteristics, like spine ticks present or not, or cover creases present or not, to see how well I could train a model. That would also be an easier experiment from a data-collection perspective, since I'm sure everyone could come up with books both with and without spine ticks. 2. Well, that's part of the deep-learning problem-solving aspect, and one reason machine learning can be so valuable: it can derive outcomes from combinations of factors that aren't always evident to humans. You give the machine a bunch of data on one end and a known set of results on the other, then let it work out on its own which factors differentiate the source from the results. The classic example is training a model to predict medical conditions: give the machine publicly available health data for thousands of people, along with which ones developed a certain condition and which didn't, and it can then predict with a fair degree of accuracy whether a given individual is at risk. 4. There is definitely some standard necessary. I'm not yet sure what it is, but the CGC images are pretty weak in that regard, if the CGC Registry is any indication. A lot of what I pulled off there aren't even scans; they're photos of a slabbed book with bad lighting or bad angles, or photos of just the score or the top of the holder.
  10. My sample choice is obviously quite bad as well, but then again it would be difficult overall to find multiples of anything at different grades.
  11. I'm starting to realize that. It seems unlikely that, without some coordination, I could get the data for this sort of sample for even one book. If I make the attributes more generalized, so they can be applied to any book, that might work.
  12. I don't see a real downside to CGC sharing this sort of data. It's not as if a machine that can grade comics cuts into their business: you're not so much paying for the number as for the certification and encapsulation assuring that number is correct. If anything, a machine evaluator would just be the equivalent of the "Hey can you spare a grade" forum; people will still have to have the thing examined no matter what.
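For reference, the transfer-learning pattern described in item 5 looks roughly like this in TensorFlow/Keras: a backbone pre-trained on ImageNet is frozen, and a new classification head is trained on your own classes. This is only a sketch; the backbone choice (MobileNetV2), the input size, and the four hypothetical grade buckets are all assumptions, not anything CGC uses.

```python
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical grade buckets, e.g. low/mid/high/near-mint

# Backbone pre-trained on millions of ImageNet images; we drop its
# original 1000-class top and freeze the learned feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# New head: only these layers are trained on the comic-cover data.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

From here, `model.fit` on a labeled image dataset trains only the head, which is why transfer learning works with far fewer examples than training a deep network from scratch.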
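The "data on one end, known results on the other" idea from item 9 can be shown with a self-contained toy sketch: plain logistic regression on synthetic records, where the model recovers which factors actually matter without being told. All the data here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "records": 3 features per person; only the first two
# actually influence the outcome (the third is pure noise).
n = 1000
X = rng.normal(size=(n, 3))
true_w = np.array([2.0, -1.5, 0.0])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic regression trained by plain gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n       # average gradient step

# The model assigns near-zero weight to the irrelevant feature and
# predicts held-in outcomes with high accuracy.
pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

A deep network does the same thing at a much larger scale, with the "features" themselves learned from raw pixels, which is why it needs so much more data.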