This research work contains a number of interesting results, for example:
- They analyzed typical image pre-processing/augmentation techniques. "We found that while no single transformation (that we studied) suffices to define a prediction task that yields the best representations, two transformations stand out: random cropping and random color distortion. Although neither cropping nor color distortion leads to high performance on its own, composing these two transformations leads to state-of-the-art results." Composing the two augmentations helps prevent the network from learning spurious shortcuts, such as matching two crops of the same image by their color histograms alone (see the sketch after this list).
- "Scaling up significantly improves performance. We found that (1) processing more examples in the same batch, (2) using bigger networks, and (3) training for longer all lead to significant improvements. While these may seem like somewhat obvious observations, these improvements seem larger for SimCLR than for supervised learning. For example, we observe that the performance of a supervised ResNet peaked between 90 and 300 training epochs (on ImageNet), but SimCLR can continue its improvement even after 800 epochs of training." You have to remember that early stopping of training is a very common method, but it is doubtful that this is really useful!
Google AI Blog: Advancing Self-Supervised and Semi-Supervised Learning with SimCLR. Posted by Ting Chen, Research Scientist, and Geoffrey Hinton, VP & Engineering Fellow, Google Research.