When the ideological fads of our time infect or intrude on science! When privacy protection is taken to such an extreme that scientific results are negatively impacted! Human folly and madness can be boundless!
ImageNet is one of the largest and most important computer vision datasets in the world! It has been around since 2009, and countless computer vision models have been trained on it!
Apparently, there is no mention of any person pictured in ImageNet actually suffering from being potentially identifiable! How many pictured people were really negatively affected? Let me guess: zero to three?
Training computer vision models is already difficult enough. Adding more unnecessary complications on top of that is kind of insane!
"De-Facing ImageNet
ImageNet now comes with privacy protection.
What’s new: The team that manages the machine learning community’s go-to image dataset blurred all the human faces pictured in it and tested how models trained on the modified images performed on a variety of image recognition tasks. The faces originally were included without consent. ...
However, the decline [in accuracy] was severe with respect to objects typically found close to a face, such as masks (-8.71 percent) and harmonicas (-8.93 percent). ...
Any loss of accuracy is painful, but a small loss is worthwhile to protect privacy. There’s more to life than optimizing test-set accuracy! ..."
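For readers curious what "blurring all the human faces" amounts to in practice, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector and a Gaussian blur. To be clear, the ImageNet team's actual pipeline was different and more sophisticated; the function name and paths below are purely illustrative.

# Minimal face-blurring sketch, assuming OpenCV is installed
# (pip install opencv-python). Not the ImageNet team's pipeline;
# just a generic illustration of the idea.
import cv2

def blur_faces(image_path: str, output_path: str) -> int:
    """Detect faces with a Haar cascade and blur each one; return the count."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(f"Could not read {image_path}")

    # Frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Replace the face region with a heavily blurred copy of itself.
        roi = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

    cv2.imwrite(output_path, image)
    return len(faces)

Note how aggressive the blur kernel has to be to make a face unrecognizable: exactly the kind of destroyed pixel information that shows up as the accuracy losses on face-adjacent objects (masks, harmonicas) quoted above.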
Credit to Andrew Ng's newsletter for bringing this subject to my attention.