Saturday, March 20, 2021

ImageNet creators find blurring faces for privacy has a 'minimal impact on accuracy'

Isn't this perhaps a case of the circular-reasoning fallacy? They use computer vision to detect the faces before blurring them. Or is it a case of disproportionate concern for privacy?

"The makers of ImageNet, one of the most influential datasets in machine learning, have released a version of the dataset that blurs people’s faces in order to support privacy experimentation. Authors of a paper on the work say their research is the first known effort to analyze the impact blurring faces has on the accuracy of large-scale computer vision models. For this version, faces were detected automatically before they were blurred. Altogether, the altered dataset removes the faces of 562,000 people in more than a quarter-million images. ..."
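The quote says faces were detected automatically and then blurred. As an illustration only, here is a minimal NumPy sketch of the second step, assuming face bounding boxes have already been produced by some detector. The function name, the box format, and the use of a simple mean filter are my own choices for the sketch, not the authors' actual pipeline.

```python
import numpy as np

def blur_regions(img, boxes, k=7):
    """Blur axis-aligned boxes (x0, y0, x1, y1) in a 2-D grayscale image
    with a naive k-by-k mean filter; pixels outside the boxes are untouched.
    This stands in for whatever blur the real pipeline applies."""
    out = img.astype(float).copy()
    pad = k // 2
    for x0, y0, x1, y1 in boxes:
        h, w = y1 - y0, x1 - x0
        # edge-pad the region so the window is defined at its borders
        patch = np.pad(out[y0:y1, x0:x1], pad, mode="edge")
        acc = np.zeros((h, w))
        for dy in range(k):          # sum every shifted copy of the window
            for dx in range(k):
                acc += patch[dy:dy + h, dx:dx + w]
        out[y0:y1, x0:x1] = acc / (k * k)  # average -> mean blur
    return out

# usage: a sharp square stands in for a face; blur only its bounding box
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
blurred = blur_regions(img, [(8, 8, 22, 22)])
```

After the call, pixels outside the box are unchanged, the square's interior is still 1.0, and its edges are smoothed to intermediate values, which is the point: the obscuring operation is local, so most of the image (and hence most of what a classifier uses) is untouched.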

ImageNet creators find blurring faces for privacy has a 'minimal impact on accuracy' | VentureBeat
