German Authorities Break Up International Child Sex Abuse Site

BERLIN — German prosecutors have broken up an online platform for sharing images and videos showing the sexual abuse of children, mostly boys, that had an international following of more than 400,000 members, they said on Monday.

The site, named “Boystown,” had been around since at least June 2019 and included forums where members from around the globe exchanged images and videos showing children, including toddlers, being sexually abused. In addition to the forums, the site had chat rooms where members could connect with one another in various languages.

German federal prosecutors described it as “one of the largest child pornography sites operating on the dark net” in a statement they released on Monday announcing the arrest in mid-April of three German men who managed the site and a fourth who had posted thousands of images to it.

“This investigative success has a clear message: Those who prey on the weakest are not safe anywhere,” Germany’s interior minister, Horst Seehofer, said on Monday. “We are holding perpetrators accountable and doing what is humanly possible to protect children from such repugnant crimes.”

In Germany, where investigators have uncovered several sophisticated abuse networks in recent years, tens of thousands of new cases of abuse are reported to the authorities each year. Last week, Parliament passed a law toughening sentences for those convicted of the sexual exploitation or abuse of children.

The accused administrators of the “Boystown” site, aged 40 and 49, were arrested after raids on their homes in Paderborn and Munich, the prosecutors said. A third man accused of being an administrator, 58, was living in the Concepción region of Paraguay, where he was detained and is awaiting extradition.

The former German national soccer player Christoph Metzelder was handed a 10-month suspended sentence after being convicted on 26 counts of possessing and sharing photos of girls younger than 10 being severely sexually abused. Mr. Metzelder confessed to some of the charges and apologized to the victims, which the judge said she took into consideration in lessening his punishment.

But many Germans, including some of Mr. Metzelder’s former teammates, protested that the punishment was too lenient.

“I don’t see how that is supposed to act as a deterrent,” Lukas Podolski, who was a member of the 2014 team that won the soccer World Cup for Germany, told the Bild newspaper. “Whoever commits sins against children must be punished with the full weight of the law.”

South Korean Man Gets 34 Years for Running Sexual Exploitation Chat Room

SEOUL — A South Korean man was sentenced to 34 years in prison on Thursday as part of the country’s crackdown on an infamous network of online chat rooms that lured young women, including minors, with promises of high-paying jobs before forcing them into pornography.

The man, Moon Hyeong-wook, opened one of the first such sites in 2015, prosecutors said. Mr. Moon, 25, operated a clandestine members-only chat room under the nickname “GodGod” on the Telegram messenger app, offering more than 3,700 clips of illicit pornography, they said.

Mr. Moon, an architecture major who was expelled from his college after his arrest last year, was one of the most notorious of the hundreds of people the police have arrested in the course of their investigation. Another chat room operator, a man named Cho Joo-bin, was sentenced to 40 years in prison last November.

“The accused inflicted irreparable damage on his victims through a crime against society that undermined human dignity,” the presiding judge, Cho Soon-pyo, said of Mr. Moon in his ruling on Thursday. The trial took place in a district court in the city of Andong in central South Korea.

Mr. Moon was indicted in June on charges of forcing 21 young women, including minors, into making sexually explicit videos between 2017 and early last year.

He used social media platforms to approach young women looking for high-paying jobs, then lured them into making sexually explicit videos with promises of big payouts, prosecutors said. He also hacked into the online accounts of women who had uploaded sexually explicit content, pretending to be a police officer investigating pornography.

Once he got hold of the images and personal data, he used them to blackmail the women, threatening to send the clips to their parents unless the victims supplied more footage, prosecutors said.

Prosecutors demanded a life sentence for Mr. Moon.

Last December, the police said they had investigated 3,500 suspects, most of them teenagers or men in their 20s, in connection with the online chat rooms that served as avenues for sexual exploitation and the distribution of pornography. They arrested 245 of them.

The police also identified 1,100 victims.

The scandal, known in South Korea as “the Nth Room Case,” caused outrage over the cruel exploitation of the young women. Women’s rights groups picketed courthouses where chat room operators were on trial, accusing judges of condoning sex crimes by handing down what they considered light punishments.

On Thursday, outside the Andong courthouse, advocates held a rally demanding the maximum punishment for Mr. Moon.

In recent years, the South Korean police have cracked down on sexually explicit file-sharing websites as part of international efforts to fight child pornography. As smartphones proliferated, investigators realized that much of the illegal trade was migrating to chat rooms on messaging services like Telegram.

The police said they had trouble tracking down customers of the chat rooms because many paid with cryptocurrency to avoid detection.

Who Is Making Sure the A.I. Machines Aren’t Racist?

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

After the conference, Dr. Gebru wrote that she worried about the field’s lack of diversity, “especially with the current hype and demand for people in the field.” “The people creating the technology are a big part of the system,” she wrote. “If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

In December, Dr. Gebru said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departures became a point of contention among A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

In 2015, Jacky Alciné, a Black software engineer, discovered that Google Photos had labeled pictures of him and a friend as “gorillas.” Like many other A.I. services, from talking digital assistants to conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

A study by Joy Buolamwini, a researcher at the M.I.T. Media Lab, showed that facial recognition services from leading tech companies misidentified the gender of darker-skinned women far more often than that of lighter-skinned men. The study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places that use facial recognition and would ensure that government agencies obtain a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time. For lighter-skinned men, the error rate was zero.

Amazon disputed the study in blog posts, criticizing the researchers and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

Cade Metz is a technology correspondent at The Times and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World,” from which this article is adapted.

Deepfake Videos of Eerie Tom Cruise Revive Debate

To those fearful of a future in which videos of real people are indistinguishable from computer-generated forgeries, two recent developments that attracted an audience of millions might have seemed alarming.

First, a visual effects artist worked with a Tom Cruise impersonator to create startlingly accurate videos imitating the actor. The videos, created with the help of machine-learning techniques and known as deepfakes, gained millions of views on TikTok, Twitter and other social networks in late February.

Then, days later, MyHeritage, a genealogy website best known for its role in tracking down the identity of the Golden State Killer, offered a tool to digitally animate old photographs of loved ones, creating a short, looping video in which people can be seen moving their heads and even smiling. More than 26 million images had been animated using the tool, called Deep Nostalgia, as of Monday, the company said.

The videos renewed attention to the potential of synthetic media, which could lead to significant improvements in the advertising and entertainment industries. But the technology could also be used — and has been — to raise doubts about legitimate videos and to insert people, including children, into pornographic images.

Some uses of the technology have been well intentioned. The parents of Joaquin Oliver, a student killed in the 2018 school shooting in Parkland, Fla., digitally resurrected him for a video promoting gun safety legislation. The police in the Australian state of Victoria used the technology to have an officer who died by suicide in 2012 deliver a message about mental health support.

And “Welcome to Chechnya,” a documentary released last year about anti-gay and lesbian purges in Chechnya, used the technology to shield the identity of at-risk Chechens.

The effects could also be used in Hollywood to better age or de-age actors, or to improve the dubbing of films and TV shows in different languages, closely aligning lip movements with the language onscreen. Executives of international companies could also be made to look more natural when addressing employees who speak different languages.

But critics fear the technology will be further abused as it improves, particularly to create pornography that places the face of one person on someone else’s body.

Nina Schick, the author of “Deepfakes: The Coming Infocalypse,” said the earliest deepfaked pornography took hours of video to produce, so celebrities were the typical targets. But as the technology becomes more advanced, less content will be needed to create the videos, putting more women and children at risk.

A tool on the messaging app Telegram that allowed users to create simulated nude images from a single uploaded photo has already been used hundreds of thousands of times, according to BuzzFeed News.

The mere existence of deepfakes can also make it easier to dismiss authentic videos as fake, an effect that researchers have called the “liar’s dividend.”

In Gabon, opposition leaders argued that a video of President Ali Bongo Ondimba giving a New Year’s address in 2019 was faked in an attempt to cover up health problems. Last year, a Republican candidate for a House seat in the St. Louis area claimed that the video of George Floyd’s death in police custody had been digitally staged.

As the technology advances, it will be used more broadly, according to Mr. Gregory, the artificial intelligence expert, but its effects are already pronounced.

“People are always trying to think about the perfect deepfake when that isn’t necessary for the harmful or beneficial uses,” he said.

In introducing the Deep Nostalgia tool, MyHeritage addressed the issue of consent, asking users to “please use this feature on your own historical photos and not on photos featuring living people without their permission.” Chris Ume, the visual effects artist who created the deepfakes of Mr. Cruise, said he had no contact with the actor or his representatives.

Of course, people who have died can’t consent to being featured in videos. And that matters if dead people — especially celebrities — can be digitally resurrected, as the artist Bob Ross was to sell Mountain Dew, or as Robert Kardashian was last year in a gift to his daughter Kim Kardashian West from her husband, Kanye West.

Henry Ajder, an expert on synthetic media, said that, as in an episode of the TV series “Black Mirror,” whole aspects of our personalities could be simulated after death, trained on the voices and messages we leave behind on social media.

But that raises a tricky question, he said: “In what cases do we need consent of the deceased to resurrect them?”

“These questions make you feel uncomfortable, something feels a bit wrong or unsettling, but it’s difficult to know if that’s just because it’s new or if it hints at a deeper intuition about something problematic,” Mr. Ajder said.
