Hybe uses AI voice technology for Midnatt’s multilingual single
K-pop powerhouse Hybe on Monday unveiled its first artificial intelligence (AI) project, Midnatt, who sings a song in multiple languages and switches between female and male voices using generative voice technology.
Midnatt, an alter ego of ballad singer Lee Hyun, dropped the new digital single “Masquerade” in six languages: Korean, English, Japanese, Chinese, Spanish and Vietnamese.
Big Hit Music, the Hybe music label behind K-pop supergroup BTS, teamed up with Hybe IM, the company's interactive media arm, to adopt AI startup Supertone's voice synthesis tools for the project. Hybe acquired Supertone for 45 billion won (US$36.5 million) in January.
“We expect this technology to lessen the burden of learning foreign languages when K-pop artists tap into the global market and ultimately boost the genre’s global influence,” Shin Young-jae, CEO of Big Hit Music, said at a press conference.
Lee, who has performed as a soloist and as part of the trio 8eight since his debut in 2007, said he is excited to join the AI project with his new alter ego, Midnatt.
“I have had a strong desire to take on a new musical challenge. I hope you anticipate the other side of Lee Hyun,” Lee said.
Hybe said voice data extracted from native-language narrators was used to render Midnatt's Korean song in the five other languages, while Supertone's "voice designing technology" created the female voice featured in the song from Lee's original voice.
“The technology used voice data extracted from narrators, such as pronunciation and accent, so that it naturally corrects the pronunciation of the foreign languages while maintaining the artist’s singing style and musical expression,” Hybe said.