Intelligent Tech Channels Issue 76 | Page 47

Services such as personal banking and payments are a particular source of concern, and such concerns are not baseless. For instance, in 2020, a bank manager in the UAE was defrauded into transferring $35 million when criminals used AI-generated audio to imitate the voice of a company director.
Deep Fakes can also be weaponised by cybercriminals, who might use manipulated audio or video recordings as a sophisticated form of social engineering to deceive individuals or organisations into revealing sensitive information or engaging in fraudulent activities.
Consequently, the unchecked spread of Deep Fake technology presents a systemic threat to the security and stability of digital ecosystems across the globe.
Ethical responsibilities
To address these emerging threats, we must continue to develop and improve Deep Fake detection technologies. This can involve using more sophisticated algorithms and developing new methods that can identify Deep Fakes based on their context, metadata, or other factors.
Technology developers are ethically responsible for designing and deploying Deep Fake algorithms responsibly and transparently. They should ensure that strong security measures are in place to prevent their technologies from being used maliciously, and that people's privacy rights are upheld at all times.
As early as 2022, 66% of cybersecurity professionals had seen Deep Fakes leveraged in a cyberattack.