Intelligent Tech Channels Issue 67 | Page 40

privacy concerns, automated attacks, and even malicious use. Search engines already represent a well-known privacy risk: any information that is unsecured or publicly available on a site scraped by the search engine will potentially be indexed.
To some extent, this has been mitigated over the years as search engine companies have recognised certain patterns that are particularly damaging and actively do not index them, or at least do not allow public searches for that information. An example would be social security numbers.
On the other hand, chatbots, or more generally, AI tools trained on something like Common Crawl or The Pile, which are large, curated scrapes of the Internet at large, represent less familiar threats. Especially with large-scale models like LLaMA or ChatGPT, the potential for AI to generate accurate personal data for some number of individuals, given the proper prompt, is real.
The good news is that, since the responses are generated based on probabilities rather than recalled from scraped data, it is much less likely that all of the data is accurate.
In other words, the risk in a search engine remains greater for a larger percentage of the population. The risk to a smaller number of people might be higher in an AI, but it is somewhat alleviated by not knowing beforehand which individuals the AI might be able to generate accurate information about.
Should we be worried?
It is important to remember that ChatGPT is not learning at the moment; it is just making predictions based on the entire history of your chat. It cannot currently be directed to automate ransomware attacks. It is a research tool created to show the world what is possible, see how people use it, and explore potential commercial uses. It is key to remember that we are not indirectly training AI chatbots every time we use them, as some people may assume.
OpenAI wants to see what ChatGPT does, what it is capable of and how different people use it. The creators want to give AI to everyone. However, their concern is that if only a small number of humans have AI capabilities, those people will ultimately become superhumans. Democratising access to AI and its real-world security benefits will minimise the risk of only a select few having these extra capabilities.
This is why it is so important for the threat hunting and security teams in organisations to understand how these tools work and what the realities of the technologies are. Without knowledge, it can be very difficult for teams to keep management informed about what the real risks and opportunities are.
Businesses could even soon be using AI as a force for good by preventing phishing-based cyberattacks, as the tools could be trained to recognise the language generally used by staff and therefore detect any deviations from it introduced by outside threat actors.
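As a rough illustration of the idea, the kind of language-deviation check described above can be sketched in a few lines of Python. This is a minimal, hypothetical example, not any vendor's actual product: it builds a word-frequency profile from a (made-up) sample of routine staff messages and flags new messages that diverge from it, using cosine distance between word counts.

```python
from collections import Counter
import math

def profile(texts):
    """Build a word-frequency profile from a corpus of messages."""
    counts = Counter()
    for t in texts:
        counts.update(t.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two word-count profiles (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical baseline of typical internal staff messages.
baseline = profile([
    "weekly status update attached please review",
    "meeting moved to thursday see calendar invite",
])

def deviation_score(message, baseline):
    """Higher score means the message is further from the usual staff language."""
    return 1.0 - cosine(profile([message]), baseline)
```

A real deployment would use a far larger baseline and a trained language model rather than raw word counts, but the principle is the same: messages whose `deviation_score` exceeds a tuned threshold get flagged for review.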
The possibilities generative AI can bring for the future are exciting and transformative. However, it's important not to lose sight of the threats that come alongside it. Like any transition in how we do things online, AI chatbots introduce many new possibilities for the cybercriminals that use them too.
Educating people on the specific threats at play is key to avoiding attacks. If users know exactly how hackers could be targeting them, then they will be better able to ward them off.