
AI Weekly: Machine learning could lead to cyber security in uncharted territory



VentureBeat publishes a special issue once a quarter to examine trends of major importance. This week we launched the second edition, which examines AI and security. Across a spectrum of stories, the VentureBeat editorial team looked at some of the key intersections of AI and security. The stakes are high for individuals, businesses, cities, and critical infrastructure targets – data breaches alone are expected to cost more than $5 trillion by 2024.

The stories share a common thread: AI doesn't appear to be used much in cyber attacks today. Cybersecurity companies, however, increasingly rely on AI to identify threats and sift through data to defend targets.

Security threats continue to evolve and include adversarial attacks on AI systems; increasingly costly ransomware aimed at cities, hospitals, and public institutions; misinformation and spear phishing attacks spread by bots on social media; and deepfakes and synthetic media that can become security holes.

In the cover story, European correspondent Chris O'Brien explored how the proliferation of AI in security could leave humans with a smaller role in the decision-making process. Malware is evolving to adapt in real time to security firms' defense tactics, and as the costs and consequences of security breaches grow, handing autonomy to intelligent machines may come to seem like the only sensible choice.

We also heard from security experts such as McAfee CTO Steve Grobman, F-Secure's Mikko Hypponen, and Malwarebytes Labs director Adam Kujawa, who talked about the difference between phishing and spear phishing, an expected rise in personalized spear phishing attacks, and concerns, unfounded and not, about AI in cybersecurity.

VentureBeat staff writer Paul Sawers looked at how AI can be used to alleviate the massive labor shortage in the cybersecurity sector, while Jeremy Horwitz examined how AI-equipped cameras in cars and home security systems will shape the future of surveillance and privacy.

AI editor Seth Colaner examined how security and AI can seem cold and inhuman yet still depend heavily on people, who remain a critical security factor as both defenders and targets. Human vulnerability is a key reason organizations become soft targets, and training people to properly guard against attacks can lead to better protection.

We do not yet know to what extent attackers will rely on AI systems. We also don't know whether open source AI has opened Pandora's box, or how much AI could raise the threat level. One thing we do know is that cybercriminals don't seem to need AI to be successful today.

I leave it to you to read the special issue and draw your own conclusions, but one quote worth remembering comes from Shuman Ghosemajumder, formerly known as the "click fraud czar" at Google and now CTO at Shape Security, in Sawers' article. "[Good actors and bad actors] both automate as much as possible, build out DevOps infrastructure, and use AI techniques to try to outsmart each other," he said. "It's an endless cat-and-mouse game, and over time it will only involve more AI approaches on both sides."

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner – and be sure to subscribe to the weekly AI newsletter and bookmark our AI channel.

Thank you for reading,

Khari Johnson

Senior AI Staff Writer
