New Israeli study reveals how terrorists are weaponizing AI

January 12, 2025

By Jewish Breaking News

Artificial intelligence technology could be exploited by terrorist groups around the globe in a number of ways, a new study has found, while government regulatory agencies and tech companies appear woefully unprepared to deal with the growing threat.

The "Generative AI and Terrorism" study, conducted by the University of Haifa's Prof. Gabriel Weimann, will be published in his forthcoming book, AI in Society.

Weimann documents the real and pressing threats posed by terrorists' and extremists' growing interest in AI-based tools: online manuals on using generative AI to bolster propaganda and disinformation tactics, an Al-Qaeda-affiliated group announcing that it would begin holding online AI workshops, and Islamic State's tech-support guide on how to use generative AI tools such as the chatbot ChatGPT securely.

"We are in the midst of a rapid technological revolution, no less significant than the Industrial Revolution of the eighteenth and nineteenth centuries – the artificial intelligence revolution," Weimann writes. "This multi-dimensional dramatic revolution is raising the concern that human society is unprepared for the rise of artificial intelligence."

The study outlines the most likely risks associated with terrorists' access to AI technology:

Effective propaganda: AI can be used to produce and distribute influential content to various target populations faster and more efficiently than ever before, and to disseminate hate speech and radical, violent ideologies for recruitment purposes.

Disinformation: AI models can be a powerful weapon in modern disinformation wars, serving terrorists' fear-inducing campaigns through technologies such as deepfakes that can reach huge audiences in an extremely short period of time.

Interactive recruitment: AI-based chatbots can facilitate and enhance the recruitment of individuals for terrorist plots by automating virtual interactions with targeted individuals and groups, extending the reach of those interactions.

Enhanced attack capabilities: Deep-learning models such as ChatGPT have the potential to enable terrorists to learn, plan, and coordinate their activities with greater efficiency, accuracy, and impact than ever before.