AI Trends and Predictions 2025 From Industry Insiders
IT leaders and industry insiders share their AI trends and predictions for 2025.
This time last year, AI — generative AI, specifically — was mostly hype. A lot has changed in one year. As one industry insider put it: "In 2023, organizations were exploring and experimenting, and in 2024, they were implementing AI at scale. Because of the widespread implementation, in 2025, we will see an emphasis on ROI." Another calls generative AI the most important tech trend of 2025.
Our 2025 tech predictions are in, complete with "anti-predictions" — highlighting trends widely expected to dominate the IT landscape but viewed differently by our experts. Not surprisingly, a number of our predictions revolve around artificial intelligence — including a bearish outlook about expanded AI adoption by businesses in 2025.
Now it's IT leaders' and industry insiders' turn to share what they are expecting from AI in 2025. Check out their predictions below:
AI Joins the Dev Team
2025 will be the year developer capacity is enhanced by the power of AI, as AI tools are officially integrated into the developer tech stack. Over the course of the next year, we'll see team structures and processes adapt to maximize collaboration between AI and developers, with teams experimenting with AI-augmented workflows and increased automation of demanding responsibilities like on-call shifts to supercharge efficiency and velocity. — Matt Makai, VP of Developer Relations & Experience, LaunchDarkly
AI Will Transform Storage
In 2025, AI will continue to proliferate across all industries, driving new opportunities and challenges. The integration of AI into storage systems will be particularly transformative, with AI-powered solutions becoming increasingly common for optimizing performance, enhancing security and ensuring data reliability. This increase in AI workloads will lead to a surge in demand for high-performance storage solutions that can support these data-intensive applications, including large language models (LLMs), machine learning model training, and real-time data analytics. This will raise the requirements on data storage technologies to handle AI's specific needs for speed, scalability and efficiency. — Boyan Ivanov, CEO, StorPool Storage
The Rise of AI-Driven Sales Agents
In the next 12-18 months, we will see the rise of AI-driven agents in selling B2B goods in manufacturing, and B2B and B2C goods in retail, setting the foundation for other industries to experiment with them. These agents will understand the needs of the buyer, will be trained to interact with a seller's website using machine-to-machine protocols, and will have the ability to accurately source the right color/size/price at an unprecedented pace. Once these agents move from the idea stage to successful deployments to mainstream use, it will significantly change the way products are sold online, removing unnecessary human interactions and enabling people to focus on more impactful activities. — Jonathan Taylor, CTO, Zoovu
Growing Convergence of AI, AppSec, and Open Source
We will see the continued intersection of AI, AppSec, and open source — from malicious actors targeting open source models and the communities and platforms that host them, to organizations looking to leverage AI for code analysis and remediation. Increasingly, we will see widely used OSS AI libraries, projects, models, and more targeted as part of supply chain attacks on the OSS AI community. Commercial AI vendors are not immune either, as they are large consumers of OSS but often aren't transparent with customers and consumers regarding what OSS they use. — Chris Hughes, chief security advisor at Endor Labs
AI Asset Management Challenges Emerge
Asset management challenges are coming to AI. As businesses start building out a catalogue of models, they will encounter challenges with size, portability, and discoverability. The industry will look for ways to get better compression with minimal reduction in accuracy from these assets so that they are more portable. There will be a need to effectively manage models, making them easy to find across organizations, and making them ever more interoperable. — Robert Elwell, VP of engineering, MacStadium
Ransomware and Digital Extortion (R&DE)
We expect R&DE incidents to continue at an elevated level in 2025, representing a significant threat to organizations of all sizes, industries, and geographies. 2024 was a record year for R&DE collectives, with ZeroFox identifying an average of 388 incidents each month throughout 2024, compared to an average of 337 per month in 2023. Manufacturing organizations are likely to face the biggest threat from R&DE actors throughout 2025, with those in the retail, construction, healthcare, and technology sectors also prone to high levels of targeting. The greatest threat in early 2025 will very likely emanate from RansomHub, an extortion collective that was first observed in early 2024 and went on to become the most prominent R&DE outfit of the year. 2025 is likely to see an increasing number of new threat collectives, which will continue to diversify the R&DE threat landscape. New collectives will also continue to develop and test new tactics, techniques, and procedures (TTPs), such as an increased emphasis on data extraction over traditional encryption methods, and to opt for double or triple extortion tactics in a bid to increase the chance of successful ransom demands. — Adam Darrah, VP of intelligence, ZeroFox
Geopolitical & Cyber Convergence
The cyber threat landscape in 2025 is expected to be heavily influenced by geopolitical developments, continuing the trend of increasing convergence between the cyber and geopolitical spheres. Throughout 2024, geopolitical events directly impacted the motivations, capabilities, and intentions of cyber threat actors, including nation-state cyber capabilities, financially motivated deep and dark web (DDW) actors, ideologically motivated hacktivist collectives, and politically motivated activist groups. The dynamic and unpredictable geopolitical environment is expected to further influence cyber threat activities in 2025. Past and ongoing geopolitical events, such as the Russia-Ukraine war and the Israel-Hamas conflict, have facilitated elevated cyber threat activity. In 2025, we anticipate continued politically motivated cyber threats, including social engineering, data breaches, DDoS attacks, and malicious payload deployment, such as R&DE and spyware. Cybercriminal collectives are likely to align with geopolitical disputes, contributing to the complexity of the threat landscape. The EU's investment in high-tech fields and the geopolitical tensions between China, the US, and the EU are likely to intensify cyber threats, with state-backed actors targeting critical infrastructure and technology sectors. Russia and Iran are expected to use hybrid tactics, including cyber warfare, to advance their geopolitical agendas, further shaping the cyber threat landscape in 2025. — Adam Darrah, VP of intelligence, ZeroFox
Initial Access Brokers (IABs)
In 2025, Initial Access Brokers (IABs) are expected to remain a significant threat to organizations globally. The market for illicit network access surged in 2024, with record levels of IAB sales identified across DDW marketplaces. We anticipate this thriving market will continue in 2025, with IABs targeting organizations of all sizes, industries, and geographies. IABs sell unauthorized access to corporate networks by marketing compromised credentials and network entry points, allowing buyers to quickly exploit compromised networks with minimal investment and risk. The average purchase price of IAB sales in 2024 was under USD 5,000, offering substantial returns for threat actors, including R&DE collectives. The value of compromised access varies based on factors like information criticality, privilege level, and exploitation potential within the supply chain. Illicit access sales will likely continue underpinning the threat from R&DE operators in 2025, with security teams needing to be vigilant of IABs targeting them directly and indirectly via upstream partners. IABs are expected to focus more on third-party providers, perceiving them as having weaker security postures. North America will likely remain the primary target, followed by Europe, with industries like manufacturing, professional services, technology, retail, and financial services being the most attractive targets. — Adam Darrah, VP of intelligence, ZeroFox
First Major AI-Generated Code Vulnerability
Development teams have eagerly embraced AI, particularly GenAI, to accelerate coding and drive efficiency. While the push for the "10x developer" is transforming software creation, the need for speed can sideline or shortcut traditional practices like code reviews, raising significant security concerns. In the coming year, overconfidence in AI's capabilities could lead to vulnerable or malicious code slipping into production. GenAI is powerful but fallible — it can be tricked with prompts and is prone to hallucinations. This risk is not hypothetical: 78% of security leaders believe AI-generated code will lead to a major security reckoning. The CrowdStrike outage illustrated how quickly unvetted code can escalate into a crisis. With AI-generated code on the rise, organizations must authenticate all code, applications, and workloads by verifying their identity.
Code signing will become an even greater cornerstone in 2025, ensuring code comes from trusted sources, remains unchanged, and is approved for use. Yet, challenges persist: 83% of security leaders report developers already use AI to generate code, and 57% say it's now common practice. Despite this, 72% feel pressured to allow AI to stay competitive, while 63% have considered banning it due to security risks. Balancing innovation with security will be critical moving forward. — Kevin Bocek, chief innovation officer, Venafi, a CyberArk company
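To make the mechanism concrete, here is a minimal sketch of the sign-then-verify step that code signing relies on, using Python's cryptography library with an Ed25519 key pair (the key handling and artifact here are illustrative assumptions, not any vendor's specific tooling):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative pipeline: the build system signs an artifact; deployment verifies it.
signing_key = Ed25519PrivateKey.generate()      # in practice, held in an HSM or KMS
artifact = b"print('hello from a build artifact')"

signature = signing_key.sign(artifact)          # produced once, by the trusted builder
verify_key = signing_key.public_key()           # distributed to whatever runs the code

try:
    verify_key.verify(signature, artifact)      # raises if the artifact was altered
    print("artifact verified: trusted source, unchanged")
except InvalidSignature:
    print("artifact rejected: do not deploy")
```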
Everyone's a Creator: The Democratization of Specialized Knowledge Work
In 2025, AI tools will revolutionize knowledge work by enabling individuals to tackle tasks once reserved for specialists, from coding to design and content creation. Much like personal computers empowered workers to handle spreadsheets and documents independently rather than relying on centralized admin staff, AI will push creativity and productivity to the edge, placing advanced capabilities in the hands of individual contributors. This shift will not only accelerate workflows but also challenge traditional organizational structures as more people leverage AI to go solo or create in new ways. AI's role as a personal assistant and creative partner will reshape industries, making innovation more accessible than ever before. — Rob Brazier, VP of Product, Apollo GraphQL
AI-Driven APIs: A Wild Frontier
In 2025, the relationship between AI and APIs will enter uncharted territory, reshaping how systems are built and interact. AI will increasingly guide developers in crafting and consuming APIs, introducing new patterns and unpredictable usage scenarios. This shift will demand advanced observability tools to monitor and adapt to evolving behaviors, ensuring systems remain secure and efficient. As AI dynamically composes user experiences in real time, APIs will need to be more robust, resilient, and flexible than ever before. Businesses must embrace this wild frontier with innovation and foresight, as the synergy between AI and APIs transforms digital ecosystems in ways we're only beginning to understand. — Rob Brazier, VP of Product, Apollo GraphQL
AI and APIs: The Backbone of Intelligent Innovation
In 2025, the fusion of AI and APIs will redefine how businesses build and run intelligent systems. APIs will evolve from simple connectors to dynamic engines for innovation, driving experimentation and production at unprecedented scales. As AI applications proliferate, organizations will demand APIs that not only handle the chaos of rapid prototyping but also balance speed with robust security and cost efficiency in production environments. Granular access controls, real-time performance monitoring, and optimized compute environments will become non-negotiable for businesses navigating this new era. APIs will act as the trusted gatekeepers of sensitive data, ensuring that AI-driven systems are powerful, smart, and secure. This synergy between AI and APIs will empower developers to build smarter, faster, and more resilient applications, setting a new standard for innovation across industries. — Subrata Chakrabarti, VP of Product Marketing at Apollo GraphQL
Smarter AI for Specialized Needs
In 2025, the future of AI will shift toward smaller, domain-specific systems designed to excel in targeted applications. These compact, context-rich models will redefine industries by offering unparalleled efficiency and precision. Rather than relying on broad, generalized AI, businesses will start to adopt solutions tailored to their unique needs — healthcare organizations will use AI for diagnostics, while financial institutions enhance fraud detection. By embedding deep and specialized knowledge directly into models, companies will deliver real-time insights and reduce resource demands. AI will take a step toward decision-making, serving as a critical assistant rather than a complete solution. This evolution will make AI more practical, accessible, and impactful, transforming specialized knowledge from an advantage into a necessity. — Subrata Chakrabarti, VP of Product Marketing at Apollo GraphQL
Ethics in AI Will Take a Step Forward in 2025
In 2025, geopolitical turbulence will continue, and misinformation is likely to abound. It's unlikely that new data privacy and AI policies will be passed and enforced in 2025, so customers will expect businesses to take responsibility for ethics in AI. As companies incorporate AI into their products, they have a responsibility to control what customer data the AI uses and how it uses it, especially when it comes to sensitive data. Businesses must invest in ethical AI development, with an emphasis on transparency, because AI adoption will directly correlate with the trust customers have in it. — Stephen Manley, CTO, Druva
2025 Will See the First Data Breach of an AI Model
Pundits have frequently warned about the data risks in AI models. If the training data is compromised, entire systems can be exploited. While it is difficult to attack the large language models (LLMs) used in tools like ChatGPT, the rise of lower-cost, more targeted small language models (SLMs) makes them a target. The impact of a corrupt SLM in 2025 will be massive because consumers won't make a distinction between LLMs and SLMs. The breach will spur the development of new regulations and guardrails to protect customers. — Stephen Manley, CTO, Druva
Synthetic Data Used More in AI Training to Safeguard Sensitive Customer Data, Creating New Risks
For AI to produce good results, it needs to be trained on good data and rigorously tested with prompt engineering. The business temptation is to use customer data to train AI models — but that creates a myriad of problems, such as data compliance breaches, higher impact of cyber risk, and a higher likelihood of data leakage. To effectively combat these challenges, businesses will turn to synthetic data, or training data that AI models generate, to maintain safety best practices during the training process. This, however, will create new risks, since the synthetic data can create a feedback loop that will exacerbate any bias in the data. Therefore, companies will need to invest in transparency and increase the rigor in reviewing their AI-generated output. — Stephen Manley, CTO, Druva
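As a toy illustration of that feedback loop (an assumption for demonstration only: the "model" is just a Gaussian fit to its current training set, which then generates the next synthetic set), the summary statistics gradually wander away from the original data as each generation trains on the previous generation's output:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=500)   # stand-in for real customer data

for generation in range(1, 6):
    # "Train" on the current data, then generate the next synthetic training set.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=500)
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")

# Estimation error compounds across generations, so any skew or bias in the
# original sample gets baked in and can drift further without fresh real data.
```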
2025 Is the Year of (Missing) ROI on GenAI Investments
The trough of disillusionment looms for GenAI, and the request for ROI will quicken the industry's descent into said trough. Every business is striving to understand the impact of GenAI, and savvy business leaders are already asking questions around accuracy, efficiency, and outcome to validate the IT spend allocated to it. Unless it's incorporated into a purpose-built tool from the ground up, GenAI won't drive significant measurable efficiency and many will feel let down by its initial promises. — Stephen Manley, CTO, Druva
Security Leaders Will Embrace AI Experimentation
2024 shocked many of us with AI technologies' sophistication and rapid advancement. The year also highlighted that we don't quite know how to incorporate such tools into work and which vendors can help us along the way. Organizations in 2025 will continue to experiment with AI to understand where it offers value. And we'll also see many startups experiment with business models and tech approaches. Security and IT leaders should be ready to help evaluate and onboard a diverse set of immature AI products. We'll need to comprehend a range of AI technologies and understand the expectations of diverse internal stakeholders so we can contribute toward making informed risk vs. reward decisions. — Lenny Zeltser, SANS Institute Fellow and CISO at Axonius
AI in Security: Balancing Human Expertise and Automation for Optimal Outcomes
AI-related advancements will continue to fuel discussions regarding the role of humans vs. automation in the workforce. Security teams will see more opportunities to use AI and non-AI technologies to automate tasks across many domains, including GRC, security operations, and product security. Security leaders will need to be strategic about deciding which tasks to leave for humans and which to automate. Given how rapidly the technology is changing, we should be ready to experiment and determine how to measure project outcomes to decide which approaches work best. — Lenny Zeltser, SANS Institute Fellow and CISO at Axonius
Multi-agent Neurosymbolic AI Will Advance Machine-to-Machine Collaboration
The first wave of multi-agent neurosymbolic AI applications that perform machine-to-machine collaboration will emerge in 2025. Agents across diverse systems — such as autonomous vehicles, robotics, and enterprise decision support platforms — will exchange and interpret complex symbolic representations of their surroundings in real time. These agents will work together to negotiate solutions, adapt to new situations, and coordinate actions based on both learned experiences and structured knowledge. This advancement will lead to a new wave of AI products capable of more intelligent teamwork and enhanced performance in complex environments, all while ensuring transparency and explainability in decision-making. — Dr. Jans Aasman, CEO, Franz
2025 Will Be the Year of the AI Agent
Instead of merely producing text or images, this new breed of AI application will be empowered to act. That might mean researching topics on the web, manipulating an application on a PC desktop, or any other task that can be performed via API. We're still a long way from artificial general intelligence, so these early agents will be quite specialized. We'll see the emergence of what might be called "agentic architectures" — focused use cases where AI can deliver immediate value. Likely examples include data modeling, master data management, analytics and data enrichment, where tasks are highly structured and prototypes have already shown promise. We'll see the first case studies in 2025, and then rapid uptake throughout the enterprise as lagging adopters see competitors gaining an edge. — Bob van Luijt, CEO, Weaviate
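As a rough sketch of what an agentic architecture for a structured task like data enrichment could look like (the call_llm stub, the look_up_company tool, and the JSON action format are all hypothetical placeholders, not a specific vendor's design):

```python
import json

def look_up_company(name: str) -> dict:
    """Stand-in for a real enrichment API the agent is allowed to call."""
    return {"name": name, "industry": "software", "employees": 250}

TOOLS = {"look_up_company": look_up_company}

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM API call; assumed to return a JSON-encoded action."""
    return json.dumps({"tool": "look_up_company", "args": {"name": "Acme"}})

def run_agent(task: str) -> dict:
    """One step of a minimal agent loop: the model picks a tool, the runtime executes it."""
    action = json.loads(call_llm(f"Task: {task}\nPick a tool and its arguments."))
    return TOOLS[action["tool"]](**action["args"])

print(run_agent("Enrich the record for Acme"))
```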
AI Moves Closer to the Edge
In the year ahead, we anticipate AI at the edge will further enhance applications and improve efficiency with increasingly specialized edge-AI chips that can enable tasks with lower power consumption. AI techniques like TinyML and model quantization will continue to advance, allowing more sophisticated AI algorithms to run on resource-constrained devices. We expect more real-time speech recognition, computer vision, and predictive maintenance on small edge devices, along with more local data processing. Current edge applications mostly use pre-trained models, but a move toward real-time, on-device training and fine-tuning will become more common. This means edge devices could adapt and learn from local data over time, improving performance and personalization without relying on cloud retraining. — Rashmi Misra, chief AI officer, Analog Devices
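For a sense of what the model quantization mentioned above looks like in practice, here is a minimal sketch using PyTorch's post-training dynamic quantization on a small stand-in model (the layer sizes are arbitrary assumptions, not an actual edge workload):

```python
import torch
import torch.nn as nn

# Small stand-in network; any model with Linear layers is handled the same way.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Post-training dynamic quantization: Linear weights are stored as int8 and
# activations are quantized on the fly, shrinking the memory footprint for
# resource-constrained devices, often with only a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
print(quantized(x).shape)  # inference still runs and produces the same output shape
```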
Business Leaders Must Measure Value of AI Apps
Companies that rush into AI adoption without understanding their internal needs and bandwidth risk overwhelming their security and data teams with information that doesn't provide valuable insights. As AI continues to grow, businesses aiming for long-term ROI must shift their focus from simply integrating AI capabilities to addressing organizations' shortcomings and measuring value. To accomplish this in the coming year, business leaders should collaborate closely with internal teams to identify their processes, bottlenecks, and needs. By understanding these challenges, leaders can work strategically with their teams to determine the most effective AI applications and ensure their teams are prepared to manage them successfully. — Rishi Kaushal, CIO, Entrust
The 'AI Winter' Is Not Coming
We're currently experiencing one of the most sustained stretches of interest and investment in AI that we've ever seen. While traditionally we've seen this hype give way to "AI winters" where enthusiasm and funding taper off, this time around, there are strong indicators that this momentum will continue into the new year and beyond. 2025 will be a year in which scaled production of AI sustains that investment for years to come. This is just the beginning. — Raj Pai, Vice President, Product Management, Cloud AI, Google Cloud
2025 Is the Year of the Platform
If 2024 was the year of the LLM, 2025 will be the year of the platform. There's no shortage of models on the market — plenty to address just about a