
Top AI Tech Trends in 2025: Insights from Sertis Executives



Having delivered over 400 bespoke AI and data deployments for ASEAN’s leading enterprises over the past decade, Sertis stays ahead of technology trends while actively contributing to them through its work at Sertis Research Labs. Three of Sertis’ top technology leaders share their perspectives below on the trends that will shape the AI landscape in 2025.

Tee Vachiramon, CEO & Founder: Revolutionizing AI with New Frontiers

  • Reasoning AI: The next leap in artificial intelligence lies in reasoning capabilities, and we expect breakthroughs in AI systems that can reason through abstract concepts, interpret cause-and-effect relationships, and adapt to novel situations with greater autonomy. Unlike traditional AI models that excel in pattern recognition, reasoning AI aims to simulate human-like decision-making processes. 2025 will see concrete building block steps taken in this direction. These advancements will revolutionize how AI interacts with complex, multi-step problems, bringing it one step closer to true cognitive intelligence.

  • Quantum Computing Cracking Encryption: With the growing capabilities of quantum computing, the potential to crack traditional encryption methods is becoming a reality. This raises both opportunities and challenges for AI in safeguarding sensitive information. The intersection of AI and quantum computing is expected to fuel innovations in encryption techniques and cybersecurity frameworks. In 2025, we expect to see early demonstrations of quantum algorithms compromising widely used cryptographic standards, prompting accelerated adoption of post-quantum cryptography.

  • AI in Cybersecurity and Fraud Prevention: AI's role in cybersecurity is rapidly expanding. In 2025, technologies like Generative Adversarial Networks (GANs) and transformer-based models will enable AI to simulate and predict sophisticated fraud scenarios, while federated learning will enhance detection accuracy by training on decentralized data with privacy intact. Reinforcement learning will improve adaptability to evolving threats, and self-supervised learning will unlock insights from unlabeled data. Combined with advancements in AI explainability, these innovations will create systems that are more adaptive, precise, and transparent than pre-2025 solutions.



Kevin Baumgarten, PhD, Lead Director of Machine Learning Engineering: Balancing Cost and Efficiency in AI

  • Cost-Efficiency in LLM Applications: The growing demand for deploying large language models (LLMs) in production across industries calls for a more cost-efficient approach to running them at scale. Innovations such as model pruning, quantization, and distillation are key enablers of this shift. 

    • Model Pruning removes less significant weights from the model, reducing computational load while maintaining similar performance.

    • Model Quantization converts high-precision weights into lower-precision representations (such as 8-bit integers or 16-bit floats), lowering memory and computational costs.

    • Model Distillation transfers knowledge from a large, complex "teacher" model to a smaller "student" model, optimizing efficiency without substantial accuracy loss.
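The quantization step above can be sketched in a few lines. The following is a minimal, framework-agnostic illustration of post-training affine int8 quantization (the sizes and tolerances are illustrative assumptions, not a production recipe):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) int8 quantization of a weight tensor.

    Maps float weights onto [-128, 127] via a scale and zero-point --
    the same idea behind post-training quantization in most frameworks.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in weight matrix
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)

print("int8 storage:", q.nbytes, "bytes vs float32:", w.nbytes, "bytes")
print("max abs reconstruction error:", float(np.abs(w - w_hat).max()))
```

The 4x storage reduction comes for free; the reconstruction error is bounded by roughly one quantization step, which is why accuracy loss is often negligible in practice.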

Additionally, parameter-efficient fine-tuning techniques like LoRA (Low-Rank Adaptation) update only a small set of low-rank adapter matrices while keeping the base model frozen, sharply reducing the memory and compute required for fine-tuning. Fine-tuning smaller, pre-trained models for specific industries or tasks avoids the cost of training from scratch, enabling the use of smaller, more tailored, and efficient models.

These innovations allow enterprises to deploy smaller and faster versions of LLMs with negligible accuracy loss, significantly reducing deployment costs. As cost-efficiency becomes critical, these techniques will position scalable and affordable LLM solutions at the forefront in 2025.
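To make the LoRA idea concrete, here is a minimal numpy sketch (hidden size and rank are assumed values chosen for illustration). The effective weight is the frozen matrix W plus a low-rank product A @ B, and only A and B are trained:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8  # hidden size and LoRA rank -- illustrative assumptions

W = rng.normal(size=(d, d))          # frozen pre-trained weight
A = rng.normal(size=(d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, d))                 # zero-initialized so training starts from W

def forward(x):
    # LoRA forward pass: effective weight is W + A @ B,
    # but computing (x @ A) @ B avoids materializing the full d x d update.
    return x @ W + (x @ A) @ B

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params:,} vs full fine-tune {full_params:,} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these sizes the adapter trains under 2% of the parameters a full fine-tune would touch, which is where the memory and compute savings come from.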

  • AI Applications in Cybersecurity: Traditional methods of threat detection and response are increasingly struggling to keep up with the scale and complexity of modern cyberattacks. AI-driven systems, capable of analyzing vast amounts of data in real time to identify anomalous behavior, are proving indispensable. In 2025, advancements in AI will enable highly adaptive, context-aware security systems that go beyond anomaly detection to proactively counter evolving threats. 

In 2025, we can anticipate emerging technologies including machine learning models fine-tuned for adaptive threat detection, capable of analyzing attack vectors in real time, and large language models (LLMs) designed to scan and interpret phishing attempts or fraudulent communications with high contextual understanding. Self-supervised learning will allow AI to improve continuously using unlabeled threat data, while zero-shot and few-shot learning will enable systems to respond effectively to new attack patterns without extensive retraining. Furthermore, as regulatory requirements increase, organizations are relying on AI to ensure compliance. 

These innovations surpass generic solutions by enabling real-time, automated decision-making and scalable regulatory compliance, providing organizations with agile, cost-effective tools to safeguard their assets in a rapidly evolving threat landscape.



Aubin Samacoits, PhD, Director of Data Research & Consulting: The Rise of Domain-Specific AI

  • Domain-Specific AI Solutions: Advancements in large language models (LLMs) like GPT, Claude, and Gemini have highlighted their versatility, but the focus is shifting toward domain-specific AI. Open-source models (e.g., Llama, Qwen, DeepSeek) and the demand for scalable, cost-efficient solutions are driving this transition.

Fine-tuning pre-trained models on industry-specific datasets enables tailored AI systems for sectors like healthcare, retail, and manufacturing. These systems outperform general-purpose models on specific tasks while requiring fewer resources. Techniques such as transfer learning, LoRA, prompt engineering, and model distillation enhance performance and enable lightweight, specialized deployments—even for organizations with limited resources.
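Of the techniques just listed, model distillation is easy to show in miniature. Below is a hedged sketch of the classic soft-target objective: the student is trained to match the teacher's temperature-softened output distribution (the logits and temperature here are toy values for illustration):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the standard soft-target distillation objective,
    averaged over the batch and scaled by T^2."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T / len(p))

teacher = np.array([[5.0, 1.0, 0.5], [0.2, 4.0, 0.1]])
aligned = teacher.copy()             # a student that matches the teacher
untrained = np.zeros_like(teacher)   # a student with uniform outputs
print(distillation_loss(aligned, teacher))    # 0: nothing left to learn
print(distillation_loss(untrained, teacher))  # positive: gradient signal
```

The temperature exposes the teacher's "dark knowledge" (relative probabilities among wrong classes), which is what lets a small student recover most of a large teacher's behavior.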

  • Security and Compliance: As AI integrates into critical workflows, securing sensitive data like proprietary designs or patient records is paramount. Privacy-enhancing technologies (PETs) such as differential privacy, homomorphic encryption, and federated learning address these concerns. For example, federated learning enables collaborative model training on sensitive data without exposing raw information, ensuring compliance with regulations.
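The federated learning pattern described above can be sketched with the FedAvg algorithm: each client trains locally on its private data, and the server only ever sees model weights, never raw records. This toy version uses linear regression and made-up client datasets purely for illustration:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's round: a few gradient steps of linear regression
    on its private data. The raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(weights, client_data):
    """Server step: average the returned client models,
    weighted by each client's dataset size."""
    updates = [local_update(weights, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):  # three clients holding different amounts of data
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(50):      # communication rounds
    w = fedavg(w, clients)
print("learned:", w, "true:", true_w)
```

Even in this toy setting the global model converges to the shared signal, while each client's records stay on-premise, which is the compliance property the article highlights.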

Advancements in specialized AI agents and high-performance chips are enabling cost-effective on-premise AI deployment. This approach processes sensitive data internally, bypassing external cloud services, and offering enhanced security, greater control, and compliance assurance.

The future of AI lies in private LLMs and AI agents operating entirely within organizations’ infrastructure, providing secure, scalable solutions for stringent compliance needs.

  • Explainability: As large language models (LLMs) expand into sensitive domains like healthcare, law, and finance, explainability is becoming crucial. With hundreds of billions of parameters, LLMs are inherently complex and opaque, posing challenges for interpretation. To address this, researchers are developing explainability techniques tailored to LLMs:

    • Attention Visualization: Visualizing attention weights helps identify which input tokens influence predictions, providing insights into the model’s focus.

    • Saliency Maps: Methods like Integrated Gradients highlight the most influential input tokens, offering token-level explanations for decisions.

    • Prompt-Driven Explanations: LLMs can explain their own reasoning through tailored prompts, revealing their internal logic in human-readable terms.

    • Probing: Analyzing the behaviour of specific neurons or internal representations gives a better understanding of how models store and retrieve knowledge.

    • Counterfactual Explanations: Modifying inputs to observe output changes highlights decision boundaries and potential biases.

    • Error Analysis Tools: Frameworks like OpenAI Evals systematically identify issues such as hallucinations or misinterpretations in model outputs.
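The saliency-map technique listed above can be sketched with Integrated Gradients. For simplicity this toy uses a linear scorer standing in for one model logit (the weights and inputs are invented for illustration); for a linear model the attributions reduce exactly to (input - baseline) times the weights, which makes the method easy to sanity-check:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=50):
    """Integrated Gradients: attribute f(x) - f(baseline) to input
    features by integrating gradients along the straight-line path
    from the baseline to the input (midpoint rule)."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.array([grad_f(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

# Toy "model": a linear scorer standing in for a single output logit.
w = np.array([0.5, -2.0, 1.0])
f = lambda x: float(x @ w)
grad_f = lambda x: w                 # gradient of a linear model is constant

x = np.array([1.0, 1.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(f, grad_f, x, baseline)
print("per-feature attributions:", attr)
print("completeness check:", attr.sum(), "==", f(x) - f(baseline))
```

The "completeness" property (attributions summing to the change in model output) is what makes such token-level explanations traceable rather than anecdotal.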

Explainability builds trust by providing clear, traceable reasoning behind decisions. For example, a diagnostic AI must justify its recommendation based on specific symptoms or test results, while legal AI requires transparent logic to ensure compliance and accountability. Enhanced explainability will be a key driver of AI adoption, opening doors to broader applications across industries.



Final Thoughts

As AI evolves, 2025 will be marked by breakthroughs in reasoning capabilities, domain-specific solutions, cost-efficient deployment, and enhanced security measures. From reasoning AI simulating human-like decision-making to domain-specific AI tailored for industries, these advancements will redefine how organizations harness AI's potential.

Cost-efficiency techniques like model pruning, quantization, and distillation will make deploying large language models more accessible, while privacy-enhancing technologies and on-premise solutions will address critical security and compliance needs. Furthermore, explainability and advanced AI-driven cybersecurity will foster trust and resilience, ensuring AI adoption continues to expand across sensitive and complex domains.

Sertis maintains an active publishing research organization and a team of IEEE Fellows. Engineers and CTOs can read more of the research from Sertis Research Labs at https://www.sertiscorp.com/sertis-ai-research

For more about Enterprise Business Trends in AI for 2025, see the article from Sertis Chief Commercial Officer, Randy McGraw at https://www.sertiscorp.com/post/sertis-cco-vision-top-5-ai-trends-to-watch-in-2025-1
