
Nine out of ten organisations in a sweeping study are planning to adopt agentic AI models within the next three years, signalling a rapid shift from experimentation to production in Asia-Pacific and broader markets. The research, a collaboration between a leading global technology consultancy and a premier academic institution, surveyed 1,000 companies across 22 countries between January and March 2024; 250 of the firms represented the Asia-Pacific region, while Thailand was excluded from the core sample. The findings illuminate a clear rise in sentiment and intent around artificial intelligence, as executives move to integrate advanced AI capabilities into daily operations, strategic planning, and customer interactions. The momentum underscores a broader industry trend toward leveraging generative AI to drive productivity, unlock revenue growth, and gain competitive advantage in an increasingly data-driven economy. Yet the study also highlights a balanced reality: while interest and planned investment are high, actual scaling remains a challenge, and only a minority of organisations currently deliver widespread GenAI capabilities in real-world environments.

GenAI adoption in Asia-Pacific: production deployments, executive sentiment, and regional realities

Across Asia-Pacific, organisations are transitioning from pilot projects to production deployments of generative AI and agentic AI technologies. This shift is driven by a recognition that AI capabilities can be embedded into core business processes, customer journeys, and decision-making workflows to deliver tangible outcomes. As more teams gain exposure to GenAI tools, they are exploring practical applications that extend beyond experimental use cases, including automating routine tasks, accelerating data analysis, and enabling more proactive risk management. The movement toward production is accompanied by a heightened sense of urgency among executives to realise measurable benefits, with many leaders seeking to shorten the time from development to deployment and to integrate AI outcomes with existing enterprise systems. The report captures a pivotal moment when AI transitions from the lab to the front line of business strategy, with organisations pursuing scalable architectures and governance that support reliable, repeatable results.

Within the regional landscape, digital transformation is a central driver of AI adoption. The majority of senior leaders report an ongoing commitment to digital initiatives as a foundation for AI-enabled growth. This alignment between digital transformation and AI investment signals that enterprises view AI as a multiplier for existing digital investments rather than a standalone substitute for infrastructure. The data indicates a strong correlation between leadership engagement and AI readiness: executives who are deeply involved in transformation programs tend to allocate resources more aggressively toward AI initiatives, seek stronger data capabilities, and advocate for more structured governance frameworks. As a result, companies are accelerating the modernisation of data platforms, cloud architectures, and analytics ecosystems to unlock the full potential of agentic AI. In practical terms, this means upgrading data pipelines, centralising data governance, and enabling real-time access to trusted data for AI models and human decision-makers alike.

Industry dynamics in the region also reveal a tilt toward sectors with extensive data, complex risk profiles, and stringent compliance requirements. Banking, financial services, and insurance stand out as early adopters of GenAI, a trend driven by their long-standing investments in data infrastructure and risk controls. These sectors have already built robust data management foundations and implemented governance processes that can be extended to AI-enabled workflows. The natural progression for them involves integrating GenAI into customer onboarding, underwriting, fraud detection, claims processing, and regulatory reporting. The maturation of GenAI in these fields suggests that organisations with mature data practices can move more quickly to scaled deployment, while those with less developed data ecosystems may face steeper hurdles in achieving reliable AI outcomes.

From a workforce perspective, the rise of agentic AI is reshaping the skills landscape. Leaders acknowledge that the competencies required to deploy and manage advanced AI systems differ materially from traditional IT roles. As AI becomes more deeply embedded in business operations, organisations recognise the need to reskill employees and redefine job design to incorporate human-AI collaboration. The shift entails new roles in AI governance, data stewardship, model monitoring, and ethical oversight, as well as enhanced capabilities in change management and cross-functional collaboration. This transformation has implications for recruitment, training budgets, and performance incentives, as well as for culture and leadership practices that foster responsible and outcomes-driven AI use.

A guiding philosophy echoed by AI and data leaders emphasises that technology is not the sole driver of transformation; rather, ethical, transparent, and accountable AI practices are essential to achieving trust and sustainable impact. The executive message centres on building an environment in which AI outcomes are reliable, decisions are explainable, and stakeholders feel confident about the systems that influence both customer experiences and workforce dynamics. In this context, responsible AI becomes a strategic differentiator rather than a compliance burden, enabling organisations to scale with confidence while safeguarding societal values and stakeholders’ interests.

Three pillars to scale AI responsibly: infrastructure, talent, and trust

To translate AI ambition into scalable, trustworthy practice, the study identifies three foundational pillars that organisations must strengthen. Each pillar encompasses concrete actions, governance mechanisms, and performance indicators designed to create a sustainable path from pilot to enterprise-wide AI deployment.

Digital infrastructure as the backbone of AI scale

A robust digital backbone is indispensable for successful AI scaling. The first pillar focuses on the underlying architecture, data pipelines, and operational platforms that enable GenAI to function at scale and with reliability. Leading organisations are investing in modernising data infrastructures, consolidating data sources, and establishing unified data models that provide consistent, high-quality inputs for AI models. This includes migrating to scalable cloud environments, enabling real-time data streams, and implementing data catalogs that improve discoverability and lineage tracking. The objective is to reduce data silos, improve data integrity, and ensure that AI systems have access to timely, trusted information essential for producing accurate outputs.
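
To make the infrastructure discussion concrete, the Python sketch below shows a minimal in-memory data catalog in which each entry records an owner and its upstream parents, so full lineage can be reconstructed on demand. This is an illustrative toy rather than any specific catalog product; the dataset names, teams, and source systems are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class CatalogEntry:
    """A minimal data-catalog record: one dataset, its owner, and its lineage."""
    name: str
    owner: str
    source_systems: list[str]                           # upstream operational systems
    upstream: list[str] = field(default_factory=list)   # lineage: parent datasets
    last_validated: datetime | None = None


class DataCatalog:
    """Toy in-memory catalog supporting discovery and lineage queries."""

    def __init__(self) -> None:
        self._entries: dict[str, CatalogEntry] = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry

    def lineage(self, name: str) -> list[str]:
        """Walk upstream parents recursively to reconstruct the full lineage chain."""
        entry = self._entries.get(name)
        if entry is None:
            return []
        chain: list[str] = []
        for parent in entry.upstream:
            chain.append(parent)
            chain.extend(self.lineage(parent))
        return chain


# Hypothetical datasets: a raw feed and a derived feature table built from it.
catalog = DataCatalog()
catalog.register(CatalogEntry("raw_transactions", "payments-team", ["core-banking"]))
catalog.register(CatalogEntry("customer_features", "ml-platform", [],
                              upstream=["raw_transactions"]))
print(catalog.lineage("customer_features"))  # ['raw_transactions']
```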

In practice, digital infrastructure upgrades translate into architectural patterns that support modular AI components, reusable services, and seamless integration with enterprise applications. Enterprises are prioritising secure data access controls, encryption, and privacy-preserving technologies to protect sensitive information while enabling AI workloads. Governance mechanisms for data usage, retention, and compliance are embedded into the infrastructure, ensuring that AI activities align with regulatory requirements and corporate policies. The outcome is a scalable foundation that not only accelerates GenAI deployment but also sustains performance, reliability, and security as AI initiatives expand across functions and business units.
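
One widely used privacy-preserving pattern is to redact personally identifiable information before any text reaches an AI workload. The sketch below is a deliberately simplified illustration using regular expressions; the patterns and example text are assumptions for demonstration, and production systems would rely on vetted PII-detection services rather than hand-rolled rules.

```python
import re

# Illustrative patterns only; real deployments use dedicated PII detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the AI workload sees it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


prompt = "Contact Anan at anan@example.co.th or +66 2 123 4567 about the claim."
print(redact(prompt))
# Contact Anan at [EMAIL] or [PHONE] about the claim.
```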

Talent and human capital: reskilling for a new era

The second pillar centres on people. As agentic AI and GenAI scale across organisations, the required skill sets evolve. The report emphasises that the workforce must be retooled to design, deploy, monitor, and govern AI systems, beyond traditional coding and analytics. Reskilling initiatives are critical to preparing existing staff for more advanced responsibilities, including model governance, risk assessment, ethical considerations, and continuous performance monitoring. This transition involves a deliberate blend of upskilling, cross-functional collaboration, and the creation of new roles that align with the demands of AI-enabled business models. Enterprises are encouraged to develop structured training programs that cover technical competencies, domain expertise, data literacy, and the social and ethical implications of AI.

A successful talent strategy also requires thoughtful workforce planning and change management. Organisations should map current capabilities to future needs, identify gaps, and implement targeted learning journeys that accelerate mastery of AI tools while preserving job security and employee engagement. Leadership plays a pivotal role in communicating the vision, setting clear expectations, and modelling responsible AI use. By combining technical training with leadership development and ethical considerations, companies can build a resilient talent ecosystem that sustains AI initiatives and mitigates potential disruption to the workforce.

Trust, governance, and responsible AI

The third pillar, trust and governance, addresses the core challenge of delivering reliable AI outcomes while maintaining accountability, transparency, and data privacy. The study underscores that responsible AI is central to scaling AI with confidence. Organisations are urged to establish robust AI governance structures, articulate principled frameworks, and conduct rigorous AI risk assessments to identify, quantify, and mitigate potential harms. Systematic enablement of responsible AI testing is also essential, allowing continuous experimentation within safe, controlled environments before broad release. This approach helps ensure that AI behavior remains aligned with business objectives and ethical standards.

Beyond governance, ongoing monitoring is critical. Companies must track model performance, detect drift, audit decision logic, and verify compliance with internal policies and external regulations. A culture of continuous improvement supports iterative refinement of AI systems, informed by feedback from users, stakeholders, and independent reviews. Equally important is workforce readiness and impact management. Training programmes should cover not only technical topics but also the social implications of AI adoption, with particular attention to data privacy and security. These measures are integral to sustaining trust with customers, employees, regulators, and the broader public as AI adoption accelerates.
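
Drift detection in particular lends itself to a concrete example. A common statistic is the Population Stability Index (PSI), which compares a model's live score distribution against its validation-time baseline; the thresholds in the docstring are widely cited rules of thumb, and the simulated scores below are purely illustrative.

```python
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live distribution.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 investigate.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf          # capture out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0) on empty buckets
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)   # scores at validation time
live = rng.normal(0.56, 0.12, 10_000)       # scores in production this week
drift = psi(baseline, live)
if drift > 0.25:
    print(f"PSI = {drift:.3f}: significant drift, trigger a model review")
```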

The report notes a notable gap between ambition and confidence: only a small portion of organisations express high confidence in their ability to deploy AI effectively at scale. This gap underscores the importance of deliberate governance, mature risk management, and a proven capability to operate AI systems responsibly. By investing in governance, risk assessment, and systematic enablement of responsible AI testing, organisations can move from hopeful plans to dependable, scalable AI programs that deliver measurable value while preserving trust.

Leaders also outline strategic outcomes tied to responsible AI investments. They anticipate that responsible AI spending will grow substantially over the next two years, rising from a modest share to a majority of AI-related investments, a trajectory that reflects a broader recognition that responsible AI is a prerequisite for sustainable success. As organisations formulate roadmaps for AI, the emphasis on governance and accountability will shape how they measure impact, allocate resources, and communicate progress to stakeholders.

Thailand’s regulatory framework and the path to compliance

Within the Southeast Asian context, regulatory readiness is increasingly central to AI strategy. A leading AI authority in Thailand emphasises the importance of establishing comprehensive AI regulations that align with the nation’s broader AI ambitions, including a national AI strategy and action plan. To assist organisations in navigating this evolving landscape, Accenture has developed a practical framework designed to help enterprises quickly assess the impact of new regulations and align their operations with evolving compliance requirements. This framework aims to expedite regulatory assessment, enable timely adaptation to new rules, and support organisations in executing compliance measures without compromising performance or innovation. The approach reflects a proactive stance toward regulatory readiness, recognising that clear, context-specific guidance can reduce uncertainty and accelerate responsible AI deployment in a country pursuing ambitious AI capabilities.

Thailand’s regulatory trajectory highlights the interplay between policy development and industry innovation. As the nation advances its AI capabilities through formal strategies and plans, private sector actors are encouraged to engage with policymakers and adapt internal controls to meet evolving standards. The emphasis on a framework that supports rapid regulatory assessment signals a broader trend toward governance-centric AI adoption, where firms prioritise compliance as a core component of strategic planning rather than a reactive constraint. The Thailand example illustrates how national strategies can harmonise with corporate initiatives, enabling both public and private sectors to benefit from AI-driven transformation while maintaining safeguards for data privacy, security, and social responsibility.

Industry dynamics, adoption patterns, and the workforce implications

Across industries, the appetite for agentic AI and GenAI is shaped by regulatory environments, data maturity, and the potential to unlock operational efficiencies. Banking and insurance sectors lead in early adoption because they have historically invested in data governance, risk controls, and regulatory reporting, creating a fertile ground for extending GenAI into core processes. In these sectors, AI can enhance accuracy in credit assessments, speed up underwriting workflows, and improve detection of anomalies in transactions and claims. While these benefits are substantial, the sectors also require rigorous controls to ensure compliance, protect customer data, and manage model risk. The experience of regulated industries underscores the importance of robust data infrastructures and disciplined governance as prerequisites for responsible scaling.

Beyond financial services, other industries are intensifying AI investments to raise productivity and accelerate decision-making across functions such as marketing, supply chain, human resources, and product development. The ambition to scale AI is accompanied by a recognition that governance, ethics, and accountability must be deeply integrated into every stage of the AI lifecycle. As AI models become more capable and autonomous, enterprises are adopting more formal risk assessment processes to anticipate potential harms, ensure explainability where necessary, and implement fallback mechanisms and human oversight in high-stakes contexts. The emphasis on risk-aware deployment reflects a broader understanding that AI’s value comes not only from speed and automation but also from responsible, auditable, and trusted outcomes.

The workforce implications of this AI escalation are profound. The skills required to design, deploy, and monitor agentic AI differ from traditional roles, driving a shift in talent strategy across organisations. There is a clear push toward developing internal capabilities that span data engineering, model governance, ethics, risk management, and change management. As AI systems take on more complex tasks, human expertise becomes more supervisory and strategic, focusing on setting ethical boundaries, ensuring alignment with business goals, and managing the human impact of automation. This transformation demands continuous learning, a culture of experimentation with guardrails, and proactive change management to promote adoption without disruption.

From a governance standpoint, organisations are building end-to-end frameworks that connect strategy, policy, technology, and operations. This includes establishing clear ownership structures for AI accountability, defining performance metrics tied to business outcomes, and implementing continuous monitoring to detect drift and degradation. The ultimate aim is to achieve a reliable feedback loop where AI systems improve over time while remaining aligned with organisational values and regulatory expectations. In practice, this means investing in automated audit trails, robust privacy protections, and transparent reporting that communicates how AI decisions influence outcomes to stakeholders, customers, and regulators.
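
An automated audit trail can be as simple as an append-only log in which each record embeds a hash of the previous record, so any tampering breaks the chain on verification. The sketch below illustrates the idea under assumed conventions; the model identifier, input digest, and reviewer are hypothetical, and a real system would persist records to write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    """Append-only log of AI decisions; each record hashes its predecessor."""

    def __init__(self) -> None:
        self.records: list[dict] = []

    def log_decision(self, model_id: str, inputs_digest: str,
                     decision: str, reviewer: str | None = None) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "genesis"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "inputs_digest": inputs_digest,   # hash of the inputs, never raw PII
            "decision": decision,
            "human_reviewer": reviewer,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)
        return record


# Hypothetical model ID, digest, and reviewer, for illustration only.
trail = AuditTrail()
entry = trail.log_decision("credit-scorer-v3", "sha256:ab12...", "approve",
                           reviewer="j.doe")
print(entry["hash"][:16], "chained to", entry["prev_hash"])
```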

Responsible AI: governance, ethics, and transparent deployment

A cornerstone of scaling AI responsibly is the establishment of governance mechanisms that ensure AI activities are aligned with ethical norms, legal obligations, and societal expectations. The study underscores the critical role of responsible AI in building trust and enabling broader adoption. Organisations are encouraged to codify AI principles, define clear boundaries for model usage, and implement processes that assess and mitigate potential risks before deployment. Responsible AI frameworks should include practical steps for testing and validation, along with governance that extends beyond initial rollouts to continuous oversight.

Transparent deployment practices contribute to greater trust and reliability. This involves making model behavior and decision boundaries more understandable to users where feasible, providing explanations for AI-driven recommendations, and ensuring that automation does not obscure human accountability. Effective governance also requires ongoing risk assessment, including the evaluation of ethical considerations, regulatory compliance, and data privacy concerns. As AI capabilities evolve, governance frameworks must remain adaptable, incorporating feedback, lessons learned from incidents, and evolving standards to sustain responsible use over time.

Data privacy and security are inseparable from responsible AI. Enterprises must implement robust data protection measures, including access controls, encryption, and privacy-preserving techniques that enable AI to function without compromising sensitive information. A strong emphasis on data governance ensures that data used for training and inference remains accurate, up-to-date, and compliant with applicable laws and policies. The integrated approach—combining governance, ethics, transparency, and security—creates a solid foundation for scaling AI while protecting stakeholders’ interests and maintaining public trust.

Workforce readiness is another critical facet of responsible AI deployment. Organisations need to provide training that equips employees with the skills to work effectively with AI, understand its limitations, and recognise when human intervention is appropriate. Impact management, including clear communication about how AI changes may affect roles, career progression, and workplace dynamics, helps to smooth transitions and sustain morale. By coupling training with governance and risk management, companies can foster a culture that embraces AI innovation while upholding professional ethics and accountability.

The study projects a strategic shift in AI investment, with responsible AI spending expected to rise significantly over the next two years. Rather than viewing responsible AI as a compliance overhead, organisations are recognising its strategic value in driving sustainable, scalable innovation. The rising investment signals a maturation of the AI program, where governance, risk management, and ethical considerations become core components of business strategy, not afterthoughts. This evolution is essential for maintaining stakeholder confidence, meeting regulatory expectations, and realising the long-term benefits of AI-driven growth.

Thailand’s regulatory strategy and practical framework for compliance

Thailand is positioning itself as a forward-looking jurisdiction in the AI arena, emphasising the development of comprehensive regulations that reflect both the opportunities and the responsibilities associated with AI. The national AI strategy and action plan signal a clear policy direction, aiming to harmonise regulatory objectives with innovation goals. In this environment, organisations operating in or expanding to Thailand will benefit from a framework that helps them anticipate regulatory shifts, assess potential impacts, and prepare compliance measures in a timely and efficient manner.

Accenture has crafted a practical framework designed to help organisations rapidly assess the implications of new AI regulations and align their practices with evolving requirements. This framework provides a structured approach to regulatory impact assessment, enabling enterprises to identify relevant rules, gaps in current practices, and concrete steps needed to achieve compliance. By streamlining the regulatory assessment process, the framework supports faster adaptation to policy changes while preserving the ability to innovate and scale AI initiatives. The framework also aligns with broader governance objectives, reinforcing the importance of AI accountability, data protection, and ethical standards in a Thai regulatory context.
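
To make the idea of a structured regulatory impact assessment tangible, the sketch below models a simple gap register that maps obligations to current practice and remediation steps. This is a hypothetical illustration only, not Accenture's actual framework; the rule identifiers and obligations are invented for demonstration.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    COMPLIANT = "compliant"
    GAP = "gap"
    NOT_APPLICABLE = "n/a"


@dataclass
class Requirement:
    """One obligation from a (hypothetical) AI regulation, mapped to practice."""
    rule_id: str
    obligation: str
    current_practice: str
    status: Status
    remediation: str | None = None


# Invented rule IDs and obligations, for illustration only.
register = [
    Requirement("TH-AI-01", "Document intended use of high-risk models",
                "Model cards exist for credit models only", Status.GAP,
                "Extend model cards to all customer-facing models"),
    Requirement("TH-AI-02", "Human oversight for automated decisions",
                "Manual review of all declined applications", Status.COMPLIANT),
]

gaps = [r for r in register if r.status is Status.GAP]
print(f"{len(gaps)} gap(s) to remediate before the rules take effect")
```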

The Thai regulatory landscape is expected to influence how organisations design their AI programs, from data handling and model governance to risk management and stakeholder engagement. As the nation advances its AI capabilities through national strategy and action plans, companies are encouraged to establish proactive internal controls, build regulatory dashboards, and maintain ongoing dialogue with regulators. This proactive stance can reduce uncertainty, accelerate AI deployment, and ensure that operational practices stay aligned with public policy objectives. The emphasis on practical compliance tools suggests that Thailand intends to strike a balance between enabling innovation and safeguarding stakeholders’ interests through clear, actionable regulatory guidance.

Execution pathways: turning strategy into scalable practice

Across the Asia-Pacific region, organisations are translating ambition into executable roadmaps. This involves aligning business objectives with AI capabilities, setting measurable targets, and connecting AI initiatives to broader transformation programs. A successful execution approach combines top-down sponsorship with bottom-up delivery, ensuring that strategic priorities are translated into concrete projects and outcomes. It requires careful prioritisation, resource allocation, and the establishment of milestones that enable progress tracking and continuous learning.

Key execution tactics include building cross-functional teams that span data science, IT, operations, risk, and compliance. These teams work together to ideate, prototype, validate, and scale AI solutions that address real business pain points. Integrating AI into existing processes demands change management strategies that engage stakeholders at all levels, from frontline employees to senior executives. Clear communication about the benefits, expected impacts, and governance practices helps foster adoption and reduces resistance to change.

A practical emphasis on governance, risk management, and ethics should accompany every project. Organisations should implement model risk controls, monitoring dashboards, and auditing mechanisms to detect drift, biases, and unintended consequences. Establishing incident response plans and escalation paths ensures that problems are addressed promptly and transparently. By embedding responsible AI practices into day-to-day operations, companies can sustain momentum while maintaining trust with customers, employees, and regulators.
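
As one concrete shape for such model risk controls, the sketch below wires named guardrail checks to thresholds and escalation actions. Everything here is an assumed placeholder: in practice the metric callables would read live values from monitoring dashboards, and the escalation handler would open an incident and page an accountable owner.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Guardrail:
    """A named model-risk check: a metric, a threshold, and an escalation action."""
    name: str
    metric: Callable[[], float]       # e.g. drift score, bias metric, error rate
    threshold: float
    escalate: Callable[[str], None]


def run_guardrails(guardrails: list[Guardrail]) -> None:
    """Evaluate each guardrail and escalate any breach."""
    for g in guardrails:
        value = g.metric()
        if value > g.threshold:
            g.escalate(f"{g.name} breached: {value:.3f} > {g.threshold}")


# Illustrative wiring with stubbed metrics and a print-based escalation path.
run_guardrails([
    Guardrail("score_drift_psi", metric=lambda: 0.31, threshold=0.25,
              escalate=lambda msg: print("ESCALATE:", msg)),
    Guardrail("false_positive_rate", metric=lambda: 0.04, threshold=0.05,
              escalate=lambda msg: print("ESCALATE:", msg)),
])
```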

Conclusion

The sweeping study on AI adoption in Asia-Pacific and related regions indicates a decisive shift toward wide-scale adoption of agentic AI and GenAI within a relatively short horizon. A large majority of organisations signal plans to deploy these technologies in production over the next three years, reflecting a strong belief in AI’s potential to boost productivity and revenue. Yet the same research reveals that scaling remains a hurdle for many, with only a minority achieving enterprise-wide GenAI deployment thus far. The path to scalable AI is defined by three core pillars: building robust digital infrastructure, cultivating the right talent through comprehensive reskilling, and establishing trusted, responsible AI practices that ensure ethical, transparent, and accountable outcomes.

In Asia-Pacific, the alignment of digital transformation with AI investment signals a mature approach to technology adoption. Regulated sectors like banking and insurance are leading the way, leveraging their mature data ecosystems and risk controls to extend GenAI capabilities into critical processes. The workforce implications are significant, with a clear need for new roles, redefined job designs, and ongoing learning to sustain AI-enabled operations. Leadership in data and AI emphasises not only technical proficiency but also ethical governance, risk management, and trust-building as essential components of long-term success.

Thailand’s emphasis on regulatory readiness and a practical framework to interpret new rules demonstrates how policy can support innovation while protecting stakeholders. The collaboration between industry and government to establish robust AI governance and compliance pathways offers a blueprint for other markets seeking to balance ambitious AI agendas with responsible implementation. As organisations move from pilots to scalable programs, the combination of infrastructure, people, and governance will determine how effectively AI can be deployed to deliver real business value, while maintaining public trust and safeguarding data privacy and security. The convergence of strategic intent, disciplined execution, and responsible governance will shape the next era of AI-powered growth across Asia-Pacific and beyond.