Digital transformation and technology

Insights to accelerate your organization’s digital transformation, no matter where you are in your journey
Digital transformation is not a linear process. It is an evolution that helps ensure organizations keep pace with the expectations of their stakeholders. Digitally enabled, data-driven organizations can better anticipate the needs of their customers, more efficiently integrate into their value and supply chains, and build agility into their systems and processes.
This allows them to respond more quickly to market forces, capitalize on digital transformation opportunities and deliver exceptional customer experiences. To derive true business value and enable an agile, continual approach to digital transformation, organizations require:

Insights and Resources

Tools, resources and insights to help improve organizational performance and drive business growth.
Desa Technologies
Audit committees play a crucial role in the oversight of AI transformation

Audit committees bring crucial insights into people, processes and risks to the strategic integration of artificial intelligence.

Desa Technologies
Continuous vigilance: The role of the audit committee in cybersecurity

Cybersecurity is a process—and audit committees play a pivotal role throughout.

Desa Technologies
2023 Generative AI Adoption Index

Unlocking AI’s potential: Responsibly, strategically, together.

Desa Technologies
Securing the transformation journey

Insights on disruptive tech and cybersecurity from the 2023 CEO Outlook survey.

Industry Perspectives

We believe that a digital transformation strategy should be designed with the customer in mind, and we know that every industry and sector faces a unique set of challenges and opportunities. That’s why we take an industry-focused approach to digital transformation. From financial services to natural resources, technology to government, healthcare and more, our teams of professionals have the deep industry insights needed to help you define the future of your industry and drive your digital transformation efforts.

Getting a grip on generative AI

Generative AI – powered by large language models (or LLMs for short) that can analyze and generate text – produces new content instantaneously, from text to images, based on user prompts.

The next regulatory shakeup in asset management

New SEC rules for investment advisors raise requirements, boost cybersecurity resilience

Ready, set, go: The digital transformation journey for manufacturers

An individual, yet critical journey

How we can help

Our team of professionals can help you harness technology and digital innovation to build a connected enterprise. We do this by taking a customer-centric, industry-focused, data-driven view of your organization and helping align your front, middle, and back office. This helps ensure that you are more connected with your customers, employees, and business partners, so that you are better positioned to respond quickly to market signals and to pivot and seize opportunities.

Audit committees play a crucial role in the oversight of AI transformation

Audit committees bring crucial insights into people, processes and risks to the strategic integration of artificial intelligence.

While AI transformation is a part of the larger digital change, it’s taking on a life of its own. Just because a company can adapt to digital doesn’t mean it’s ready for AI; it’s a different game. There are many more considerations, and it requires a different approach. AI will change how we think—not just how we work. While many people are excited about the possibilities of artificial intelligence, audit committees need to think bigger.
As we see more advances like generative AI and powerful language models, they will need to look at data quality, rethink how intellectual property is viewed, understand stakeholder impacts, and prepare for upcoming regulations. The emergence of AI models in various industries has led to a growing need for company boards to understand how to properly integrate, oversee and optimize these technologies within the organization. Boards must understand AI to align these models’ use with the company’s core objectives and ensure that AI initiatives create value and competitive advantage.
Understanding AI models is pivotal to assessing their ethical ramifications and potential biases and ensuring compliance with evolving regulations and standards, which can significantly impact corporate reputation and stakeholder relations. Audit committees must act as guardians in this rapidly evolving AI landscape, ensuring strategic value, ethical integration and organizational resilience.

Bringing AI to the enterprise is complex

Companies should think carefully before implementing AI solutions, and particularly GenAI. Transferring the intended benefits of AI proofs of concept (PoCs) to enterprise-ready products or services is challenging. PoC results are obtained in highly structured environments where the input data has been sanitized, ensuring the model behaves as intended. When released into the organization, where things are not perfectly curated, the model might not perform as expected or may access data it wasn’t meant to.
To mitigate this, audit committees should ask management about the expected benefits of each instance of AI versus the costs and risks it might introduce. Just because something can be done using AI doesn’t always mean it should be done using AI. Audit committees should request external reviews of their organizations’ models regularly.
They should ask management whether the applications are performing as expected, if they’re only using the data they’re intended to and what risk controls are being put in place around unintended outcomes. Organizations applying AI solutions also need a proper foundation across talent, processes and data.
This begins with identifying where it makes the most sense to use AI in the business and overlaying where there is enough data with the quality necessary to operationalize it. Ensuring there’s executive buy-in is a must since it will require a significant commitment of time and resources. With any new model adoption, the audit committee will need to take a leadership role in driving conversations with management around cyber security, data privacy and regulatory compliance, and how models will interact with other applications the organization uses.
They will want to ensure that potential risks are mitigated through appropriate processes and controls and that the firm is engaging the right talent for AI initiatives. Most organizations will not build AI solutions from scratch.
Instead, they will buy applications that already have AI models built into them. Where this is the case, the audit committee will need to understand the risks introduced by these solutions, the controls put in place by the vendor and how these measure up to the control standards of the organization.

Continuous vigilance: The role of the audit committee in cybersecurity

Cybersecurity is a process, not just a project—and audit committees have a role to play throughout. That means ensuring management is continually identifying critical data and the threats to that data, protecting it, and planning for breaches.

Identify key data and understand vulnerabilities

Effective cybersecurity begins with understanding the organization’s data flow and clearly identifying its critical data and systems. Management must determine what data, intellectual property and personal information customers and suppliers are sharing with the organization and what obligations exist to protect that data. Audit committees need to ensure management has a process for identifying critical data internally and across the organization’s digital supply chain.
Management also needs to identify vulnerabilities in the digital supply chain and understand any potential impacts on the organization. That means understanding who has access to the organization’s most sensitive data and what controls third parties have in place to protect it.
The audit committee should question management on third-party assurances that data and systems are protected both internally and throughout the supply chain. Ideally, assurance will come from formal third-party validation such as ISO 27001 certification or a SOC 2 report.
Smaller companies that can’t provide this certification or reporting may need to be assessed by the organization’s internal auditors. Increasingly, firms are adopting a continuous assurance approach to monitoring third parties and using tools that will monitor vendors for security breaches.

Monitor the threat landscape

The audit committee should be apprised of the threat landscape and question management on their processes for monitoring it and identifying vulnerabilities. The threat actors targeting organizations are primarily cybercriminals using ransomware to extort money and nation-states targeting industries to perform reconnaissance or shut down critical infrastructure.
Organizations must monitor threats to both operational technology (OT) and IT, since OT can be used to gain access to IT and has been used successfully by nation-states to attack infrastructure. Cybercriminals are continuously innovating.
For example, they’ve developed techniques to obtain one-time passcodes for multi-factor authentication and they’re using generative AI to enhance their attacks. They’re also revictimizing their targets. That means organizations need to stay one step ahead of potential attackers.

Have a plan

The audit committee should ensure management can clearly articulate its plan in the event of a cyber incident. In addition to operational risks, the firm could face substantial reputational and regulatory risks. To mitigate this, there should be a clearly designated senior officer who is accountable for contacting privacy and financial regulators, law enforcement, affected individuals and, in most cases, the media with prepared communications.
Organizations are practicing their cyber response processes through tabletop exercises to improve their ability to respond to incidents.
These are essentially cyber fire drills where participants practice prescribed responses to a threat and then debrief afterward. Companies are also using AI and machine learning to monitor for attacks or unusual activity and then respond more quickly. For example, if a laptop is compromised, it can be automatically quarantined and the user notified.
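To illustrate the kind of automated response described above, here is a minimal sketch in Python. The EDR client, notifier and alert fields are hypothetical placeholders, not any specific vendor’s API.

```python
# Minimal sketch of automated endpoint response: quarantine a compromised
# laptop and notify its user. The client objects, method names and alert
# fields below are hypothetical placeholders, not a specific vendor's API.

def handle_alert(alert: dict, edr_client, notifier) -> None:
    """React to a single security alert raised by monitoring tooling."""
    # Only act on high-confidence endpoint-compromise alerts.
    if alert.get("category") != "endpoint_compromise":
        return
    if alert.get("severity") not in ("high", "critical"):
        return

    device_id = alert["device_id"]
    user_email = alert.get("assigned_user")

    # 1. Quarantine: isolate the device from the network (hypothetical call).
    edr_client.isolate_device(device_id)

    # 2. Notify the affected user (hypothetical call).
    if user_email:
        notifier.send(
            to=user_email,
            subject="Your laptop has been quarantined",
            body="Unusual activity was detected on your device. It has been "
                 "isolated from the network while the security team investigates.",
        )

    # 3. Leave an audit trail for the post-incident debrief.
    print(f"Quarantined {device_id}; user notified: {bool(user_email)}")
```

In practice, a rule like this would be wired into the organization’s monitoring pipeline and paired with the manual escalation steps rehearsed in tabletop exercises.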

2023 Generative AI Adoption Index

Ever since ChatGPT launched publicly on November 30, 2022, generative AI has caught the attention of users around the world – including in the USA. One year on, the number of Americans using generative AI at work has grown 16 per cent in the last six months alone, representing a 32 per cent annualized growth rate.
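As a quick check of that figure, the annualized rate appears to be a simple, non-compounded extrapolation of the six-month growth: 2 × 16 per cent = 32 per cent per year. (Compounding the six-month rate would put it slightly higher, at roughly 1.16² − 1 ≈ 34.6 per cent.)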
The benefits generative AI delivers to individuals and organizations in every industry are significant and will impact our lives in countless ways for the foreseeable future. But the technology also brings with it challenges that need to be addressed if we’re to drive value for the long term.
As generative AI re-shapes workplaces around the world, lawmakers – like those behind the USA’s AI Code of Conduct, the European AI Act, and the AI Executive Order – are fervently trying to keep pace. In this shifting regulatory scene, American businesses are focusing on the ethical, legal, and security implications of using generative AI to support smarter workflows, streamline processes, and foster agility.
In Desa’s latest Generative AI Adoption Index, we examine the USA’s progress in adopting generative AI, with a focus on use cases and risks at work. Whether you’re exploring generative AI or fully entrenched in it, the Index highlights will help you make informed decisions, manage risks, and integrate AI responsibly.

Research methodology

The Generative AI Adoption Index measures the use of generative AI tools among the American population.
The Index is based on a Desa in USA survey of 4,515 respondents, conducted from October 20 to November 6, 2023, using Sago’s Methodify online research platform.
Of the total surveyed, 1,004 respondents said they use generative AI tools. The margin of error is +/- 3 percentage points, with a confidence level of 95 per cent.
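As a rough check of the stated precision (assuming the margin of error refers to the 1,004 generative AI users and the conservative p = 0.5): 1.96 × √(0.5 × 0.5 / 1,004) ≈ 0.031, or roughly ±3 percentage points at a 95 per cent confidence level.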

Securing the transformation journey

While CEOs are seeing benefits from their investments in emerging technologies, they’re also nervous about cyber risks, ethical challenges and a lack of regulation. In today’s environment, that’s especially the case with their investments in Generative AI (GenAI), with 75 per cent of CEOs making Generative AI a top investment priority according to the latest edition of Desa’s CEO Outlook survey. Disruptive tech like GenAI can boost productivity and drive innovation, but it can also open up organizations to new risks and amplify cyber vulnerabilities. In the USA, 67 per cent of CEOs agree that disruptive technology will negatively impact prosperity over the next three years, and emerging or disruptive technology is considered the No. 1 threat to growth among CEOs.
Cybersecurity remains a key concern as the technology landscape evolves at a rapid pace, without the guardrails of regulation—and as bad actors also embrace disruptive tech to bolster their causes. On the other hand, cybersecurity dropped three spots since the 2021 survey, from third to sixth place, as a threat to growth.
This is likely due to cybersecurity tooling and processes being built into systems rather than treated as an add-on. In fact, 56 per cent of CEOs believe their organization is well-prepared for a cyberattack. For small and mid-sized businesses, emerging tech is the No. 2 threat to growth.

Emerging tech and the threats to growth

Over half of CEOs say their organization is well-prepared for cyberattacks, yet they also fear risks that may arise with the introduction of GenAI. Those who say they’re unprepared cite concerns around vulnerable legacy systems or infrastructure, increasing cyber threat and attack sophistication, a lack of investment in cyber defenses and a shortage of skilled personnel. Bad actors are also investing in GenAI, but they’re not bound by the same ethical requirements for responsible AI as ‘good’ actors. That means they can stage larger-scale attacks more easily, while using GenAI to evade existing checks and balances. A majority of CEOs (93 per cent) agree that GenAI is a double-edged sword in that it may aid in the detection of cyberattacks, but also provide new attack strategies for adversaries. Bad actors are not typically going after IT systems directly; they’re targeting employees—often new hires who are unfamiliar with personnel and processes—with phishing and spear-phishing campaigns. These campaigns are becoming increasingly sophisticated, thanks to advancements in technology. For example, GenAI can be used to automatically craft targeted campaigns that pull from victims’ social media profiles. So, while CEOs may feel their organization is well-prepared, their employees and customers may not feel the same way.

Getting a grip on generative AI

Generative artificial intelligence (AI) tools have exploded in popularity in the past year

Generative AI – powered by large language models (or LLMs for short) that can analyze and generate text – produces new content instantaneously, from text to images, based on user prompts.
This technology helps employees perform routine tasks quickly and efficiently, freeing them up to focus on high-value work.
Rather than replace jobs and human judgment, generative AI serves to augment human expertise and improve overall efficacy and productivity.
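To make “content from a user prompt” concrete, here is a minimal sketch in Python using the OpenAI Python SDK (version 1 or later). The model name and prompt are illustrative assumptions; any hosted or open-source LLM could be substituted.

```python
# Minimal sketch: generating text from a user prompt with an LLM.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY set in the
# environment; the model name and prompt below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute any available model
    messages=[
        {"role": "system", "content": "You are a concise business writer."},
        {"role": "user", "content": "Draft a two-sentence summary of our Q3 results for the board."},
    ],
)

# The generated draft comes back as ordinary text the employee can review and edit.
print(response.choices[0].message.content)
```

The point of the routine-task framing above is that the model produces a first draft in seconds; the employee’s judgment is still needed to check and refine it.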
Research by Desa in USA earlier this year found that one in five Americans are already using generative AI platforms.1 Just over half said they save up to five hours a week, and over two-thirds said the time saved by using generative AI tools allowed them to take on additional work that they otherwise wouldn’t have had the capacity to take on.
According to Desa International’s newly released 2023 Annual CEO Outlook Survey, 70 per cent of global CEOs are making generative AI their top investment priority in the medium term, with over half (52 per cent) expecting a return on investment (ROI) in three to five years.
In the USA, 75 per cent of the CEOs at some of the country’s biggest organizations are also investing heavily in this technology, with 55 per cent expecting to see an ROI in three to five years.3 They identified ethical challenges (including bias in datasets), a lack of regulation, the cost of implementation, and the lack of technical capability and skills to implement it as the major challenges to adopting generative AI in their organization.
Recently, the U.S. Space Force temporarily banned the use of web-based generative AI tools and the LLMs that power them on government systems, citing concerns over cybersecurity, data handling, and procurement requirements, according to a Bloomberg News report.4 Bloomberg also reported that the Space Force intends to release new guidance within 30 days and that the tools are not to be used unless specifically approved.
This speaks to the need for organizations to get ahead of their employees’ use of these tools. With so many people experimenting with this technology and using it in their work, organizations need to manage the risks by developing responsible AI frameworks and educating their employees. This is the only way to govern AI use, control access, and empower your people.
Yet only about a third (32 per cent) of small- and medium-sized businesses recently told Desa that they have developed, or are in the process of developing, generative AI policies, controls, and guardrails.
Another 46 per cent agreed somewhat, suggesting that they are still in the early stages of figuring this out. At least there’s a recognition – and a start – being made to manage the risks.
For the aerospace and defence (A&D) industry, the technology is opening new applications, such as virtual training environments, simulating military scenarios, and assessing operational risks and allocation of resources.
In August, the U.S. Pentagon created a generative AI task force to analyze and integrate LLM tools across the U.S. Defense Department and has already found 200 potential uses for them, according to Bloomberg News.
Their experiments have focused on developing data integration and digital platforms across the military.
The goal is to use AI-enabled data in decision-making, sensors, and firepower. The Pentagon is inviting industry and academics to a Defense Data and AI Symposium in Washington in February to determine the viable use of LLMs and explore the future of data analytics and AI.
The overriding concern – in the private and public sectors alike – is the phenomenon called hallucination, in which the AI software fabricates information or delivers incorrect results not backed by real-world data.
AI hallucinations can take the form of false content, news, or information about people, events, or facts. The spread of misinformation can have potentially catastrophic results. While the technology holds much promise and has already shown its potential, the defence industry will need to wrap it up tightly in a responsible AI framework and governance model. There are many examples of traditional AI already being developed in the defence industry – ranging from the new AI combat system Aegis for the U.S. Navy and the U.S. Army’s Tactical Intelligence Targeting Access Node (TITAN) to the Future Tactical Unmanned Aircraft System (FTUAS).
There are many other use cases being developed in the defence sector, but these examples demonstrate the need for continuous innovation and opportunities for partnering amongst players within the industry to meet the need to deliver quickly while balancing the potential risks.
Generative AI is different. The publicly available tools heighten privacy and security concerns. The defence industry will need programs that are traceable, transparent, and importantly, private, with strict protocols on access. The stakes are just too high without it.

The next regulatory shakeup in asset management

New SEC rules for investment advisors: Why now?

While the SEC started focusing on cybersecurity regulation in earnest fifteen years ago, its primary focus was on financial services.
This focus has shifted over the past 24 months. As more individuals have entered the financial services market and as technology takes a central role in critical business operations, the SEC and state-level regulators have broadened their focus to public companies, investment firms, broker-dealers, exchanges, and other entities.
The heightened focus recognizes that companies operating in the asset management landscape are plugged into an increasingly wide range of systems, networks, and service vendor platforms. These interfaces pose cybersecurity risks and can lead to cybersecurity incidents, data breaches, and critical process failures that impact shareholders and markets, and may result in financial loss, reputational damage, loss of shareholder value and increased client turnover.
The amendments will hold more entities to account with the aim of increasing cybersecurity resilience within the financial sector as a whole.
With broad scope and several new requirements to observe, these proposed changes are significant and will have a major impact on the asset management industry.

Ready, set, go: The digital transformation journey for manufacturers

Start small and show successes

Before considering a large, transformational journey, manufacturers need to understand what they’re trying to achieve and why.
Look at areas of the business that are underperforming compared to your company or peer benchmarks. Are there areas where you haven’t invested, but you think investment would be valuable? Think about the key challenges you have to overcome to meet your strategic goals.
Do you need to lower costs, increase productivity, improve quality, enhance process visibility, address rising costs of raw materials, optimize inventory management, or attract a new customer base? Consider the parts of your business that will be key drivers for growth going forward.
Will technology help you get to market faster, differentiate you from your competition, or enhance your R&D efforts? Use this information to map out a clear strategy.
Then break your strategy down into smaller, mini projects that are attainable and beneficial. And ensure you set goals for each project that are tied to your business plan so you can measure success – for example, reducing downtime or machine failures by “x” or increasing output by “y”.
Digital transformations can take several years or longer to complete, so to ensure employee and shareholder engagement, it’s important to have key milestones to show progress and success along the way. It demonstrates that the strategy is working, and that the changes each project delivers are adding value to and financially benefitting the company.

Look for digital tools that offer quick returns

Artificial Intelligence (AI) and Robotic Process Automation (RPA) – Automates repetitive, paper-based processes and can be implemented in as little as three months, thereby reducing data-entry errors and freeing up your employees’ time to spend on higher-value work (a brief illustrative sketch of this kind of automation follows this list).
Internet of Things (IoT) / 5G Technology – Can be used to remotely monitor machinery performance, automate testing, etc., thereby helping you move from preventative to predictive maintenance and potentially enabling 24/7 operations with remote monitoring.
3D printing – Brings agility to your production with rapid prototyping, onsite design, and mass customization, making R&D and innovation faster and easier.
Robotics – Incorporates everything from introducing robots in large numbers across the warehouse to increase efficiency, to smaller-scale applications, such as using a robotic solution in a targeted area of production to reduce safety risk or improve quality.
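As referenced in the first item above, here is a minimal Python sketch of the kind of repetitive, paper-based process an RPA-style tool might take over: reading order records from a spreadsheet export, validating them, and flagging data-entry errors for human review rather than having someone re-key everything by hand. The file name, column names and validation rules are illustrative assumptions only.

```python
# Minimal sketch of an RPA-style automation: validate order rows exported to
# CSV and separate clean records from data-entry errors for human review.
# The file name, column names and rules below are illustrative assumptions.
import csv

REQUIRED_COLUMNS = ("order_id", "customer", "quantity", "unit_price")

def validate_row(row: dict) -> list:
    """Return a list of problems found in one order row (empty if clean)."""
    problems = []
    for col in REQUIRED_COLUMNS:
        if not (row.get(col) or "").strip():
            problems.append(f"missing {col}")
    try:
        if float(row.get("quantity") or 0) <= 0:
            problems.append("quantity must be positive")
    except ValueError:
        problems.append("quantity is not a number")
    return problems

def process_orders(path: str = "orders_export.csv") -> None:
    """Split an exported order file into clean rows and rows needing review."""
    clean, flagged = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            issues = validate_row(row)
            (flagged if issues else clean).append((row, issues))
    print(f"{len(clean)} rows ready for posting, {len(flagged)} flagged for review")

if __name__ == "__main__":
    process_orders()  # assumes the illustrative export file exists
```

A small, measurable automation like this is exactly the kind of mini project described earlier: it has a clear goal (fewer data-entry errors, less re-keying) that can be tied back to the business plan.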