3 Keys to Disruptive Innovation

Business.com

At the recent International M&A Partners (IMAP) Fall conference in Miami, an audience of global finance professionals learned how disruption is reshaping companies of every size and in every sector. John Nottingham, a principal with Cleveland-based design consultancy Nottingham Spirk, offered a number of key ideas and best practices for fostering disruptive innovation in any company.

While every business owner should be thinking about innovation, Nottingham has specialized in it for decades. Nottingham and his company have developed more than 1,200 U.S. and international patents, leading to more than $50 billion in sales for innovative consumer and medical products now sold by companies that include Procter & Gamble, Unilever, Kraft, GE, Philips, Black & Decker, Medtronic, Avon, and M&M Mars. 

What products are we talking about? For one, Nottingham Spirk transformed the electric toothbrush business. Working with innovation partner Dr. John's, Nottingham and his team manufactured an electric toothbrush that would retail for $5, opening up massive market opportunities. The result was the Dr. John's SpinBrush, which was sold to Procter & Gamble for $475 million.

Here's another: Ever deal with a messy paint can while doing a home improvement project? You first pry the lid off with a screwdriver. Then you awkwardly tip and pour it while trying not to make a mess. And when you're done, you pound the top back on with a mallet (or use the end of a screwdriver, like me) before figuring out what to do about the paint that gets stuck in the lip of the can. Pretty archaic when you think about it, right? Dutch Boy and Nottingham Spirk created the Twist & Pour paint can with a built-in handle, screw-on lid and integrated spout. Dutch Boy paint sales tripled in the first six months, and the innovative cans are now available across the Sherwin-Williams family of brands.

And another: Where one person might see a molded plastic bedpan, another envisions a kid's toy wagon.  Rotadyne was a rotational molding company known for making plastic medical supplies. It went from making boring (and icky) bedpans to molded plastic swings, cars, sandboxes, and playhouses. If you have a child under 10 in your household, you are likely familiar with the brand now known as Little Tikes. The owners of the company went from manufacturing and selling bedpans to toy sales of more than $600 million, followed by an exit sale to MGA Entertainment.

Here are three key takeaways about innovation and disruption.

It's not just something you do on Thursday

Innovation is more process than bolt of lightning. Sure, great ideas hit all of us in the shower, but if a company truly wants to innovate, it has to create an innovation plan of action, execute on it and hold people accountable. According to Nottingham, most companies spend about 95 percent of their time on core innovation, 4 percent on adjacent innovation and 1 percent on disruptive innovation. The equation, Nottingham states, needs to be reworked as follows: 

70% core innovation = 10% ROI

20% adjacent innovation = 20% ROI

10% disruptive innovation = 70% ROI

Here's why: The return on investment for disruptive innovation is far more compelling than that for core innovation. According to Nottingham, while critical, core innovation typically offers an ROI of 10 percent. Disruptive innovation, on the other hand, offers an ROI of 70 percent.
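Taken at face value (and as a back-of-the-envelope illustration rather than Nottingham's own math), a portfolio split along those lines produces a blended return of roughly:

\[
0.70 \times 10\% \;+\; 0.20 \times 20\% \;+\; 0.10 \times 70\% \;=\; 7\% + 4\% + 7\% \;=\; 18\%
\]

Read this way, the 10 percent of effort devoted to disruptive work contributes as many points to the total as the 70 percent devoted to core work, which is the imbalance Nottingham urges leaders to exploit.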

If you really want to change paradigms and shake up the known universe, focus more on disruption and less on adjacent or core innovation. And don't ask the team that's working on iterative, core changes to also think about disruption, or vice versa. Silo your innovation teams, or outsource part of the work, and put processes in place so that creativity flourishes across the innovation mission.

Workshopping and testing

We have all seen seemingly great ideas for products or services that never get traction. We are a nation of innovators, but only 5 percent of patents are ever commercialized, and 97 percent of those never make any money.

Why? Most entrepreneurs and companies falter at the point where they ask if their product can be manufactured and licensed at a competitive price. Before going into mass production and before making big financial commitments to manufacture, Nottingham advocates for workshopping and testing to determine the viability of the product in real time.

Going back to the Dutch Boy screw-top can: everyone thought they had a winner, but the company first tried it out in one market believed to be a good representative sample. After the product flew off the shelves in the test market, Sherwin-Williams took the Twist & Pour can nationwide. Today, Amazon makes it easy to test-market products online while securing detailed sales data and shopper feedback.

Innovation should be every CEO's focus

Time has proven that really big companies are better at buying innovation than at doing it themselves. Lower- and mid-market businesses have the advantage of being nimble and not bogged down in bureaucracy, and the big players know this. About 20 years ago, Bill Gates was asked what he worried about at Microsoft, and he didn't name big competitors like Apple or Oracle. He said, "I worry about someone in a garage inventing something that I haven't thought of."

Innovation needs to be part of every company's business plan, and it must be a process that is followed strategically over the long haul.

How does your business stack up? Are you ready to disrupt?

How to Choose the Right Accountant for Your Business

Accountancy is an essential tool in the small business arsenal. Without good financial practice, the foundations of a stable brand simply cannot be built. While some microbusinesses may opt to manage their own finances, many larger firms require support from experts as their economic structure becomes more complicated. From managing payments to dealing with government obligations and taxation, accountants can provide a backbone on which the rest of a business is supported.

But choosing an accountant is a more important task than you might have previously thought.

Similar to all facets of the modern business operation, choosing the right person or firm for the job is essential for creating the best opportunities. This is because accountants have different specialties, each with particular skills, strengths and weaknesses. They have unique experiences and varied qualifications and accreditations. While many may offer the same basic services, it is how they offer these services – along with the specialized knowledge they bring – that is so important to achieving your business's specific development goals.

This then leads to an important question – how do you choose the right accountant for your business? The answer is simple and comes in two stages.

Stage one: Understand your own business needs

You can't choose the right accountant for your business if you don't understand what both your financial and growth needs are.

Consider that your accountant is like a piece of sports equipment. You wouldn't invest in a tennis racket without first considering things like grip size, weight, strings type and durability, for example. What are these based on? Your needs as a player and an individual.

So how do you identify the core needs of your business as they relate to accountancy? Well, start by thinking about the basics:

What does your business do?
What industry does it operate in?
What is the turnover?
How many people do you employ?
How many clients/customers do you work with?
How much can you afford to spend on an accountant?

To help you get a sense of what I mean, here's an example.

Suppose you manage a digital marketing firm with a $200,000 turnover, three employees and a roster of 15 clients, earning enough to pay for outsourced accountancy support. Now that we know the basics, we need to think about the financial tasks involved in the business. Which elements of accountancy most need managing?

Our make-believe firm might have a lot of clients, but with relatively few employees and only a short list of suppliers, it doesn't need much help with things like payroll or third-party expense monitoring. What would be important? Things like:

Managing client invoices
Paying taxes
Balancing the books

Of course, your financial needs will vary, and our example firm may need other support but to a lesser extent. However, by carefully thinking about what kind of tasks will be a priority for your accountant, you can start to understand what your business really needs from a finance expert.

Finally, think about tasks that aren't directly related to financial management but are more about economic and business growth. Accountants can support profitability in other ways – they aren't just bean counters that keep finances under control. They can also offer services like:

Company audits to address inconsistencies and identify overpayments
Consultancy on growth and support in making financial decisions
Building forecasts and projections for investors
Establishment of businesses and development of new ventures

Going back to the example business, let's say our digital marketing firm is looking for support in reaching its growth target of $500,000 in annual turnover. To do this, it will need advice and help building projections for investment opportunities.

Once we've thought about this, we can move on to the next step of choosing the right accountant.

Stage two: Understand what kind of accountant you need

By taking the time to understand our business's goals and needs as they relate to financial management and growth, we can start to paint a picture of what kind of accountant we actually need.

In the case of our example, we're a small business that needs support from an accountancy firm familiar with digital marketing, one that can help us manage our internal finances while also offering advice on what we can do to improve our bottom line, boost revenue and attract investors.

This knowledge helps us narrow down what we are looking for. We are looking for an accountancy firm that:

Specializes in small businesses
Has experience working in the digital marketing sector
Can provide profit consultancy advice and business growth analysis

By contrast, if our example were a multinational construction chain employing 3,000 people, with massive staff turnover, looking to maintain profitability while moving into architectural design as a side project, we'd need an entirely different type of accountant.

With a picture of your perfect accountancy firm or accountant in mind, you can now start looking for them. Hunt online and locally, seeking the best fit with your criteria. It may be a challenge to find exactly the right accountant for each and every need, but the closer you can get to your ideal candidate, the better they can help you achieve the success of your dreams.

Knowing these factors also helps you avoid firms and accountants that are not the right fit. Many accountancy agencies take on any client that comes their way simply to boost revenue, but they may lack the experience or service opportunities you really need.

If you don't consider your needs first, you won't be able to tell if a company is offering to support them.

The important takeaway from this article is that accountants are unique, just like you. Don't hire the first one you find or the cheapest firm. Consider what's important, how different accountants can help you progress towards certain goals or complete certain tasks, and you'll find you get a lot more out of your investment.

HBR.org

While some companies — most large banks, Ford and GM, Pfizer, and virtually all tech firms — are aggressively adopting artificial intelligence, many are not. Instead they are waiting for the technology to mature and for expertise in AI to become more widely available. They are planning to be “fast followers” — a strategy that has worked with most information technologies.

We think this is a bad idea. It’s true that some technologies need further development, but some (like traditional machine learning) are quite mature and have been available in some form for decades. Even more recent technologies like deep learning are based on research that took place in the 1980s. New research is being conducted all the time, but the mathematical and statistical foundations of current AI are well established.

System Development Time

Beyond the technical maturity issue, there are several other problems with the idea that companies will be able to adopt quickly once technologies are more capable. First, there is the time required to develop AI systems. Such systems will probably add little value to your business if they are completely generic, so time is required to tailor and configure them to your business and the specific knowledge domain within it. If the AI you are adopting employs machine learning, you will have to round up a substantial amount of training data. If it manipulates language — as in natural language processing applications — it can be even more difficult to get systems up and running. There is a lot of taxonomy and local knowledge that needs to be incorporated into the AI system — similar to the old “knowledge engineering” activity for expert systems. AI of this type is not just a software coding problem; it is a knowledge coding problem. It takes time to discover, disambiguate, and deploy knowledge.

Particularly if your knowledge domain has not already been modeled by your vendor or consultant, it will typically take many months to architect. This is particularly true for complex knowledge domains. For example, Memorial Sloan Kettering Cancer Center has been working with IBM for over six years to use Watson to treat certain forms of cancer, and the system still isn’t ready for broad use despite the availability of high-quality talent in cancer care and AI. There are several domains and business problems for which the requisite knowledge engineering is available, but it still needs to be adapted to a company’s specific business context.

Integration Time

Even once your systems have been built, there is the issue of integrating AI systems into your organization. Unless you are employing AI capabilities embedded within existing packaged application systems that your company already uses (e.g., Salesforce Einstein features within your CRM system), the fit with your business processes and IT architecture will require significant planning and time for adaptation. The transition from pilots and prototypes to production systems for AI can be difficult and time-consuming.

Even if your organization is skilled at moving pilots and prototypes into production, you will also have to re-engineer business processes for AI to have its full impact on your business and industry. In most cases AI supports individual tasks, not entire business processes, so you will have to redesign business processes and new human tasks around it. If you want to affect customer engagement, for example, you will need to develop or adapt multiple AI applications and tasks that relate to different aspects of marketing, sales, and service relationships.

Human Interactions with AI Time

Finally, there are the human challenges of AI to overcome. Very few AI systems are fully autonomous; most instead focus on augmenting human workers and being augmented by them. New AI systems typically mean new roles and skills for the humans who work alongside them, and it will typically take considerable time to retrain workers on the new process and system. For example, investment advice companies providing “robo-advice” to their customers have often attempted to get human advisors to shift their focus to “behavioral finance,” or providing advice and “nudges” to encourage wise decisions and actions in investing. But this sort of skill is quite different from providing advice about which stocks and bonds to buy, and it will take some time to inculcate.

Even if the goal for an AI system is to be fully autonomous, it is likely that some period of time in augmentation mode will be necessary. During this period, a critical piece of machine learning occurs through interaction between the system and its human users and observers. Called interaction learning, this is a critical step for organizations to understand how the system interacts with its ecosystem. They can often gather new data sets and begin to bake them into algorithms during this period — which often takes months or years.

Governance Time for AI Applications

While AI systems are geared to provide exponential scale and predictions, they will need a much broader governance approach than the classic controls- and testing-driven one. The efficacy of AI algorithms decays over time because they are built on historical data and recent business knowledge. The algorithms can be updated as the machine learns from patterns in new data, but they will need to be monitored by subject matter experts to ensure the machine is interpreting the change in business context correctly. Algorithms will also have to be continuously monitored for bias. For instance, if an AI system is trained to create product recommendations based on customer demographics and the demographics change dramatically in new data, it may provide biased recommendations.
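To make that monitoring concrete, here is a minimal sketch in Python of the kind of check a subject matter expert might automate: it compares the demographic mix of incoming data against the mix the model was trained on and flags any segment that has drifted past a threshold. The segment names, baseline shares, and threshold are hypothetical, not prescriptions.

# A minimal sketch of drift monitoring for a deployed recommendation model.
# All segment names, baseline shares, and thresholds are illustrative only.

from collections import Counter

# Demographic mix the model was trained on (hypothetical shares).
TRAINING_BASELINE = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}

DRIFT_THRESHOLD = 0.10  # flag any segment whose share moves by more than 10 points


def demographic_shares(records):
    """Compute the share of each age segment in a batch of new records."""
    counts = Counter(r["age_segment"] for r in records)
    total = sum(counts.values())
    return {segment: counts[segment] / total for segment in counts}


def drift_report(new_records, baseline=TRAINING_BASELINE, threshold=DRIFT_THRESHOLD):
    """Return the segments whose share has shifted enough to need expert review."""
    current = demographic_shares(new_records)
    flagged = {}
    for segment in set(baseline) | set(current):
        shift = abs(current.get(segment, 0.0) - baseline.get(segment, 0.0))
        if shift > threshold:
            flagged[segment] = round(shift, 3)
    return flagged


if __name__ == "__main__":
    # A batch of recent customers, noticeably older than the training population.
    recent = ([{"age_segment": "55+"}] * 40
              + [{"age_segment": "18-34"}] * 30
              + [{"age_segment": "35-54"}] * 30)
    print(drift_report(recent))  # e.g. {'55+': 0.25, '18-34': 0.15}

In practice a check like this would feed a review queue rather than retrain the model automatically, since the point is that humans must judge whether the shift reflects a real change in business context.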

Governance will also include watching for customer fraud. As the systems become smarter, so will the users. They may try to game the systems with fraudulent data and activities. Monitoring and preventing this will require sophisticated instrumentation and human monitoring in the context of your business.

Winners Take All

It may, then, take a long time to develop and fully implement AI systems, and there are few if any shortcuts to the necessary steps. Once those steps have been successfully undertaken, scaling — particularly if the company has a plentiful supply of data and has mastered the knowledge engineering — can be very rapid. By the time a late adopter has done all the necessary preparation, earlier adopters will have taken considerable market share — they’ll be able to operate at substantially lower costs with better performance. In short, the winners may take all and late adopters may never catch up. Think, for example, of the learning and capability that a company like Pfizer — which has, according to one of the leaders of the company’s Analytics and AI Lab, more than 150 AI projects underway — has already accumulated. Tech companies like Alphabet have even more learning; that company had 2,700 AI projects underway as far back as 2015.

Admittedly, some steps can be accelerated by waiting if a company is willing to compromise on its unique knowledge and ways of conducting business. Vendors are developing a vast variety of knowledge graphs and models that use techniques ranging from natural language processing to computer vision. If one exists for your industry or business problem, and you’re willing to adopt it with little modification, that will speed up the process of AI adoption. But if you do not tweak it to fit your context and build everything around it, you may lose your distinctive competence or competitive advantage.

The obvious implication is that if you want to be successful with AI and think there may be a threat from AI-driven competitors or new entrants, you should start learning now about how to adapt it to your business across multiple different applications and AI methods. Some leading companies have created a centralized AI group to do this at scale. Such central groups focus on framing the problems, proving out the business hypothesis, modularizing the AI assets for reusability, creating techniques to manage the data pipeline, and training across businesses. One other possibility may be to acquire a startup that has accumulated substantial AI capabilities, but there will still be the need to adapt those capabilities to your business. In short, you should get started now if you haven’t already, and hope that it’s not too late.


The European Union has experienced a series of disasters over the last 10 years, each one of which has posed a major threat to its stability. First, there was the fallout from the 2008-9 financial crisis and the arguably ill-judged imposition of austerity on the Union’s southern members. Combined with the migration crisis, this encouraged the rise of populist anti-EU movements.  And then, there was the British vote to leave the EU altogether.

That the EU remains largely intact amidst all this is due in no small part to Germany. Under Chancellor Angela Merkel, Germany has occasionally been willing to bear a disproportionate burden of the costs of crisis management. Even when this was the case, it could not always mobilize support among other member states for its stance on issues such as the refugee crisis. Germany has carried out what economist Charles Kindleberger once memorably described as the “bribery and arm twisting” necessary to keep alliances such as the EU afloat. In short, it has, to a certain extent, become Europe’s hegemon, an ancient Greek term designating the dominant member of an alliance or confederation.

Unfortunately, it’s a role that Germany has increasingly had to shoulder alone – and one that, as I argue in my recently published book, is unsustainable. As a “first among equals,” Germany lacks the dominance or magnitude of advantage that typically exists in hegemonic relationships. Its traditional partner at the EU’s helm, France, has struggled with its own economic problems since 2008 and has taken a back seat in driving EU policy. And even before the Brexit vote, Britain, the EU’s second largest economy, had detached itself from the EU’s inner circle by remaining outside the Eurozone and the region of borderless travel known as the Schengen Area.

What’s more, Germany’s reluctance to alleviate the pain of economically-strapped member states aggravated rather than eased the Eurozone crisis, which was ultimately managed by the ECB rather than by Germany. Germany’s failure to consult on migration policy resulted in a number of other EU member states resisting demands to follow its stance during the refugee crisis. Overall, it cannot subordinate its own needs to the group’s needs to the extent necessary for a hegemon to retain its partners’ allegiances.

This means that if the EU is to survive the onset of a new crisis (or the flare up of an existing one) it will need stronger, more inclusive leadership than Germany has provided on its own. Given the absence of a single European country large enough to take on the role of hegemon on its own, it is likely two or more countries will need to come together to form a hegemonic coalition. There are three conceivable options:

A revived Franco-German coalition

The resurrection of the EU’s traditional leadership constellation is politically feasible given the election of Emmanuel Macron as French president in 2017 and the return of the Grand Coalition of German political parties in 2018. Franco-German cooperation can still be a powerful magnetic force in the EU, and a bilateral Franco-German bargain can often provide a template for a larger Union agreement. This time around, however, it is likely the traditional roles would be reversed, with Paris looking to accelerate the speed of change and Germany seeking to slow it down, particularly on initiatives aimed at raising the volume of financial transfers between Eurozone member states.

A Weimar Coalition

While a rejuvenated Franco-German coalition appears obvious, it may not have the influence to mobilize Central and Eastern European member states, given the vast gap between its vision and that of the Polish and Hungarian governments in particular. An expanded coalition that includes Poland, a nominal partner of France and Germany in the “Weimar Triangle” founded after the end of the Cold War, could have greater legitimacy. However, as long as the conservative and Eurosceptic Law and Justice Party remains in office in Poland, such a coalition will not materialize.

A new Hanseatic Coalition

The third conceivable coalition is named after the medieval association of trading cities stretching from the Netherlands in the west to the Baltic Sea in the east. This coalition would include Germany and the eight northern European member states whose finance ministers began to meet in early 2018 to discuss reforming the Eurozone. On monetary, fiscal, and EU budget policies these states are closer to Germany than Germany is to France. However, given their geographical and ideological positions it is unlikely they could integrate and mobilize the support of Southern, Central, and Eastern European members or that Germany would weaken its relationship with France in their favor.

An uncertain future

Whatever its makeup, any new hegemonic coalition will face an uncertain task. The nationalist and Eurosceptic trend, which began in the 1990s and gained momentum during the recent refugee crisis amid public opposition to mass immigration and long-standing fears about globalism and the dilution of national identity, has crystallized into a major political force.

Across Europe, nationalist Eurosceptic parties have made significant electoral gains, in some cases taking office, in others becoming the main voice of the opposition, forcing centrist leaders to adapt their policies to win back conservative votes.  Although there remains strong resistance to far-right parties, as evidenced by Macron’s victory, support in France for the right-wing National Front party is higher than ever. If President Macron fails in his efforts to reform and rejuvenate the French economy, as he well might, the extreme Right is well-placed to benefit from his failure.

The situation in Germany is similar. Last year, the AfD became the first extreme right-wing Eurosceptic party to win seats in Germany’s federal parliament since 1953. Having won 12.6% of the vote, it is now the country’s biggest opposition party, putting pressure on center-right parties to accommodate Eurosceptic opinions.

Particularly in the wake of Brexit, Europe needs a new champion and without strong support from a politically dominant hegemon or hegemonic coalition, the risk that the Union will fall apart in new crises is very real. And we don’t have much time left for stabilizing hegemonic leadership to develop. Right now, there is a two-to-four-year window – at most – before the next French and German elections. If these countries’ current leaders do not take the opportunity to weld Europe more closely together, then the next big crisis may well signal the beginning of the end to the nearly 70-year-old project that has kept the peace in Western Europe, fostered its democracies, and helped to deliver growth and prosperity by keeping European countries’ economies and societies open to each other.

MIT Sloan Management Review

As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.

Ethical issues with AI can have a broad impact. They can affect the company’s brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it’s still early to address AI ethical issues, but our surveys and others suggest that about 30% of large companies in the U.S. have undertaken multiple AI projects, with smaller percentages outside the U.S., and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical effects.

Many executives are beginning to realize the ethical dimension of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches to deal with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.

Make AI Ethics a Board-Level Issue

Since an AI ethical mishap can have a significant impact on a company’s reputation and value, we contend that AI ethics is a board-level issue. For example, Equivant (formerly Northpointe), a company that produces software and machine learning-based solutions for courts, faced considerable public debate and criticism about whether its COMPAS system for parole recommendations involved racially oriented algorithmic bias. Ideally, consideration of such issues would fall under a board committee with a technology or data focus. Unfortunately, such committees are relatively rare; where none exists, the entire board should be engaged.

Some companies have governance and advisory groups made up of senior cross-functional leaders to establish and oversee governance of AI applications or AI-enabled products, including their design, integration, and use. Farmers Insurance, for example, established two such boards — one for IT-related issues and the other for business concerns. Along with the board, governance groups such as these should be engaged in AI ethics discussions, and perhaps lead them as well.

A key output of such discussions among senior management should be an ethical framework for how to deal with AI. Some companies that are aggressively deploying AI, like Google, have developed and published such a framework.

Promote Fairness by Avoiding Bias in AI Applications

Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.

Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.

Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.

Most organizations should develop a set of risk management guidelines to help management teams reduce algorithmic bias within their AI or machine learning applications. They should address such issues as transparency and interpretability of modeling approaches, bias in the underlying data sets used for AI design and training, algorithm review before deployment, and actions to take when potential bias is detected. While many of these activities will be performed by data scientists, they will need guidance from senior managers and leaders in the organization.
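As one hedged illustration of what an "algorithm review before deployment" might include, the sketch below compares approval rates across groups and flags any group whose rate falls below four-fifths of the best-treated group's rate, a common rule of thumb. The group labels, sample decisions, and threshold are invented for the example.

# A minimal pre-deployment bias check: compare positive-outcome rates across groups.
# Group labels, decisions, and the 0.8 threshold are illustrative only.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}


if __name__ == "__main__":
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 55 + [("group_b", False)] * 45)
    print(disparate_impact_flags(sample))  # {'group_b': 0.69} -> needs investigation

A real review would go much further, examining training data, proxy variables, and model internals, but a simple outcome check like this is a reasonable first gate for the guidelines described above.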

Lean Toward Disclosure of AI Use

Some tech firms have been criticized for not revealing AI use to customers, even in prerelease product demos, as with Google’s AI conversation tool Duplex (which now discloses that it is an automated service). Nontechnical companies can learn from their experience and take preventive steps to reassure customers and other external stakeholders.

A recommended ethical approach to AI usage is to disclose to customers or affected parties that it is being used and provide at least some information about how it works. Intelligent agents or chatbots should be identified as machines. Automated decision systems that affect customers — say, in terms of the price they are being charged or the promotions they are offered — should reveal that they are automated and list the key factors used in making decisions. Machine learning models, for example, can be accompanied by the key variables used to make a particular decision for a particular customer. Every customer should have the “right to an explanation” — not just those affected by the GDPR in Europe, which already requires it.
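For a simple scoring model, listing the key variables behind a particular decision can be lightweight. The sketch below, with made-up feature names, weights, and customer values, shows one way an automated pricing system might return both a quote and the top factors that drove it.

# A minimal sketch of a per-decision explanation for a linear scoring model.
# Feature names, weights, and the example customer are hypothetical.

WEIGHTS = {
    "years_as_customer": -2.0,   # longer tenure lowers the quoted price
    "monthly_usage": 0.5,        # heavier usage raises it
    "support_tickets": 1.5,      # frequent support needs raise it
}
BASE_PRICE = 50.0


def quote_with_explanation(customer, top_n=2):
    """Return the quoted price plus the top factors behind it."""
    contributions = {f: WEIGHTS[f] * customer[f] for f in WEIGHTS}
    price = BASE_PRICE + sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    explanation = [f"{name}: {value:+.2f}" for name, value in top]
    return price, explanation


if __name__ == "__main__":
    customer = {"years_as_customer": 4, "monthly_usage": 30, "support_tickets": 2}
    price, why = quote_with_explanation(customer)
    print(price)  # 60.0
    print(why)    # ['monthly_usage: +15.00', 'years_as_customer: -8.00']

More complex models need heavier explanation machinery, but the principle is the same: the customer sees not only the automated outcome but also the handful of variables that mattered most.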

Also consider disclosing the types and sources of data used by the AI application. Consumers who are concerned about data misuse may be reassured by full disclosure, particularly if they perceive that the value they gain exceeds the potential cost of sharing their data.

While regulations requiring disclosure of data use are not yet widespread outside of Europe, we expect that requirements will expand, most likely affecting all industries. Forward-thinking companies will get out ahead of regulation and begin to disclose AI usage in situations that involve customers or other external stakeholders.

Tread Lightly on Privacy

AI technologies are increasingly finding their way into marketing and security systems, potentially raising privacy concerns. Some governments, for example, are using AI-based video surveillance technology to identify facial images in crowds and social events. Some tech companies have been criticized by their employees and external observers for contributing to such capabilities.

As nontech companies potentially increase their use of AI to personalize ads, websites, and marketing offers, it’s probably only a matter of time before these companies feel pushback from their customers and other stakeholders about privacy issues. As with other AI concerns, full disclosure of how data is being obtained and used could be the most effective antidote to privacy concerns. The pop-up messages saying “our website uses cookies,” a result of the GDPR legislation, could be a useful model for other data-oriented disclosures.

Financial services and other industries increasingly use AI to identify data breaches and fraud attempts. Substantial numbers of “false positive” results mean that some individuals — both customers and employees — may be unfairly accused of malfeasance. Companies employing these technologies should consider using human investigators to validate frauds or hacks before making accusations or turning suspects over to law enforcement. At least in the short run, AI used in this context may actually increase the need for human curators and investigators.
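A quick, hypothetical calculation shows why false positives pile up even when a detector looks accurate on paper; the fraud rate and error rates below are invented, but the arithmetic is the point.

# Back-of-the-envelope estimate of false positives in fraud screening.
# The base rate and detector error rates are hypothetical.

transactions = 1_000_000
fraud_rate = 0.001          # 1 in 1,000 transactions is fraudulent
detection_rate = 0.95       # share of real fraud the system catches
false_positive_rate = 0.02  # share of legitimate transactions wrongly flagged

true_positives = transactions * fraud_rate * detection_rate
false_positives = transactions * (1 - fraud_rate) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(int(true_positives), int(false_positives))  # 950 real frauds vs. 19,980 false alarms
print(round(precision, 3))                        # ~0.045

With numbers like these, only about one in twenty flagged customers is actually fraudulent, which is why a human review step before any accusation matters so much.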

Help Alleviate Employee Anxiety

Over time, AI use will probably affect employee skill sets and jobs. In the 2018 Deloitte survey of AI-aware executives, 36% of respondents felt that job cuts from AI-driven automation rise to the level of an ethical risk. Some early concerns about massive unemployment from AI-driven automation have diminished, and now many observers believe that AI-driven unemployment is quite likely to be marginal over the next couple of decades. Given that AI supports particular tasks and not entire jobs, machines working alongside humans seems more probable than machines replacing humans. Nonetheless, many workers who fear job loss may be reluctant to embrace or explore AI.

An ethical approach is to advise employees of how AI may affect their jobs in the future, giving them time to acquire new skills or seek other employment. As some have suggested, the time for retraining is now. Bank of America, for example, determined that skills in helping customers with digital banking will probably be needed in the future, so it has developed a program to train some employees threatened by automation to help fill this need.

Recognize That AI Often Works Best With — Not Without — Humans

Humans working with machines are often more powerful than humans or machines working alone. In fact, many AI-related problems are the result of machines working without adequate human supervision or collaboration. Facebook, for example, has announced it will add 10,000 additional people to its content review, privacy, and security teams to augment AI capabilities in addressing challenges with “fake news,” data privacy, biased ad targeting, and difficulties in recognizing inappropriate images.

Today’s AI technologies cannot effectively perform some tasks without human intervention. Don’t eliminate existing, typically human, approaches to solving customer or employee problems. Instead — as the Swedish bank SEB did with its intelligent agent Aida — introduce new capabilities as “beta” or “trainee” offerings and encourage users to provide feedback on their experience. Over time, as AI capabilities improve, communications with users may become more confident.

See the Big Picture

Perhaps the most important AI ethical issue is to build AI systems that respect human dignity and autonomy, and reflect societal values. Google’s AI ethics framework, for example, begins with the statement that AI should “be socially beneficial”. Given the uncertainties and fast-changing technologies, it may be difficult to anticipate all the ways in which AI might impinge on people and society before implementation — although certainly companies should try to do so. Small-scale experiments may uncover negative outcomes before they occur on a broad scale. But when signs of harm appear, it’s important to acknowledge and act on emerging threats quickly.

Of course, many companies are still very early in their AI journeys, and relatively few have seriously addressed the ethics of AI use in their businesses. But as bias, privacy, and security issues become increasingly important to individuals, AI ethical risk will grow into an important business issue that deserves a board-level governance structure and process.
