Last week I was at the AI Summit in New York (as co-chair and presenter), and I’m happy to report that everyone is now comfortable and excited about AI.
Okay, sorry, that’s a biased sample of people who would naturally be comfortable and excited about AI – data scientists, AI developers, AI vendors, and so on. For business leaders and traditional professionals, comfort with and acceptance of AI is a murkier matter.
Worries may ease as AI develops and proves its worth, but people remain nervous about it. One of the most pronounced factors holding back the adoption of artificial intelligence is fear of the unknown: justifiable concerns about bias, distrust of data, and a reluctance to cede control to machines – concerns that have made policymakers nervous about AI as well. Of course, real money – and lots of it – is at stake, and ultimately some fear that AI is more fad than substance.
This lingering suspicion about AI was recently summarized in a study published in Harvard Business Review by Rebecca Karp and Aticus Peterson, both of Harvard. “Based on our ongoing research with dozens of companies, AI solutions most often fail to gain adoption because executives worry about how deploying AI might affect their business,” the co-authors note. “They worry that new technology will displace work, disrupt workplace dynamics, or require new skills to master – and they balk.”
The problem is throwing money at a new approach and then abandoning it. “Walking up to the edge of deploying new technologies only to get cold feet – wasting time and resources – is not the answer,” Karp and Peterson say. “On the contrary, leaders must strategically accelerate the deployment of AI technologies. Too often, organizations spend significant resources developing or acquiring transformative innovations, but don’t think enough about how to deploy them successfully.”
Industry experts across the spectrum of professions agree that AI is eliciting mixed emotions within the executive ranks. “Often, ignorance and fear of the unknown is one of the biggest barriers to AI adoption,” says Elad Tsur, founder of BlueTail (later sold to Salesforce and now known as Salesforce Einstein), and now founder and CEO of Planck.
“There are two diametrically opposed forces that keep AI at bay: fear and irrational exuberance,” acknowledges Danny Tobey, partner at global law firm DLA Piper. “People don’t understand AI, so they worry about its unintended consequences, leading many to bury their heads in the sand when they could be creating value for the business.”
Conversely, buying into the hype also leads to crushed expectations, Tobey continues. “There is so much excitement around AI that some people have unrealistic expectations of what it can and cannot do. They work from the science-fiction view of AI as truly autonomous thinking machines with creative capacity, but the reality is that the power of AI today is deep but narrow. It can look for patterns in the data to solve problems, but it doesn’t yet know what a problem is.”
It’s going to take time for the nervousness around AI to dissipate – and that may happen only when it’s no longer “AI” but a standard part of a process. “Until AI is fully integrated as a standard in all business applications, many organizations will continue to fear the power and complexity of the technology,” says Sharad Varshney, CEO of OvalEdge. “Many business users may be hesitant to adopt AI technologies because they feel overwhelmed by the proposition of using them for critical business tasks. This is why, in my opinion, greater integration is fundamental.”
The key is to help business leaders understand that “AI is fully manageable,” Varshney continues. “There is a misconception that when you integrate AI into your IT infrastructure, you somehow lose control over that aspect. In fact, the opposite is true. While AI and machine learning allow technology to grow and develop independently, ultimate control always remains in the hands of administrators. AI technologies support specific business processes and are designed to achieve those outcomes according to user instructions.”
AI fears can gradually be alleviated by demonstrating the value – rather than the peril – to the business. “For example, when it comes to AI-based facial recognition systems, skepticism was initially very high,” says Tsur. “Even with an accuracy rating north of 99%, people doubted that AI could match or surpass human capabilities.” However, a recent National Institute of Standards and Technology (NIST) research study on facial recognition technology found that, beyond improvements in statistical accuracy, AI also eliminates potential distractions present in the monotonous manual review of repetitive tasks. While there have also been concerns about bias in facial recognition systems, this is being addressed, Tsur adds. “It is possible to train facial recognition models and create processes to address all groups, including those with physical disabilities or religious coverings that limit typical data intake.”
There are two fundamental rules when deploying AI: “garbage in, garbage out” and “correlation does not necessarily imply causation,” says Andrew (AJ) Tibbetts, intellectual property attorney at Greenberg Traurig. “Both sit alongside an overarching rule prohibiting a company from simply putting data into a model and then blindly trusting the result. Before AI can be used reliably, the problem must be fully understood, enough data collected for that overall understanding of the problem, and the data prepared for AI processing. ‘Correlation is not causation’ is perhaps well known, but it may be overlooked by some who are impressed with the promise of AI and racing to deploy it. If you fully understand the problem to which you are applying AI, you can more easily verify the consistency of the responses you get from it, avoiding misunderstandings downstream.”
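The “garbage in, garbage out” rule translates into very concrete practice: screen data for missing or implausible values before any model ever sees it. A minimal sketch of that idea (all field names, ranges, and the `validate_records` helper are hypothetical, not from any particular library):

```python
# Illustrative sketch: basic "garbage in" checks before handing records
# to a model -- reject rows with missing or out-of-range values rather
# than blindly trusting whatever the model produces downstream.

def validate_records(records, required_fields, ranges):
    """Split records into (clean, rejected). A record is rejected if any
    required field is missing or falls outside its allowed range."""
    clean, rejected = [], []
    for rec in records:
        ok = True
        for field in required_fields:
            value = rec.get(field)
            if value is None:
                ok = False
                break
            lo, hi = ranges.get(field, (float("-inf"), float("inf")))
            if not (lo <= value <= hi):
                ok = False
                break
        (clean if ok else rejected).append(rec)
    return clean, rejected

records = [
    {"age": 34, "income": 72000},
    {"age": None, "income": 55000},   # missing value: garbage in
    {"age": 250, "income": 48000},    # implausible age: garbage in
]
clean, rejected = validate_records(
    records, ["age", "income"], {"age": (0, 120), "income": (0, 10**7)}
)
print(len(clean), len(rejected))  # 1 clean record, 2 rejected
```

The point is not the specific checks but the habit: an explicit, inspectable gate between raw data and the model, so bad inputs are caught where a human can see them.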
Keep humans in the loop, Tibbetts also advises. “AI can be particularly useful for making recommendations or initial decisions that can be overridden by a human operator. There is always a risk that an AI system could misidentify a pattern or trend, and a system acting on its own could therefore make a bad decision. So while an AI system may be able to assess credit applications to determine whether an applicant demonstrates sufficient creditworthiness, it may be essential to have the recommendation verified by a human.”
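That human-in-the-loop pattern can be sketched in a few lines: the model only produces a recommendation, and anything it is not highly confident about is escalated to a reviewer who makes the final call. Everything here is illustrative – the toy scoring function, the 0.9 threshold, and the field names are assumptions, not any real credit model:

```python
# Illustrative human-in-the-loop sketch: the model recommends, a human
# decides. Low-confidence cases are always escalated for review.

def model_score(application):
    # Stand-in for a real credit model: a toy score derived from income.
    return min(1.0, application["income"] / 100000)

def recommend(application, auto_threshold=0.9):
    score = model_score(application)
    if score >= auto_threshold:
        return {"score": score, "recommendation": "approve",
                "needs_human_review": False}
    return {"score": score, "recommendation": "review",
            "needs_human_review": True}

def final_decision(application, human_override=None):
    rec = recommend(application)
    if rec["needs_human_review"]:
        # The human reviewer's call wins; default to deny if no reviewer.
        return human_override or "deny"
    return rec["recommendation"]

print(final_decision({"income": 95000}))                            # approve
print(final_decision({"income": 40000}, human_override="approve"))  # approve
```

The design choice worth noting is that the override path is structural, not optional: the code cannot act on a low-confidence recommendation without a human value being supplied.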