With a market projected to reach $70 billion by 2020, artificial intelligence (AI) is poised to have a transformative effect on consumers, enterprises, and governments around the world. At MWC Barcelona, we will explore the real potential of AI, how to manage such a profound technological revolution, and its impact on our professional and personal lives.
AI track sessions at MWC 2020 include:
• Becoming AI-Ready: Surviving & Thriving in a Robot Age
• Diversity & Democratisation: Ethics & Antidotes to Algorithmic Bias
• AI Autonomous Driving
• AI Data Governance Enabling Personalisation at Scale
• Dangerous Deepfakes & Public Distrust: Debating & Combatting Weaponization of AI
• Intelligent Automation at Scale: Exploring the Operator AI Implementation Journey
The proliferation of AI is having a significant impact on society, changing the way we work, live and interact. It is poised to have a major effect on sustainability, climate change and environmental issues. However, as AI acts more autonomously and is used more broadly, AI safety will become even more important. Commonly discussed risks include bias, poor decision-making, low transparency, job losses and malevolent use of AI (e.g. autonomous weaponry). While many believe the rise of artificial general intelligence (AGI) could massively benefit humanity by raising our quality of life as a civilization, some fear its development may lead to global doom.
“Our situation with technology is complicated, but the big picture is rather simple. Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody’s better off—the poor are richer, the rich are richer, everybody’s healthy and free to live out their dreams.” Max Tegmark, Professor, MIT
New technologies will cause both job displacement and job creation. But perhaps their most significant impact is the degree to which they will bring about job change. McKinsey estimates ‘that about 75 million people worldwide will need to switch occupations by 2030 in the event that automation takes hold at a pace in the middle of our range of adoption scenarios. If the speed of adoption is faster, at the top end of our range, it could affect up to 375 million people, or about 14 percent of the global workforce.’
Machine-learning systems can easily pick up biases if their design and data sets are not carefully considered. Given that algorithms are rapidly becoming responsible for more decisions about our lives, by banks, healthcare companies and governments, any kind of inherent bias is a concern. Fears include whether AI algorithms are reinforcing racial stereotypes, gender biases and other prejudices as a result of a lack of diversity among scientists in the field — and of what happens to society when robots can do most jobs.
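One simple way to surface such bias is to compare a model’s outcomes across demographic groups. The sketch below is purely illustrative — the group labels, decisions and the loan-approval setting are assumptions, not data from any real system — and computes per-group approval rates, where a large gap is a red flag worth investigating:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rate from (group, approved) pairs.

    A large gap between groups suggests the model, or the data it
    was trained on, may be treating one group unfavourably.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# Hypothetical model decisions: (demographic group, loan approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
rates = approval_rates(decisions)
# Disparity between the best- and worst-treated groups
disparity = max(rates.values()) - min(rates.values())
```

Real fairness auditing uses richer metrics than a single rate gap, but even this crude check can catch problems before a biased model reaches production.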
AI remains a jobseeker’s market, yet the rapidly growing industry is failing to represent women. Equality in the field of AI expertise is vital to the technology’s ethical success. Sam Altman, CEO of OpenAI, called machine learning “the most skewed field I know of right now” in terms of the gender balance of PhD graduates, adding that AI “will have the most effect on the future of the world that we live in.”
In response to these concerns, ethical frameworks for AI are being written around the world. Last year, Professor Yoshua Bengio co-authored the Montreal Declaration for Responsible AI, which sets out 10 principles that AI development should uphold, including equity, democratic participation, and sustainable development. The UN, the OECD, and the Council of Europe have all formulated guides of their own.
Autonomous driving technology is now a reality. If forecasts hold true, autonomous vehicles (AVs) operating as taxi fleets without human safety drivers could be in widespread use in cities around the world by 2030. Cities such as London, Shanghai, Pittsburgh and San Francisco all have test fleets of autonomous light passenger vehicles in operation (WEF) — not without raising eyebrows among sceptical residents, who have shown near-zero tolerance for accidents caused in the process. Self-driving cars have the potential to save millions of lives, reduce carbon emissions, give back billions of hours of time and restore freedom of movement (GM Cruise). With running costs significantly lower than those of a driver-operated vehicle (estimated at between 44% and 61% lower for journeys between 10 and 20 km), autonomous transport offers a critical lifeline to our acutely congested cities.
Ensuring the accuracy and reliability of the data on which AI models are based is fundamental to business success. AI is only as good as the training data it’s given.
Many business leaders apply ‘black box’ AI models to new product introductions or marketing campaigns on the assumption that the data behind them is accurate. If the wrong data is applied, however, huge investments could be at risk. The challenge is real: a recent KPMG CEO survey showed that nearly 50% of CEOs are concerned about the integrity of the data on which they base their critical decisions.
With vast amounts of data coming from multiple disparate systems, an effective data governance strategy must break down hidden and siloed data across the organization. This empowers everyone to go beyond merely producing and consuming data, to trusting and using it to optimize value through business analytics and AI applications.
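In practice, governance of the kind described here often begins with automated data-quality checks before records from disparate systems reach a model. The following is a minimal sketch — field names, the sample records and any gating threshold are illustrative assumptions — of the sort of completeness and duplicate check a pipeline could gate on:

```python
def data_quality_report(records, required_fields):
    """Flag incomplete and duplicate records before training.

    Returns counts a governance pipeline could gate on, e.g. reject
    a dataset whose completeness falls below an agreed threshold.
    """
    missing = 0
    seen, duplicates = set(), 0
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            missing += 1
        key = tuple(sorted(rec.items()))  # hashable fingerprint of the record
        if key in seen:
            duplicates += 1
        seen.add(key)
    total = len(records)
    return {
        "total": total,
        "incomplete": missing,
        "duplicates": duplicates,
        "completeness": (total - missing) / total if total else 0.0,
    }

# Illustrative customer records merged from two siloed systems
records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},             # incomplete
    {"id": 1, "email": "a@example.com"},  # duplicate
]
report = data_quality_report(records, required_fields=["id", "email"])
```

Production data-governance platforms add lineage, access control and schema validation on top, but the principle is the same: measure the data’s fitness before trusting what is built on it.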
Falsifying photos and videos used to take a lot of work. Now the advent of AI-generated imagery, powered by generative adversarial networks (GANs), has made it easy for anyone to convincingly tweak images and videos, or to generate convincing images of people who don’t exist. These deepfakes have the potential to undermine truth, confuse viewers and sow discord on a larger scale than text-based fake news.
There have been many attempts at dealing with deepfakes, such as implementing digital watermarking or devising more offbeat detection methods. It is difficult to catch them all, however, particularly as it becomes easier and cheaper to generate realistic photos and videos. Researchers suggest that regulators need to act quickly to crack down on deepfakes, that companies producing tools capable of creating them must invest in countermeasures, and that social media firms should integrate those countermeasures directly into their platforms.
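The watermarking idea can be illustrated at its very simplest: register a cryptographic fingerprint of the original media at capture time, then check later copies against that record. This is a toy scheme for illustration only — not any deployed standard, and the media bytes below are placeholders:

```python
import hashlib

def register(media_bytes, registry):
    """Record a SHA-256 fingerprint of media at capture time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    registry.add(digest)
    return digest

def is_authentic(media_bytes, registry):
    """True only if this exact media was registered; any tampering,
    including a deepfake edit, changes the hash completely."""
    return hashlib.sha256(media_bytes).hexdigest() in registry

registry = set()
original = b"\x89PNG...original-frame-data"  # placeholder image bytes
register(original, registry)
tampered = original + b"deepfake-edit"
```

Exact-match hashing breaks under benign re-encoding or resizing, which is why real provenance and detection efforts are considerably more sophisticated — but the sketch shows why a verifiable chain from capture to publication matters.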
Automation is of central importance as mobile operators start to evaluate their commercial 5G strategies. From the operators’ perspective, the primary purposes of network automation are simplified network deployment, OPEX optimisation, and guaranteed user experience and service agility.
According to a 2018 report by Analysys Mason, 56% of mobile operators globally have little or no automation in their networks. Yet by their own predictions, almost 80% expect to have automated 40% or more of their network operations by 2022, and one third expect to have automated over 80%.
To build a more valuable telecoms industry and to take advantage of automation and AI, a simplified network architecture and operations automation are needed so that telecom network infrastructures can self-configure, self-heal, self-optimize and self-evolve. Operators need to offer zero-wait, zero-touch and zero-trouble services, providing the best possible user experience, full lifecycle automation and maximum utilization. (TM Forum)