Taking back our AI Future…now

I have the privilege of leading a book club for a group of passionate readers and learners at AWS. Last month, we selected BetweenBrains: Taking Back Our AI Future as the book of the month. We invited the co-author, Dr. George Tilesch, to join the conversation. Dr. George is a senior global innovation and Artificial Intelligence (AI) expert who serves as a conduit and trusted advisor between the US and EU ecosystems, specializing in AI Ethics, Impact, Policy, and Governance. Most recently, he was Chief Strategy and Innovation Officer for Global Affairs, the public interest arm of Ipsos, a top-three global research firm. In that role, he led the Digital Impact and Governance research and advisory practice. Through his thought leadership, he advises governments, think tanks, and corporations on AI strategy.

Dr. George co-authored the book with NASA innovation leader Dr. Omar Hatamleh.

Below is an excerpt of my virtual interview with Dr. George. I hope you enjoy it.

There are already many books about artificial intelligence; what compelled/inspired you to write this book, and why is now the right time for everyone to read BetweenBrains?

First, when we started writing more than three years ago, the situation was wildly different in terms of the sheer quantity of AI books available. For a transversal technology on the path to becoming near-ubiquitous, one that also triggers fundamental philosophical and ethical questions, it is actually very welcome to have many books on the topic, as they expose myriad perspectives.

We were inspired to research and write at a point in time when there was an explosion in both AI capabilities and AI investment. Those days were characterized by boundless techno-optimism and the ecstasy of exponential returns. This period was the time of big bets on AI, and almost nobody talked about dangers or risks; even those who did were talking about the dangers of a very distant future. What has changed since is that media coverage has made the topic much more mainstream. Unfortunately, AI automatically became part of various mainstream black-or-white narratives circulating in our low-trust, high-tension world that is now primarily digital. As a result, people became equally concerned and curious about AI, but without the general, unbiased understanding that would enable them to take stances grounded in thorough analysis and the right mindset.

When we started to write, we were pretty explicit about a few principles and the mission. We wanted to write a balanced book applying critical thinking and thorough analysis, but also one with a strong and uncompromising moral anchor. We wanted to write a beautiful mixture: full of accurate and relevant data on how our world has already changed because of AI, yet also timeless in a sense, because of the fundamental philosophical questions AI inevitably raises. We thought it essential to provide a peek into the field experience we gathered while working with world leaders and top executives, to make some aspects very practical and human at the same time. We wanted to cover the now and the near term. We believed much of the AI narrative was being captured by notions of Superintelligence, Singularity, or Robot Rights. These notions had far less relevance and impact on our lives than the AI technologies that were already out, or ready to jump, nearly invisibly to most.

And most importantly, we wanted to ask the questions, both profound and hard, that would enable humanity to define the purpose and steer the course of a beneficial AI future for our civilization. We wanted to expand the horizons of those AI practitioners and stakeholders who mostly see one slice of the enormous AI pie. We also wanted to talk to the broadest possible audience of informed digital citizens, who are increasingly seeking answers to the myriad question marks that even AI’s current power and promise trigger.

What excites you the most about the potential (future) of Artificial Super Intelligence (ASI)? What scares you the most?

On the one hand, we expressly wanted to avoid getting into ASI territory in detail. It is a lure that is very hard to resist, since I am convinced that many people, myself included, first got excited about AI as kids reading sci-fi books. ASI captivates the mind and deserves much dialogue, but in our present situation, too much talk about ASI comes at the expense of questions that are far more urgent and impactful. The book serves that sense of urgency, which we, as authors, have gathered from the field.

However, I do want to answer your question. ASI, only if crafted and bound successfully as a tool and companion to humanity, can lead to Utopia and bring about a currently unimaginable quality of life for our civilization. However, the viability of such an outcome is hard to foresee at this point. So much needs to change in our mindsets, values, and institutions… Therefore, caution is very much in order, as echoed by many concerned luminaries. I think it is still worth pursuing, but we must proceed with maximum caution and foolproof models, because we are toying with forces that are unprecedented and highly explosive. A fully autonomous general machine intelligence is trained to maximize its capabilities and exploit weaknesses to overcome hurdles on the path to fulfilling its objectives, and we can easily become those hurdles. My biggest fear is the mere seconds or minutes a newborn AGI would need to explode in myriad directions and become ASI: I worry that we as humans may not be prepared for such a jump. Foresight is not our core strength.

Flashing back and then fast-forwarding to the present, what has surprised you the most about your journey thus far (in writing and publishing this book)?

The most positive surprise was that while many AI stakeholder leaders are hampered, or even trapped, in the organizational logic they serve as employees, they share many concerns and fears as citizens, consumers, and parents: as human beings. These shared concerns open the door to connecting people, seeking consensus, and collaborating on a shared vision of the true civilizational purpose and stewardship of AI. The other side of the coin is the sense of helplessness many of them have: the mainstream AI narrative has been one of “inevitability” for too long, so people don’t think they have a say in shaping it. Many leaders still treat AI as pure hype or “just another Industrial Revolution.” We firmly disagree, demonstrating our thesis with a set of AI Power Principles that show why it is wildly different this time. Coming together to rein in the future in a human-centric way is therefore very much an aspirational goal and an uphill battle, but an achievable one.

What is the best actionable advice that you’ve received that continues to be a source of inspiration in good times and challenging times?

For our times of distrust, disinformation, mindless partisanship, and social fragmentation, the anchor for me has always been “strong opinions, loosely held.” It means that we have to have firm moral convictions, deeply informed factual perspectives, and the courage to speak up. At the same time, those convictions need to be met with an equal amount of empathy, critical thinking, self-checking, and wisdom. The very core of our social cohesion is being attacked every single day by overwhelming forces. The final battlefield is within our minds; we all need to do a ton of homework, both individually and as a society.

How would you advise executives, government agencies, and political leaders to use AI (for good) while eliminating bias?

We are at a point in time when a new, integrated socioeconomic AI vision and new models need to be built, tough questions need to be asked, and citizens, leaders, and institutions alike need to be brought up to speed. Regulating AI in a way that is both cautious and consensual is very much desired, but it does not do the full job. Getting from AI Ethics principles to fluid, agile AI policy will be a long trek that should be revisited and adjusted every single day. The next few years will see a proliferation of AI predictive and decision-support systems that will tempt us to relinquish our best judgment and our mandate to decide and overrule. You will hear a lot of “AI made me do it” at all levels of society. Especially at this level of maturity, the growing number of AI mishaps will be rooted as much in human action or omission as in data and model biases. This new paradigm of in-betweenness between machine and human intelligence will last for a long time; hopefully, in a balanced form, forever. Our generation must consciously lay down the foundations for this era, one that brings about an intelligence that is less Artificial and more Augmented for our whole civilization. To achieve that beneficial outcome, we have no time to lose.

What is the biggest mistake you see when executives/companies/governments try to develop and implement an AI strategy? How would you advise them to change/augment their approach?

Well, it’s different in each sector, and even inside organizations, simply because each executive function sees AI in a different light. There is very little trust between sectors and a lack of understanding of each other’s interests. We need to build new frameworks of understanding and shared interests between the researchers, owners, regulators, and users of AI.

If I really have to point out one factor, it is definitely this kind of fragmentation: turfs not talking to each other, and a lack of cross-organizational strategic thinking and execution around AI. There are also many smaller but important barriers: lack of data sophistication within the organization, lack of the experimentation mindset that is essential for AI, lack of the right internal talent, or being stuck in “eternal pilot mode.” We have to understand that for most organizations, the challenge of mastering AI landed on top of the heavy luggage they have been carrying for decades now, affectionately called Digital Transformation. For many leaders, it constitutes an external pressure while they are trying to keep the house together and deal with Data Strategy et al. If the “Let’s get AI” directive lands in the CTO’s office too soon, many others will never accept AI as their own. So my best advice would be to build an uncompromising focus and a shared understanding at the board and C-suite levels of what AI can reasonably deliver to your business strategy and which strategic segments it can drive.

Also, my pet peeve: AI Ethics is only an afterthought to many. Less than 20% of AI developers have received any kind of ethics training. Especially during tough times like these, it is tempting to perceive AI Ethics as a barrier and a speed bump. The near future will see unprecedented scaling of Narrow AI solutions, and without the right, conscious safeguards in place, things can get ugly and lead to huge competitive disadvantages for organizations that moved too fast and broke too many things. A lot of my work these days is focused on proving to leaders that being “AI-ethical” equals competitive advantage and pays off considerably.
