Amy Zalman, PhD, describes how organizations can integrate strategic foresight, avoid bureaucratic pitfalls, and harness new technologies like artificial intelligence.
The ability to see around corners has never been more critical. With promising new technologies threatening disruption, such volatility demands professional analysis and strategy. Foresight has become as essential as the best-laid quarterly plans.
According to Amy Zalman, PhD—a Georgetown professor and founder of the DC-based foresight consultancy Prescient—anyone can be a futurist; in fact, everyone should be. Her clients include Fortune 500 companies and civilian and military government organizations, and she’s on a mission to give organizations the tools they need to deploy strategic foresight at scale.
Dr. Zalman sat down recently with Toptal Insights Editor David Grabowski to discuss strategic foresight, trends in global transformation, the influx of artificial intelligence, and why a future-facing skillset will only become more critical.
The Role of Futurism and Futurists
David Grabowski: I read Alvin Toffler’s Future Shock six or seven years ago and found it fascinating, the idea that someone, 30 years ago, was more or less predicting trends that are now very much real. How do you define a futurist?
Dr. Amy Zalman: The stock answer is that there is no professional accreditation required. You can put “futurist” on a card today and be a futurist.
In Western Europe, it’s quite common for strategic foresight to be part of your planning process; it’s the way we would do strategic planning here. It isn’t “out there.” It’s not about flying cars. It’s about a set of activities for helping institutions or organizations or firms plan into the distant future for potential uncertainty.
So, my answer is that a futurist is a professional who—whether accredited or not—understands the mindsets, skillsets, tools, and frameworks that are available to people to help them make decisions now that will assist them to thrive in the future.
There is another answer which is the “futurist in everyone” answer. These tools and mindsets are not rocket science. They’re available to us; they’re just not part of our general culture. The great thing about being a futurist is that anyone can be one, and it would be good if everyone were.
DG: How does one become a futurist?
AZ: One of the ways to become a futurist is to adopt its mindsets. First, understand that the future will be different from the past. That’s easy to accept cognitively, but it is challenging to actually implement that idea, I think, at both a personal level and an institutional level. It becomes easier if you reflect on the fact that, at a personal level, our lives are rarely what we expected and projected for ourselves in the past. And what we thought we wanted changed along the way as well. Our current vision of how the future will unfold rarely comes true in the way we thought it would. That’s true for institutions as well as people.
Second, be attuned to history because history holds all those lessons of how surprises happened.
The third is a little more technical—but again, not unavailable. Get clear about the fact that the future we expect may be the one least likely to unfold. I can give you a picture of how to conceptualize this: the United States and the Soviet Union never had the nuclear confrontation that everyone expected. Paradoxically, that is because it was expected, and as a result, people worked to prevent it.
What we expect is usually the tacit sum of the ideas we hold today about the future: what would happen if today kept going into the future without any intervention. That usually doesn’t happen. It’s very interesting how we play with these ideas in our culture; somebody once remarked that in 1950 it was easier for a man to imagine a man setting foot on the moon than a woman sitting in the office next door. Some kinds of change we imagine with great ease; they’re dramatic, and often utopian or dystopian. The changes that actually unfold are less dramatic when they appear, like creeping social change. Even though there are signs they are unfolding, we don’t prepare for them, and so we are surprised.
DG: How does a futurist do their job? How do you come up with scenarios and decide which are more likely or pressing?
AZ: Well, there are a set of techniques and tools and frameworks. That’s what I teach and what I do with clients. They follow a path. First, I help them become clear-minded and nuanced about some of those mindsets I was talking about. “What do we expect? Let’s get conscious about what we expect so that we can also become aware that there are all kinds of possibilities that may lie outside of that expected framework. Now, let’s start to look at the different ways in which change is occurring such that it could affect our future.”
The typical route to understanding and analyzing a range of potential changes on the horizon and how they’re being mobilized is to build scenarios. None of those scenarios has to come true exactly. Instead, in the process of building them, people become aware of the range of potential futures.
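Zalman doesn’t spell out a specific method here, but one widely taught scenario-building technique is the 2x2 scenario matrix: pick two critical uncertainties, take the extreme outcomes of each, and cross them to enumerate four candidate futures. A minimal sketch of that idea follows; the uncertainties and their labels are illustrative assumptions, not examples from the interview:

```python
from itertools import product

# Two illustrative critical uncertainties, each with two extreme outcomes.
# (These axes are hypothetical placeholders, not Zalman's.)
uncertainties = {
    "AI regulation": ["strict", "laissez-faire"],
    "energy cost": ["high", "low"],
}

def build_scenarios(uncertainties):
    """Cross the extremes of each uncertainty to enumerate candidate futures."""
    names = list(uncertainties)
    combos = product(*uncertainties.values())
    return [dict(zip(names, combo)) for combo in combos]

for scenario in build_scenarios(uncertainties):
    print(scenario)  # four scenarios, one per quadrant of the 2x2 matrix
```

The point, as Zalman notes, is not that any one quadrant comes true, but that enumerating them forces a team to confront the full range of possibilities rather than the single expected future.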
Part of doing my job is also helping teams institutionalize the foresight process. At its best, being a futurist organization is not about a report that you get; it’s about having these skills and processes resident in-house. If you’re in a huge organization, map the flows of information through it so that you know whether valuable work is being done but locked away from the people who may need to act on it.
DG: How do you decide which scenarios to act on?
AZ: The simple answer is you decide to act on the one that you want to unfold. Futurists often call this the “preferred future.” You find some space in that wide realm of the possible that you’ve identified and you say, “You know what? We want to go there. That’s where we want to be in 2030. We want to be in that market.” It doesn’t mean you ignore the other scenarios that may not happen, but the point is that if you’ve done this process right, you have sat and thought, “Okay, if this happens, are we prepared? What are the risks inherent in this unfolding?”
Incorporating Strategic Foresight in Organizations
DG: How do organizations implement these strategies? Is it as simple as hiring a futurist?
AZ: Some companies have in-house futurists. General Motors is a well-known one. You can build a capability in-house. You can hire a futurist, or you can hire a firm.
What an outside consultant should bring to you is not some plan and say, “Here it is. You do it.” What they should do is bring you skills. What we do is teach you to fish, so you don’t need an outsider forever.
A third option is a hybrid. We recently started an executive training course, the Foresight Sandbox, to be offered in June, where we will teach this capability. So you can send somebody from inside your organization to learn these skills and bring them back to your team.
DG: What might signal an alarm bell for a futurist or someone working in strategic foresight? What signals “it’s time to take action on this?”
AZ: I think the question you’re asking is best answered from inside an organization. You’re in a constant state of preparation because you have an excellent process for looking at what we call the horizon, for incorporating signals and saying, “You know what? We are open to them.”
Once you have all that in place, you don’t do what bureaucracies desperately want to do, which is maintain the status quo and not hear disruptive signals. If you are GM, at what point do you say, “We’re not a car company anymore. We have to be a mobility company; we have to accept and embrace and compete on the fact that the car is now a computer.”
The way that kind of decision gets implemented is through the strategy process of a company at a high level. It comes from watching technology and watching cultural trends. What is technology doing? What are environmental pressures doing? Is that car going to be fuel powered? There are a lot of countervailing forces suggesting it will be electric or something else. It may not even be a car by the time you get done with it.
Trends in Global Transformation
DG: We’ve talked a little bit about electric cars and changes concerning mobility, but what issues should executives be most concerned about right now?
AZ: Top of mind is the complexity of some of the macro changes in our world. I think it should be a significant part of foresight for business leaders to get an everyday understanding of what science is teaching us about complex systems, and to be aware of emergence. Nothing happens in a silo or a vacuum.
It’s evident that we’re in a transformational moment. Some people like to call it the Fourth Industrial Revolution; I don’t think that’s quite it. It is not merely an economic change. We’re in a technological shift on the scale of the industrial revolution, but what is most important is that it shifts other paradigms, other expectations.
So, what should business leaders be looking out for? One, climate change. That is probably the number one issue that leaders in the world need to be aware of. In the shorter term, extreme weather and its movements; in the longer term, the sustainability of everything, their business, the people who work there, the food supply that gets food into their stomachs every day.
Some geologists say we’re now in a new epoch; the last one was the Holocene. For 10,000 years, we understood ourselves to live in our own system while the earth’s systems ran their own course. The Anthropocene is the premise that this is no longer true: the earth doesn’t just affect us; we affect it.
Attendant to that, on an enormous scale, is food and population. We will probably have 10 billion people on earth by 2050. That introduces some resource stresses, as well as opportunities for technology and creativity that are quite impressive.
The third is the convergence of technologies that all start with a computer. I mean, they start with the digital, and from there they become artificial intelligence and machine learning, they become biotechnology, they become genomics and so on and so forth. The upshot of it is that we are very clearly introducing surprises and systemic risk into our social systems and critical infrastructures.
DG: Based on your expertise, what are the most significant shifts we’re about to see related to Artificial Intelligence?
AZ: We’re at the beginning of some of the biggest shifts: disruption and reconstitution of institutions on a large scale. So a car company turned mobility company is one good example.
You can look at healthcare too. CVS is a $73 billion company that just bought a health insurer, Aetna, which is a good example of a transformation that goes back to artificial intelligence. It goes back to the capacity to automate in their stores, to have pharmaceutical orders and medical records available across an extensive system, and to link those to insurance records and possibly additional data of their customers. These technological changes ultimately shift the paradigm of health care, and of where it takes place. In the near future, you may go to your local pharmacy for healthcare and that care may increasingly be preventive and data-driven.
The other important shifts are political. We live in a ‘peer-to-peer’ world and individuals have a great deal of technological firepower that we did not use to have. We can do so many things with our phones. How individuals can connect and gain a version of political power from that is in its infancy, and that is radical.
Then there are systemic risks. Nobody knows what’s inside Facebook’s big computer room. Nobody knows exactly how a black box of algorithms will behave, and some risks or disasters will only unfold when one algorithm runs into another.
DG: How does an organization make sure that they’re ready for these changes or shift if necessary?
AZ: You can go and take a course from any number of institutions. Prescient has partnered with Arizona State University’s School for the Future of Innovation and Society to create the Foresight Sandbox. We will teach the skills of strategic foresight and join them with learning about emerging technology, to begin to help companies answer the question you asked, which is, “how do you prepare?”
Non-technologists need to become literate enough to understand decisions and resource allocations that they’re making. Technology often has a complicated vocabulary; there are no authoritative terms. So having some clarity about technology is a good idea.
The other thing that happens with technology is—because it is so dramatic and overwhelming and a competitive advantage—it gets isolated from other planning. You need to integrate the planning for AI with a worldview that accepts uncertainty in the future.
Instead of focusing narrowly on “What’s our implementation?” or “What’s our digital transformation strategy?” you might start with the question, “What’s happening out there? Let’s look at what’s happening in artificial intelligence research. How is it changing our external environment, our internal environment? How is that possibly interacting with those other kinds of changes we talked about, whether they’re demographic or climate or something else?” That’s one.
The second is to look at risk, opportunity, implementation, and ethics in the same framework so that we don’t implement and then, in five years we go, “Oh no, we forgot about ethics. What has happened to our employees? Oh no, we forgot about security because we were so excited about implementing AI solutions.”
An integrative approach is best. People need to become literate in foresight, literate in technology-speak, and then use an integrative approach to thinking about and planning for the future.
The Accelerative Thrust and Agility
DG: Alvin Toffler talked about something he called the accelerative thrust: the idea that we’re not just changing more, but the rate at which we’re changing is increasing. Do you think the accelerative thrust is a concern in 2019? What might organizations look like in a superfluid world?
AZ: There are two things. One kind of speed is real: algorithmic speed, the capacity of computation to happen faster and on more data. I think the objective truth of that is found in 5G networks, and then the 6G and 7G networks to come. That’s real speed, and that’s where all that scary risk comes in, because we are tethered to that system through our cars and homes and watches and the chips we embed in ourselves, and all that.
However, there are other kinds of speed and perceptions of pace that counter that, and they are changing and developing all the time. I don’t know that fast is the only rate at which we are changing. The world is a big place, and that sensibility of speed is not true everywhere. We still live in a world of layered perceptions of time. We perceive time in some very different ways: putting your hand in something hot is different from dancing all night or reading a boring book. Time feels different in each case.
Some of this is going to be answered by a lot of “just in time.” That goes back to the beginning of our conversation: those are the kinds of paradigm shifts that people can prepare for. I don’t know about agility. I think the thing that we need to be agile about is understanding that these are fundamental changes in our metaphors and assumptions, and they’re happening.
What’s important is to start asking “okay, what if the factory of the future isn’t a factory? What is it? What if the place where you get help in the future is not a doctor’s office? What if?” I think that’s the game that should be played. The reason people are asking about agility and speed, speed to catch up, is because they’re not changing their paradigms. You’re not going to speed up the same factory infinitely. Eventually, you’re going to change the paradigm, so you’re not asking all the time, “how do we do this?”
To return to my military example from before, that’s an institution that asks, “How do we acquire faster? How do we change our acquisition cycle?” It is, like some other institutions, dependent on and used to very long lead times. You just settled on the tank you want in 2075, and you know what we’re not doing in 2075? Fighting a land war with a tank. So eventually, if you want to survive, you say, “Maybe the Army is not what it is. We have to radically reimagine what we are. Not speed up and speed up and speed up.” Eventually, you’ll probably get tired and fall down.
DG: Last question. Something else that Toffler discusses in Future Shock is the idea of incubating new technologies rather than releasing them right away. So not what we did with cars—”hey, we have cars now, let’s make a bazillion of them and not think about the consequences.” Are there concerns about unforeseen problems down the road related to AI or other new technologies?
AZ: The car is a fascinating example of how this does not happen. In 1900 there were three ways of fueling cars. One was electric, which nobody remembers. The other was by an external steam engine. The third was the internal combustion engine and gasoline.
If you were smart in 1900, you bought a steam-powered car. That’s what most people bought. And if you were smart circa 2000, you’d buy an electric car. So why did we end up with gasoline? First, oil was discovered in Texas in plentiful amounts. Second, Henry Ford figured out how to mass-produce cars with internal combustion engines.
So uncertainty is always around the corner, and in 1900, people did not know all of the events that might unfold. We don’t know now either, but foresight helps mitigate the impacts of uncertainty. Asking what could happen that we don’t expect is a good start. Asking what we can do now to make the best possible choices for the environment or for consumers is also a good start. Most of the time, people do not do this. But if we want to continue to thrive on this planet, we really need to start.