Abundance


We may very well think that dark times are approaching. Many of us ask ourselves, how long will it be before our world collapses under the strain of climate change, overpopulation and dwindling resources? Surely it’s just a matter of time?

Abundance, written by Peter Diamandis and Steven Kotler and published in 2012, argues otherwise. Far from nearing its end, society is on the cusp of a bright and innovative future, say the authors.

Changes in the world of business, technology and economics will transform societies across the globe for the better. In reading this book, you’ll find exactly what these changes are, some of which are less obvious than others.

For example, did you know that the computing power of the average laptop is close to overtaking that of the human brain? Or that access to the internet is making it easier for children in the developing world to get an education? Or that genetically engineered algae could help solve the world’s energy crisis?

Technological and social innovations such as these will make our society a better place. Together, they will help move us away from the dangers we currently face and towards a bright, optimistic and abundant future.

The three most powerful points I took from the book were:

  1. The future is brighter than our brains and the media would have us believe.

  2. Far from being a source of global poverty, big business can raise standards of living among the world’s poorest people.

  3. From the potential role of robots and artificial intelligence in improving healthcare to the uses of nanotechnology and digital manufacturing in reducing waste and conserving natural resources, there are plenty of reasons to be optimistic that the future is not just bright, but may well be one of abundance.

Our brain’s architecture and the media lead us to have an overly pessimistic view of the future.

It’s hard to think about the future without considering the potential dangers of war, terrorism, climate change, economic crises, population explosion and food shortages. Many of these threats seem so imminent that ignoring them when weighing up the future might seem crazy.

In fact, there are underlying influences that tend to push us towards a pessimistic view of the future. The first is the architecture of our brains – principally, the section known as the amygdala. The amygdala is always on alert for threats in our environment and, when triggered, it initiates the fight-or-flight response. This reaction served us well in times when dangers around us were immediate and life-threatening, but is not so well suited to modern society, where threats tend to be more remote and probabilistic – e.g., the economy could nose-dive, there could be a terrorist attack, etc.

The second has to do with the kind of information we receive. News and media outlets are aware that positive news doesn’t elicit the same physiological reaction as threatening news, which is why they report true to the old adage “If it bleeds, it leads” in the battle for our attention. And so, we’re constantly bombarded with fearful images and scenarios that feed the amygdala, keeping us in a state of alert and preventing us from viewing the future objectively.

But if we look at the statistics, we see that the industrialized world has never been safer. We’re living longer, wealthier, healthier lives and have massively increased our access to goods, services and information in ways our ancestors could never have imagined. Just as they were unable to fathom the impact of technological advances such as the internet, we also cannot see what effect future developments will have on our continued progress. The future is brighter than our brains and the media would have us believe.

The complex web of relationships among many of the world’s problems means they can be solved together.

The world is made up of complex systems where changes in one area can have an impact elsewhere. Natural ecosystems are a great example. Population changes in one species affect living and survival conditions for others. Although the complexity of some systems may appear to exacerbate the problems we face, it also presents great opportunities. If progress is made in one area, it can create momentum and positive benefits in others.

One of the major challenges we face in creating sustainability is the growth of the world’s population. With the current global population of seven billion projected to rise to nine billion by 2050, it’s difficult to imagine how seemingly dwindling resources, such as clean water, will be able to provide for so many more people.

This situation gets even more complicated when we consider that mortality rates will drop if we make greater strides in improving healthcare in developing nations, contributing to greater increases in population. But it would be far too simplistic to stop there. After all, there’s a strong correlation between birth and mortality rates. So although there may be short-term increases in the population, improvements in healthcare would actually slow population growth in the long run, say the authors.

When we look at Morocco, we see how quickly this can occur. In 1971, when child mortality rates were high, women had an average of 7.8 children. But, after the country made great strides in healthcare, education and women’s rights, birth rates dropped to as low as 2.7 children. When we also consider that much of the projected population growth is in Africa and Asia, the relationship between improving health outcomes and slowing population growth is much clearer. But this is only one example of the synergy between the various challenges we face in which progress in one area can mean improvements in others.

Far from being a source of global poverty, big business can raise standards of living among the world's poorest people.

Many people believe that big businesses across the globe exploit the world’s poorest people and exacerbate income inequality, with the people on top enjoying an ever-increasing share of the profits. However, this is increasingly not the case, as big businesses contribute more and more to the fight against global poverty.

One of the ways they are doing so is by developing cheaper products for people at the bottom of the income pyramid. Due to the skewed distribution of income across the world, a vast proportion of the population falls into this demographic – around four billion people. This constitutes a huge potential market and the opportunity to make a profit, while raising the standard of living for the world’s poorest citizens.

Grameenphone, a telecom company in Bangladesh, is a great example of a company employing this type of business model. When the company was launched, mobile phones cost far more than the average annual income, but phones were about to go digital, meaning that prices were set to drop dramatically over time.

By 2006, Bangladesh had sixty million cellphone users who added $650 million to the country’s GDP. Grameenphone had also invested $1.6 billion in network infrastructure, meaning that the money made in Bangladesh stayed there.

A second way that big businesses can contribute is through philanthropy. The high-tech revolution created a new breed of technophilanthropists, who, in comparison to earlier philanthropists, are younger, have a much more global vision, and have the business and political connections to really get things moving.

There are numerous examples, but the most prominent is Bill Gates, founder of the Gates Foundation, which aims to improve healthcare and fight extreme poverty. Gates has already donated $28 billion to the foundation and, in 2010, he launched the Giving Pledge with Warren Buffett, encouraging fellow billionaires to donate half of their wealth to philanthropic causes within their lifetimes.

Traditional methods of education are outdated and in need of a re-think, but ICT may provide the answer.

Access to education is a major global problem. We’re millions of teachers short and lack infrastructure – and where it does exist, it’s falling into disrepair. Students fortunate enough to have access to education are following seriously outdated frameworks. Business leaders agree that the educational system is not providing students with the skills needed to tackle the problems of the twenty-first century, particularly in areas such as critical thinking, creativity and problem solving.

Our current education system was formed during the industrial revolution, which not only influenced which subjects were taught, but how to teach. Industry required students to follow orders and fit in like cogs in a machine, so standardization through rote learning was the order of the day and conformity the desired goal. Society has since moved on, but education has not kept pace with these changes.

When we consider these problems, it seems education is in need of a serious re-think, but is it feasible with such thinly stretched resources? Fortunately, there is evidence to suggest that increased use of and access to ICT may offer a solution that addresses both quality and delivery.

Sugata Mitra explored the question of educational access by running experiments on self-directed learning in Indian slums. He found that, just by providing a computer terminal and internet access, groups of young children could teach themselves and complete tasks without any prior experience or instruction.

Such evidence for self-directed learning is backed by the popularity of online learning platforms, such as the Khan Academy, where anyone can learn anything from basic math to quantum mechanics. This trend could be bolstered with educational video games, which have been shown to motivate students to tackle complex problems and find creative solutions.

Further development in such technologies, combined with a widespread push to increase access to the internet and online technology, may offer a cost-effective solution and provide individuals with a first-class, personalized education that delivers the skills today’s society needs.

A successful future depends on the freedom to innovate, unhindered by fear of failure.

We are generally uncomfortable with the word failure. There’s a certain stigma attached to it and, because of this, many people go out of their way to avoid it – even if it means not attempting something new. However, failure should not be viewed as the endpoint, but more as part of a longer learning curve and a vital stepping stone towards innovation.

In this respect, Apple’s early attempt to introduce a personal digital assistant called the Newton is a revealing example. Commercially speaking, it was a disaster. Development costs were very high and sales were disappointing, but the story doesn’t end there. Much of the development work on the Newton went into creating a handwriting recognition system, which became the foundation for the hugely successful iPhone.

Companies are becoming increasingly aware of the potential benefits of failure, and many of them have developed techniques to make failure more acceptable in order to encourage innovation. Tata, for example, literally celebrates failure by giving an annual award for the best failed idea that taught the company a valuable lesson. Of course, the key ingredient here isn’t failure itself, but the freedom to innovate and learn from our mistakes.

Incentive prizes are a great example of how restrictions can encourage innovation rather than constrain it. The prize money itself is usually not enough to interest the big players in an industry, so smaller teams on restricted funding are attracted to compete. These restrictions force the teams to innovate and find cheaper solutions to problems.

With this in mind, the Entrepreneurship Center of MIT developed the 5 x 5 x 5 Rapid Innovation method: five teams of five employees work together for five days to develop five different “business experiments” that will cost no more than €5,000 and can be run within a five-week period. These limits force the teams to innovate and try new things. As the authors suggest, it’s not so much about thinking outside the box as in the right-sized box.

Computer processors are set to surpass the human brain’s calculation capacity within the next fifteen years.

Back in 1965, before the advent of home computing, Gordon Moore, an employee of Fairchild Semiconductor, made a bold prediction about the future of the computer industry. He predicted that the number of transistors on a computer chip, and therefore the processing speed, would continue to double every two years for the next ten years. Moore went on to become a co-founder of Intel, and his prediction wasn’t just right for the next ten years, but remains relevant today, and has since become known as Moore’s law.

Consider that today’s low-end laptops can perform around 10^11 calculations per second, while the human brain is estimated to perform about 10^16 calculations per second. So, if computer processors continue to progress in line with Moore’s law, the average laptop will surpass the speed of the human brain within the next fifteen years. This exponential increase in processing speed will have tremendous implications, particularly in the fields of artificial intelligence and robotics, say the authors.
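As a back-of-envelope check, the gap between those two figures can be expressed as a number of doublings. The 10^11 and 10^16 figures are the book’s; the doubling periods below are purely illustrative assumptions:

```python
import math

laptop_ops = 1e11  # low-end laptop, calculations per second (book's figure)
brain_ops = 1e16   # human brain estimate, calculations per second (book's figure)

# Number of doublings separating the two speeds
doublings = math.log2(brain_ops / laptop_ops)
print(f"Doublings needed: {doublings:.1f}")  # about 16.6

# The resulting timeline depends entirely on the doubling period assumed
for years_per_doubling in (1.0, 1.5, 2.0):
    print(f"At one doubling every {years_per_doubling} years: "
          f"~{doublings * years_per_doubling:.0f} years")
```

Worth noting: the book’s fifteen-year horizon implies a doubling period of a little under a year – faster than the two-year cadence usually quoted for Moore’s law.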

But, while we’ve managed to develop technology in step with the law to date, there is skepticism that we can continue to do so due to the limitations of the technology. One of the main arguments is the fact that electrical signals require the movement of electrons, which generates heat. And this build-up of heat in microchips is seen as a barrier to achieving much greater processing speeds.

Clearly unfazed by such skepticism, the industry continues to progress and innovate. IBM has recently developed microchips that run on light, removing the potential limitations of electron-based chips. They predict that this technology will increase the speed of supercomputers a thousand-fold over the next eight years. With such breakthroughs and continued progress, it seems to be a question of when the average laptop will calculate faster than the human brain – not whether it will.

As technology advances and becomes more affordable, robots may soon take up their long-heralded role in society.

From Star Wars to 2001: A Space Odyssey, many a science fiction film has envisioned an age where robots live among us, helping or hindering us – depending on the plot, of course. Technology programs have long seduced us with the idea of robots helping around the house, but these promises have hardly been realized today – self-operating vacuum cleaners aside.

Now, however, there are good reasons to believe that the wait for intelligent robots may be coming to an end. Although Moore’s law specifically deals with the increasing performance of computer chips, other essential components are experiencing similar exponential increases in performance with simultaneous drops in price. And as these components are pressed into action and mass produced, prices plummet further still.

Three-dimensional laser range finders, for instance, are a key element in allowing a robot to navigate a cluttered room. They used to cost $5,000 per unit, but recent advances in the technology and a surge in popularity driven by their use in Xbox Kinect devices have seen that price drop to $150 per unit.

The availability of such advanced hardware is being complemented by huge strides in artificial intelligence. We’re currently developing robots that can recognize individual people and react to movements and facial expressions with appropriate emotional responses. Industry experts already envisage such robots providing care for an increasingly aging population.

Although we may not have the singing, dancing humanoid robots of science fiction just yet, robots of different forms have already been produced to augment services in a number of areas. And, as the technology advances and the price of the components continues to fall, robots are set to take on an increasing role in society and our lives.

Nanotechnology allows us to create new materials with qualities and functionality leading to major advances in many areas.

After witnessing the horrific problems created by the lack of access to clean water in the aftermath of the Asian tsunami and Hurricane Katrina, Michael Pritchard, an English engineer and expert in water treatment, was motivated to do something about it. He set about designing a simple, portable solution and, in 2009, unveiled the Lifesaver water bottle.

Lifesaver certainly doesn’t look high-tech, with a hand-pump on one end and a filter on the other. However, Pritchard realized early in the design stage that conventional filters could capture most bacteria but not viruses, which are far smaller, so he decided to look to nanotechnology.

Nanotechnology involves building things on an atomic scale. Using atoms as building blocks makes it possible to create unique materials, called nanocomposites, and even tiny programmable assemblers, called nanomachines. These nanomachines can self-replicate and build other nanomaterials, creating greater efficiency in the technology and allowing for fantastic innovation potential.

This technology has allowed us to create materials with interesting properties: for instance, nanocomposites that are considerably stronger than steel and can be produced at a fraction of the cost. Nanoscale components are also being used to improve the efficiency of energy technologies, particularly in solar cells.

In the case of the Lifesaver, nanotechnology allowed Pritchard to create a much finer filter, one that screens out bacteria and viruses alike. This filter makes the water safe to drink without the need for expensive chemical- or energy-intensive methods.

Far from being limited to improving filters, nanotechnology has the potential to boost progress in any number of areas. It is a relatively new technology, and we’ve only just begun to realize its potential.

3D printing: changing how we design and create, enabling mass-innovation and reducing waste.

Imagine you’ve just broken your last coffee cup. You could run to the store and buy a new one or order one online and wait for it to be delivered. But let’s imagine that, instead, you could browse designs online, download whatever cup design you like, hit “print” and a desktop device manufactures the item for you in minutes.

Far from science fiction, the above scenario is already possible. The device is a 3D printer, which can fabricate an object of any shape by laying down successive layers of material, one on top of the next. Recent advances in the technology have made it possible for current models to print in an exceptional range of materials, such as plastic, glass, steel and even titanium, and also print combinations of materials in intricate patterns, creating materials with interesting functional properties.

But 3D printing is not limited to producing objects: it’s also making waves in medical fields, where cells and tissue can now be printed. Early applications have included printing tissue, such as skin and ears, for use in cosmetic surgery, but further research into printing complex organs, such as replacement kidneys for transplants, is underway.

While the innovative potential of 3D printing is creating a lot of excitement, the approaching affordability and availability of the technology to the home user may be the bigger game changer. Being able to produce and modify your own products whenever you need them minimizes the need for large-scale production of many goods on the market, creating resource savings through reductions in waste and shipping.

Despite this technology being in its infancy, we are already seeing the potential for 3D printing to revolutionize the way we think about and create items and the potential for further innovation grows as the range of applications expands.

Biotechnology provides solutions to global problems, particularly in agriculture, healthcare, energy and the environment.

Although biotechnological applications in food have created much controversy in recent years, the science itself is nothing new. The 12,000-year history of farming is characterized by farmers manipulating living systems, creating new strains of crops through cross-pollination and manipulating the plants’ DNA.

Technology may have moved on, but the principle of manipulating organisms remains the same. Today, advances in genetic engineering provide solutions that are proving to be a key weapon in the fight to feed an ever increasing population.

One of many examples is the Gates Foundation, which is helping to develop BioCassava Plus, a root vegetable fortified with vitamins and engineered to be more pest-resistant and longer-lasting. By 2020, this single crop could improve the health of 250 million people for whom it is a daily meal.

However, the applications of biotechnology are not limited to food production. Craig Venter, famed for his project to sequence the human genome, is currently working to develop strains of algae as a biofuel source. Using algae is extremely beneficial as it doesn’t require arable land, can be grown in saltwater and is also capable of absorbing carbon from nearby power stations. If Venter hits his target, he will be able to produce 10,000 gallons of biofuel per acre, no small feat when compared to the 18 gallons produced by corn.
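To put those per-acre yields in perspective, a quick calculation helps. The 10,000 and 18 gallons-per-acre figures are from the book; the one-million-gallon target is an arbitrary illustration:

```python
ALGAE_YIELD = 10_000  # gallons of biofuel per acre (Venter's target, per the book)
CORN_YIELD = 18       # gallons of biofuel per acre from corn (book's figure)

target_gallons = 1_000_000  # arbitrary illustrative production target

# Land required to hit the target under each yield
print(f"Algae: {target_gallons / ALGAE_YIELD:,.0f} acres")  # 100 acres
print(f"Corn:  {target_gallons / CORN_YIELD:,.0f} acres")   # ~55,556 acres
print(f"Yield ratio: ~{ALGAE_YIELD / CORN_YIELD:.0f}x")     # ~556x
```

On these figures, algae would need roughly 1/556th of the land corn requires for the same fuel output.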

Such innovative examples may be just the tip of the iceberg as falling prices and the increased availability of the technology put it within the grasp of willing amateurs. This is already bearing fruit: shortly after the BP oil spill in the Gulf of Mexico, a group of students from the Delft University of Technology created “Alkanivore,” a bug able to consume oil spills. Once the preserve of large organizations, biotechnology is already providing key advances, and easier access to the technology is multiplying the potential to find innovative solutions to food, energy and other global problems.

Global connectivity is accelerating the sharing of information – solving social problems and preventing oppression.

When Colombian computer engineer Oscar Morales created a Facebook group one morning in 2008, he could never have imagined the eventual consequences. Morales created the group to make a stand against the terrorism and kidnapping by the Revolutionary Armed Forces of Colombia, or the FARC. By the end of the week, the group had one hundred thousand members and, as the numbers grew, this virtual protest turned into a real one.

A month later, twelve million people were out in the streets protesting in two hundred cities, leading to a massive wave of demilitarization, with soldiers leaving the FARC. As the Arab Spring has shown, the spread of the internet and social media has helped people share and discuss societal problems, while providing tools to organize and fight oppression. An Egyptian protester summed this up in the following tweet: “We use Facebook to schedule protests, Twitter to coordinate, and YouTube to tell the world.”

Access to information has proven to be the major catalyst in this process and we have never had so many people with access to so much information. Just consider that a Masai warrior with a smartphone currently has access to more information than the US president had fifteen years ago. Despite this fact, it’s easy to forget that a significant portion of the world’s population does not yet have access to the internet. It’s estimated that three billion people are set to get online by the year 2020. Imagine the impact of people from all social statuses joining the global conversation, sharing their ideas and opinions.

The spread of global connectivity is already helping to solve societal problems across the globe. As it continues to grow, we could well see an increase in popular protests, started by people like Oscar Morales.

Advances in solar energy technology are increasing usage, lowering production prices and furthering innovation.

If we want to talk about abundance in terms of energy sources, then we don’t really need to look much further than the sun. It’s estimated that the solar power in the deserts of North Africa is enough to supply forty times the world’s current usage.

With so much energy available from one source, why are we not capturing more of it? Many of the areas where solar power is most readily available lack the money, industry or political stability to build the infrastructure needed to capitalize on this bounty. The price and relative inefficiency of first-generation solar cells were surely also big factors in the slow take-up.

Since then, huge strides have been made in improving the efficiency of solar power through the use of thinner layers of silicon, of nanotechnology to concentrate the solar energy and of smarter capturing systems that follow the sun. As advances in efficiency encourage greater acceptance, larger scale production allows for increased affordability, creating a virtuous cycle. Solar prices are estimated to be dropping by 6 percent annually, while capacity grows by 30 percent.
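Compounding those quoted rates over a decade shows why the cycle is called virtuous. This sketch assumes both annual rates stay constant, which is far from guaranteed:

```python
price_factor = 0.94      # prices dropping ~6% per year (book's estimate)
growth_factor = 1.30     # capacity growing ~30% per year (book's estimate)
years = 10

price_after = price_factor ** years       # fraction of today's price
capacity_after = growth_factor ** years   # multiple of today's capacity

print(f"Price after {years} years: {price_after:.0%} of today's")      # ~54%
print(f"Capacity after {years} years: ~{capacity_after:.1f}x today's") # ~13.8x
```

Even modest-sounding annual rates, sustained for a decade, roughly halve the price while multiplying capacity more than tenfold.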

Despite this continuing trend, the age of solar panels covering rooftops may only be a fleeting one, as advances have meant that we can now create much smaller, yet increasingly efficient cells. In fact, we may not need rooftop panels at all. New Energy Technologies has recently found a way to turn an ordinary window into a solar panel by using the world’s smallest organic solar cell. These cells are not only far smaller, but also perform ten times better than today’s commercial models. All these innovative trends encourage greater acceptance of the technology, which in turn helps to create further efficiencies through mass production, reducing the prices even more.

More efficient methods of growing food, like urban farming, will greatly reduce the need for natural resources and land.

As the proportion of the population living in cities continues to increase and the amount of land suitable for growing crops decreases, the distance we ship food continues to climb. In the US, for example, the average distance foodstuffs travel is 1,500 miles, but a meal of different ingredients could easily be five times that amount. In a world of scarce resources, this kind of practice seems unsustainable, but it’s hard to resolve as we move further away from where our food is grown.

Although on a much smaller scale, the US military faced a similar problem in feeding their troops in the Middle East. Due to the terrain and location, they couldn’t ship fresh food in and were forced to develop methods for producing crops without access to fertile soil.

As a result, they turned to hydroponics, a system of growing plants with their roots suspended in a nutrient-rich fluid. Later developments brought even greater efficiencies with aeroponics, in which roots are suspended in air and fed with a nutrient-rich mist. These methods not only removed the dependence on fertile soil, but also greatly reduced the amount of fresh water required. Agriculture currently accounts for approximately 70 percent of the fresh water we use; aeroponics would need just 6 percent.

Implementation of such growing systems reduces the need for arable land, which creates the possibility of urban or vertical farming. We could build inner-city, purpose-built structures or adapt old multi-story buildings, which would virtually eliminate transport distances and free up vast areas of land.

Such urban farms, aside from providing plant crops, could also incorporate systems of aquaculture, meaning that fish and seafood could be farmed in cities. This would not only give over-fished seafood stocks a much-needed recovery break, but also provide nutrients for the plants. By employing such methods in or around population centers, we could eliminate or greatly reduce many of the current system’s resource inefficiencies.

Using affordable sensors will help reduce waste by greatly improving the efficiency of delivery systems.

Optimizing production and capturing techniques is only one side of the coin in the battle towards efficient management of the world’s resources. We also need to ensure that delivery systems of resources and products are efficient in order to minimize waste. Take water, for example: an estimated 20 percent of fresh water is lost through contamination or leaks in the network of pipes that delivers the water to taps. Such a high percentage of waste is alarming with such a vital resource, but this doesn’t have to be the case.

Researchers at Northwestern University have developed “smart pipes” fitted with nanosensors that measure everything from water quality to water flow. By connecting these sensors to a network – most likely the internet – we can build smart distribution systems. Further high-tech solutions are on the horizon, with the possibility of pipes that not only know when they’ve sprung a leak, but can repair themselves when it happens.

Today, sensors such as these have become so much cheaper and more readily available that we can employ them in a variety of areas to monitor pretty much anything. These technologies have the potential to improve the efficiency of delivering virtually everything, not just water. Having sensors in goods, products and household items also creates the potential for all kinds of efficient automation. For example, your house could identify when you’re running low on essential items and order them for you. But domestic uses are dwarfed by the potential business uses, where raw material orders could be programmed to match demand and streamline supply chains, minimizing waste to an extraordinary degree.
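The household reordering idea boils down to simple threshold logic over sensor readings. A minimal sketch – the item names, levels and reorder points below are entirely made up for illustration:

```python
# Latest sensor readings and reorder thresholds (hypothetical values, in grams)
inventory = {"coffee": 150, "rice": 900, "pasta": 80}
reorder_points = {"coffee": 200, "rice": 500, "pasta": 100}

def items_to_reorder(inventory, reorder_points):
    """Return the items whose measured level has fallen below its reorder point."""
    return sorted(item for item, level in inventory.items()
                  if level < reorder_points.get(item, 0))

print(items_to_reorder(inventory, reorder_points))  # ['coffee', 'pasta']
```

The business version of this – matching raw-material orders to demand across a supply chain – uses the same comparison, just fed by many more sensors and far richer demand models.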

The technology to create smarter delivery and distribution systems is already available. If we invest in it, minimizing the waste of resources can become part of a broader drive to produce and deliver goods more efficiently.



What I took from it

Abundance provides a breathtaking tour of key technologies and the implications of their projected exponential growth, giving us a glimpse of how they may develop and discussing the ways in which this will impact society.

From the potential role of robots and artificial intelligence in improving healthcare to the uses of nanotechnology and digital manufacturing in reducing waste and conserving natural resources, there are plenty of reasons to be optimistic that the future is not just bright, but may well be one of abundance.


My Rating