As computer algorithms proliferate, how will bias be kept in check?

 

What if machines and AI are subject to the same flaws in decision-making as the humans who design them? In the rush to adopt machine learning and AI algorithms for predictive analysis in all kinds of business applications, unintended consequences and biases are coming to light.

 

One of the thorniest issues in the field is how, or whether, to control the explosion of these advanced technologies, and their roles in society. These provocative issues were debated at a panel, “The Tyranny of Algorithms?” at the MIT CODE conference last month.

 

Far from being an esoteric topic for computer scientists to study in isolation, the broader discussion about big data has wide social consequences.

 

As panel moderator and MIT professor Sinan Aral told attendees, algorithms are “everywhere: They’re suggesting what we read, what we look at on the Internet, who we date, our jobs, who our friends are, our healthcare, our insurance coverage and our financial interest payments.” And, as he also pointed out, studies disturbingly show these pervasive algorithms may “bias outcomes and reify discrimination.”

 

Aral, who heads the Social Analytics and Large Scale Experimentation research programs of the Initiative on the Digital Economy, said it’s critical, therefore, to examine both the proliferation of these algorithms and their potential impact — both positive and negative — on our social welfare.

 

Growing Concerns

For example, predictive analytics are increasingly being used to estimate the risk of violent recidivism among prison inmates, and by police forces to guide resource allocation. Yet tests show that African-American inmates are twice as likely to be misclassified in the data. In a more subtle case, search ads exclude certain populations, or make false assumptions about consumers based solely on algorithmic data. “Clearly, we need to think and talk about this,” Aral said.
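To make the idea of measuring such misclassification concrete, here is a minimal sketch of the kind of group-level audit a researcher might run on a risk-scoring model. It is not drawn from any study cited at the panel; the data, column names, and groups are hypothetical.

import pandas as pd

# Hypothetical audit data: one row per person, with a group label, the model's
# high-risk prediction, and whether the person actually reoffended.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted_high_risk": [1, 0, 1, 1, 1, 0, 1, 0],
    "reoffended":          [0, 0, 1, 0, 0, 0, 1, 0],
})

def false_positive_rate(sub):
    # Share of people who did NOT reoffend but were still flagged as high risk.
    negatives = sub[sub["reoffended"] == 0]
    return (negatives["predicted_high_risk"] == 1).mean()

rates = df.groupby("group").apply(false_positive_rate)
print(rates)                       # a large gap between groups is one common signal of biased outcomes
print(rates.max() / rates.min())   # e.g., one group "twice as likely to be misclassified"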

 

Several recent U.S. government reports have voiced concern about improper use of data analytics. In an Oct. 31 letter to the EEOC, the president of the Leadership Conference on Civil and Human Rights wrote:

[The Conference] believes that big data is a civil and human rights issue. Big data can bring greater safety, economic opportunity, and convenience, and at their best, data-driven tools can strengthen the values of equal opportunity and shed light on inequality and discrimination. Big data, used correctly, can also bring more clarity and objectivity to the important decisions that shape people’s lives, such as those made by employers and others in positions of power and responsibility. However, at the same time, big data poses new risks to civil and human rights that may not be addressed by our existing legal and policy frameworks. In the face of rapid technological change, we urge the EEOC to protect and strengthen key civil rights protections in the workplace.

 

Even AI advocates and leading experts see red flags. At the CODE panel discussion, Harvard University Professor and Dean for Computer Science, David Parkes, said, “We have to be very careful given the power” AI and data analytics have in fields like HR recruitment and law enforcement. “We can’t reinforce the biases of the data. In criminal justice, it’s widely known that the data is poor,” and misidentifying criminal photos is common.

 

And Alessandro Acquisti, Professor of Information Technology and Public Policy at Carnegie Mellon University, told of employment and hiring test cases where extraneous personal information was used that should not have been included.

The panel, from left, Sinan Aral, David Parkes, Alessandro Acquisti, Catherine Tucker, Sandy Pentland and Susan Athey.

For Catherine Tucker, Professor of Management Science and Marketing at MIT Sloan, the biases in social advertising often stem from nuances and subtleties that machines don’t pick up. These are “the real worry,” she said; in her view, the problem is not that coders are sexist or that the data itself is flawed.

 

Nonetheless, discriminatory social media policies — such as Facebook’s ethnic affinity tool — are increasingly problematic.

 

Sandy Pentland — a member of many international privacy organizations such as the World Economic Forum Big Data and Personal Data initiative, as well as head of the Big Data research program of the MIT IDE — said that proposals for data transparency and “open algorithms” that include public input about what data can be shared, are positive steps toward reducing bias. “We’re at a point where we could change the social contract to include the public,” he said.

 

The Oct. 31 EEOC letter urged the agency “to take appropriate steps to protect workers from errors in data, flawed assumptions, and uses of data that may result in a discriminatory impact.”

 

Overlooking Machine Strengths?

But perhaps many fears are overstated, suggested Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business. In fact, most algorithms do better than people in areas such as hiring decisions, partly because people are more biased, she said. And studies of police practices such as “stop-and-frisk” show that the chance of having a gun is more accurately predicted when based on data, not human decisions alone. Algorithmic guidelines for police or judges do better than humans, she argued, because “humans simply aren’t objective.”

Athey pointed to “incredible progress by machines to help override human biases. De-biasing people is more difficult than fixing data sets.” Moreover, she reminded attendees that robots and ‘smart’ machines “only do what we tell them to do; they need constraints” to avoid crashes or destruction.

That’s why social scientists, business people and economists need to be involved, and “we need to be clear about what we’re asking the machine to do. Machines will drive you off the cliff and will go haywire fast.”

 

Ultimately, Tucker and other panelists were unconvinced that transparency alone can solve the complex issues of machine learning’s potential benefits and challenges — though global policymakers, particularly in the EU, see the merits of these plans. Pentland suggested that citizens need to be better educated. But others noted that shifting the burden to the public won’t work when corporate competition to own the algorithms is intensifying.

 

Athey summed up the “tough decisions” we face saying that “structural racism can’t be tweaked by algorithms.” Optimistically, she hopes that laws governing safety, along with self-monitoring by businesses and the public sector, will lead to beneficial uses of AI technology. Surveillance can be terrifying, she said, but it can also be fair. Correct use of police body cameras, with AI machines reviewing the data, for example, could uncover and solve systemic problems.

 

With reasonable governments and businesses, solid democracy, and fair news media in place, machines can serve more good than harm, according to the experts. But that’s a tall order, especially on a global scale. And perhaps the even more difficult task is defining — or dictating — what bias is, and what we want algorithms to do.

Theoretically, algorithms themselves could be designed to combat bias and discrimination, depending on how they are coded. For now, however, that design process is still the domain of very nonobjective human societies with very disparate values.

As a recent Harvard Business Review article stated:

“Big data, sophisticated computer algorithms, and artificial intelligence are not inherently good or bad, but that doesn’t mean their effects on society are neutral. Their nature depends on how firms employ them, how markets are structured, and whether firms’ incentives are aligned with society’s interests.”


Watch the panel discussion video here.

 

Originally published at ide.mit.edu.

In his new book, The Age of Em, the professor and futurist, Robin Hanson, gives us a peek at how the world of AI and brain emulations might actually work — and it’s more Sci than Fi.

 

There is general agreement that AI is already having a huge impact on our work and leisure lives, as well as society as a whole — and the effects are escalating rapidly. More open to debate is exactly what to expect and when. Long-term projections for many economists and robotics designers go only as far as the next five, or maybe 20, years, when driverless cars, elder-care assistants and automated factories will be the norm.

Robin Hanson takes another tack. The associate professor of economics at George Mason University is also a research associate at the Future of Humanity Institute of Oxford University. His multidisciplinary background includes a doctorate in social science from California Institute of Technology, master’s degrees in physics and philosophy from the University of Chicago, and nine years as a research programmer at Lockheed and NASA.

At a recent MIT IDE seminar, Hanson talked about his new book, The Age of Em: Work, Love and Life when Robots Rule the Earth, saying that the three most disruptive transitions in history were the introduction of humans, farming and industry. If a similar transition lies ahead, he says, “a good guess for its source is artificial intelligence in the form of whole-brain emulations, or “ems,” sometime in the next century.”

In the book, which Hanson describes as much more science than fiction, he outlines a baseline scenario set modestly far into a post-em-transition world where he considers computer architecture, energy use, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war — in other words, just about everything!

Hanson offered additional insights in a brief Q&A with IDE editor and content manager, Paula Klein, as follows:

 

Q: For those unfamiliar with your work, please describe your brain emulation hypothesis and the (perhaps, unsettling) post-em-transition society as a whole. Specifically, what will the economic areas of labor, wealth, careers and wages look like? What will be the role of humans?

 

A: Brain emulations, or “ems,” are one of the standard long-discussed routes by which we might achieve human-level artificial intelligence (AI) in the next century. I take a whole book to carefully analyze what the world looks like, at least for a while, if this is the first kind of cheap human level AI. Quickly, all humans must retire, though collectively, humans are rich and get richer fast. Ems congregate into a few very dense cities, leaving the rest of Earth to humans, at least for a while.

 

The em population and the em economy grow very fast, and wages fall so low that they barely cover the cost for an em to exist. Ems run at a wide range of speeds, typically, 1,000 times human speed. Most ems are copies of the best few hundred humans, are near a peak productivity middle age, and are short-lived “spurs” that last only a few hours before they end or retire. Ems that continue on longer have a limited useful career of perhaps a subjective century, after which they must retire to a slower speed.

 

The em era may only last a year or two, after which something else may happen, I know not what. But both humans and em retirees will care greatly about what that might be. For more details, I refer you to the book.

 

Q: Even some very astute AI researchers — including here at MIT — tend to promote humans and machines working together and complementing each other’s strengths and weaknesses. In “The Age of Em,” is this a viable option? Are the primary differences in perspective a matter of timing — e.g., your much longer timeframe vs. short-term views?

 

A: If there were only a few humans and a few machines, both might always have jobs, as there could be plenty of work for all. But when we can make more machines than we need, then there is the risk that machines could be better than humans at most all jobs, just as autos are now better than horses at most all transport tasks.

 

When machines are very different from humans, then it is more plausible that each type can have many jobs where they are better than the other. After all, we have a huge variety of tasks to do, and traditional machines and software are very different from humans. But because ems are so very similar to humans, they are also very close substitutes. So, if ems are much better at any jobs, they are probably better at most all jobs. And that is in the short term, soon after ems arrive; in the long term, the differences only get bigger.

 


Q: How do you distinguish your work from sci-fi? One book reviewer described the concept as a “dystopian, hellish cyberworld,” while many others praise your perceptiveness and scientific research. Are your ideas taken seriously by world leaders and governments? How do you think we should be preparing?

 

A: Ems are a possibility long-discussed in science fiction and futurism. I’m not advocating for this world; I’m just trying to describe a likely scenario. Aside from the fact that it has no characters or plot, my book should mainly be distinguished from science fiction by the fact that it makes sense at a detailed level. I’ve spent many years working it out, and draw on expertise in an unusually wide range of disciplines. I hope experts in each field will read it and publicly declare it to be mostly a reasonable application of basic results from their field. At the moment, however, the book is too new for world leaders to be even aware of it, much less take it seriously.

 

The em revolution won’t happen soon, but when it does appear it could happen within just five years. So rather than wait for clear signs before responding, we should just prepare and wait. For example, we should ensure that we all have assets or insurance to cover our needs in this scenario, be ready to quickly create enough regulatory flexibility to allow fast em development nearby, and consider teaching our children to be willing and ready to emigrate to this new civilization.

For most of the 20th Century, serious corporations judiciously invested in strategic research and development (R&D) efforts to innovate and bring new products to market. Dedicated teams got patents and top executives carefully monitored results. That traditional R&D paradigm is being challenged.

Michael Schrage, MIT IDE Visiting Scholar, writes that “Disciplined digital design experimentation and test cultures increasingly drive tomorrow’s innovations and strategies. Innovation investments emphasizing Research & Development (R&D) will increasingly yield to practices supporting Experiment & Scale (E&S).”

In today’s high-bandwidth, massively networked environments, he observes, so-called good ideas matter much less; testable hypotheses matter much more. “Tomorrow’s innovations and strategies will increasingly be the products — and byproducts — of real-time experimentation and testing.”

Schrage, who is also a Visiting Fellow in the Imperial College Department of Innovation and Entrepreneurship, discussed the E&S concept in a brief Q&A with MIT IDE Editor, Paula Klein. His blog on the topic can be read on the MIT Sloan Management Review web site here.  

 

Q: What’s wrong with the traditional corporate R&D model? Are there still cases – such as spinoffs or innovation labs – where R&D has value, say, for long-term or large-scale projects?

 

A: The classic, linear R&D models best exemplified by IBM’s Watson Labs, or the former Bell Labs, GE’s Schenectady, N.Y. headquarters, or Heinrich Caro’s BASF were effective for their time, but that time is past. Ironically, but appropriately, R&D breakthroughs over the past 25 years have shattered the traditional enterprise R&D model. Capital-intensive, linear, proprietary and ‘over-the-wall’ processes – that translated basic research into preliminary developments, and preliminary developments into prototypes and pilots, and pilots into production processes – have given way to innovation initiatives that are more op-ex than cap-ex. They are more open, agile, iterative, digital, networked, interdisciplinary, customer-centric and more user-aware than ever. These digital architectures and associated processes typically deliver innovation faster, better and cheaper than their analog predecessors.

 

Innovation is really no longer about better understanding of requirements; it focuses on creatively imagining compelling use cases.

After decades of squandering tens of billions of dollars and Euros boosting R&D budgets, sophisticated entrepreneurs and CEOs better appreciate the power and potential of human capital and creativity over financial capital and proprietary investment.

 

Innovation has become more of an ecosystem capability than a business process. The bottom line KPI is shifting from ‘what new proprietary products and services are coming out of our labs?’ to ‘How can our best customers and prospects get great value from our prototypes, design resources, algorithms and research communities?’

Today’s E&S does a measurably better job of answering that question than yesterday’s R&D. Between the Internet of Things and the ongoing rise of machine learning, the economics of E&S render much of traditional R&D an anachronistic albatross.

 

Q: Where does E&S usually originate: From the top down or from grassroots, bottom-up experimentation? How widespread is it at present?

 

A: If you’re Google, Amazon, Apple, Uber or Netflix, E&S originates from the top-down; there’s nothing like a founder’s imprimatur to get innovators and intrapreneurs to embrace E&S.  If you’re an IBM, PwC, Toyota, BASF or WalMart, there’s a mix. Top executives have to give permission – if not guidance – for an E&S ethos, but you’ve got to give the people closest to the customers and clients both the tools and incentive to experiment.

 

I find it impossible to come up with meaningful estimates as to how pervasive these practices now are, but I would confidently assert that they exist much in the way organizations once talked about shadow apps and bootlegged projects. I’m seeing a lot more cloud-enabled shadow experimentation and exploration with clients, customers and channels. I’m seeing it with digital agencies on the marketing side and with contract manufacturers and post-industrial designers on the supply-chain side.

 

When you look at the innovation tempo and customer response enjoyed by a Netflix, an Amazon or a Facebook, you see that E&S is integral to their current success.

 

Q: How important is corporate culture in the success of a virtual research center? Are there environments where E&S just won’t work?

 

A: This is the question I find most irksome and frustrating. There’s no escaping the painful truth that culture matters most. Most MBAs – and far too many engineers – are educated, trained and rewarded for coming up with plans and analyses rather than running simple, fast, cheap and scalable experiments. In 1995 – maybe even as late as 2005 – the enterprise economics of planning, analysis and pilots were cost-effective innovation-process investments. And while that hasn’t been true for a decade, too many established organizations have legacy innovation approaches and processes that treat E&S as an end-of-pipeline practice rather than a wellspring of disruptive innovation, inspiration and insight.

It’s reminiscent of the painful 1980s/1990s phenomenon of inspecting quality in instead of designing quality in from the start.

 

These issues, indeed, relate more to cultural inertia than technical competence or economic cost. Plainly put, E&S doesn’t work in executive environments where validating plans is valued over learning-by-doing. Today, leading-by-example has to embrace leading-by-experiment.

Blockchain can help secure digital transactions while also protecting user privacy

 

By Alex Pentland

 

Note: Professor Alex Pentland of MIT’s Media Lab will host a panel on Big Data 2.0: Next-Gen Privacy, Security, and Analytics, at the upcoming MIT CIO Symposium on May 18. This article focuses on one aspect of his work.

 

 

The exponential growth of mobile and ubiquitous computing, together with big data analysis, is transforming the entire digital landscape, or “data ecology.” These shifts are having a dramatic impact on people’s personal data-sharing awareness and sensitivities as well as their cybersecurity: The more data created and shared, the more concerns rise. Recently, apprehensions have reached critical mass regarding privacy and the use of personal data, partially due to media exposure of cybersecurity breaches and intelligence scandals. The surge of mobile transactions, micropayments, and connected sensors in both private and public spaces is expected to further exacerbate this tension.

What’s needed is a “new deal on data” where security concerns are matched with transparency, control and privacy, and are designed into the core of any data-driven service [1].

In order to demonstrate that such a win-win data ecology is possible, my students and I have developed Enigma, a decentralized computation platform that enables different participants to jointly store and run computations on data while keeping the data completely private [2]. Enigma promotes a viable digital environment by supporting four key requirements:

  • That data always be encrypted
  • That computation happens on encrypted data only
  • That data owners will control access precisely, absolutely and with an audit trail
  • That there are means to reliably enable payment to data owners for use of their data

 

Data-owner Control
From the user’s perspective, Enigma is a hosted computer cloud that ensures both the privacy and integrity of their data. The system also allows any type of computation to be outsourced to the cloud while guaranteeing the privacy of the underlying data and the correctness of the result. A core feature of the system is that it allows data owners to define and control who can query it, ensuring that the approved parties learn only the output. Moreover, no other data leaks to any other party.

The Enigma cloud itself comprises a network of computers that store and execute queries. Using secure multi-party computation, each computer only sees random pieces of the data, preventing information leaks. Furthermore, queries carry a micro-payment to the provider of the computing resources, as well as a payment to the users whose data is queried, thus providing the foundation for the rise of a sustainable, secure data market.


Secure Storage
To illustrate how the Enigma platform works, consider the following example: a group of data analysts of an insurance company wishes to test a model that leverages people’s mobile phone data. Instead of sharing their raw data with the data analysts in the insurance company, customers can securely store their data in Enigma, and only provide the data analysts with permission to execute their study. The data analysts, therefore, are able to execute their code and obtain the results, but nothing else. In the process, the users are compensated for giving access to their data, and the owners of the network computers are paid for their computing resources.
Three types of entities are defined in Enigma, and each can play multiple roles (see Figure 1). Owners are those sharing their data into the system and controlling who can query it; Services, if approved, can query the data without learning anything else beyond the answer to their query; and Parties (or computing parties) are the nodes that provide computational and storage resources but they only see encrypted or random bits of information. In addition, all entities are connected to a blockchain, as shown below.

Figure 1. Overview of Enigma’s decentralized platform.
When owners share data, the data is split into several random pieces called shares. Shares are created in a process of secret-sharing, and they perfectly hide the underlying data while maintaining some necessary properties. This allows them to be queried later in this masked form. Since users in Enigma are owners of their data, we use the blockchain as a decentralized, secure database that is not owned by any party. This also permits an owner to designate which services can access its data and under what conditions, and it permits parties to query the blockchain and ensure that it holds the appropriate permissions. In addition to being a secure and distributed public database, the blockchain is also used to facilitate payments from services to computing parties and owners, while enforcing correct permissions and verifying that queries execute correctly.
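To illustrate the secret-sharing step in isolation, here is a minimal sketch of additive secret sharing over a large prime field. It is a generic textbook construction, not Enigma’s actual implementation, and the values and party counts are made up:

import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def split_into_shares(value, n_parties):
    """Additive secret sharing: n random-looking shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares):
    return sum(shares) % PRIME

# A data owner splits a private value (say, monthly phone usage in minutes)
# among three computing parties; no single share reveals anything about it.
owner_value = 4217
shares = split_into_shares(owner_value, 3)
assert reconstruct(shares) == owner_value

# Because the scheme is additive, parties can sum their shares locally and the
# reconstructed total equals the sum of the underlying private values.
other_value = 1830
other_shares = split_into_shares(other_value, 3)
summed_shares = [(a + b) % PRIME for a, b in zip(shares, other_shares)]
assert reconstruct(summed_shares) == owner_value + other_value
print("secure sum:", reconstruct(summed_shares))

Because the shares are additive, simple aggregate queries can be answered by combining shares without ever reconstructing any individual value; Enigma’s full protocol layers access control, payments and general computation on top of constructions like this.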

Policy Questions
In summary, a sustainable data ecology requires that data is always encrypted, and that computation happens on encrypted data only. It also requires that owners of the data control access to their data precisely, absolutely and in a way that can be audited. Finally, it requires that data owners are reliably paid for use of their data. Enigma accomplishes these requirements, providing proof that a sustainable, secure data ecology is possible.
The major design question remaining about this ecology is one of policy: What is the trade-off between user security and access by law enforcement and intelligence services? In the current Enigma system this trade-off is handled by leaving metadata encrypted, but visible. Other trade-offs are possible, including full anonymization, or building in the ability for court-ordered investigators to penetrate anonymity through zero-knowledge proofs (which is different from back-door approaches).
For additional information see http://trust.mit.edu
[1] Pentland, A. Reality mining of mobile communications: Toward a new deal on data. World Economic Forum Global IT Report 2008, Chapter 1.6, (2008), 75–80.
[2] Zyskind, G., Nathan, O., and Pentland, A. (2015). Decentralizing privacy: Using blockchain to protect personal data. In Proceedings of IEEE Symposium on Security and Privacy Workshops, 180–184.

The banking industry has long been one of the major users of IT; among the first to automate its back-end and front-office processes and to later embrace the Internet and smartphones.  However, banking has been relatively less disrupted by digital transformations than other industries. In particular, change has come rather slowly to the world’s banking infrastructure.

 

“With advances in technology, the relationship that customers have with their bank and with their finances has changed…” notes a recently released Citigroup report,  Digital Disruption: How FinTech is Forcing Banking to a Tipping Point. “So far these have been seen more as additive to a customer's banking experience…

Despite all of the investment and continuous speculation about banks facing extinction, only about 1% of North American consumer banking revenue has migrated to new digital models… we have not yet reached the tipping point of digital disruption in either the U.S. or Europe.”

 

Recently, I discussed some of the highlights of Citi’s excellent FinTech report. Investments in financial technologies have increased by a factor of 10 over the past five years. The majority of these investments have been concentrated in consumer payments, particularly on the user experience at the point of sale, while continuing to rely on the existing legacy payment infrastructures.  I’d like to now focus on the potential evolution of the backbone payment infrastructures.

 

Transforming this highly complex global payment ecosystem has proved to be very difficult. It  requires the close collaboration of its various stakeholders, including a variety of financial institutions, merchants of all sizes, government regulators in just about every country, and huge numbers of individuals around the world. All these stakeholders must somehow be incentivized to work together in developing and embracing new payment innovations. Not surprisingly, change comes slowly to such a complex ecosystem.

 

The Promise of Blockchain

But sometimes, the emergence of an innovative disruptive technology can help propel change forward. The Internet proved to be such a catalyst in the transformation of global supply chain ecosystems. Could blockchain technologies now become the needed catalyst for the evolution of legacy payment ecosystems?

 

The blockchain first came to light around 2008 as the architecture underpinning bitcoin, the best known and most widely held digital currency. Over the years, blockchain has developed a following of its own as a distributed database architecture with the ability to handle trust-less transactions, where no parties need to know or trust each other for transactions to complete. Blockchain holds the promise to revolutionize the finance industry and other aspects of the digital economy by bringing one of the most important and oldest concepts, the ledger, to the Internet age.

 

Ledgers constitute a permanent record of all the economic transactions an institution handles, whether it’s a bank managing deposits, loans and payments; a brokerage house keeping track of stocks and bonds; or a government office recording births and deaths, the ownership and sale of land and houses, or legal identity documents like passports and driver’s licenses. Over the years, institutions have automated their original paper-based ledgers with sophisticated IT applications and databases.

 

But while most ledgers are now digital, their underlying structure has not changed. Each institution continues to own and manage its own ledger, synchronizing its records with those of other institutions as appropriate - a cumbersome process that often takes days. While these legacy systems operate with a high degree of robustness, they’re rather inflexible and inefficient.

 

In a recent NY Times article, tech reporter Quentin Hardy nicely explained the inefficiencies inherent in our current payment systems.

“In a world where every business has its own books, payments tend to stop and start between different ledgers. An overseas transfer leaves the ledger of one business, then goes on another ledger at a domestic bank. It then might hit the ledger of a bank in the international transfer system.  It travels to another bank in the foreign country, before ending up on the ledger of the company being paid. Each time it moves to a different ledger, the money has a different identity, taking up time and potentially causing confusion. For some companies, it is a nightmare that can’t end soon enough.”

Blockchain-based distributed ledgers could do for global financial systems what the Internet has done for global supply chain systems.

As Citi’s Digital Disruption report notes, blockchain technologies “could replace the current payment rail of centralized clearing with a distributed ledger for many aspects of financial services, especially in the B2B world… But even if Blockchain does not end up replacing the core current financial infrastructure, it may be a catalyst to rethink and re-engineer legacy systems that could work more efficiently.”  The report goes on to explain why the blockchain might well prove to be a kind of Next Big Thing.

 

Decentralized and Disruptive

“Blockchain is a distributed ledger database that uses a cryptographic network to provide a single source of truth. Blockchain allows untrusting parties with common interests to co-create a permanent, unchangeable, and transparent record of exchange and processing without relying on a central authority.  In contrast to traditional payment model where a central clearing is required to transfer money between the sender and the recipient, Blockchain relies on a distributed ledger and consensus of the network of processors, i.e. a supermajority is required by the servers for a transfer to take place. If the Internet is a disruptive platform designed to facilitate the dissemination of information, then Blockchain technology is a disruptive platform designed to facilitate the exchange of value.”
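As a rough illustration of the “single source of truth” idea described above, here is a toy append-only ledger in which every record commits to the hash of the previous one, so any tampering with history is detectable. It is a simplified sketch, not a real blockchain: there is no network, consensus protocol, or digital signature.

import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the hash of the previous block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transaction):
    prev_hash = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev_hash, "transaction": transaction})

def is_valid(chain):
    # Every block must reference the hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"from": "bank_a", "to": "bank_b", "amount": 100})
append_block(ledger, {"from": "bank_b", "to": "supplier", "amount": 40})
print(is_valid(ledger))   # True

ledger[0]["transaction"]["amount"] = 1_000_000   # tampering with history...
print(is_valid(ledger))   # ...is immediately detectable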

 

The report summarizes some of the blockchain key advantages:

  • Disintermediation: Enables direct ownership and transfer of digital assets while significantly reducing the need for intermediary layers.
  • Speed & Efficiency: The reengineering - i.e., reduction - of unnecessary intermediate steps will ultimately result in faster settlements, lower overall costs and more efficient business models.
  • Automation: Programmability enables automation of capabilities on the ledger (e.g., smart contracts) that can be executed once agreed-upon conditions are met.
  • Certainty: System-wide audit trails make it possible to track the ownership history of an asset, providing irrefutable proof of existence, proof of process and proof of provenance.

But much, much work remains to be done. Blockchain is still at the bleeding edge, lacking the robustness of legacy payment systems. Distributed ledger systems have only been around for less than a decade, and are thus quite immature compared to the existing, decades-old financial infrastructures. While legacy payment infrastructures are complicated, inefficient and inflexible, they actually work quite well, being both safe and fast. Replacing them will be a tough and lengthy undertaking, no matter how innovative and exciting the new technologies might be.

 

It’s too early to know if the blockchain will join the pantheon of Next Big Things and become a major transformational innovation. As we’ve seen with other such successful innovations - e.g., the Internet, the Web, Linux - collaborations between universities, research labs, companies and government agencies are absolutely essential. So are close collaborations among technology developers and users in order to get the architecture right, agree on open standards, develop open source platforms and set up governance processes embraced by all.

 

In a short number of years, blockchain technologies have made a lot of progress (see http://blog.irvingwb.com/blog/2014/02/reflections-on-bitcoin.html). We might well be close to an ecosystem-wide FinTech tipping point. It will be fascinating to see how it all plays out in the years to come.

 

The complete blog was first posted April 18 here.

Whose on-demand economy is it anyway? That seemed to be a sub-theme of the recent On-Demand Economy conference held by MIT’s Initiative on the Digital Economy. Not only did discussions focus on who will own and operate the actual platforms and technology infrastructure in today’s nascent markets, but also on who will use them, who will benefit and who will lose out.

 

Uber and Airbnb are superstars of today’s digital economy. And while they believe in the strength and viability of their platform-based models and mobile-app strategies, they also are examining their meteoric rise and considering their next moves.

During a panel discussion, economist Jonathan Hall, of Uber, said the company epitomizes the on-demand economy, because riders don’t have to book in advance and drivers work at a moment’s notice. “They turn on the phone and can access customers and work.”

At the same time, if Uber drivers are attracted to flexible hours and self-reliance, they can just as easily work for Lyft or another on-demand platform service provider, Hall said. “Other companies will compete with Uber for workers. We have to keep innovating.”

 

More competition should yield more opportunities, lower prices and better service. But in the on-demand economy, traditional assumptions are being upended leaving academics and policymakers to sort out the payback from the pitfalls.

 

Not a Zero-sum Game

Peter Coles, an economist at Airbnb, also acknowledged the growing pains and complexity of sustaining success in a nascent, constantly changing and competitive marketplace. And the new models may not be a zero-sum game. For instance, Coles noted that the hotel industry remains robust in spite of Airbnb’s expansion. Other on-demand service providers, such as Thumbtack, see lower barriers to entry for small business owners as a result of the service’s accessibility and low-cost model — but the platform hasn’t really scaled up yet.

 

Larry Mishel, President of the Economic Policy Institute, reminded attendees of the infancy of the current market, noting that digital platforms like Uber “scratch an itch” but are still small, particularly from a jobs-creation perspective.

“Uber employs only about half of 1% of the private-sector workforce,” Mishel said, and most drivers seek supplemental income, not basic wages. “That’s not the future of work,” but merely a “distraction,” he added.

Mishel has said that Uber drivers should be considered employees, not independent contractors, since they are on-call for the company when they are working.

 

While “on-demand is quickly becoming table stakes for commerce-oriented businesses,” as one report noted, a few speakers said that the biggest winners seem to be in select groups: VCs funding billion-dollar “unicorns” — i.e. Uber, which is valued at more than $50 billion with hardly any physical assets; middle-class homeowners who can rent out unused bedrooms on Airbnb, and transitional workers who can manage on part-time wages with no benefits.

 

(From left, Jon Lieber of Thumbtack, Jonathan Hall, Peter Coles and Larry Mishel.)


A History of Temporary Workers

Mishel, as well as Lee Dyer, Chair of the Department of Human Resource Studies at the ILR School, Cornell University, pointed out that temporary workers are not new to the labor force and that most wages have been stagnant for 12 years. “We haven’t had lots of choices in the job market,” Mishel said. “That’s why they want the supplemental work. Opportunities are great, but why is it that so many people need this extra work? Let’s study that, too.”

 

Dyer noted that 80% of businesses use workers who are not full-time. “That means lots of important work is done by outside firms and employees.” His work focuses on the organizational impact and the need for better metrics to manage the workforce more strategically. “Most businesses don’t know how much it’s costing them to get the work done,” he said, and that can hurt agility, costs and revenue.

 

Taking a big-picture view, U.S. Sen. Mark Warner (D-VA) wants to look at the future of work and “re-imagine the social contracts of the 20th Century for the 21st Century.” Like him, many attendees were interested in examining — and offering solutions for — some of the most vexing macro-economic issues we face: global demographic shifts, unemployment, automation and productivity.

 


In his keynote address, Warner (left) spoke cogently about the current on-demand economy transformation of the workforce, saying it needs “urgent” attention, including discussion, ideas, local and state partnerships with business, and “real-time efforts of social change and policy.” With the U.S. Congress in a tailspin, Warner is hoping The Aspen Institute’s Future of Work Initiative, a group he co-chairs, will lead the way. The group bills itself as a “nonpartisan effort to identify concrete ways to strengthen the social contract in the midst of sweeping changes in today’s workplace and workforce.”

   Warner noted that “21st-century innovation is transforming the American workplace far faster than a  20th-century government and 1930s social contract can keep pace.”


In addition to the on-demand economy, the Initiative focuses on “Capitalism 2.0”: “How best to inspire a 21st-Century capitalism for a 21st-Century workforce by rewarding employers for reducing inequality, helping workers get ahead, and facilitating access to benefits and protections to secure workers’ futures.” These goals align closely with those of the MIT Inclusive Innovation Competition, which held a showcase at the conference.

 

Global Labor Tipping Point?

From a global perspective, Jonas Prising, ManpowerGroup Chairman and CEO, sees structural changes since the last recession that have permanently altered talent acquisition and the supply and demand of workers. Minimum wage is a global discussion and job mobility is a challenge for many. Individuals change jobs more often, populations are aging and technology has increased productivity and lowered prices, he told attendees. This “confluence of factors, along with pervasiveness and speed,” represent a tipping point in today’s labor market that’s different than the past.

Specifically, Prising sees a huge shift to “employment security, not job security,” even though most economies are still tied to old models of job security. Protecting jobs at all costs will lead to greater losses and unemployment, he said.

 

It’s an exciting time for innovation and growth, Prising said on a panel led by MIT IDE co-director, Andy McAfee, but there is also more workforce polarization and unemployment. Outside of the U.S., attempts to equalize the labor market are more common: France offers paid job training, and countries such as Denmark and Finland encourage nontraditional co-employment options as well as salaries during job transition periods.

 

Overall, the next several years may be a difficult transition period to new economic models. Dyer said that even full-time jobs are temporary these days. Two- to three-year “tours of duty” are common, but those arrangements are only positive if everyone agrees to them and an infrastructure is in place to offer compensation and benefits. Prising summed up the labor situation by saying: “Off the table now is the promise that you come and work for 40 years and we’ll take care of you. On the table is the notion that we’ll provide new skills and opportunities,” at least for a while.

 

(From left, Jonas Prising, Lee Dyer and panel moderator, MIT IDE co-director, Andy McAfee)




Read related blogs about the event here and here.

Growing opportunities to collect and leverage digital information have led many managers to change how they make decisions – relying less on intuition and more on data. As Jim Barksdale, the former CEO of Netscape quipped, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” Following trailblazers such as Caesar’s CEO Gary Loveman – who attributes his firm’s success to the use of databases and cutting-edge analytical tools – managers at many levels are now consuming data and analytical output in unprecedented ways.

 

This should come as no surprise. At their most fundamental level, all organizations can be thought of as “information processors” that rely on the technologies of hierarchy, specialization, and human perception to collect, disseminate, and act on insights. Therefore, it’s only natural that technologies delivering faster, cheaper, more accurate information create opportunities to re-invent the managerial machinery.

 

At the same time, large corporations are not always nimble creatures. How quickly are managers actually making the investments and process changes required to embrace decision-making practices rooted in objective data? And should all firms jump on this latest managerial bandwagon?

We recently worked with a team at the U.S. Census Bureau and our colleagues Nick Bloom of Stanford and John van Reenen of the London School of Economics to design and field a large-scale survey to pursue these questions in the U.S. manufacturing sector. The survey targeted a representative group of roughly 50,000 American manufacturing establishments.

 

Our initial line of inquiry delves into the spread of data-driven decision making, or “DDD” for short. We find that the use of DDD in U.S. manufacturing nearly tripled between 2005 and 2010, from 11% to 30% of plants. However, adoption has been uneven. DDD is primarily concentrated in plants with four key advantages: 1) high levels of information technology, 2) educated workers, 3) greater size, and 4) better awareness.

 

Four factors are driving data-driven decision-making:

 

IT: DDD is more extensive in firms that have already made significant IT investments. Quite intuitively, firms make better use of DDD when they have more sophisticated IT to track, process, and communicate data. Likewise, they enjoy higher returns from IT when it guides decision-making and action at the firm.

 

College degrees: Having a larger share of workers (including both managers and non-managers) with Bachelor’s degrees also predicts the use of DDD. This may reflect the way formal education can make people more comfortable with quantitative and data-centric ways of understanding the world.

 

(Figure: DDD adoption rates, single-plant vs. multi-plant firms)

 

Size: Both single-plant firms, and those with multiple plants are increasing their reliance on DDD at roughly the same rate. However, single-plant establishments are still at less than half the adoption level of their bigger brethren (see Figure at right). That’s no surprise — plants that belong to larger, multi-unit firms have the advantage of being able to learn from each other and share infrastructure.

 

Awareness: Last but not least, even DDD-ready firms may lag behind due to a simple lack of awareness about its benefits. In order to adopt DDD, firms first have to learn about emerging practices and how they might work (or not) in their particular organization. Plants that report a larger number of opportunities to learn about new management practices – like hearing about it from other units of the same firm, from outside consultants or new employees, or from trade associations or supply chain partners – are far more likely to report being at the frontier of data-driven decision making. If you share this article with your co-workers, you might see your own firm’s use of DDD jump up a notch.
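For readers who want a feel for what “predicts the use of DDD” means in practice, the toy sketch below fits a logistic regression of adoption on the four factors using simulated plant-level data. The data, coefficients, and variable names are invented for illustration and are not the Census survey data used in our study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical plant-level measures: IT intensity, share of workers with a BA,
# a multi-plant indicator, and an "exposure to new practices" score.
it_intensity  = rng.normal(size=n)
college_share = rng.uniform(0, 1, size=n)
multi_plant   = rng.integers(0, 2, size=n)
awareness     = rng.normal(size=n)

# Simulated adoption: more likely when the four factors are high (toy coefficients).
logits = -1.0 + 0.8 * it_intensity + 1.5 * college_share + 0.7 * multi_plant + 0.6 * awareness
adopted = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(np.column_stack([it_intensity, college_share, multi_plant, awareness]))
model = sm.Logit(adopted, X).fit(disp=False)
print(model.summary(xname=["const", "it", "college", "multi_plant", "awareness"]))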

 

For all its benefits, DDD may not be the path to salvation for every firm. Even managers who have received the DDD gospel may oversee environments that do not permit reliable data collection. For many types of decisions, especially those for which little quantitative data exist, the broader knowledge and experience of leaders still outperforms purely data-driven approaches. Furthermore, the costs of moving to the DDD frontier are not trivial, and may outweigh the benefits – particularly if the scale of operations is just too small.

That said, the tripling of DDD rates in just five years suggests that firms are overcoming any implementation barriers quite rapidly. Our analysis sheds considerable light on what makes DDD a good fit for a wide range of firms. Yet even among plants that are, on paper, likely adopters, only a minority had adopted DDD by the end of our sample period in 2010.

We expect adoption to continue to trend upward, as technology costs fall, management practices evolve, and awareness spreads.

 

Our follow-on research is focused on pinning down how much firms may expect to benefit from DDD, and on discovering the ingredients for success in different settings. No doubt the hype surrounding big data and analytics is great. However, our results offer objective empirical evidence that there is something beyond the hype: firms are rapidly adopting DDD and fundamentally changing how they approach management in the digital age.

 

Kristina McElheran is an Assistant Professor of Strategic Management at the University of Toronto and a Digital Fellow at the MIT Initiative on the Digital Economy.

Erik Brynjolfsson is the Schussel Family Professor at MIT’s Sloan School of Management and the director of the MIT Initiative on the Digital Economy.


This blog first appeared in Harvard Business Review on Feb. 3, 2016 here.

For me, the best thing about [the 2015] World Economic Forum in Davos was an exposure to worldviews very different from my own. Professionally, I hang around mainly with technologists, entrepreneurs, businesspeople, and economists at American universities.

 

People within these groups certainly don’t agree with each other all of the time, or with me, but most of us do share some baseline assumptions on important topics. These include:

  • Creative destruction is good news. Better products take market share from inferior ones, more nimble and innovative companies displace slow and sleepy older ones, and entire industries — like those for cameras, film, and standalone GPS devices — can be swept away by something as simple as a smartphone. This process should be encouraged, even though it’s not pleasant for all parties involved, and even though it leads to job loss and worker dislocation.
  • Markets allocate better than bureaucrats do. Economist Alan Blinder put it beautifully: “I believe every mainstream economist sees the invisible hand as one of the great thoughts of the human mind… Throughout recorded history, there has never been a serious practical alternative to free competitive markets as a mechanism for delivering the right goods and services to the right people at the lowest possible costs.”
  • There is such a thing as too much regulation. Almost all of my colleagues would agree that regulation and licensing are necessary for protecting public health and safety (as Eduardo Porter pointed out: “I’m reassured that if I ever need brain surgery, the doctor performing it will have been recognized by the profession to be up to the task.”). But it can go too far. Studies have found, for example, that requiring licenses for too many jobs can hurt employment. And I have absolutely no idea why courts in some countries second-guess the names parents give their children (plus, I kind of like “Fraise”).
  • Business is not the enemy. It’s certainly not the case that companies always and everywhere do only good, or that in looking after their interests they inevitably advance our own. But they are the source of the great majority of the goods and services we enjoy, and they provide most of the jobs and wages. As they compete with each other to satisfy our needs and whims, they make our lives better.
  • The state can’t provide jobs to everybody. The totalitarian promise of centralized full employment couldn’t stand in the real world. The government’s proper role is instead to set up an economic environment that’s conducive to private sector job and wage growth. The vast majority of people I interact with would also agree that the government should provide a safety net for those who fall too far behind, and to take care of orphans, the mentally ill, and other vulnerable groups.
  • We’re right about these things. Virtually all my colleagues believe that the statements above are no longer open for debate among serious people. Theory, experiment, and especially experience have shown that they’re correct.

Davos was a revelation for me because I came across serious, smart, and influential people who didn’t appear to accept these statements nearly as wholeheartedly as I do. And these people were not from strange or faraway lands (if there were delegations from North Korea or Cuba at the meeting, I didn’t come across them). Instead, they were Europeans, my first cousins in the global family.

 

In Switzerland, I moderated an open forum session titled “Employment: Mind the Gap?” I was the only American on stage, and there was only one representative from the private sector on the panel. Two European trade unionists, a French economist, and the prime minister of Sweden (himself a former trade unionist) made up the rest of the speakers.

 

I found the discussion fascinating because once we got past the initial uncontroversial remarks (yes, education is important; yes, we must all work together…) we got into a conversation about the right way to mind the gap. As it unfolded, I came to the conclusion that the majority of people on stage did not share my economic worldview as expressed in the statements above. Instead, they seemed to believe much more strongly in government planning, programs, and protections as the best way to ensure good jobs and wages. And they seemed willing to sacrifice some flexibility, decentralization, and innovation — perhaps a lot — in the pursuit of stability and prosperity for workers.

 

I pointed out that economic data from the European continent in recent years was not encouraging for this economic worldview, and my onstage popularity as a moderator dipped sharply. Panelists responded, correctly, that this was in part because of differing responses to the Great Recession. They seemed less willing to engage with the idea that it might also be because several European countries were trying to fight the uncomfortable dynamism of today by making sure those who had jobs yesterday would not lose them.

 

In case I wasn’t clear enough in Davos, let me be clear here: I don’t think this will work. To paraphrase Churchill, countries today have a choice between turbulence and anaemia. If they choose anaemia, they will still have turbulence.

 

 

This blog first appeared on FT.com Jan. 29, 2015 here.

Two leading economists and professors—NYU’s Paul Romer and Chad Jones of Stanford—shared macroeconomic insights about growth and inequality at separate MIT IDE seminars recently.

 

Jones, Professor of Economics at the Graduate School of Business at Stanford University, was a PhD student when he took a class of Romer’s at the University of Rochester in 1984. Six years later, Romer — a graduate of the University of Chicago business school — published a seminal paper on Endogenous Technological Change about economic growth and “non-rival goods” that multiply and yield rapid innovation and growth. “The paper changed the way economists understand growth,” Jones wrote recently on the 25th anniversary of the publication.

 

Romer (pictured at right) went on to become an urban-planning policy adviser, entrepreneur, NYU professor, and director of the Marron Institute of Urban Management. He and Jones have collaborated over the years on growth-theory research.

 

At an October IDE seminar, Romer described himself as an economic optimist because he takes a long-term view when it comes to analysis and the potential of new ideas. Using historical data as the context for examining current conditions, Romer sees “a much more developed system of learning now [than in the past]... There’s a world out there of incredible possibilities,” and ideas are the greatest non-rival goods of all.

 

“When we perceive a threat, we don’t see other things around it,” he said. “We have to get out of that state...that distorts our collective learning” and use science to analyze financial markets. "Two decades ago, economists were preoccupied with threats posed by inflation that would spiral ever higher. They were resigned to a permanent slowdown in productivity growth. Looking back, everyone was too pessimistic about the rate of progress,” he said. “Their fears did not even get the signs right...economists are now struggling with the threat of inflation that is too low.”

 

The Case for Theory

During his talk, as well as in his blog, Romer questioned the current trend away from economic theory. Without theory, you can’t “explain what’s going on in context and why things occasionally fall apart.” Big data offers unlimited observations, for example, but the challenge is to aggregate the data and find a theory that you can use to manage, test and narrow down data points for specific outcomes, he said.

 

Also in October, Jones (pictured below, left) spoke at the IDE about his "Schumpeterian Model of Top Income and Inequality." He pointed out that while top income inequality rose sharply in the United States over the last 35 years, it increased only slightly in economies like France and Japan.

 

Jones, a Research Associate of the National Bureau of Economic Research, also considers himself an optimist. At his talk, he noted that the development of the Internet and “a reduction in top tax rates are examples of changes that raise the growth rate of entrepreneurial incomes and therefore increase Pareto inequality.” However, he said, policies that stimulate “creative destruction” reduce top inequality. Examples cited include “research subsidies, or a decline in the extent to which incumbent firms can block new innovation. Differences in these considerations across countries and over time, perhaps associated with globalization, may explain the varied patterns of top income inequality that we see in the data.”

 

I'm still left wondering what economists can tell those struggling with global poverty today. How long will they have to wait to see shifts? For that ray of hope, I find the work of folks like Ann Mei Chan – who is exploring ways to use digital technology to end poverty – something to be really optimistic about.

 

What are your thoughts?

A few months ago I attended the 12th annual Brookings Blum Roundtable on Global Poverty, a meeting that brought together around 50-60 policy and technical experts from government, academia, business, the investment community, and NGOs.  This year’s event was focused on the impact of digital technologies on economic development in emerging and developing countries.  It was organized around six different sessions, each of which explored the links between technology and development through a different lens.

 

Prior to the event, the Brookings Institution commissioned a policy brief for each session to help set the stage for the ensuing discussions. I wrote one of the briefs, Will the Digital Revolution Deliver for the World’s Poor? I would now like to discuss another brief that I found quite interesting: Will the Spread of Digital Technologies Spell the End of the Knowledge Divide? by Deepak Mishra, lead economist at the World Bank. Dr. Mishra is also co-director of the World Development Report 2016: Internet for Development, which will shortly be released by the World Bank.

 

As discussed in my policy brief, Internet access and mobile phones are being rapidly transformed from a luxury to a necessity that more and more people can now afford.  Advances in technology keep expanding the benefits of the digital revolution across the planet. A 2013 McKinsey study estimates that over the coming decade, up to 3 billion additional people will connect to the Internet through their mobile devices, enabling them to become part of the global digital economy.

 

Our digital revolution is accelerating three important trends that should significantly improve the quality of life of the world’s poor: businesses are developing offerings specifically aimed at lower-income customers; governments are improving access to public and social services including education and health care; and mobile money accounts and digital payments are increasing financial inclusion.

 

In addition, notes Mishra, the digital revolution is significantly expanding the availability of knowledge, thus leading to an increasingly global knowledge-based society. But this evolution comes with some very important caveats:

“Evidence suggests that digital technologies are in fact helping to expand knowledge, but are not succeeding in democratizing it. That is, digital technologies are helping to bridge the digital divide (narrowly defined), but are insufficient to close the knowledge divide.  Democratizing knowledge is more than a matter of connectivity and access to digital devices.  It requires strengthening the analog foundations of the digital revolution - competition, education (skills), and institutions - that directly affect the ability of businesses, people, and governments to take full advantage of their digital investments.”

 

Let’s look a little closer at what’s entailed in these three “analog foundations of the digital revolution.”

  • Regulations that promote competition: Lowering the cost of starting firms, avoiding monopolies, removing barriers to the adoption of digital technologies, ensuring the efficient use of technology by businesses, enforcing existing regulations, …
  • Education and skill development: Basic IT and digital literacy, helping workers adapt to the demands of the digital economy, preparing students, managers and government officials for an increasingly digital world, facilitating lifelong learning, …
  • Institutions that are capable and accountable: Empowering citizens through digital platforms and information, e-government services, digital citizen engagement, increased incentives for good governance in both the public sector and private firms, …

 

Digital technologies are necessary, but not sufficient. Countries must also strengthen these analog foundations to realize the benefits of their technology investments as well as narrow their knowledge divide.

 

Mishra’s observations bring to mind similar discussions around the impact of technology on business and economic productivity.  In their 2009 book, Wired for Innovation: How Information Technology is Reshaping the Economy, Erik Brynjolfsson and Adam Saunders wrote about the impact of technology-based innovation on business productivity:

“The companies with the highest returns on their technology investments did more than just buy technology; they invested in organizational capital to become digital organizations. Productivity studies at both the firm level and the establishment (or plant) level during the period 1995-2008 reveal that the firms that saw high returns on their technology investments were the same firms that adopted certain productivity-enhancing business practices.  The literature points to incentive systems, training and decentralized decision making as some of the practices most complementary to technology.”

 

Organizational capital is a very important concept, critical to enabling companies to take full advantage of their technology investments. Similarly, at the country level, nations must strengthen their analog foundations to realize the full benefits of their digital investments.

 

Digital technologies have been diffusing around the world at an unprecedented rate. “The average diffusion lag is 17 years for personal computers, 13 years for mobile phones, and five years for the Internet, and is steadily falling for newer technologies.” The world is more connected than ever before. But Mishra reminds us that “nearly 6 billion people do not have broadband, 4 billion do not have Internet access, nearly 2 billion do not use a mobile phone, and half a billion live outside areas with a mobile signal.” According to Mary Meeker’s 2015 Internet Trends Report, Internet access and smartphone subscriptions continue to grow rapidly around the world, adding around 200 million new Internet users and 370 million new smartphone subscriptions per year, respectively. Much progress is being made in closing the digital divide, but much remains to be done.


 

 

Continue reading the full blog, which was posted on October 20, here.


In his important recent essay “The Return of Nature,” Jesse Ausubel of Rockefeller University highlights a wonderful phenomenon: we humans have been giving land back to nature, since we no longer need it for our purposes. Even though the world’s population continues to increase, we have in all likelihood passed the peak for farmland; the number of acres under cultivation globally has been slowly declining, and will continue to do so. Forest loss also appears to have been reversed in recent years, and our use of natural resources such as aluminium, copper and timber is decreasing over time.

 

The article’s subtitle, “How technology liberates the environment”, identifies why this is. Software, sensors, data, autonomous machines and all the other digital tools of the second machine age allow us to use a lot fewer atoms throughout the economy. Precision agriculture enables great crop yields with much less water and fertilizer. Cutting-edge design software envisions buildings that are lighter and more energy efficient than any before. Robot-heavy warehouses can pack goods very tightly, and so be smaller. Autonomous cars, when (not if) they come, will mean fewer vehicles in total and fewer parking garages in cities. Drones will replace delivery trucks. And so on.

 

The pervasiveness of this process, which Mr. Ausubel labels “de-materialization,” might well be part of the reason that business investment has been so sluggish, even in the US, where profits and overall growth have been relatively robust. Why build a new factory, after all, if a few new computer-controlled machine tools and some scheduling software will allow you to boost output enough from existing ones? And why build a new data centre to run that software when you can just put it all in the cloud?

 

One strong piece of evidence in support of the de-materialization hypothesis is the fact that while overall business investment has been in the doldrums for a while, U.S. corporate investment in software is near an all-time high. It was at 1.824 percent  of gross domestic product in the second quarter of 2015, very near its previous peak of 1.837 percent in Q1 2001, when a combination of Y2K paranoia and dot-com hysteria caused a sharp and unprecedented spike in software spending.

 

 

There’s no spike this time, just a pretty steady increase in the software intensity of the economy. I bet it will continue, and continue to drive de-materialization. If your business is based on selling lots of atoms, this might not be happy news for you.

 

 

This blog first appeared in FT.com on Sept. 29, 2015 here.

I recently got invited to speak at Brooklyn 1.0, a conference of “design, people and technology” to be held this autumn in the borough that is New York City’s hipster hothouse. I accepted because preparing for the talk would force me to think more about an important topic: what’s up with the kids today?

 

The common answer at present seems to be something like, “Oh my, so much! The millennial generation is like none before it. The members of its tribe are more idealistic, more altruistic and more entrepreneurial. They’re already changing the world, and the best is surely yet to come.”

 

Breathlessness like this quickly activates my skepticism (and, if I’m being honest, my grouchiness). Haven’t we always been saying this about young people, and haven’t they always responded by, well, growing up? The Woodstock generation created and enjoyed the summer of love (lucky them) but then turned into the ageing boomers of today who, now that they are in charge, seem to be engaging in sclerotic politics, running rapacious companies and listening to bad music just like their fathers, and their fathers before them.

 

So what forces, if any, might prevent today’s millennials from becoming stodgy and conventional, and joining the System with demographic predictability?

The biggest [change] I can come up with is technological progress. Modern technologies let young people live lives and create careers that were simply not possible a generation ago. This is already causing important changes, and I expect them to continue.

 

Let’s look at lifestyle first. This great video from Best Reviews shows how all the contents of a 1981 office fit into a laptop and phone in 2014. But even this underestimates the changes. A connected young person today can communicate endlessly around the world for free (OK, at zero marginal cost, which is close enough to the same thing). She can also maintain robust social and professional networks, and stay abreast of work conversations and workflow with tools such as Slack. If she actually needs to go somewhere, it’s trivial to find and pay for a cheap flight, a non-traditional place to stay and a ride across town. There are also plenty of online marketplaces, both general and specialised, to help her find a job, a gig, a co-worker or a little help.

 

It’s possible, of course, to make too much of these developments, but I think the bigger mistake is to underestimate them. They combine to enable a life where the longstanding trade-off between fluidity and productivity is greatly eased.

 

Walter Frick provides the best evidence I’ve seen that young people are already changing the business world. He looked as carefully as possible at the ages of the founders of the so-called “unicorns” — private companies valued at more than a billion dollars. While there were a few holes in the data, Mr. Frick’s startling conclusion was that at least half the founders were almost certainly younger than 35 when they launched their companies. This is a remarkable amount of success and value creation among people who in earlier times would still have been at the beginning of their careers. Facebook’s Mark Zuckerberg is an extreme example of a more general phenomenon: the rise of young tech moguls.

We haven’t seen the last of them.

Large start-up communities around the world — from San Francisco to London, Berlin to Tel Aviv, Shanghai to Singapore — are buzzing with energy. And young technologists are doing much more than writing apps these days.

 

They’re biohacking, extending the blockchain that underlies bitcoin and learning to make almost anything. Pharma and biotech, financial services, and manufacturing will be at least somewhat shaken up by their work.

 

So the kids are, in fact, all right. I look forward to hanging out with them.

 

 

 

This blog first appeared on the FT site August 13 here.

The past few years have seen the rise of what’s been variously referred to as the on-demand, collaborative, sharing, or peer-to-peer economy.  Regardless of what we call it, this trend has captured the public’s imagination.  Articles on the subject now appear fairly frequently.  Some of the articles are focused on the empowerment nature of these technology-based economic models, enabling people to get what they need from each other.  Others are more concerned with on-demand’s impact on the very nature of work in the 21st century.

 

In an excellent 2013 report, industry analyst Jeremiah Owyang argues that the collaborative economy is the evolution of the Internet-based economy of the past two decades.  The one-to-many Web 1.0 made lots of information accessible to individuals, but control remained mostly in the hands of institutions.  It was followed by the many-to-many Web 2.0, where individuals could easily share content and opinions with each other.

Now, the on-demand phase of the Internet economy is enabling individuals to go way beyond sharing information.

 

“An entire economy is emerging around the exchange of goods and services between individuals instead of from business to consumer,” wrote Owyang.  “This is redefining market relationships between traditional sellers and buyers, expanding models of transaction and consumption, and impacting business models and ecosystems…  This results in market efficiencies that bear new products, services, and business growth.”

 

In 2011, Time Magazine named the sharing economy one of 10 Ideas that Will Change the World.  “Someday we'll look back on the 20th century and wonder why we owned so much stuff… [S]haring and renting more stuff means producing and wasting less stuff, which is good for the planet and even better for one’s self-image…  But the real benefit of collaborative consumption turns out to be social.  In an era when families are scattered and we may not know the people down the street, sharing things - even with strangers we’ve just met online  - allows us to make meaningful connections.”

 

This early bloom has now started to fade. “If you want to start a fight in otherwise polite company, just declare that the sharing economy is the new feudalism, or else that it’s the future of work and all the serfs should just get used to it, already,” observed a recent Wall Street Journal article.  “Uber isn’t the Uber for rides - it’s the Uber for low-wage jobs,” note the critics.  “Boosters of companies like Uber counter that they allow for relatively well-compensated work, on demand.”

 

A Financial Times article reflected on what it means to be running “a collaborative business model within a capitalist framework.  Are the two even compatible?  Or is there a fundamental conflict at the heart of an industry that preaches collaboration but, due to being radically commercialised by venture capital money from Silicon Valley, also needs to profiteer from the goodwill of others if it’s to remain viable?  For the most part it’s a hypocrisy the community is trying to address…  For now, the uncomfortable truth is that the sharing economy is a rent-extraction business of the highest middle-man order.”

 

This past May, OuiShare Fest, a three-day collaborative economy festival, took place in Paris.  There was much discussion that this emerging economy is now practically owned by Silicon Valley’s 1 percent.  “The sharing economy has created 17 billion-dollar companies (and 10 unicorns),” said this article.  In a keynote at the festival, Owyang noted that the VC money being poured into the sector already far outweighs the monies that flowed into social media at this stage of its development.  “It’s worth noting that the early hope that this sharing market would foster altruism and a reduction of income inequality can now be refuted,” he said.  “The one percent clearly own the sharing startups, which means this is continued capitalism - not idealistic socialism.”

 

Read the full post of my July 21 blog here.

There are some striking similarities shared by Mustafa Suleyman, co-founder of DeepMind, and Luis von Ahn, co-founder of Duolingo: Both are young, idealistic, non-U.S.-born entrepreneurs pushing the envelope of new technologies that also have groundbreaking social implications. DeepMind was founded in 2011 and bought last year by Google for $400 million, while three-year-old Duolingo is one of the most popular free apps on Google Play and on iPhones.

 

The stories part ways when it comes to approach, however: Duolingo’s claim to fame is the crowdsourcing nature of its iterative methodology for learning a foreign language.

 

London-based DeepMind takes a rigorous, programming-intensive dive into the world of artificial general intelligence (AGI) and machine learning. Both co-founders described their products and dreams at recent IDE events.

 

At the IDE annual meeting in May, Suleyman explained how his AGI systems employ neural networks and deep learning methods to solve tasks without prior programming. Using Atari computer games as its test case, DeepMind programs learned automatically, from recognizing raw images, how to reach high game scores with only pre-training instructions. The program figured out how to succeed at nearly 50 Atari games without any foreknowledge of how to play them.
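To make the underlying idea concrete: the published Atari results use deep Q-networks, which replace a lookup table of action values with a neural network that reads raw pixels, but the learning rule underneath is the classic Q-learning update. Below is a minimal, hedged sketch of that update on a toy corridor environment; the environment, reward, and parameter values are illustrative assumptions, not DeepMind’s code.

# A minimal sketch of the Q-learning rule at the core of DeepMind's Atari work.
# Everything here (the toy corridor environment, reward, and parameters) is an
# illustrative assumption; DQN replaces the table Q with a deep network that
# reads raw game pixels.
import random

N_STATES, N_ACTIONS = 5, 2            # tiny corridor; actions: 0 = left, 1 = right
GOAL = N_STATES - 1                   # reward is earned only at the right end
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Environment dynamics: deterministic moves, +1 reward on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def choose_action(state):
    """Epsilon-greedy policy with random tie-breaking."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    best = max(Q[state])
    return random.choice([a for a in range(N_ACTIONS) if Q[state][a] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        nxt, reward, done = step(state, action)
        # Temporal-difference update toward the bootstrapped target.
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

print("Learned action values:", [[round(q, 2) for q in row] for row in Q])

In the real system the table Q is a convolutional network, the states are stacks of game frames, and the reward is the change in game score; the update takes the same temporal-difference form, with additions such as experience replay to stabilize training.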

 

Suleyman has high goals for AGI. Rather than fearing its power to replace humans and perform devious actions, he sees AGI tackling some of the world’s biggest problems, including access to clean water, financial inequality, fraud detection and stock-market risk. “Maybe AGI can shape a better world,” he claimed. Discussions over ethics and safety measures are certainly needed, along with verification and security methodologies, he acknowledged.

 

Google sees huge commercial potential in the technology and has already used it in its Street View mapping and photo apps, and to replace 60 hand-crafted systems across the company. AGI has also made huge strides in speech recognition, such as in Android phones and Google Translate, which Suleyman said reduces transcription errors by 30 percent.

 

Clearly, “AI has arrived,” he said at the IDE event, though AGI’s full potential is still a long way off. The programs are weak at conceptualization, an area where humans can work with AI to add more abstract thinking.

 

Crowdsourcing Language Learning

Guatemalan native von Ahn also aimed high when he tinkered with app development as a young software designer. He created and sold two companies to Google (including reCAPTCHA) while still in his 20s before creating Duolingo with co-founder Severin Hacker, and he also teaches at Carnegie Mellon University.

 

His initial goal was universal education, he told attendees at the July 10 IDE Platform Strategy Summit. When he stumbled on the fact that two-thirds of the world’s 1.2 billion language learners are studying English to rise out of poverty, he decided that a free or low-cost app was the key to their success. Duolingo was the result.

 

His plan was to make it simple and game-like (offering bonus points and incentives to move to the next level, for instance) to attract and keep learners. The site scaled rapidly and began crowdsourcing methods and tips so that students also become the teachers. Today, more than 100,000 schools are using the program, and it has 40 million registered users, mostly outside of schools.

 

What’s evident from these two examples is the rapid-fire pace and unlimited potential of many digital technologies. Not incidentally, maybe they will improve the world along the way.

 

 

 

For more about MIT's robotic efforts, read the blog here.

 

For more on DeepMind see:

 

For more on Duolingo see:

A group I’m part of made a strong claim recently. A number of executives, entrepreneurs, and investors from the high-tech industries, along with some economists (who tend to believe that technological progress is the only free lunch around), got together to draft an “open letter on the digital economy”.

 

One of our main goals with it was to advocate a set of policies to deal with the fact that, as we wrote, “the benefits of [the current] technological surge have been very uneven.” I’ve written about these policies before; they include education and immigration reform, infrastructure investment, and greater support for basic research.

 

But we also wanted to make the case that technology is not the enemy, and should not be demonized or thwarted. Hence our strong claim: “The digital revolution is the best economic news on the planet.”

 

To back this up I could cite a lot of relatively dry research about the quality and productivity benefits of technology investment by companies, or about the sustained and huge quality improvements and cost declines of digital products themselves. I could also refer to historical research about the big and positive changes brought on by previous technology surges such as steam power and electrification.

 

But I want to be more vivid than that. So to make my case I’m going to point to two pieces of research that looked closely and carefully at what happens when digital technologies arrive at the base of the pyramid — the billions of people living in the world’s emerging economies.

 

The first is a study I consider a classic: Robert Jensen’s “The Digital Provide: Information (Technology), Market Performance, and Welfare in the South Indian Fisheries Sector,” published in 2007. Mr. Jensen was able to document what happened when the fishermen in the state of Kerala got mobile phones for the first time in the late 1990s. As he puts it, their adoption “was associated with a dramatic reduction in price dispersion, the complete elimination of waste, and near-perfect adherence to the Law of One Price. Both consumer and producer welfare increased.”

 

This is circumspect economist-speak for “important things got much better, right away.” The material conditions of people’s lives improved, and also became more predictable. And Mr. Jensen’s work makes clear that this improvement was due to the phones, and not to any other factors like policy changes or increased aid.
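To make “price dispersion” concrete, here is a small, hedged illustration with invented numbers (not Jensen’s data): the coefficient of variation of fish prices across a handful of beach markets, before and after sellers can phone around to compare them. As that dispersion falls toward zero, the markets approach the Law of One Price.

# Illustrative only: hypothetical fish prices (per kg) at five beach markets on
# the same morning. The numbers are invented; see Jensen (2007) for real data.
from statistics import mean, pstdev

before = [4.0, 7.5, 9.9, 2.1, 6.2]   # wide spread when sellers can't compare markets
after = [6.1, 6.4, 6.3, 6.0, 6.2]    # prices converge once phones spread information

def dispersion(prices):
    """Coefficient of variation: standard deviation relative to the mean price."""
    return pstdev(prices) / mean(prices)

print(f"Price dispersion before phones: {dispersion(before):.2f}")
print(f"Price dispersion after phones:  {dispersion(after):.2f}")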

 

The second is what we’re learning about mobile phone-based cash transfer programmes like GiveDirectly. Ample research shows that giving very poor people money, even with no conditions or strings attached, helps them greatly. They tend not to mis-spend it, and it leads to long-lasting positive changes in their lives. Mobile phones allow these transfers to go directly to intended recipients without middlemen; this keeps overhead low and reduces bribes and theft.

 

We’re making great strides toward reducing dire poverty around the world. The spread of ever-more powerful technology throughout the base of the pyramid will accelerate this — I believe more quickly than any other possible intervention.

 

So I’m confident in our letter’s claim that tech progress is the best economic news on the planet. It brings with it challenges that we need to acknowledge and confront, but so do all good things.

 

 

This post originally appeared in my Financial Times blog here. More on the letter, and its signers, can be found here.
