
Featured Content

27 Posts authored by: Paula Klein
As computer algorithms proliferate, how will bias be kept in check?

 

What if machines and AI are subject to the same flaws in decision-making as the humans who design them? In the rush to adopt machine learning and AI algorithms for predictive analysis in all kinds of business applications, unintended consequences and biases are coming to light.

 

One of the thorniest issues in the field is how, or whether, to control the explosion of these advanced technologies, and their roles in society. These provocative issues were debated at a panel, “The Tyranny of Algorithms?” at the MIT CODE conference last month.

 

Far from being an esoteric topic for computer scientists to study in isolation, the broader discussion about big data has wide social consequences.

 

As panel moderator and MIT Professor, Sinan Aral, told attendees, algorithms are “everywhere: They’re suggesting what we read, what we look at on the Internet, who we date, our jobs, who our friends are, our healthcare, our insurance coverage and our financial interest payments.” And, as he also pointed out, studies disturbingly show these pervasive algorithms may “bias outcomes and reify discrimination.”

 

Aral, who heads the Social Analytics and Large Scale Experimentation research programs of the Initiative on the Digital Economy, said it’s critical, therefore, to examine both the proliferation of these algorithms and their potential impact — both positive and negative — on our social welfare.

 

Growing Concerns

For example, predictive analytics are being used more frequently to assess the risk of violent recidivism among prison inmates, and by police forces to determine resource allocation. Yet, tests show that African-American inmates are twice as likely to be misclassified in the data. In a more subtle case, search ads are excluding certain populations, or making false assumptions about consumers based solely on algorithmic data. “Clearly, we need to think and talk about this,” Aral said.
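One way to make “twice as likely to be misclassified” concrete is to compare error rates across groups, which is how such audits are typically run. Below is a minimal Python sketch of that kind of check; the function, the records, and the numbers are invented for illustration and are not drawn from any study cited here.

```python
# Hypothetical illustration of a group-wise false positive rate check, the kind
# of audit used to surface the misclassification gap described above.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    fp = defaultdict(int)   # flagged high risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Toy, made-up records purely to show the calculation.
sample = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rate_by_group(sample))  # {'A': 0.33..., 'B': 0.66...}
```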

 

Several recent U.S. government reports have voiced concern about improper use of data analytics. In an Oct. 31 letter to the EEOC, the president of the Leadership Conference on Civil and Human Rights wrote:

[The Conference] believes that big data is a civil and human rights issue. Big data can bring greater safety, economic opportunity, and convenience, and at their best, data-driven tools can strengthen the values of equal opportunity and shed light on inequality and discrimination. Big data, used correctly, can also bring more clarity and objectivity to the important decisions that shape people’s lives, such as those made by employers and others in positions of power and responsibility. However, at the same time, big data poses new risks to civil and human rights that may not be addressed by our existing legal and policy frameworks. In the face of rapid technological change, we urge the EEOC to protect and strengthen key civil rights protections in the workplace.

 

Even AI advocates and leading experts see red flags. At the CODE panel discussion, Harvard University Professor and Dean for Computer Science, David Parkes, said, “we have to be very careful given the power” AI and data analytics have in fields like HR recruitment and law enforcement. “We can’t reinforce the biases of the data. In criminal justice, it’s widely known that the data is poor,” and misidentifying criminal photos is common.

 

And Alessandro Acquisti, Professor of Information Technology and Public Policy at Carnegie Mellon University, told of employment and hiring test cases where extraneous personal information was used that should not have been included.

The panel, from left, Sinan Aral, David Parkes, Alessandro Acquisti, Catherine Tucker, Sandy Pentland and Susan Athey.

For Catherine Tucker, Professor of Management Science and Marketing at MIT Sloan, the biases in social advertising often stem from nuances and subtleties that machines don’t pick up. These are “the real worry,” she said. Coders aren’t sexist, and the data is not the problem.

 

Nonetheless, discriminatory social media policies — such as Facebook’s ethnic-affinity tool — are increasingly problematic.

 

Sandy Pentland — a member of many international privacy organizations such as the World Economic Forum Big Data and Personal Data initiative, as well as head of the Big Data research program of the MIT IDE — said that proposals for data transparency and “open algorithms” that include public input about what data can be shared, are positive steps toward reducing bias. “We’re at a point where we could change the social contract to include the public,” he said.

 

The Oct. 31 EEOC letter urged the agency “to take appropriate steps to protect workers from errors in data, flawed assumptions, and uses of data that may result in a discriminatory impact.”

 

Overlooking Machine Strengths?

But perhaps many fears are overstated, suggested Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business. In fact, most algorithms do better than people in areas such as hiring decisions, partly because people are more biased, she said. And studies of police practices such as “stop-and-frisk” show that the chance of having a gun is more accurately predicted when based on data, not human decisions alone. Algorithmic guidelines for police or judges do better than humans, she argued, because “humans simply aren’t objective.”

Athey points to “incredible progress by machines to help override human biases. De-biasing people is more difficult than fixing data sets.” Moreover, she reminded attendees that robots and ‘smart’ machines “only do what we tell them to do; they need constraints” to avoid crashes or destruction.

That’s why social scientists, business people and economists need to be involved, and “we need to be clear about what we’re asking the machine to do. Machines will drive you off the cliff and will go haywire fast.”

 

Ultimately, Tucker and other panelists were unconvinced that transparency alone can solve the complex issues of machine learning’s potential benefits and challenges — though global policymakers, particularly in the EU, see the merits of these plans. Pentland suggested that citizens need to be better educated. But others noted that shifting the burden to the public won’t work when corporate competition to own the algorithms is intensifying.

 

Athey summed up the “tough decisions” we face saying that “structural racism can’t be tweaked by algorithms.” Optimistically, she hopes that laws governing safety, along with self-monitoring by businesses and the public sector, will lead to beneficial uses of AI technology. Surveillance can be terrifying, she said, but it can also be fair. Correct use of police body cameras, with AI machines reviewing the data, for example, could uncover and solve systemic problems.

 

With reasonable governments and businesses, solid democracy, and fair news media in place, machines can serve more good than harm, according to the experts. But that’s a tall order, especially on a global scale. And perhaps the even more difficult task is defining — or dictating — what bias is, and what we want algorithms to do.

Theoretically, algorithms themselves could be designed to combat bias and discrimination, depending on how they are coded. For now, however, that design process is still the domain of very nonobjective human societies with very disparate values.

As a recent Harvard Business Review article stated:

“Big data, sophisticated computer algorithms, and artificial intelligence are not inherently good or bad, but that doesn’t mean their effects on society are neutral. Their nature depends on how firms employ them, how markets are structured, and whether firms’ incentives are aligned with society’s interests."


Watch the panel discussion video here.

 

Originally published at ide.mit.edu.

In his new book, The Age of Em, the professor and futurist, Robin Hanson, gives us a peek at how the world of AI and brain emulations might actually work — and it’s more Sci than Fi.

 

There is general agreement that AI is already having a huge impact on our work and leisure lives, as well as society as a whole — and the effects are escalating rapidly. More open to debate is exactly what to expect and when. Long-term projections from many economists and robotics designers go only as far as the next five, or maybe 20, years, when driverless cars, elder-care assistants and automated factories will be the norm.

Robin Hanson takes another tack. The associate professor of economics at George Mason University is also a research associate at the Future of Humanity Institute of Oxford University. His multidisciplinary background includes a doctorate in social science from California Institute of Technology, master’s degrees in physics and philosophy from the University of Chicago, and nine years as a research programmer at Lockheed and NASA.

At a recent MIT IDE seminar, Hanson talked about his new book, The Age of Em: Work, Love and Life when Robots Rule the Earth, saying that the three most disruptive transitions in history were the introduction of humans, farming and industry. If a similar transition lies ahead, he says, “a good guess for its source is artificial intelligence in the form of whole-brain emulations, or ‘ems,’ sometime in the next century.”

In the book, which Hanson describes as much more science than fiction, he outlines a baseline scenario set modestly far into a post-em-transition world where he considers computer architecture, energy use, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war — in other words, just about everything!

Hanson offered additional insights in a brief Q&A with IDE editor and content manager, Paula Klein, as follows:

 

Q: For those unfamiliar with your work, please describe your brain emulation hypothesis and the (perhaps, unsettling) post-em-transition society as a whole. Specifically, what will the economic areas of labor, wealth, careers and wages look like? What will be the role of humans?

 

A: Brain emulations, or “ems,” are one of the standard long-discussed routes by which we might achieve human-level artificial intelligence (AI) in the next century. I take a whole book to carefully analyze what the world looks like, at least for a while, if this is the first kind of cheap human level AI. Quickly, all humans must retire, though collectively, humans are rich and get richer fast. Ems congregate into a few very dense cities, leaving the rest of Earth to humans, at least for a while.

 

The em population and the em economy grow very fast, and wages fall so low that they barely cover the cost for an em to exist. Ems run at a wide range of speeds, typically, 1,000 times human speed. Most ems are copies of the best few hundred humans, are near a peak productivity middle age, and are short-lived “spurs” that last only a few hours before they end or retire. Ems that continue on longer have a limited useful career of perhaps a subjective century, after which they must retire to a slower speed.

 

The em era may only last a year or two, after which something else may happen, I know not what. But both humans and em retirees will care greatly about what that might be. For more details, I refer you to the book.

 

Q: Even some very astute AI researchers — including here at MIT — tend to promote humans and machines working together and complementing each other’s strengths and weaknesses. In “The Age of Em,” is this a viable option? Are the primary differences in perspective a matter of timing — e.g., your much longer timeframe vs. short-term views?

 

A: If there were only a few humans and a few machines, both might always have jobs, as there could be plenty of work for all. But when we can make more machines than we need, then there is the risk that machines could be better than humans at almost all jobs, just as autos are now better than horses at almost all transport tasks.

 

When machines are very different from humans, then it is more plausible that each type can have many jobs where they are better than the other. After all, we have a huge variety of tasks to do, and traditional machines and software are very different from humans. But because ems are so very similar to humans, they are also very close substitutes. So, if ems are much better at any jobs, they are probably better at almost all jobs. And that is in the short term, soon after ems arrive; in the long term, the differences only get bigger.

 


Q: How do you distinguish your work from sci-fi? One book reviewer described the concept as a “dystopian, hellish cyberworld,” while many others praise your perceptiveness and scientific research. Are your ideas taken seriously by world leaders and governments? How do you think we should be preparing?

 

A: Ems are a possibility long-discussed in science fiction and futurism. I’m not advocating for this world; I’m just trying to describe a likely scenario. Aside from the fact that it has no characters or plot, my book should mainly be distinguished from science fiction by the fact that it makes sense at a detailed level. I’ve spent many years working it out, and draw on expertise in an unusually wide range of disciplines. I hope experts in each field will read it and publicly declare it to be mostly a reasonable application of basic results from their field. At the moment, however, the book is too new for world leaders to be even aware of it, much less take it seriously.

 

The em revolution won’t happen soon, but when it does appear it could happen within just five years. So rather than wait for clear signs before responding, we should just prepare and wait. For example, we should ensure that we all have assets or insurance to cover our needs in this scenario, be ready to quickly create enough regulatory flexibility to allow fast em development nearby, and consider teaching our children to be willing and ready to emigrate to this new civilization.

For most of the 20th Century, serious corporations judiciously invested in strategic research and development (R&D) efforts to innovate and bring new products to market. Dedicated teams got patents and top executives carefully monitored results. That traditional R&D paradigm is being challenged.

Michael Schrage, MIT IDE Visiting Scholar, writes that “Disciplined digital design experimentation and test cultures increasingly drive tomorrow’s innovations and strategies. Innovation investments emphasizing Research & Development (R&D) will increasingly yield to practices supporting Experiment & Scale (E&S).”

In today’s high-bandwidth, massively networked environments, he observes, so-called good ideas matter much less; testable hypotheses matter much more. “Tomorrow’s innovations and strategies will increasingly be the products — and byproducts — of real-time experimentation and testing.”

Schrage, who is also a Visiting Fellow in the Imperial College Department of Innovation and Entrepreneurship, discussed the E&S concept in a brief Q&A with MIT IDE Editor, Paula Klein. His blog on the topic can be read on the MIT Sloan Management Review web site here.  

 

Q: What’s wrong with the traditional corporate R&D model? Are there still cases – such as spinoffs or innovation labs -- where R&D has value, say, for long-term or large-scale projects?

 

A: The classic, linear R&D models best exemplified by IBM’s Watson Labs, or the former Bell Labs, GE’s Schenectady, N.Y. headquarters, or Heinrich Caro’s BASF were effective for their time, but that time is past. Ironically, but appropriately, R&D breakthroughs over the past 25 years have shattered the traditional enterprise R&D model. Capital-intensive, linear, proprietary and ‘over-the-wall’ processes -- that translated basic research into preliminary developments, and preliminary developments into prototypes and pilots, and pilots into production processes -- have given way to innovation initiatives that are more op-ex than cap-ex. They are more open, agile, iterative, digital, networked, interdisciplinary, customer-centric and more user-aware than ever. These digital architectures and associated processes typically deliver innovation faster, better and cheaper than their analog predecessors.

 

Innovation is really no longer about better understanding of requirements; it focuses on creatively imagining compelling use cases.

After decades of squandering tens of billions of dollars and Euros boosting R&D budgets, sophisticated entrepreneurs and CEOs better appreciate the power and potential of human capital and creativity over financial capital and proprietary investment.

 

Innovation has become more of an ecosystem capability than a business process. The bottom-line KPI is shifting from ‘What new proprietary products and services are coming out of our labs?’ to ‘How can our best customers and prospects get great value from our prototypes, design resources, algorithms and research communities?’

Today’s E&S does a measurably better job of answering that question than yesterday’s R&D. Between the Internet of Things and the ongoing rise of machine learning, the economics of E&S render much of traditional R&D an anachronistic albatross.

 

Q: Where does E&S usually originate: From the top down or from grassroots, bottom-up experimentation? How widespread is it at present?

 

A: If you’re Google, Amazon, Apple, Uber or Netflix, E&S originates from the top-down; there’s nothing like a founder’s imprimatur to get innovators and intrapreneurs to embrace E&S.  If you’re an IBM, PwC, Toyota, BASF or WalMart, there’s a mix. Top executives have to give permission – if not guidance – for an E&S ethos, but you’ve got to give the people closest to the customers and clients both the tools and incentive to experiment.

 

I find it impossible to come up with meaningful estimates as to how pervasive these practices now are, but I would confidently assert that they exist much in the way organizations once talked about shadow apps and bootlegged projects. I’m seeing a lot more cloud-enabled shadow experimentation and exploration with clients, customers and channels. I’m seeing it with digital agencies on the marketing side and with contract manufacturers and post-industrial designers on the supply-chain side.

 

When you look at the innovation tempo and customer response enjoyed by a Netflix, an Amazon or a Facebook, you see that E&S is integral to their current success.

 

Q: How important is corporate culture in the success of a virtual research center? Are there environments where E&S just won’t work?

 

A: This is the question I find most irksome and frustrating. There’s no escaping the painful truth that culture matters most. Most MBAs – and far too many engineers – are educated, trained and rewarded for coming up with plans and analyses rather than running simple, fast, cheap and scalable experiments. In 1995 – maybe even as late as 2005 – the enterprise economics of planning, analysis and pilots were cost-effective innovation-process investments. And while that hasn’t been true for a decade, too many established organizations have legacy innovation approaches and processes that treat E&S as an end-of-pipeline practice rather than a wellspring of disruptive innovation, inspiration and insight.

It’s reminiscent of the painful 1980s/1990s phenomenon of inspecting quality in instead of designing quality in from the start.

 

These issues, indeed, relate more to cultural inertia than technical competence or economic cost. Plainly put, E&S doesn’t work in executive environments where validating plans is valued over learning-by-doing. Today, leading-by-example has to embrace leading-by-experiment.

Blockchain can help secure digital transactions while also protecting user privacy

 

By Alex Pentland

 

Note: Professor Alex Pentland of MIT’s Media Lab will host a panel on Big Data 2.0: Next-Gen Privacy, Security, and Analytics, at the upcoming MIT CIO Symposium on May 18. This article focuses on one aspect of his work.

 

 

The exponential growth of mobile and ubiquitous computing, together with big data analysis, is transforming the entire digital landscape, or “data ecology.” These shifts are having a dramatic impact on people’s personal data-sharing awareness and sensitivities as well as their cybersecurity: The more data created and shared, the more concerns rise. Recently, apprehensions have reached critical mass regarding privacy and the use of personal data, partially due to media exposure of cybersecurity breaches and intelligence scandals. The surge of mobile transactions, micropayments, and connected sensors in both private and public spaces is expected to further exacerbate this tension.

What’s needed is a “new deal on data” where security concerns are matched with transparency, control and privacy, and are designed into the core of any data-driven service [1].

In order to demonstrate that such a win-win data ecology is possible, my students and I have developed Enigma, a decentralized computation platform enabling different participants to jointly store and run data computations while keeping the data completely private [2]. Enigma promotes a viable digital environment by supporting four key requirements:

  • That data always be encrypted
  • That computation happens on encrypted data only
  • That data owners will control access precisely, absolutely and with an audit trail
  • That there are means to reliably enable payment to data owners for use of their data

 

Data-owner Control
From the user’s perspective, Enigma is a hosted computer cloud that ensures both the privacy and integrity of their data. The system also allows any type of computation to be outsourced to the cloud while guaranteeing the privacy of the underlying data and the correctness of the result. A core feature of the system is that it allows data owners to define and control who can query it, ensuring that the approved parties only learn the output. Moreover, no other data leaks to any other party. The Enigma cloud itself comprises a network of computers that store and execute queries. Using secure multi-party computation, each computer only sees random pieces of the data, preventing information leaks. Furthermore, queries carry a micro-payment to the provider of the computing resources, as well as a payment to the users whose data is queried, thus providing the foundation for the rise of a sustainable, secure data market.
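To illustrate how a computer can hold only “random pieces of the data,” here is a minimal Python sketch of additive secret sharing, a standard building block of secure multi-party computation. It is a simplified illustration of the general technique under toy assumptions, not Enigma’s actual protocol or code.

```python
# Additive secret sharing: a value is split into random-looking shares that only
# reveal the secret when all of them are combined. This is a stand-in for the
# "random pieces" idea, not Enigma's implementation.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def split_into_shares(value, n_parties):
    """Split an integer into n shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

salary = 83_000
shares = split_into_shares(salary, n_parties=3)
print(shares)               # three random-looking numbers; no single one reveals 83000
print(reconstruct(shares))  # 83000

# Additive shares also let parties compute a sum without ever seeing the inputs:
other = split_into_shares(91_000, n_parties=3)
combined = [(a + b) % PRIME for a, b in zip(shares, other)]
print(reconstruct(combined))  # 174000
```

Each computing party would receive one share per value, so no single machine can recover the underlying data on its own.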


Secure Storage
To illustrate how the Enigma platform works, consider the following example: a group of data analysts of an insurance company wishes to test a model that leverages people’s mobile phone data. Instead of sharing their raw data with the data analysts in the insurance company, customers can securely store their data in Enigma, and only provide the data analysts with permission to execute their study. The data analysts, therefore, are able to execute their code and obtain the results, but nothing else. In the process, the users are compensated for giving access to their data, and the owners of the network computers are paid for their computing resources.
Three types of entities are defined in Enigma, and each can play multiple roles (see Figure 1). Owners are those sharing their data into the system and controlling who can query it; Services, if approved, can query the data without learning anything else beyond the answer to their query; and Parties (or computing parties) are the nodes that provide computational and storage resources but they only see encrypted or random bits of information. In addition, all entities are connected to a blockchain, as shown below.

Figure 1. Overview of Enigma’s decentralized platform.
When owners share data, the data is split into several random pieces called shares. Shares are created in a process of secret-sharing, and they perfectly hide the underlying data while maintaining some necessary properties. This allows them to be queried later in this masked form. Since users in Enigma are owners of their data, we use the blockchain as a decentralized, secure database that is not owned by any party. This also permits an owner to designate which services can access its data and under what conditions, and it permits parties to query the blockchain and ensure that it holds the appropriate permissions. In addition to being a secure and distributed public database, the blockchain is also used to facilitate payments from services to computing parties and owners, while enforcing correct permissions and verifying that queries execute correctly.
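The permission, audit, and payment pattern just described can be sketched in a few lines. The toy ledger below stands in for the blockchain: owners record access policies, a service’s query is checked against the recorded policy, every attempt is logged, and a small fee flows from the service to the data owner. All names, fields, and amounts are hypothetical and purely illustrative.

```python
# Toy, in-memory stand-in for the blockchain-backed permission ledger described
# above; a real deployment would use an actual distributed ledger.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    permissions: dict = field(default_factory=dict)   # (owner, service) -> allowed query
    audit_trail: list = field(default_factory=list)   # append-only record of query attempts
    balances: dict = field(default_factory=dict)      # micro-payment accounting

    def grant(self, owner, service, query_name):
        self.permissions[(owner, service)] = query_name

    def run_query(self, owner, service, query_name, fee=1):
        allowed = self.permissions.get((owner, service)) == query_name
        self.audit_trail.append((service, owner, query_name, "ok" if allowed else "denied"))
        if not allowed:
            raise PermissionError(f"{service} may not run {query_name} on {owner}'s data")
        # The service pays for the query; the data owner is compensated for access.
        self.balances[service] = self.balances.get(service, 0) - fee
        self.balances[owner] = self.balances.get(owner, 0) + fee
        return f"encrypted result of {query_name}"

ledger = Ledger()
ledger.grant("alice", "insurer_model", "average_daily_steps")
print(ledger.run_query("alice", "insurer_model", "average_daily_steps"))
print(ledger.balances)     # {'insurer_model': -1, 'alice': 1}
print(ledger.audit_trail)  # one 'ok' entry
```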

Policy Questions
In summary, a sustainable data ecology requires that data is always encrypted, and that computation happens on encrypted data only. It also requires that owners of the data control access to their data precisely, absolutely and in a way that can be audited. Finally, it requires that data owners are reliably paid for use of their data. Enigma accomplishes these requirements, providing proof that a sustainable, secure data ecology is possible.
The major design question remaining about this ecology is one of policy: What is the trade-off between user security and access by law enforcement and intelligence services? In the current Enigma system this trade-off is handled by leaving metadata encrypted, but visible. Other trade-offs are possible, including full anonymization, or building in the ability for court-ordered investigators to penetrate anonymity through zero-knowledge proofs (which is different from back-door approaches).
For additional information see http://trust.mit.edu
[1] Pentland, A. Reality mining of mobile communications: Toward a new deal on data. World Economic Forum Global IT Report 2008, Chapter 1.6 (2008), 75–80.
[2] Zyskind, G., Nathan, O., and Pentland, A. (2015). Decentralizing privacy: Using blockchain to protect personal data. In Proceedings of IEEE Symposium on Security and Privacy Workshops, 180–184.

Whose on-demand economy is it anyway? That seemed to be a sub-theme of the recent On-Demand Economy conference held by MIT’s Initiative on the Digital Economy. Not only did discussions focus on who will own and operate the actual platforms and technology infrastructure in today’s nascent markets, but also on who will use them, who will benefit and who will lose out.

 

Uber and Airbnb are superstars of today’s digital economy. And while they believe in the strength and viability of their platform-based models and mobile-app strategies, they also are examining their meteoric rise and considering their next moves.

During a panel discussion, economist Jonathan Hall, of Uber, said the company epitomizes the on-demand economy, because riders don’t have to book in advance and drivers work at a moment’s notice. “They turn on the phone and can access customers and work.”

At the same time, if Uber drivers are attracted to flexible hours and self-reliance, they can just as easily work for Lyft or another on-demand platform service provider, Hall said. “Other companies will compete with Uber for workers. We have to keep innovating.”

 

More competition should yield more opportunities, lower prices and better service. But in the on-demand economy, traditional assumptions are being upended, leaving academics and policymakers to sort out the payback from the pitfalls.

 

Not a Zero-sum Game

Peter Coles, an economist at Airbnb, also acknowledged the growing pains and complexity of sustaining success in a nascent, constantly changing and competitive marketplace. And the new models may not be a zero-sum game. For instance, Coles noted that the hotel industry remains robust in spite of Airbnb’s expansion. Other on-demand service providers, such as Thumbtack, see lower barriers to entry for small business owners as a result of the service’s accessibility and low-cost model — but the platform hasn’t really scaled up yet.

 

Larry Mishel, President of the Economic Policy Institute, reminded attendees of the infancy of the current market, noting that digital platforms like Uber “scratch an itch” but are still small, particularly from a jobs-creation perspective.

“Uber employs only about half of 1% of the private-sector workforce,” Mishel said, and most drivers seek supplemental income, not basic wages. “That’s not the future of work,” but merely a “distraction,” he added.

Mishel has said that Uber drivers should be considered employees, not independent contractors, since they are on-call for the company when they are working.

 

While “on-demand is quickly becoming table stakes for commerce-oriented businesses,” as one report noted, a few speakers said that the biggest winners seem to be in select groups: VCs funding billion-dollar “unicorns” — i.e. Uber, which is valued at more than $50 billion with hardly any physical assets; middle-class homeowners who can rent out unused bedrooms on Airbnb; and transitional workers who can manage on part-time wages with no benefits.

 

(From left, Jon Lieber of Thumbtack, Jonathan Hall, Peter Coles and Larry Mishel.)


A History of Temporary Workers

Mishel, as well as Lee Dyer, Chair of the Department of Human Resource Studies at the ILR School, Cornell University, pointed out that temporary workers are not new to the labor force and that most wages have been stagnant for 12 years. “We haven’t had lots of choices in the job market,” Mishel said. “That’s why they want the supplemental work. Opportunities are great, but why is it that so many people need this extra work? Let’s study that, too.”

 

Dyer noted that 80% of businesses use workers who are not full-time. “That means lots of important work is done by outside firms and employees.” His work focuses on the organizational impact and the need for better metrics to manage the workforce more strategically. “Most businesses don’t know how much it’s costing them to get the work done,” he said, and that can hurt agility, costs and revenue.

 

Taking a big-picture view, U.S. Sen. Mark Warner (D-VA) wants to look at the future of work and “re-imagine the social contracts of the 20th Century for the 21st Century.” Like him, many attendees were interested in examining — and offering solutions for — some of the most vexing macro-economic issues we face: global demographic shifts, unemployment, automation and productivity.

 


In his keynote address, Warner (left) spoke cogently about the on-demand economy’s current transformation of the workforce, saying it needs “urgent” attention, including discussion, ideas, local and state partnerships with business, and “real-time efforts of social change and policy.” With the U.S. Congress in a tailspin, Warner is hoping The Aspen Institute’s Future of Work Initiative, a group he co-chairs, will lead the way. The group bills itself as a “nonpartisan effort to identify concrete ways to strengthen the social contract in the midst of sweeping changes in today’s workplace and workforce.”

Warner noted that “21st-century innovation is transforming the American workplace far faster than a 20th-century government and 1930s social contract can keep pace.”


In addition to the on-demand economy, the Initiative focuses on “Capitalism 2.0”: how best to inspire a 21st-century capitalism for a 21st-century workforce by rewarding employers for reducing inequality, helping workers get ahead, and facilitating access to benefits and protections to secure workers’ futures. These goals align closely with those of the MIT Inclusive Innovation Competition, which held a showcase at the conference.

 

Global Labor Tipping Point?

From a global perspective, Jonas Prising, ManpowerGroup Chairman and CEO, sees structural changes since the last recession that have permanently altered talent acquisition and the supply and demand of workers. Minimum wage is a global discussion and job mobility is a challenge for many. Individuals change jobs more often, populations are aging and technology has increased productivity and lowered prices, he told attendees. This “confluence of factors, along with pervasiveness and speed,” represents a tipping point in today’s labor market that’s different from the past.

Specifically, Prising sees a huge shift to “employment security, not job security,” even though most economies are still tied to old models of job security. Protecting jobs at all costs will lead to greater losses and unemployment, he said.

 

It’s an exciting time for innovation and growth, Prising said on a panel led by MIT IDE co-director, Andy McAfee, but there is also more workforce polarization and unemployment. Outside of the U.S., attempts to equalize the labor market are more common: France offers paid job training, and countries such as Denmark and Finland encourage nontraditional co-employment options as well as salaries during job transition periods.

 

Overall, the next several years may be a difficult transition period to new economic models. Dyer said that even full-time jobs are temporary these days. Two- to three-year “tours of duty” are common, but those arrangements are only positive if everyone agrees to them and an infrastructure is in place to offer compensation and benefits. Prising summed up the labor situation by saying: “Off the table now is the promise that you come and work for 40 years and we’ll take care of you. On the table is the notion that we’ll provide new skills and opportunities,” at least for a while.

 

(From left, Jonas Prising, Lee Dyer and panel moderator, MIT IDE co-director, Andy McAfee)




Read related blogs about the event here and here.

Two leading economists and professors—NYU’s Paul Romer and Chad Jones of Stanford—shared macroeconomic insights about growth and inequality at separate MIT IDE seminars recently.

 

Jones, Professor of Economics at the Graduate School of Business at Stanford University, was a PhD student when he took a class of Romer’s at the University of Rochester in 1984. Six years later, Romer—a graduate of the University of Chicago business school—published his seminal paper, “Endogenous Technological Change,” about economic growth and “non-rival goods” that multiply and yield rapid innovation. “The paper changed the way economists understand growth,” Jones wrote recently on the 25th anniversary of the publication.

 

Romer (pictured at right) went on to become an urban-planning policy adviser, entrepreneur, NYU professor, and director of the Marron Institute of Urban Management. He and Jones have collaborated over the years on growth-theory research.

 

At an October IDE seminar, Romer described himself as an economic optimist because he takes a long-term view when it comes to analysis and the potential of new ideas. Using historical data as the context for examining current conditions, Romer sees “a much more developed system of learning now [than in the past]... There’s a world out there of incredible possibilities,” and ideas are the greatest non-rival goods of all.

 

“When we perceive a threat, we don’t see other things around it,” he said. “We have to get out of that state...that distorts our collective learning” and use science to analyze financial markets. "Two decades ago, economists were preoccupied with threats posed by inflation that would spiral ever higher. They were resigned to a permanent slowdown in productivity growth. Looking back, everyone was too pessimistic about the rate of progress,” he said. “Their fears did not even get the signs right...economists are now struggling with the threat of inflation that is too low.”

 

The Case for Theory

During his talk, as well as in his blog, Romer questioned the current trend away from economic theory. Without theory, you can’t “explain what’s going on in context and why things occasionally fall apart.” Big data offers unlimited observations, for example, but the challenge is to aggregate the data and find a theory that you can use to manage, test and narrow down data points for specific outcomes, he said.

 

Also in October, Jones (pictured below, left) spoke at the IDE about his "Schumpeterian Model of Top Income and Inequality." He pointed out that while top income inequality rose sharply in the United States over the last 35 years, it increased only slightly in economies like France and Japan.

 

Jones, a Research Associate of the National Bureau of Economic Research, also considers himself an optimist. At his talk, he noted that the development of the Internet and “a reduction in top tax rates are examples of changes that raise the growth rate of entrepreneurial incomes and therefore increase Pareto inequality.” However, he said, policies that stimulate “creative destruction” reduce top inequality.

Examples cited include “research subsidies, or a decline in the extent to which incumbent firms can block new innovation. Differences in these considerations across countries and over time, perhaps associated with globalization, may explain the varied patterns of top income inequality that we see in the data.”
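A stylized way to see that logic (a generic textbook-style sketch, not the exact equations of Jones’s model): suppose top incomes have a Pareto tail, so a single parameter summarizes top inequality, and suppose an entrepreneur’s income grows at a constant rate until the incumbent is displaced by creative destruction.

```latex
% Stylized illustration; symbols are generic and not taken from the paper.
\Pr(\text{income} > y) \;=\; \left(\frac{y}{y_{\min}}\right)^{-1/\eta},
\qquad
\eta \;=\; \frac{\mu}{\delta}.
```

Here a larger η means a fatter top tail, i.e. more top-income inequality; μ stands for the growth rate of incumbent entrepreneurial incomes and δ for the rate of creative destruction. Forces that raise μ (such as lower top tax rates) thicken the tail, while forces that raise δ (such as research subsidies or easier entry for innovators) thin it, matching the intuition quoted above.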

 

I'm still left wondering what economists can tell those struggling with global poverty today. How long will they have to wait to see shifts? For that ray of hope, I find the work of folks like Ann Mei Chan --who is exploring ways to use digital technology to end poverty-- something to be really optimistic about.

 

What are your thoughts?

There are some striking similarities shared by Mustafa Suleyman, co-founder of DeepMind, and Luis von Ahn, co-founder of Duolingo: Both are young, idealistic, non-U.S.-born entrepreneurs pushing the envelope of new technologies that also have groundbreaking social implications. DeepMind was founded in 2011 and bought last year by Google for $400 million, while three-year-old Duolingo is one of the most popular free apps on Google Play and on iPhones.

 

The stories part ways when it comes to approach, however: Duolingo’s claim to fame is the crowdsourcing nature of its iterative methodology for learning a foreign language.

 

London-based DeepMind takes a rigorous, programming-intensive dive into the world of artificial general intelligence (AGI) and machine learning. Both co-founders described their products and dreams at recent IDE events.

 

At the IDE annual meeting in May, Suleyman (pictured below at left) explained how his AGI systems employ neural networks and deep learning methods to solve tasks without prior programming. Using Atari computer games as its test case, DeepMind’s programs learned automatically—from recognizing raw images—how to reach high game scores with only pre-training instructions. The program figured out how to succeed at nearly 50 Atari games without any foreknowledge of how to play them.
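DeepMind’s Atari results come from deep reinforcement learning: a neural network learns action values from raw pixels through trial and error. As a much simpler illustration of the same trial-and-error principle, here is a tabular Q-learning sketch on a toy five-state “chain” task; the environment, rewards, and hyperparameters are invented for the example and are not DeepMind’s system.

```python
# Tabular Q-learning on a toy 5-state chain: the agent learns, purely from reward
# feedback, to keep moving right toward the goal state. DeepMind's DQN applies the
# same principle with a deep network reading raw screen pixels; this is a
# deliberately simplified stand-in.
import random

N_STATES = 5
ACTIONS = (0, 1)                       # 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                   # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])  # exploit current estimate
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])  # mostly 1s: "go right"
```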

 

Suleyman has high goals for AGI. Rather than fearing its power to replace humans and perform devious actions, he sees AGI tackling some of the world’s biggest problems including clean water access, financial inequality, fraud detection and reducing stock market risks. “Maybe AGI can shape a better world,” he claimed. Discussions over ethics and safety measures are certainly needed, along with verification and security methodologies, he acknowledged.

 

Google sees huge commercial potential in the technology and has already used AGI in its Street View map app and photo apps, and to replace 60 hand-crafted systems across the company. AGI has also made huge strides in speech recognition, as in Android phones and Google Translate, which Suleyman said reduces transcription errors by 30 percent.

 

Clearly, “AI has arrived,” he said at the IDE event, though AGI’s full potential is still a long way off. The programs are weak at conceptualization, an area where humans can work with AI to add more abstract thinking.

 

Crowdsourcing Language Learning

Guatemalan native von Ahn (pictured below at right) also aimed high when he tinkered with app development as a young software designer. He created and sold two companies to Google (including reCAPTCHA) while still in his 20s before creating Duolingo with co-founder Severin Hacker, and he also teaches at Carnegie Mellon University.

 

His initial goal was universal education, he told attendees at the July 10 IDE Platform Strategy Summit. When he stumbled on the fact that two-thirds of the world’s 1.2 billion language learners are studying English to rise up from poverty, he decided that a free or low-cost app was the key to their success. Duolingo was the result.

 

His plan was to make it simple and game-like (offering bonus points and incentives to move to the next level, for instance) to attract and keep learners. The site scaled rapidly and began crowdsourcing methods and tips so that students also become the teachers. Today, more than 100,000 schools are using the program and it has 40 million registered users—mostly outside of schools.

 

What’s evident from these two examples is the rapid-fire pace and unlimited potential of many digital technologies. Not incidentally, maybe they will improve the world along the way.

 

 

 

For more about MIT's robotic efforts, read the blog here.

 


A recurring theme at the May 20 MIT CIO Symposium was how business executives are finally Leading Digital. There’s no one silver bullet that will guarantee success, but importantly, IT leaders are taking on the strategic business roles that have been discussed for the last few decades.

 

For example, CIO Leadership Award winner Michael Nilles wears both traditional IT and corporate strategy hats at Schindler Group [see related blog here]. Nilles, who participated in the Leading Digital panel moderated by MIT IDE Research Scientist, George Westerman, is responsible for traditional areas such as global business process management, IT and shared services. In addition, he’s the CEO of Schindler Digital Business AG, where he is “digitizing relationships with customers” and leveraging innovative digital business models, such as the IoT, as a competitive advantage for the company in its industry.

 

By contrast, at DHL Express Americas, Pablo Ciano, a finalist for the award, said the only way he can actively advocate for IT in strategic business decisions is to have a CTO carry out the day-to-day operations, such as running the data centers and infrastructure for the company’s 500,000 employees and global logistics network in 220 countries.

 

Regardless of organizational structure, digital transformation and platform strategies are key to corporate survival—even for current industry leaders. Specifically, CIOs were told to work across business lines and across industries in unconventional ways. As Vivanda CEO Jerry Wolfe told attendees, if you want to make change: “Don’t separate IT from the business. Imagine yourself in a service business” that’s different from the one you’re in; then create a concrete plan that the board and CEO can embrace.

 

Why Automation Matters

An emphasis on business acumen doesn’t mean ignoring emerging technologies that are at your corporate doorstep, of course. At a session on The Impact of Automation, MIT IDE co-director Erik Brynjolfsson said that business and social contracts are “unraveling” and too many organizations still don’t keep pace with technological advances. He challenged his panelists to tell IT leaders why robotics and automation need to be top-of-mind.

 

Professor Mary Cummings (left), director of the Humans and Autonomy Lab at Duke University, cited some provocative data about commercial pilots and her own experience as a military pilot to drive home the point. On average, commercial pilots actually fly their planes only about 20 percent of the time, she said; the rest of the time the plane is controlled automatically. “Humans don’t do well” with repetitive tasks over long time periods, according to Cummings. That’s why autonomous cars and automated business processes will become more accepted: humans get bored and won’t always intervene appropriately.

 

At the same time, she and MIT CSAIL Director, Daniela Rus, said that robots are still in the early stages, particularly when it comes to tasks that require reasoning, expertise and judgment.

CIOs will have to sort out which processes and jobs require judgment or involve high levels of uncertainty, and which are ripe for automation. Cummings noted that the IT organization of the future will need people who can understand the social and societal implications of IT, in addition to programming literacy.

 

 

Also view the two-part video clips from Leading Digital authors, George Westerman, Andrew McAfee and Didier Bonnet here and here.

MIT alumnus and venture capitalist Brad Feld is intense when he talks about the state of innovation, but not because it’s the next big thing, or some new trend to study. With a 20-year track record as both a tech company leader seeking startup investment and as a VC funding others, Feld knows how critical new ideas are to fueling the economy. He also knows the pitfalls.

 



An early-stage investor and entrepreneur since 1987, Feld co-founded the Foundry Group, Mobius Venture Capital and, prior to that, Intensity Ventures. He’s also a co-founder of the TechStars venture. Feld rode the dotcom bubble in the late 1990s, and the subsequent bust in the next few years. He has plenty of stories to tell of giant companies going bankrupt, lawsuit filings and fledgling startups that made it big. For him, today’s romance with entrepreneurship is part of a long continuum where ideas, investments and technologies ebb and flow over time.

 

“I’m actually getting tired of the word entrepreneurship,” Feld told IDE co-founder, Andrew McAfee, during a fireside chat recently held at MIT. In the world of VCs, “words get co-opted and eventually mean nothing.”

 

Focusing on Ideas

What really matters, according to Feld, is for developers and business people to focus on their ideas and what they can contribute. “Macroeconomics is not really relevant.”

Even in the lean years of VC funding, Feld said he tried to tune out “macroeconomics and media speculation,” and build a business heads-down. “That’s entrepreneurship, and it happens continually…you can’t label it, and it’s not a new thing.”

 

At MIT, innovation is “wired into the DNA,” Feld said, “now it’s spreading out into society.” People want to create their own future and startups are a great way to do that. His advice?  Entrepreneurs shouldn’t listen to VCs on how to do it; they should listen to data and use history to help them make decisions.

 

Feld is a maverick in other ways, too. Two decades ago he eschewed both Silicon Valley and Boston tech communities for the slower pace of Boulder, Colo. Location shouldn’t dictate a business, he said; creative communities can thrive anywhere. “Technology in Dubai isn’t following Silicon Valley processes… The goal is not to replicate, but to build your own special magic anywhere. Each ecosystem grows its own way.”

 

Four Ways to Make Your Mark; or Not

Although he values individualism, Feld touts four principles—rarely followed, he says—to help innovators get their companies off the ground:

 

1. Company leaders have to be entrepreneurs themselves to understand what’s really needed.

2. Take a 20-year view; forget what the market is like today or this year. Governments, academia and businesses all are on short-term cycles.

3. Be inclusive; don’t form geeky cliques or you’ll miss out on great collaboration opportunities. Take diversity seriously and you’ll get better outcomes.

4. Be more extroverted. Actually take part in activities and events with everyone. Mix it up on purpose. Hang out with—and welcome—investors, developers, politicians and anyone who can help, especially if they have backgrounds that are different from what you’re used to.

 

Wanting to walk the walk when it comes to diversity and correcting the gender imbalance in STEM careers, Feld is on the board of the National Center for Women & Information Technology, in Boulder. Among the issues it addresses are structural problems in middle and high schools and unconscious bias in IT workplaces that deter women. “It’s not that women are broken or men are bad. Environments matter.”

 

While still intense, Feld also advises hard-driven tech leaders to unplug and find ways to balance life and work. In his own career, he’s had to struggle with the pressures of R&D, funding, long hours and his own mental health. He now speaks about ways to “disconnect” and reenergize in order to keep the creativity flowing in a pressure-cooker environment. “You can’t motivate others if you’re exhausted” or if you’re not motivated yourself.


And that’s a life lesson you probably won’t learn in business school.

 

 

 

Brad Feld blogs at FeldThoughts and is a speaker on the topics of venture capital investing and entrepreneurship. He also blogs for Startup Revolution, and Ask the VC.

He holds Bachelor of Science and Master of Science degrees in Management Science from the Massachusetts Institute of Technology and is also an art collector and long-distance runner. He has completed 23 marathons as part of his mission to finish a marathon in each of the 50 states.

Several keynote addresses at the IDE conference in London, held April 10, shed new light on the impact of the digital economy on labor, productivity, macroeconomics and society as a whole. A few highlights are noted below.

 

Erik Brynjolfsson: Second Machine Age Economics

Leading the discussion, IDE Director and MIT Sloan Professor Erik Brynjolfsson told attendees about the dramatic advancements in robotics in the past few years—in dexterity, language and problem-solving, for example—and how businesses can take advantage of these opportunities. He also offered insights about what skills humans will need to hone in a future where we will work alongside robots and machines for improved medical diagnostics, factory output and data analytics. At the same time, he noted that some groups of people may be left behind when technology zooms ahead of organizational structures, cultures and regulations, and he discussed how to address some of these inequities.

Erik’s presentation can be viewed in full below or on YouTube here.

 

 


Roberto Rigobon: Tracking Consumer Trends

It’s easy to collect data, but much harder to create accurate and actionable information based on that data. MIT Sloan professor Roberto Rigobon and his research team have spent years demonstrating that big data collection can reveal macroeconomic trends for individual countries or larger global regions. At the IDE conference he explained how The Billion Prices Project collects online retail data on hundreds of items to more accurately determine inflation rates that will help set public policies. His latest research offers new ways to measure consumer purchasing power around the world using McDonald’s Big Mac as an international standard.
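In spirit, the index construction is straightforward: track the same items’ posted prices day after day, average the price relatives, and chain the daily changes into an index. Here is a minimal Python sketch with invented prices; the Billion Prices Project’s actual methodology (item matching, weighting, handling product churn) is considerably more careful.

```python
# Minimal illustration of turning daily scraped prices into an inflation estimate:
# average each item's day-over-day price relative, then chain the daily averages.
# Prices are invented; this is not the project's methodology or data.
def chained_price_index(daily_prices):
    """daily_prices: list of dicts, one per day, mapping item -> posted price."""
    index = [100.0]
    for prev, curr in zip(daily_prices, daily_prices[1:]):
        common = [item for item in curr if item in prev]
        relatives = [curr[item] / prev[item] for item in common]
        daily_change = sum(relatives) / len(relatives)   # unweighted mean price relative
        index.append(index[-1] * daily_change)
    return index

prices = [
    {"milk": 3.50, "bread": 2.00, "coffee": 8.00},
    {"milk": 3.55, "bread": 2.00, "coffee": 8.10},
    {"milk": 3.55, "bread": 2.05, "coffee": 8.10},
]
idx = chained_price_index(prices)
print(idx)                                                  # [100.0, 100.89..., 101.73...]
print(f"cumulative inflation: {idx[-1] / idx[0] - 1:.2%}")  # about 1.73%
```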

 

What’s next? Rigobon hopes to quantify the economic impact on retail sales and supplies in the face of natural disasters, labor market changes and digital economy disruptions. View his full keynote presentation here. Additional background on his work can also be found here.

 

Marshall Van Alstyne: Platforms Beat Products

MIT researcher Marshall Van Alstyne spoke about the dramatic shifts taking place in worldwide economies as a result of platforms and new business models. Every industry—from taxis to food services to video games—is transforming as a result of communities of users and ecosystems that are beating out traditional customer/supplier models. Marshall’s presentation can be viewed here, and more background on his work can be found here and on our Platform Economics page here.

The Innovator’s Hypothesis (MIT Press, 2014), by Michael Schrage, is based on the deceptively simple premise that "cheap experiments are worth more than good ideas." In fact, this type of value-producing innovation requires large-scale cultural and strategic shifts throughout the organization. To ease the way, Schrage, an MIT IDE Research Fellow who has written extensively about business innovation, offers a 5X5 framework and lots of actionable advice based on his advisory experiences with entrepreneurial organizations and Fortune 50 corporations.

Schrage says that business experiments should be “simple, fast and cheap.” He debunks many common-wisdom economic myths and strategies that in practice impede the implementation of new ideas. In fact, he has an entire chapter explaining why ‘good ideas’ are bad for innovation. In the following conversation, IDE Digital Community contributor, Paula Klein, asked Schrage to respond to three questions about the book and his objectives.


Q: What are the key takeaways you want top management and entrepreneurs to better understand by reading the book? What's your 'call to action'?

 

A: Plan less, experiment more – much more. Stop looking for good ideas; start thinking in terms of testable business hypotheses. Also, treat business experiments as investments in discovering who you want your customers and clients to become – not just what you want them to do or buy.

 

I’d like readers to recognize and embrace that digital media, methods and platforms have completely, utterly and profoundly transformed the economics of experimentation worldwide. The best way to take advantage of those new economics isn't by throwing stuff against a wall to see if it sticks, but by having the courage and discipline to test out business hypotheses that can incrementally - or even fundamentally - change how you define and create value.

Today's reality - and tomorrow's norm - is that the economics of doing little digital experiments are as important as the economics of analyzing Big Data. In addition to presenting a pretty darn good and proven innovation methodology, The Innovator's Hypothesis is a manifesto declaring that hypothesis and experimentation matter more for value creation than grand plans and terabyte-sized datasets.

 

Q. Some reviewers - and even friendly critics - argue that the kind of experimentation you discuss emphasizes marginal and incremental innovation over more disruptive or transformative initiatives. How do you respond?

 

A: I feel badly about that. It means that I failed to communicate that serious innovators and serious companies need to encourage their people to come up with business hypotheses and experiments that fundamentally challenge their business-model assumptions, not ones that simply build on or extend them. One of the key and core concepts of the book is that top management needs to encourage and oversee a portfolio of business hypotheses and experiments, not just the best experiments, or those that affirm the strategic direction of the company. The book is filled with examples - both successful and failed - of companies from Google to Amazon to Blockbuster, where a diversity of business experimentation led to greater strategic clarity, focus and impact for the enterprise.

 

It may be obvious, but cultures of experimentation—behaviorally, organizationally and economically—are much different from cultures of analysis.

Q: What should be the ultimate goal of IT and business innovation? Is it better/faster/cheaper technology and processes? Revenue and competitive advantage? Societal benefits? Or maybe all of these?

A: That question allows me to link The Innovator's Hypothesis to my previous book, Who Do You Want Your Customers to Become? Over 50 years ago, Ted Levitt, the Harvard Business School professor who became editor of the Harvard Business Review, published a popular and profound HBR article, Marketing Myopia. Levitt galvanized both the marketing and strategic planning communities by asking and elaborating on a simple question: What Business Are You In?

 

In an era of ongoing technological and innovation-centric disruption, that 'simple' question grows ever more difficult to answer. Despite its ongoing relevance, my research and advisory work led me to conclude that Levitt’s question didn't go far enough. It's not enough to better define our present business; we need to better define our customer’s future. Who do we want our customers to become? Henry Ford didn't just mass-produce the automobile; he effectively enabled the mass production of the human capital of driving. Google isn’t just a search company; its algorithms turn everyone who uses them into searchers. Google effectively creates, captures and leverages the human capital of millions of its customers worldwide.

 

My conclusion?

Successful innovators transform the human capital, competencies, capabilities and creativity of their customers and clients.

 

 

Therefore, the ultimate goal of IT and business innovation is to make customers and clients better. My design heuristic for innovators worldwide is simple: Making Customers Better, Makes Better Customers. That is, when your innovations improve your customers, your customers become more valuable to you.

For your holiday reading and viewing, here’s a very brief compilation of reports, blogs and videos that capture some of the highlights of the past year in four broad areas: IT and business leadership, innovation, automation/digitization and big data. Obviously, huge advances are taking place in the fields of ubiquitous computing, security, platforms and social media, among others, which we will continue following in 2015 as well. Meanwhile, enjoy a cup of winter cheer and catch up with these trends and TED.com blogs to recap the year.

Leadership

 

  • In the new book, Leading Digital, MIT IDE authors George Westerman and Andy McAfee--along with their CapGemini co-author, Didier Bonnet--discuss ways to lead digital transformation and become digital "masters." Click here to watch the video.
  • Why Leadership-development Programs Fail. Sidestepping four common mistakes can help companies develop stronger and more capable leaders, save time and money, and boost morale.  McKinsey Quarterly, January 2014 by Pierre Gurdjian, Thomas Halbeisen and Kevin Lane

 

Innovation

 

  • The Innovator’s Hypothesis. MIT IDE researcher Michael Schrage’s new book explains how "cheap experiments are worth more than good ideas." Read more here.
  • The Reverse Innovation Paradox. Business experts say a wealth of new products and ideas will flow from emerging economies to developed markets—but real-world examples are hard to find. Strategy + Business, March 2014 by John Jullens


Digitization and Automation

 

Big Data

 

 

And for something entirely different, here's another year-end must-read from the TED blog:

 

May your holidays be full and your technology be bright!

As Labor Day approaches in the U.S., many explanations for long-term unemployment--that is, unemployment lasting more than 27 weeks--continue to be discussed, especially as jobs go unfilled and automation becomes more pervasive.

 

Some economists, such as EPI Research and Policy Director Josh Bivens and economist Heidi Shierholz, write that these rates are a normal “sign that there is still a great deal of slack in the economy.” That may be, but many see the continuation of the trend--where about 3 million Americans have been unemployed for more than six months, and the percentage of the unemployed who are long-term unemployed remains at the highest level since 1948--as a huge cause for concern.

 


 

 

At the MIT Sloan School of Management, Assistant Professor Ofer Sharone is studying employment trends from the human perspective, asking how job seekers approach unemployment and what cultural and personal beliefs shape their attitudes and influence their success or failure in attaining work. Sharone received a Ph.D. in sociology from the University of California, Berkeley, and a J.D. from Harvard Law School.

 

U.S./Israeli Contrasts

Sharone's research specifically focuses on career transitions, job searching and unemployment. His recently published book, Flawed System/Flawed Self: Job Searching and Unemployment Experiences (University of Chicago Press), contrasts the job-searching experiences of white-collar workers in Israel and the United States and challenges many long-held cultural explanations. (The book is gaining traction, by the way, and just won two best book awards at the American Sociological Association.)

 

Despite searching for work under similar economic conditions, Sharone finds that “long-term unemployed white-collar workers in Israel and the United States come to different subjective understandings of their difficulties in finding work.”


Based on cross-national, in-depth interviews with unemployed job seekers and at job-search support organizations, he says that many labor-market institutions generate distinct job-search “games,” which, in turn, create unique unemployment experiences.

 

At a recent MIT IDE lunch seminar, Sharone discussed his research, including two key conclusions:

  • Israelis blame the system, while Americans blame themselves for long-term unemployment.
  • The nature of job seekers’ subjective responses has profound individual and societal implications.

 

The Perfect Fit Versus an Imperfect System

Sharone says that U.S. attitudes are not just the result of individualism; “it’s structural.”  Israeli white-collar job requirements are very rigid and applicants compete on objective skill assessments, pre-job testing, rankings and resumes.

In the U.S., however, intangibles like networking, likability and interpersonal ‘soft skills’ are often key, and those who don’t excel think “something is wrong with me.”

U.S. hiring agents consider skills, of course, but they also value credentials, background and “fit”--wanting to know the “person behind the skills,” emotions, and so on.

 

The application process also plays a big part. In the U.S., online job boards “are a black hole,” so networking, self-promotion and follow-up are standard and can be decisive in hiring.  Israelis take pride in their merit-based system and frown on networking, which is usually viewed as “pulling strings” or currying favor.

 

As a result, those who don’t get the job in Israel see the system as unfair or arbitrary, but they don’t take it personally. In the U.S., when your life is an open book, rejection can be very hard and personal. Understanding these differences and offering appropriate coaching and support may lead to more successful outcomes, Sharone says. Realizing the differences “is a first step in mobilizing for social change.”

 

To bolster his research, Sharone is piloting a new initiative to help the long-term unemployed and to gather valuable research on both job-seeking and hiring practices.

 

The Institute for Career Transitions (ICT) operates from the assumption that long-term unemployment not only continues, but is having hugely debilitating effects on those caught in its grip. ICT is a non-profit organization whose mission is “to generate effective strategies, offer practical support, and increase public understanding of the challenges facing professionals in career transitions.” It focuses on five key areas:

  1. Fostering collaboration
  2. Generating research
  3. Increasing awareness
  4. Engaging professionals
  5. Providing assistance

 

The first ICT initiative aims to provide free and effective job search support for unemployed job seekers. Longer term, it would like to provide data-driven strategic guidance and policy recommendations for professionals undergoing career transitions.

 

In the words of Bivens and Shierholz:

It's too soon to give up on the long-term unemployed. We found no evidence that today's high long-term unemployment rate is due to anything other than the weak economy. For example, were we facing skills mismatches, there would be evidence of tight labor markets relative to 2007 for workers in at least some occupations, industries, education levels, or demographics. However, the long-term unemployment rate is elevated across the board. There just aren't enough jobs for everyone who's looking.

 

Regardless of whether the elevated long-term unemployment is structural or due to the weak economy, Sharone,  who is the Mitsubishi Career Development Professor and an Assistant Professor of Work and Organization Studies at the MIT Sloan School of Management, wants to offer relief to those who are hardest hit.

At the risk of sounding cliché or stating the obvious, it was another very big year for information technology. Big isn’t always easy, of course, nor is it always pleasant--certainly not for all businesses, economies or employees. But 2013 certainly didn’t lack excitement for those on the leading edge of change, disruption and innovation, and that clearly includes the researchers and experts at the Center for Digital Business.

 

Among the top issues I heard about were robotics, big data/predictive analytics, and platform economics, topics that will continue to be top of mind as we turn the calendar to a new year. What’s more, they are very much interrelated.

 

I’d like to briefly recap a short list of the important facts, figures, quotes and challenges from the past year, based on blogs posted on this site. I hope it will help you focus and prioritize as you enter another big year ahead.

 

    1. We are in the midst of a seismic shift in business models, powered by the Internet and a generation of connected users.  Sangeet Paul Choudary, Geoffrey Parker and Marshall Van Alstyne
    2. Data-driven decisions, predictions, and diagnoses are much better than those that come from human intuition and HiPPOs (the ‘Highest-Paid Person’s Opinions’). The research is overwhelming on this point. HiPPOs might be good for some things, but their crystal balls just don’t work very well. The soulless output of a data-driven, mechanistic algorithm is demonstrably and significantly better, in domain after domain. So HiPPOs should become an endangered species. Andrew McAfee
    3. The results of automation and technological advances are not equitable. “Unambiguously, technology grows the overall pie, but under the surface, technological progress doesn’t raise all boats equally.” Andrew McAfee
    4. You don’t need a dedicated group for big data. Think of it as a continuation of current efforts, not a “rip and replace” model. Hadoop isn’t replacing data warehouses, but it is augmenting them. Tom Davenport
    5. “It's an arms race, with those best able to use data and technology outcompeting other buyers and sellers.” Erik Brynjolfsson

 

Wishing peace, happiness and creative solutions to all in 2014.

Driverless vehicles are getting smarter every day, but until mapping and navigational technologies are greatly improved, getting from here to there will still require human intervention for the foreseeable future.

 

That’s the opinion of John Leonard, roboticist and Professor of Mechanical and Ocean Engineering at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). At a seminar hosted by the MIT Initiative on the Digital Economy last week, Leonard spoke about the current state of navigation and mapping technologies for autonomous vehicles and what’s needed to take them to the next level.

 

While commercial ventures such as Google’s and Mercedes’ driverless cars, as well as projects by Cadillac and Tesla, are highly touted and sophisticated--and some argue that these cars are safer than human drivers--Leonard believes that safety concerns and costs, as well as technology obstacles, will keep the vehicles out of the mainstream for decades to come. “Driving a car or a plane is different than operating an elevator,” he said, and more evidence and discussion are needed before autonomous vehicles proliferate on the nation’s highways and local roads.

 

He says there are “too many unexpected problems and dangers” while driving that require human intervention and judgment. What if there is a construction detour? A speed zone? Bad weather? As for costs, the sensors needed for such precision are very expensive, and while they could get cheaper as sales increase, affordability has been an elusive goal to date. “I don’t expect to have truly driverless taxis in Manhattan in my lifetime,” he says.

 

Lessons Learned From the Darpa Challenge

Leonard clearly believes that self-driving vehicles have the potential to transform our transportation system, and he was part of the MIT team in the 2006-07 DARPA challenge that developed and designed a “robocar” Land Rover using a vision-guidance system. The vehicle used radar and laser scanners to plan and navigate the route, negotiate turns and detect objects on the course. In retrospect, he says, the design was probably too complex, and the car finished in fourth place in the competition.

 

Nevertheless, the experience illustrated the real-world challenges carmakers face in developing viable commercial products. One primary impediment is the need for better computations and understanding of the physical environment and terrain before the cars can make long, complicated journeys.

 

Autonomous vehicles require the computerization of physical, behavioral and procedural actions. The seminar focused on ways to improve navigation and mapping, including techniques for better map building such as using lidar and vision data in large-scale dynamic environments. In addition, Leonard discussed localization, where the aim is to compute the position of an autonomous vehicle with respect to a previously built map. He also raised some of the open legal, ethical, security and economic questions associated with self-driving cars, and their potential impact on the labor market.
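 

To make the idea of localization a bit more concrete, here is a minimal, purely illustrative sketch of one common approach: particle-filter localization of a vehicle against a previously built landmark map. To be clear, this is not Leonard's or CSAIL's actual system; the landmark map, noise parameters and function names below are all hypothetical, and a real vehicle would fuse lidar, vision and odometry with far richer models.

# Illustrative sketch only (hypothetical map and parameters), not CSAIL's system:
# particle-filter localization against a previously built landmark map.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior map: known 2D landmark positions (e.g., from earlier lidar mapping).
LANDMARKS = np.array([[5.0, 10.0], [15.0, 5.0], [10.0, 20.0]])

def predict(particles, velocity, heading, dt, motion_noise=0.2):
    # Move every particle according to the vehicle's odometry, plus motion noise.
    dx = velocity * np.cos(heading) * dt
    dy = velocity * np.sin(heading) * dt
    return particles + np.array([dx, dy]) + rng.normal(0, motion_noise, particles.shape)

def update_weights(particles, measured_ranges, range_noise=0.5):
    # Weight each particle by how well its predicted landmark ranges match the sensor readings.
    weights = np.ones(len(particles))
    for landmark, z in zip(LANDMARKS, measured_ranges):
        predicted = np.linalg.norm(particles - landmark, axis=1)
        weights *= np.exp(-0.5 * ((predicted - z) / range_noise) ** 2)
    weights += 1e-300                      # guard against an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    # Draw a new particle set in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy run: the true vehicle sits at (8, 12) and receives noisy range measurements.
true_pose = np.array([8.0, 12.0])
particles = rng.uniform(0, 25, size=(1000, 2))   # uniform prior over the mapped area

for _ in range(10):
    particles = predict(particles, velocity=0.0, heading=0.0, dt=1.0)
    ranges = np.linalg.norm(LANDMARKS - true_pose, axis=1) + rng.normal(0, 0.5, 3)
    weights = update_weights(particles, ranges)
    particles = resample(particles, weights)

print("estimated pose:", particles.mean(axis=0), "true pose:", true_pose)

Run as-is, the repeated predict/weight/resample loop pulls the particle cloud toward the true position, so the printed estimate should land close to (8, 12). A production system would, of course, also estimate heading and contend with the dynamic obstacles, bad weather and map changes Leonard describes.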

 

The bottom line to me was to go ahead and renew my driver’s license; it looks like my Prius won’t be doing errands for me unattended for quite some time. Then again, progress in this field has been exponential. How soon do you expect to see versions of autonomous vehicles on the road?

John is a member of this community and can be reached here.

Watch the seminar in this video:

 
