Marshall Van Alstyne Explains Platform Strategies

At the recent Platform Strategy Summit, Van Alstyne talked about 'extraordinary changes' taking place.


      As computer algorithms proliferate, how will bias be kept in check?

       

      What if machines and AI are subject to the same flaws in decision-making as the humans who design them? In the rush to adopt machine learning and AI algorithms for predictive analysis in all kinds of business applications, unintended consequences and biases are coming to light.

       

      One of the thorniest issues in the field is how, or whether, to control the explosion of these advanced technologies, and their roles in society. These provocative issues were debated at a panel, “The Tyranny of Algorithms?” at the MIT CODE conference last month.

       

      Far from being an esoteric topic for computer scientists to study in isolation, the broader discussion about big data has wide social consequences.

       

      As panel moderator and MIT Professor Sinan Aral told attendees, algorithms are “everywhere: They’re suggesting what we read, what we look at on the Internet, who we date, our jobs, who our friends are, our healthcare, our insurance coverage and our financial interest payments.” And, as he also pointed out, studies disturbingly show these pervasive algorithms may “bias outcomes and reify discrimination.”

       

      Aral, who heads the Social Analytics and Large Scale Experimentation research programs of the Initiative on the Digital Economy, said it’s critical, therefore, to examine both the proliferation of these algorithms and their potential impact — both positive and negative — on our social welfare.

       

      Growing Concerns

      For example, predictive analytics are increasingly used to predict the risk of violent recidivism among prison inmates, and by police forces to guide resource allocation. Yet tests show that African-American inmates are twice as likely to be misclassified in the data. In a more subtle case, search ads are excluding certain populations, or making false assumptions about consumers based solely on algorithmic data. “Clearly, we need to think and talk about this,” Aral said.
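
      A minimal Python sketch of the kind of audit this finding implies is shown below: comparing false-positive rates across groups in a risk-score dataset. The column names, the 0.5 threshold and the toy data are hypothetical, not drawn from any real study.

          import pandas as pd

          # Hypothetical recidivism risk scores; "group", "risk_score", "reoffended"
          # and the 0.5 threshold are illustrative only.
          df = pd.DataFrame({
              "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
              "risk_score": [0.8, 0.3, 0.6, 0.2, 0.7, 0.9, 0.6, 0.4],
              "reoffended": [1,   0,   0,   0,   0,   1,   0,   0],
          })
          df["flagged_high_risk"] = df["risk_score"] >= 0.5

          def false_positive_rate(sub):
              # Share of people who did NOT reoffend but were flagged high risk.
              negatives = sub[sub["reoffended"] == 0]
              return negatives["flagged_high_risk"].mean() if len(negatives) else float("nan")

          for name, sub in df.groupby("group"):
              print(f"group {name}: false positive rate = {false_positive_rate(sub):.2f}")

      A disparity in these group-wise error rates, rather than overall accuracy alone, is what the misclassification statistic above refers to.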

       

      Several recent U.S. government reports have voiced concern about improper use of data analytics. In an Oct. 31 letter to the EEOC, the president of The Leadership Conference on Civil and Human Rights wrote:

      [The Conference] believes that big data is a civil and human rights issue. Big data can bring greater safety, economic opportunity, and convenience, and at their best, data-driven tools can strengthen the values of equal opportunity and shed light on inequality and discrimination. Big data, used correctly, can also bring more clarity and objectivity to the important decisions that shape people’s lives, such as those made by employers and others in positions of power and responsibility. However, at the same time, big data poses new risks to civil and human rights that may not be addressed by our existing legal and policy frameworks. In the face of rapid technological change, we urge the EEOC to protect and strengthen key civil rights protections in the workplace.

       

      Even AI advocates and leading experts see red flags. At the CODE panel discussion, Harvard University Professor and Dean for Computer Science David Parkes said, “we have to be very careful given the power” AI and data analytics have in fields like HR recruitment and law enforcement. “We can’t reinforce the biases of the data. In criminal justice, it’s widely known that the data is poor,” and misidentifying criminal photos is common.

       

      And Alessandro Acquisti, Professor of Information Technology and Public Policy at Carnegie Mellon University, described employment and hiring test cases in which extraneous personal information that should never have been included was used.

      The panel, from left: Sinan Aral, David Parkes, Alessandro Acquisti, Catherine Tucker, Sandy Pentland and Susan Athey.

      For Catherine Tucker, Professor of Management Science and Marketing at MIT Sloan, the biases in social advertising often stem from nuances and subtleties that machines don’t pick up. These are “the real worry,” she said; in her view, coders aren’t sexist and the data itself is not the problem.

       

      Nonetheless, discriminatory social media policies — such as Facebook’s “ethnic affinity” targeting tool — are increasingly problematic.

       

      Sandy Pentland — a member of many international privacy organizations such as the World Economic Forum Big Data and Personal Data initiative, as well as head of the Big Data research program of the MIT IDE — said that proposals for data transparency and “open algorithms” that include public input about what data can be shared are positive steps toward reducing bias. “We’re at a point where we could change the social contract to include the public,” he said.

       

      The Oct. 31 EEOC letter urged the agency “to take appropriate steps to protect workers from errors in data, flawed assumptions, and uses of data that may result in a discriminatory impact.”

       

      Overlooking Machine Strengths?

      But perhaps many fears are overstated, suggested Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business. In fact, most algorithms do better than people in areas such as hiring decisions, partly because people are more biased, she said. And studies of police practices such as “stop-and-frisk” show that the chance of having a gun is more accurately predicted when based on data, not human decisions alone. Algorithmic guidelines for police or judges do better than humans, she argued, because “humans simply aren’t objective.”

      Athey pointed to “incredible progress by machines to help override human biases. De-biasing people is more difficult than fixing data sets.” Moreover, she reminded attendees that robots and ‘smart’ machines “only do what we tell them to do; they need constraints” to avoid crashes or destruction.

      That’s why social scientists, business people and economists need to be involved, and “we need to be clear about what we’re asking the machine to do. Machines will drive you off the cliff and will go haywire fast.”

       

      Ultimately, Tucker and other panelists were unconvinced that transparency alone can solve the complex issues of machine learning’s potential benefits and challenges — though global policymakers, particularly in the EU, see the merits of these plans. Pentland suggested that citizens need to be better educated. But others noted that shifting the burden to the public won’t work when corporate competition to own the algorithms is intensifying.

       

      Athey summed up the “tough decisions” we face, saying that “structural racism can’t be tweaked by algorithms.” Optimistically, she hopes that laws governing safety, along with self-monitoring by businesses and the public sector, will lead to beneficial uses of AI technology. Surveillance can be terrifying, she said, but it can also be fair. Correct use of police body cameras, with AI machines reviewing the data, for example, could uncover and solve systemic problems.

       

      With reasonable governments and businesses, solid democracy, and fair news media in place, machines can do more good than harm, according to the experts. But that’s a tall order, especially on a global scale. And perhaps the even more difficult task is defining — or dictating — what bias is, and what we want algorithms to do.

      Theoretically, algorithms themselves could be designed to combat bias and discrimination, depending on how they are coded. For now, however, that design process is still the domain of very nonobjective human societies with very disparate values.

      As a recent Harvard Business Review article stated:

      “Big data, sophisticated computer algorithms, and artificial intelligence are not inherently good or bad, but that doesn’t mean their effects on society are neutral. Their nature depends on how firms employ them, how markets are structured, and whether firms’ incentives are aligned with society’s interests.”


      Watch the panel discussion video here.

       

      Originally published at ide.mit.edu.

      Business leaders are continually seeking new and better ways to drive decisions and meet consumer needs. Many digital platform companies, for instance, run experiments with online users every minute to glean insights, improve engagement and increase transaction volume.

       

      In particular, companies such as Yahoo, Nike, Facebook and Google can monitor the habits and data of 10 million to 100 million people at a time. At that scale — and in real time — huge data sets can be analyzed and dissected very differently than in most of today’s R&D environments. Sinan Aral, Professor of Management at the MIT Initiative on the Digital Economy, described some of these experiments and their benefits at the 2015 CIO Symposium.

       

      At the same time, many businesses find large-scale experimentation unattainable. At this year’s May 18 CIO Symposium, Prof. Aral discussed new ways to achieve these goals: using big data analytics as a substitute for marketing experimentation in business environments. Each approach can reap rewards, he said, but since it’s not always easy, affordable or feasible to do large-scale experimentation, social analytics using big data offers a good alternative.

       

      “Experimentation is very robust, but can be very narrow and difficult to do,” Aral told attendees. “If you have a big digital platform, A/B testing is easy; but if you have a product that’s not digital, or it’s not legitimate to randomly test strategies or approaches on users,” a big data solution may be more appropriate.
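
      Aral’s point about the ease of platform A/B testing can be made concrete with a minimal sketch, using only Python’s standard library; the visitor and conversion counts below are invented for illustration.

          from math import sqrt, erf

          # Hypothetical A/B test: visitors randomly split between variants A and B.
          conv_a, n_a = 480, 10_000
          conv_b, n_b = 532, 10_000

          p_a, p_b = conv_a / n_a, conv_b / n_b
          p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
          se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under the null
          z = (p_b - p_a) / se

          # Two-sided p-value from the standard normal distribution.
          p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
          print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")

      On a digital platform this comparison is essentially the whole analysis; the hard part Aral describes is getting the random assignment in the first place when the product or setting is not digital.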

       

      Deconstructing Fitness Habits

      As an example, Aral cited a large fitness company that analyzed data about consumer running habits in order to create engagement and peer-to-peer demand for products and services. To measure exercise frequency, the company tracked global data on 14 million runners over five years. It also added a social network component to encourage new data input. That data was then correlated with weather data to establish patterns and to determine the influence of outside factors and peers on running activity.
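
      At its core, the fitness case is a matter of joining behavioral logs with an external dataset and comparing behavior across conditions. A minimal pandas sketch of that join-and-compare pattern, with invented runners, cities and weather, might look like this:

          import pandas as pd

          # Toy running log and weather table; all values are invented for illustration.
          runs = pd.DataFrame({
              "runner_id": [1, 1, 2, 2, 3],
              "date": pd.to_datetime(["2015-06-01", "2015-06-02", "2015-06-01",
                                      "2015-06-03", "2015-06-02"]),
              "city": ["Boston", "Boston", "Chicago", "Chicago", "Boston"],
              "km":   [5.0, 0.0, 8.2, 3.1, 10.4],
          })
          weather = pd.DataFrame({
              "date": pd.to_datetime(["2015-06-01", "2015-06-02", "2015-06-03"] * 2),
              "city": ["Boston"] * 3 + ["Chicago"] * 3,
              "rain": [False, True, False, False, False, True],
          })

          # Join each run to same-day local weather, then compare average distance
          # on rainy vs. dry days: a crude stand-in for the correlations described above.
          merged = runs.merge(weather, on=["date", "city"], how="left")
          print(merged.groupby("rain")["km"].mean())

      Estimating the peer effects Aral describes would layer the social-network data on top of this kind of merged panel, which is well beyond a toy join.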

      In another case, The New York Times collected data from tweets and site clicks to see how word-of-mouth recommendations and social media drive readership — and what that means to monetization and paywall design.

       

       

      For the full details on these two cases, watch Prof. Aral’s presentation here.

      For more on Prof. Aral’s work, view details here.

      More

      Erik Brynjolfsson Discusses the On-Demand Economy at the World Economic Forum

      Andrew McAfee on Technology and the American Workforce

      On the PBS NewsHour last April, Andy McAfee spoke about how the increased use of technology plays a role in the current disappointing job growth statistics.




      The Second Machine Age: Challenges for CXOs

      Erik Brynjolfsson, co-author of the book The Second Machine Age, discussed with I-CIO the profound impact of automation on every industry and how CXOs must play a key role now.

