The banking industry has long been one of the major users of IT, among the first to automate its back-end and front-office processes and later to embrace the Internet and smartphones.  However, banking has been less disrupted by digital transformation than many other industries. In particular, change has come rather slowly to the world’s banking infrastructure.

 

“With advances in technology, the relationship that customers have with their bank and with their finances has changed…” notes a recently released Citigroup report,  Digital Disruption: How FinTech is Forcing Banking to a Tipping Point. “So far these have been seen more as additive to a customer's banking experience…

Despite all of the investment and continuous speculation about banks facing extinction, only about 1% of North American consumer banking revenue has migrated to new digital models… we have not yet reached the tipping point of digital disruption in either the U.S. or Europe.”

 

Recently, I discussed some of the highlights of Citi’s excellent FinTech report. Investments in financial technologies have increased by a factor of 10 over the past five years. The majority of these investments have been concentrated in consumer payments, particularly on the user experience at the point of sale, while continuing to rely on the existing legacy payment infrastructures.  I’d now like to focus on the potential evolution of the backbone payment infrastructures.

 

Transforming this highly complex global payment ecosystem has proved to be very difficult. It requires the close collaboration of its various stakeholders, including a variety of financial institutions, merchants of all sizes, government regulators in just about every country, and huge numbers of individuals around the world. All these stakeholders must somehow be incentivized to work together in developing and embracing new payment innovations. Not surprisingly, change comes slowly to such a complex ecosystem.

 

The Promise of Blockchain

But sometimes, the emergence of an innovative disruptive technology can help propel change forward. The Internet proved to be such a catalyst in the transformation of global supply chain ecosystems. Could blockchain technologies now become the needed catalyst for the evolution of legacy payment ecosystems?

 

The blockchain first came to light around 2008 as the architecture underpinning bitcoin, the best known and most widely held digital currency. Over the years, blockchain has developed a following of its own as a distributed database architecture able to handle trust-less transactions, in which parties need neither know nor trust each other for a transaction to complete. Blockchain holds the promise of revolutionizing the finance industry and other aspects of the digital economy by bringing one of the oldest and most important concepts, the ledger, to the Internet age.

 

Ledgers constitute a permanent record of all the economic transactions an institution handles, whether it’s a bank managing deposits, loans and payments; a brokerage house keeping track of stocks and bonds; or a government office recording births and deaths, the ownership and sale of land and houses, or legal identity documents like passports and driver’s licenses. Over the years, institutions have automated their original paper-based ledgers with sophisticated IT applications and databases.

 

But while most ledgers are now digital, their underlying structure has not changed. Each institution continues to own and manage its own ledger, synchronizing its records with those of other institutions as appropriate - a cumbersome process that often takes days. While these legacy systems operate with a high degree of robustness, they’re rather inflexible and inefficient.

 

In a recent NY Times article, tech reporter Quentin Hardy nicely explained the inefficiencies inherent in our current payment systems.

“In a world where every business has its own books, payments tend to stop and start between different ledgers. An overseas transfer leaves the ledger of one business, then goes on another ledger at a domestic bank. It then might hit the ledger of a bank in the international transfer system.  It travels to another bank in the foreign country, before ending up on the ledger of the company being paid. Each time it moves to a different ledger, the money has a different identity, taking up time and potentially causing confusion. For some companies, it is a nightmare that can’t end soon enough.”

Blockchain-based distributed ledgers could do for global financial systems what the Internet has done for global supply chain systems.

As Citi’s Digital Disruption report notes, blockchain technologies “could replace the current payment rail of centralized clearing with a distributed ledger for many aspects of financial services, especially in the B2B world… But even if Blockchain does not end up replacing the core current financial infrastructure, it may be a catalyst to rethink and re-engineer legacy systems that could work more efficiently.”  The report goes on to explain why the blockchain might well prove to be a kind of Next Big Thing.

 

Decentralized and Disruptive

“Blockchain is a distributed ledger database that uses a cryptographic network to provide a single source of truth. Blockchain allows untrusting parties with common interests to co-create a permanent, unchangeable, and transparent record of exchange and processing without relying on a central authority.  In contrast to traditional payment model where a central clearing is required to transfer money between the sender and the recipient, Blockchain relies on a distributed ledger and consensus of the network of processors, i.e. a supermajority is required by the servers for a transfer to take place. If the Internet is a disruptive platform designed to facilitate the dissemination of information, then Blockchain technology is a disruptive platform designed to facilitate the exchange of value.”

 

The report summarizes some of blockchain’s key advantages (a simplified illustrative sketch follows the list):

  • Disintermediation: Enables direct ownership and transfer of digital assets while significantly reducing the need for intermediary layers.
  • Speed & Efficiency: The reengineering - i.e., reduction - of unnecessary intermediate steps will ultimately result in faster settlements, lower overall costs and more efficient business models.
  • Automation: Programmability enables automation of capabilities on the ledger (e.g., smart contracts) that can be executed once agreed-upon conditions are met.
  • Certainty: System-wide audit trails make it possible to track the ownership history of an asset, providing irrefutable proof of existence, proof of process and proof of provenance.
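
To make the “permanent, unchangeable record” and supermajority-consensus ideas in the Citi excerpt a bit more concrete, here is a deliberately simplified Python sketch of a hash-chained ledger with a toy consensus check. It is not any production blockchain protocol; the entry fields, the validator votes and the two-thirds threshold are illustrative assumptions rather than details from the report.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Hash the canonical JSON form of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class MiniLedger:
    """Toy append-only ledger: each entry embeds the hash of the previous one,
    so tampering with any past entry breaks every later link."""

    def __init__(self):
        self.entries = []

    def append(self, payment: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"payment": payment, "timestamp": time.time(), "prev_hash": prev}
        entry["hash"] = entry_hash({k: v for k, v in entry.items() if k != "hash"})
        self.entries.append(entry)
        return entry

    def is_consistent(self) -> bool:
        """Recompute every hash and verify the chain of prev_hash links."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

def supermajority_approves(votes, threshold=2 / 3):
    """Toy stand-in for network consensus: accept a transfer only when
    more than a supermajority of validators vote for it."""
    return sum(votes) / len(votes) > threshold

# Illustrative use with made-up parties, votes and amounts.
ledger = MiniLedger()
if supermajority_approves([True, True, True, False]):
    ledger.append({"from": "alice", "to": "bob", "amount": 100})
print(ledger.is_consistent())  # True; alter any past entry and this turns False
```

The only point of the sketch is the structural one the report makes: because every entry is chained to its predecessors and acceptance depends on network-wide agreement rather than a central clearer, no single party can quietly rewrite the record.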

But much, much work remains to be done. Blockchain is still at the bleeding edge, lacking the robustness of legacy payment systems. Distributed ledger systems have been around for less than a decade, and are thus quite immature compared to the existing, decades-old financial infrastructures. While legacy payment infrastructures are complicated, inefficient and inflexible, they actually work quite well, being both safe and fast. Replacing them will be a tough and lengthy undertaking, no matter how innovative and exciting the new technologies might be.

 

It’s too early to know if the blockchain will join the pantheon of Next Big Things and become a major transformational innovation. As we’ve seen with other such successful innovations - e.g., the Internet, the Web, Linux - collaborations between universities, research labs, companies and government agencies are absolutely essential. So are close collaborations among technology developers and users in order to get the architecture right, agree on open standards, develop open source platforms and set up governance processes embraced by all.

 

In just a few years, blockchain technologies have made a lot of progress (see http://blog.irvingwb.com/blog/2014/02/reflections-on-bitcoin.html). We might well be close to an ecosystem-wide FinTech tipping point. It will be fascinating to see how it all plays out in the years to come.

 

The complete blog was first posted April 18 here.

A few months ago I attended the 12th annual Brookings Blum Roundtable on Global Poverty, a meeting that brought together around 50-60 policy and technical experts from government, academia, business, investors and NGOs. This year’s event was focused on the impact of digital technologies on economic development in emerging and developing countries.  It was organized around six different sessions, each of which explored the links between technology and development through a different lens.

 

Prior to the event, the Brookings Institution commissioned a policy brief for each session to help set the stage for the ensuing discussions. I wrote one of the briefs, Will the Digital Revolution Deliver for the World’s Poor. I would now like to discuss another brief which I found quite interesting, Will the Spread of Digital Technologies Spell the End of the Knowledge Divide? by Deepak Mishra, lead economist at the World Bank. Dr. Mishra is also co-director of the World Development Report 2016: Internet for Development, which will shortly be released by the World Bank.

 

As discussed in my policy brief, Internet access and mobile phones are being rapidly transformed from a luxury to a necessity that more and more people can now afford.  Advances in technology keep expanding the benefits of the digital revolution across the planet. A 2013 McKinsey study estimates that, over the coming decade, up to 3 billion additional people will connect to the Internet through their mobile devices, enabling them to become part of the global digital economy.

 

Our digital revolution is accelerating three important trends that should significantly improve the quality of life of the world’s poor: businesses are developing offerings specifically aimed at lower-income customers; governments are improving access to public and social services including education and health care; and mobile money accounts and digital payments are increasing financial inclusion.

 

In addition, notes Mishra, the digital revolution is significantly expanding the availability of knowledge, thus leading to an increasingly global knowledge-based society. But this evolution comes with some very important caveats:

“Evidence suggests that digital technologies are in fact helping to expand knowledge, but are not succeeding in democratizing it. That is, digital technologies are helping to bridge the digital divide (narrowly defined), but are insufficient to close the knowledge divide.  Democratizing knowledge is more than a matter of connectivity and access to digital devices.  It requires strengthening the analog foundations of the digital revolution - competition, education (skills), and institutions - that directly affect the ability of businesses, people, and governments to take full advantage of their digital investments.”

 

Let’s look a little closer at what’s entailed in these three “analog foundations of the digital revolution.”

  • Regulations that promote competition: Lowering the cost of starting firms, avoiding monopolies, removing barriers to the adoption of digital technologies, ensuring the efficient use of technology by businesses, enforcing existing regulations, …
  • Education and skill development: Building basic IT and digital literacy, helping workers adapt to the demands of the digital economy, preparing students, managers and government officials for an increasingly digital world, facilitating life-long learning, …
  • Institutions that are capable and accountable: Empowering citizens through digital platforms and information, e-government services, digital citizen engagement, and increased incentives for good governance in both the public sector and private firms, …

 

Digital technologies are necessary, but not sufficient. Countries must also strengthen these analog foundations to realize the benefits of their technology investments as well as narrow their knowledge divide.

 

Mishra’s observations bring to mind similar discussions around the impact of technology on business and economic productivity.  In their 2009 book, Wired for Innovation: How Information Technology is Reshaping the Economy, Erik Brynjolfsson and Adam Saunders wrote about the impact of technology-based innovation on business productivity:

“The companies with the highest returns on their technology investments did more than just buy technology; they invested in organizational capital to become digital organizations. Productivity studies at both the firm level and the establishment (or plant) level during the period 1995-2008 reveal that the firms that saw high returns on their technology investments were the same firms that adopted certain productivity-enhancing business practices.  The literature points to incentive systems, training and decentralized decision making as some of the practices most complementary to technology.”

 

Organizational capital is a very important concept, critical to enable companies to take full advantage of their technology investments. Similarly, at a country level, nations must strengthen their analog foundations to realize the full benefits from their digital investments.

 

Digital technologies have been diffusing around the world at an unprecedented rate. “The average diffusion lag is 17 years for personal computers, 13 years for mobile phones, and five years for the Internet, and is steadily falling for newer technologies.” The world is more connected than ever before. But, Mishra reminds us that “nearly 6 billion people do not have broadband, 4 billion do not have Internet access, nearly 2 billion do not use a mobile phone, and half a billion live outside areas with a mobile signal.” According to Mary Meeker’s 2015 Internet Trends Report, Internet access and smartphone subscriptions continue to grow rapidly around the world, adding around 200 million per year and 370 million per year respectively. Much progress is being made in closing the digital divide, but much remains to be done.


 

 

Continue reading the full blog, which was posted on October 20, here.


The past few years have seen the rise of what’s been variously referred to as the on-demand, collaborative, sharing, or peer-to-peer economy.  Regardless of what we call it, this trend has captured the public’s imagination.  Articles on the subject now appear fairly frequently.  Some of the articles are focused on the empowerment nature of these technology-based economic models, enabling people to get what they need from each other.  Others are more concerned with on-demand’s impact on the very nature of work in the 21st century.

 

In an excellent 2013 report, industry analyst Jeremiah Owyang argues that the collaborative economy is the evolution of the Internet-based economy of the past two decades.  The one-to-many Web 1.0 made lots of information accessible to individuals, but control remained mostly in the hands of institutions.  It was followed by the many-to-many Web 2.0, where individuals could easily share content and opinions with each other.

Now, the on-demand phase of the Internet economy is enabling individuals to go way beyond sharing information.

 

“An entire economy is emerging around the exchange of goods and services between individuals instead of from business to consumer,” wrote Owyang.  “This is redefining market relationships between traditional sellers and buyers, expanding models of transaction and consumption, and impacting business models and ecosystems…  This results in market efficiencies that bear new products, services, and business growth.”

 

In 2011, Time Magazine named the sharing economy one of 10 Ideas that Will Change the World.  “Someday we'll look back on the 20th century and wonder why we owned so much stuff… [S]haring and renting more stuff means producing and wasting less stuff, which is good for the planet and even better for one’s self-image…  But the real benefit of collaborative consumption turns out to be social.  In an era when families are scattered and we may not know the people down the street, sharing things - even with strangers we’ve just met online  - allows us to make meaningful connections.”

 

This early bloom has now started to fade. “If you want to start a fight in otherwise polite company, just declare that the sharing economy is the new feudalism, or else that it’s the future of work and all the serfs should just get used to it, already,” wrote a recent Wall Street Journal article.  “Uber isn’t the Uber for rides - it’s the Uber for low-wage jobs,” note the critics.  “Boosters of companies like Uber counter that they allow for relatively well-compensated work, on demand.”

 

A Financial Times article reflected on what it means to be running “a collaborative business model within a capitalist framework.  Are the two even compatible?  Or is there a fundamental conflict at the heart of an industry that preaches collaboration but, due to being radically commercialised by venture capital money from Silicon Valley, also needs to profiteer from the goodwill of others if it’s to remain viable?  For the most part it’s a hypocrisy the community is trying to address…  For now, the uncomfortable truth is that the sharing economy is a rent-extraction business of the highest middle-man order.”

 

This past May, OuiShare Fest, a three-day collaborative economy festival, took place in Paris.  There was much discussion that this emerging economy is now practically owned by Silicon Valley’s 1 percent.  “The sharing economy has created 17 billion-dollar companies (and 10 unicorns),” said this article.  In a keynote at the festival, Owyang noted that the VC money being poured into the sector already far outweighs the monies that flowed into social media at this stage of its development.  “It’s worth noting that the early hope that this sharing market would foster altruism and a reduction of income inequality can now be refuted,” he said.  “The one percent clearly own the sharing startups, which means this is continued capitalism - not idealistic socialism.”

 

Read the full post of my July 21 blog here.

"Entrepreneurship has never been easier, but entrepreneurship is on the decline."  I first heard this surprising paradox a few months ago in a talk by MITs’s Andy McAfee.  Digital technologies are inexpensive and ubiquitous, startups have access to all kind of cloud-based business services, and customers can now be easily reached and supported over mobile devices.  It should be easier than ever - as books and articles keep reminding us - to become an entrepreneur and start your own company.  But in fact, entrepreneurship has been in decline for years.

 

Most everyone I’ve mentioned this paradox to - business colleagues, investors, journalists - doesn’t quite believe it.  But that’s what the data shows. In a recent paper, University of Maryland economist John Haltiwanger and collaborators used business data from the US Census Bureau to calculate the annual startup rates over the past several decades - i.e., the number of new firms in each year divided by the total number of firms.  The startup rate was 12% in the late 1980s, went down to 10.6% just before the 2007 Great Recession, and then fell sharply below 8%.  Such sharp declines add up over time.  In the late 1980s, 47% of all firms were 5 years or younger.  The percentage of young firms declined to 39% in the mid 2000s, and has since continued its downward trend.
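
As a quick illustration of the startup-rate metric Haltiwanger and collaborators compute, here is a minimal sketch; the firm counts are hypothetical numbers chosen only to show the arithmetic, not Census Bureau figures.

```python
def startup_rate(new_firms: int, total_firms: int) -> float:
    """Annual startup rate: firms formed in a given year divided by all firms."""
    return new_firms / total_firms

# Hypothetical counts, illustrating rates of the magnitude cited in the text.
print(f"{startup_rate(new_firms=600_000, total_firms=5_000_000):.1%}")  # 12.0%
print(f"{startup_rate(new_firms=400_000, total_firms=5_000_000):.1%}")  # 8.0%
```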

In another recent paper, economists Ian Hathaway and Robert Litan analyzed the same Census Bureau data and showed that, in addition to the continuing decline of new firm formations, failure rates have steadily increased for all companies under 16 years old, and they’ve been particularly high for early-stage firms.

 

Further evidence of this entrepreneurship decline is found in Where the Jobs Are: Entrepreneurship and the Soul of the American Economy, published in September of 2013 by John Dearie and Courtney Geduldig.  Their book is primarily focused on the close link between entrepreneurship and job creation.  It cites a 2009 study that shows that between 1980 and 2005, all net new job creation in the US came from young businesses less than 5 years old.  They conclude that as a result of the decline in entrepreneurship, “the nation’s job creation engine - new business formation - has been breaking down in recent years.”

 

What’s going on?  Why this perplexing paradox that many have so much trouble accepting because it runs counter to the stories of successful billion-dollar ventures we read so much about in the news?  Last month I read an article in Newsweek by technology author and columnist Kevin Maney, Tech Bubble? No, It's a Startup Wealth Gap, that sheds considerable light on these questions.

 

It turns out that startups have their own version of the wealth gap between the rich and superrich - the 1% and .1% - and everyone else.  The rising US wealth and income inequalities - brought to light this past year by Thomas Piketty’s surprising best seller, Capital in the Twenty-First Century - are being repeated in the start-up world.

 

“Everything goes to the top 1 percent,” writes Maney. “[B]elow the top tier you’ll find a whole lot of striving and desperation - a burbling stew of stagnant companies, founder angst and money-losing investments… Losers get scraps. Tech is definitely red-hot, if you look only at the winners.”

 

Continue reading the full blog post, which appeared January 6, here.

I recently read a very interesting article, Big-Bang Disruption.  The article was written by Larry Downes and Paul Nunes and published in the March 2013 issue of the Harvard Business Review.  It was later expanded into a book, Big Bang Disruption: Strategy in the Age of Devastating Innovation, published this past January.

 

The authors’ key premise is that innovation is once more undergoing drastic changes, driven by the exponential advances in digital technologies.  Companies are now able to create products that are simultaneously better, cheaper and more appealing than those of competitors.  As a result, “entire product lines and whole markets are now being created or destroyed overnight.”  Examples include Skype, iTunes, the Kindle, Netflix, Facebook, Twitter, and smartphone apps like Google Maps and Angry Birds.


Big-bang disruptions are often unplanned and unintentional.  They are typically discovered through continuous market experimentation.  They upend the conventional thinking on strategy, marketing and innovation, giving rise to a new set of business rules.  “Nearly everything you think you know about strategy and innovation is wrong,” they note.

 

The article contrasts big-bang disruption with disruptive technologies and innovations, the well-known concepts most associated with Harvard professor Clayton Christensen.  In his seminal 1997 book, The Innovator’s Dilemma, Christensen succinctly defined disruptive innovations as “new products, services, or business models that initially target small, seemingly unprofitable customer segments, but eventually evolve to take over the marketplace.”  Industry leaders are often blindsided by such innovations, because their attention is focused on their existing products and customers.  “No company or industry is immune.”


Management experts have embraced Christensen’s work.  They all pretty much agree that companies must be on the lookout for such potentially disruptive changes, and try to spot them as early as possible.  Once it’s clear that the innovation is inevitable, the company must step up to embrace it - whether it likes it or not - with efforts like in-house skunkworks, marketplace experiments, collaborations with early adopters and evaluation of potential acquisitions.  If the business mobilizes quickly enough, it can turn the disruptive change from an existential threat to a strategic opportunity.


“But the strategic model of disruptive innovation we’ve all become comfortable with has a blind spot,” write Downes and Nunes.  “It assumes that disrupters start with a lower-priced, inferior alternative that chips away at the least profitable segments, giving an incumbent business time to start a skunkworks and develop its own next-generation products."

 


“We’re accustomed to seeing mature products wiped out by new technologies and to ever-shorter product life cycles.  But now entire product lines - whole markets - are being created or destroyed overnight.  Disrupters can come out of nowhere and instantly be everywhere.  Once launched, such disruption is hard to fight.  We call these game changers big-bang disrupters. They don’t create dilemmas for innovators; they trigger disasters.”


“The first key to survival is understanding that big-bang disruptions differ from more-traditional innovations not just in degree but in kind. Besides being cheaper than established offerings, they’re also more inventive and better integrated with other products and services.  And today many of them exploit consumers’ growing access to product information and ability to contribute to and share it.”


In a recent Forbes interview, Paul Nunes was asked what a company can do to predict or defend against big-bang disruptions, given their speed and scale.  “Realistically,” answered Nunes, “there is no solution to the problem.  What we are advocating is gaining an understanding of the new realities of the market, and transforming your organization to better align with those realities.”


Continue reading the full blog, which was first published on March 12, here.


Tom Malone gave a very interesting talk on collective intelligence at the IBM Cognitive Systems Colloquium, which I recently attended and wrote about.  Malone is Professor of Management at MIT’s Sloan School and the founding director of the MIT Center for Collective Intelligence (CCI).  His research is primarily driven by this fundamental question: “How can people and computers be connected so that - collectively - they act more intelligently than any individuals, groups, or computers have ever done before?”  This is a very important question to explore to help us understand the impact of our increasingly smart machines on the very nature of work and organizations.

 

Malone and his collaborators are conducting research on a number of topics in this area (http://en.wikipedia.org/wiki/Collective_intelligence).  Do groups exhibit characteristic levels of intelligence which can be measured and used to predict the group’s performance across a wide variety of cognitive tasks?  If so, can you devise tests to measure the group’s intelligence using methodologies and statistical techniques similar to those that have been applied to measure the IQs of individuals for the past hundred years?

 

To answer these questions they conducted a number of studies where they randomly assigned individuals to different groups, which then worked on a variety of tasks.  Their research results were published in the October 2010 issue of Science.

 

 

The studies measured the individual IQs of each of the participants, and found that group intelligence was only moderately correlated with the average and maximum intelligence of the individual group members.  But they did find three group attributes that were statistically significant factors in predicting how well each group would do on a wide range of tasks.  One was the average social sensitivity and perceptiveness of the group, that is, the ability of group members to read each other’s emotions.  They also found that groups in which a few people dominated the conversation did not perform as well as those groups where speaking and contributions were more evenly distributed.  Finally, the studies found that collective intelligence positively correlated with the proportion of women in the group, most likely because women generally score higher in social perceptiveness tests.

 

Much additional research is needed, but there seems to be evidence that something like collective intelligence does indeed exist and can be measured.  It’s primarily dependent on how well the individual members of the group work together and, to a lesser extent, on their individual abilities.

 

In another set of studies, Malone and collaborators looked at whether groups that included both humans and computers did better at making decisions than either the humans or computers by themselves.  Their experiment, reported in this working paper, used the concept of prediction markets to predict what the next play would be in a football game.  Some of the predictions were made by groups of humans; some by different computer-based statistical models; and some by combining the human and computer predictions.

 

They found that the computer-only predictions were better than those made by the human groups, but that the hybrid of humans and computers made the best overall predictions, being both more accurate and more robust to different kinds of errors.  They attribute the results to the fact that people and computer models have different strengths and weaknesses.  The computers use sophisticated statistical analysis and have no biases, but have trouble dealing with unstructured and common sense information that humans are very good at.  On the other hand, humans are prone to biases and fatigue and are often not so good at evaluating probabilities.  In addition, our judgement can be influenced by the dynamics of the group.  “Therefore,” notes the paper, “combining human and machine predictions may help in overcoming the respective flaws of each.”
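
The working paper’s actual prediction-market mechanics are more elaborate, but the basic idea of blending the two sources can be sketched very simply; the equal weighting and the probabilities below are illustrative assumptions, not values from the study.

```python
def combine_predictions(human_prob: float, model_prob: float,
                        human_weight: float = 0.5) -> float:
    """Blend a human-crowd probability with a statistical model's probability.
    A weighted average is one simple way to let each source offset the other's
    weaknesses (human bias and fatigue vs. the model's trouble with
    unstructured, common-sense context)."""
    return human_weight * human_prob + (1 - human_weight) * model_prob

# Illustrative only: the human group gives a 70% chance that the next play
# is a pass, the statistical model gives 40%; the blended estimate is 55%.
print(f"{combine_predictions(human_prob=0.70, model_prob=0.40):.2f}")  # 0.55
```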

 

Read the rest of the blog on my Web site here.

A few weeks ago I wrote about my recent participation in the 2013 Roundtable on Institutional Innovation, an Aspen Institute event that took a close look at the impact of digital technologies on the evolution of companies and other organizations.  At the Roundtable, I heard a number of interesting presentations.  One of the most intriguing was The Collaborative Economy, by Jeremiah Owyang, a partner and analyst at Altimeter Group, whose research focuses on the changing relationships between companies and their customers.

 

Since the advent of the steam engine in the late 18th century, technology advances have been radically improving the productivity of business, enabling companies to significantly lower their prices while providing higher quality products and services.  Throughout the Industrial Revolution of the past two centuries, a stream of disruptive technologies - steam engines, railroads, electricity, cars, airplanes, phones, radio, TV and so on - has transformed the economy and just about every single industry, as well as re-shaping the institutions of society.

 

Is our present digital revolution qualitatively different from those of the past two centuries, which were primarily driven by machines and other industrial-age physical technologies?  Could it be that our continuing advances in digital technologies are now leading us to a new kind of information society and knowledge-based economy, which could, over time, be as transformative as the 18th century transition from pre-industrial agrarian societies to technology-based industrial societies?

 

The industrial economy has been primarily based on production, with GDP as the key measure of economic activity.  The collaborative economy feels very different.  There is a lot I find appealing about it, in particular, its potential impact on the really critical issue of jobs in the digital economy.  Given that large public and private sector institutions are not expected to create enough new jobs, the collaborative economy might be one of the most important ways for individuals to come up with all kinds of innovative ways of making a living.

 

“An entire economy is emerging around the exchange of goods and services between individuals instead of from business to consumer,” writes Owyang.  “This is redefining market relationships between traditional sellers and buyers, expanding models of transaction and consumption, and impacting business models and ecosystems.  We refer to this trend as the Collaborative Economy, defined as . . . an economic model where ownership and access are shared between corporations, startups, and people. This results in market efficiencies that bear new products, services, and business growth.”

 

This trend is also referred to as the sharing economy, the name NY Times columnist Tom Friedman used in a recent OpEd, Welcome to the ‘Sharing Economy’.  The OpEd featured Airbnb, an online community marketplace where individuals list, find, and book accommodations around the world.  Airbnb was founded in 2008 and already lists accommodations in 34,000 cities and 192 countries, including 23,000 in New York City and 24,000 in Paris.  “The sharing economy  - watch this space. This is powerful,” is Friedman’s overall conclusion.

 

In his excellent research report on the subject, Owyang explains what the collaborative economy is all about.  He views it as the next major phase of the Internet-based economy of the past two decades. In the first phase, the Web made lots of information accessible to individuals, but control remained primarily in the hands of institutions.  This one-to-many Web 1.0 then gave way to the many-to-many social media phase.  Web 2.0 now enabled individuals to easily communicate and share content and opinions with each other.  Now, the collaborative economy is enabling individuals to go beyond sharing information.
To continue reading the full blog, go to this page for the original posting from August 19.

 
