Empowering Personal Data For Exponential Economic Growth
By Milton Pedraza, Luxury Institute CEO and Richard Whitt, former Google Senior Executive
Post pandemic, the rapid evolution of the personal data economy will have huge implications for the luxury industry. Ours is an industry where client relationships are an inherent part of the business model. Ethical luxury brands can achieve true personalization only when clients trust our privacy policies and share their rich, real-time data by consent, so that we can personalize and customize to build deep, long-term client relationships. This is the future of personalization.
Today, we find ourselves living our lives in the context of the unthinkable – a pandemic that has paralyzed the entire world. With all that we must face and overcome in the near-term, it is still important to take stock of where we want to be on the other end of a dark tunnel.
As we pause to reflect, we should consider some of the opportunities to explore new ways, not merely to survive, but to thrive and flourish as individual human beings, and as the collective human family. If novel, radically empowering market institutions and practices were ever needed, it is for that still-brightly beckoning future. As Rahm Emanuel somewhat infamously observed during the 2008 global financial crisis, “Never let a good crisis go to waste.”
Despite some claims to the contrary, the digital economy of 2020 has become largely platform-centric, rather than human-centric. While the lauded platform-based business model can provide tremendous benefits on all sides, it also can become overly weighted to privilege one side over another – and above all, the platform itself. Unfortunately, in too many instances, Web-based digital platforms have adopted that substantial asymmetry. At one end, the “user” bears little resemblance to an empowered and protected customer or client. On the other end flourishes a vast ecosystem of data brokers, advertisers and marketers, each often bearing no legitimate commercial relationship to the “user.”
The prevailing assumption of too many of these companies is that personal data is the new oil, a resource to be “extracted,” “processed,” and utilized to persuade, or even manipulate, the unaware user. As pernicious as the analogy is, it does capture the assumed reality of personal data as the natural resource of the Web economy. Implicit in the “data as oil” framing behind Big Data is that, above all other types of information (such as data generated by machines or natural environments), the data of user “subjects” must be utilized at industrial-strength levels.
In particular, the largest platforms, such as Facebook, Google, Microsoft, Apple, and Amazon, have embraced the model of providing “free” products and services in exchange for the personal data of their “users.” Many people do not fully appreciate the extent of the tradeoffs involved. Massive server farms in the “cloud” aggregate the data of literally billions of people. Data scientists then build, test, and deploy algorithms based on AI and machine learning, in order to generate inferences and insights about our thoughts and feelings. The results return to a feedback loop that drives further actions, including shaping human intentionalities and behaviors. In essence, each user is targeted by organizations seeking to aim their advertisements, offers, and messages – designed to derive profits for a data-hungry ecosystem.
One of the many ironies of this situation is that the personal data that the platform companies extract, analyze, and utilize for their own financial gain represents a relatively thin slice of humanity. If the data makes money, all efforts must be aimed at extracting it; if it doesn’t make money, now, or in the future, it isn’t deemed to be worth anything. Most of us would like to think of ourselves as human beings with intrinsic worth that far exceeds our net pecuniary value to shadowy players in the Web economy.
Further, much of our personal data sits in walled-off silos. For example, Facebook has no access to individual medical data. Google has no access to individual financial data. Studies also show that a large chunk of the data held by third parties is inaccurate, incomplete, or obsolete – including so-called “dark” data. It turns out that the way data is collected today severely limits its full value potential – not just as a matter of money – to individuals, and to society. Even the ad-tech platforms themselves are limited by the existing system. Part of the proof is in the dismal ad click-through and purchase rates, and the ever-growing popularity of browser-based ad blockers and “Do Not Track” notifications.
The point here is not necessarily to impugn those entities that have created and fostered the current Web ecosystem. Its players have become caught up in an often-toxic data access system perpetuated for the benefit of a relative few. Many would welcome the opportunity to exit the existing surreptitious data extraction/analysis/influence model but may have painted themselves into a corner where too-rapid change could prove fatal.
It is said that disruptive change rarely comes from within an existing system. The solutions to major problems perpetuated by entrenched incumbents tend not to emerge from those same incumbents. It was the brilliant scientist Buckminster Fuller who boldly stated, “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”
In that spirit, we propose to build a new parallel digital model that makes obsolete, and even irrelevant, the existing toxic ad-tech model. There is an opportunity to develop an entirely new paradigm for the Age of the Algorithm: namely, to place the individual, and her personal data rights, at the center of the digital economy. Some initiatives, such as privacy laws and legislation, are designed to reduce harm from the current generation of AI deployed by digital platforms. But harm reduction, while absolutely crucial, is not enough.
We propose nothing less than promoting the autonomy and best interests of individual human beings. When individuals can control their data and digital lives, the door opens wide to the advent of trustworthy entities to serve their needs. This means that ethical brands that align their purpose, values and behaviors to serve the best interests of their customers and society, can profit exponentially in this alternative version of the digital economy. Governments can build much-needed trust as well, if they take steps to fully respect the digital human rights of their citizens. It is a win-win-win proposition. Let’s briefly describe the proposed new agential paradigm with some useful examples. In later papers, we will further develop these ideas, and concrete steps to put them into action.
1. Create a new personal digital economy, where individuals control personal data technologies.
The first step in the new digital economy is to flip the current data control model on its head and return control to the rightful owners – individuals. Today, laws such as Europe’s General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA), point us in that general direction. Technology exists as well to empower individuals to aggregate their personal data in their own secure, encrypted cloudlet, device, or hard drive. Logically, the individual human being is the only entity on the planet that knows all her data sources: medical, financial, travel, entertainment, social network, location-based, Web browsing histories, retail purchases, video chatting, calling, and texting, and more. Moreover, only these same individuals also can define and understand their true needs, wants and expectations in real time. And even in the surveillance economy, individuals within the GDPR orbit have the sole legal consent power to share relevant data with trusted parties.
For too many companies today, data is just a form of property, a line item on the balance sheet. Another way to conceive of all those data points is as elements in human digital “lifestreams” – in essence, the many ways we experience the world of the Web, and beyond. We develop these digital lifestreams through our activities online, the media we upload to social networks, and other kinds of interactions. Of course, in many instances, these elements of our digital lives are controlled by the websites and applications with which we are interacting. To fully unlock the real value (both economic and otherwise) of one’s digital lifestreams, the individual must have full by-consent, by-permission control of her personal data. She must be free to use and easily share access to the lifestreams for relational, consumption, and “common-good” causes.
Numerous types of technology can provide individuals with control over their digital lifestreams. To name just one, personal data storage can be populated with structured data from every source. For example, within what could be called their digital health lifestreams, individuals can aggregate a panoply of useful metrics: DNA analysis, medical history, real-time vital signs such as body temperature, blood pressure, and heart rate, exercise regimen, dietary consumption, mental state via text/voice analysis, location, and other critical health elements. When combined, these disparate pieces of data can yield a treasure trove that can serve the health interests of the individual. If shared on a voluntary and anonymized basis, that same data could yield enormous benefits to society as well. Just imagine if such privacy-protecting data access technologies were widely available to researchers today, as our doctors and other healthcare personnel valiantly combat a global pandemic.
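As a rough illustration of how such a consent-gated personal data store might work, here is a minimal Python sketch. It is a hypothetical simplification, not an existing product or API: all names (PersonalDataStore, grant_consent, share) and the flat record format are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Illustrative personal data store: the individual holds the data
    and grants per-party, per-category consent before anything is shared."""
    records: dict = field(default_factory=dict)   # category -> list of records
    consents: dict = field(default_factory=dict)  # party -> set of consented categories

    def add_record(self, category: str, record: dict) -> None:
        self.records.setdefault(category, []).append(record)

    def grant_consent(self, party: str, category: str) -> None:
        self.consents.setdefault(party, set()).add(category)

    def revoke_consent(self, party: str, category: str) -> None:
        self.consents.get(party, set()).discard(category)

    def share(self, party: str, category: str, anonymized: bool = False) -> list:
        """Release data only with consent; optionally strip direct
        identifiers before sharing, e.g. for research use."""
        if category not in self.consents.get(party, set()):
            raise PermissionError(f"No consent for {party} on '{category}'")
        data = self.records.get(category, [])
        if anonymized:
            data = [{k: v for k, v in r.items() if k != "name"} for r in data]
        return data

# Usage: share health metrics with a researcher, anonymized, by consent.
store = PersonalDataStore()
store.add_record("health", {"name": "Alice", "heart_rate": 62})
store.grant_consent("research_lab", "health")
shared = store.share("research_lab", "health", anonymized=True)
```

The key design point is that access control lives with the individual: a party with no grant, or one whose grant has been revoked, simply cannot read the data.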
2. Enable the emergence of private, personalized AI services for the benefit of the individual’s well-being.
Corporations typically utilize data garnered from several sources for many purposes. With the assistance of armies of expert services providers, they use their proprietary data for critical functions – such as aggregating, analyzing, reporting, generating key performance indicators and insights, planning, investing, purchasing, and executing innovative growth strategies and tactics. This is how businesses protect, enhance and promote their own best interests. Businesses maintain strict confidentiality over their internal data unless they are public companies; even then, they report only what is required by law.
Up to now, entire tech developer ecosystems have emerged within, and around, platforms such as Apple, Facebook and Google in order to develop algorithms for the ad-tech economy. As it turns out, many “free,” and even paid, apps persuade or coax users into one-sided use terms, and then sell or provide the users’ personal data to third parties, including the platform. Individual software applications also have inherent limitations – they specialize in one discrete function, and so are limited in their ability to provide insights that encompass the complex, multi-faceted digital lifestreams – the particular events and sets of experiences – within which ordinary people live.
We believe that individuals should have the same rights and capabilities as corporations in order to protect, enhance and promote their best interests. One way to accomplish this is to engender an ecosystem of developers which will work for individuals directly. These customer-centric digital tools and algorithms would only serve one master: the individual. And many of these tools would be multi-dimensional, mirroring the digital lives of the people they serve. From the individual’s perspective, no longer would it be necessary to use dozens, if not hundreds, of inefficient and ineffective apps that never talk to one another. Importantly, the insights and inferences generated by these tools and algorithms would be completely private. They belong to each individual, who then can choose to share them with trusted individuals and entities for mutual benefit. Just as corporations do, each human being should have considerable say in whether, and how, her unique person is presented to herself, and to the rest of the world.
3. Trusted intermediaries protect, enhance and promote the best interests of individuals.
Individuals from all walks of life confront many challenges when dealing with the complicated sets of interfaces and actions presented by the Web. Those with especially complex lives, and the financial wherewithal, currently avail themselves of expert services to help run their lives more smoothly and efficiently. When given new freedom to control their digital lives – building and maintaining new databases, establishing new bots and other computational tools – most individuals likely will not want to take on that added complexity all by themselves. Most of us simply desire the data-derived insights, recommendations and implementation strategies, and then straightforward ways to incorporate that knowledge as enhancements into our lives.
To fill that need, we envision a new type of trusted intermediary that will emerge to represent the interests of customers and clients – from just a handful, to many millions at a time. These gatekeepers will have just one mission: to staunchly and vigorously protect, enhance and promote the best interests of their customers and clients. The trusted intermediaries would provide access to the digital technologies, virtual agents such as bots and personal AIs, and individual-enhancing products and services. They could do so as direct providers, as part of an ecosystem of vetted, ethical providers, or a mix of both.
A chief challenge of course is to build organically the necessary levels and degrees of trust to make this human-centric ecosystem a reality. The common law of the UK and the United States, among other countries, points to a particularly attractive legal model to govern the practices of these trusted intermediaries. Fiduciary law recognizes that a specific mechanism is necessary to address power asymmetries and trust challenges in a commercial relationship – whether derived from an entity having superior technical know-how, or an individual becoming vulnerable by sharing with the entity some highly personal information. Physicians, attorneys, accountants, and even librarians are examples of existing trust-based fiduciaries in our society.
Under this proposed model, digital intermediaries would agree to operate under heightened fiduciary obligations; these include duties of care (do me no harm), loyalty (promote my interests), and confidentiality (protect my confidences). Importantly, such intermediaries could help bring back some much-needed balance to the interactions between large platform companies and their “users.” Many potential for-profits and non-profits could develop operational models under this category of trusted intermediaries. Because individuals would be free to choose and partner with intermediaries, for a variety of functions, no one entity would be able to dominate the space. Individuals could choose to join natural cohort groups and communities of interest to better leverage positive outcomes. One-off digital transactions can evolve naturally into longstanding trust-built commercial relationships.
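For illustration only, the three fiduciary duties described above could be encoded as policy gates that a trusted intermediary checks before acting on a client’s behalf. The names (ProposedAction, passes_fiduciary_check) and the boolean model are hypothetical simplifications of what would, in practice, be far richer legal and technical tests.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A hypothetical action an intermediary wants to take for a client."""
    benefits_client: bool         # loyalty: does it promote the client's interests?
    risk_of_harm: bool            # care: could it harm the client?
    discloses_confidences: bool   # confidentiality: does it expose private data?
    client_consented: bool        # disclosure is permitted only with explicit consent

def passes_fiduciary_check(action: ProposedAction) -> bool:
    """Illustrative encoding of the three fiduciary duties as policy gates."""
    duty_of_care = not action.risk_of_harm
    duty_of_loyalty = action.benefits_client
    duty_of_confidentiality = (not action.discloses_confidences) or action.client_consented
    return duty_of_care and duty_of_loyalty and duty_of_confidentiality

# Example: selling client data to a third party without consent fails every gate.
sell_data = ProposedAction(benefits_client=False, risk_of_harm=True,
                           discloses_confidences=True, client_consented=False)
ok = passes_fiduciary_check(sell_data)  # False
```

The point of the sketch is that, unlike ad-tech terms of service, every gate is evaluated from the client’s side: an action that serves the intermediary but not the client is simply rejected.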
If this approach is adopted, one certain outcome is that individuals will experience a far more level playing field, as human beings, clients and customers. Basic human trust will become the “new oil” to fuel mutually beneficial personal and business relationships. Autonomy-denying terms like “user” and “object” need not be used again to describe a human being in the digital age.
Conclusion: The digital era contains so much potential for all of us. We don’t need to continue turning over our collective freedom and individual autonomy to large institutions. Instead, we can work together to empower human beings to have greater individual agency and choice. Humans, machines and institutions then can exist together, building mutually beneficial, trusted relationships. Well-earned trust, rather than surreptitiously gathered personal data, will fuel a new digital economy.
About Richard Whitt, GLIA Foundation President
Richard Whitt is president of the GLIA Foundation, and founder of the GLIAnet Project which seeks to build a new Web ecosystem based on trustworthy digital intermediaries and Personal AIs.
An experienced corporate strategist and technology policy attorney, Richard currently serves as Fellow in Residence with the Mozilla Foundation, and Senior Fellow with the Georgetown Institute for Technology Law and Policy. Richard is an eleven-year veteran of Google (2007-2018), where most recently he served as Google’s corporate director for strategic initiatives.
Richard is a cum laude graduate of Georgetown University Law Center, and magna cum laude graduate of James Madison University.
About Milton Pedraza, Luxury Institute CEO
Milton Pedraza is the CEO of the Luxury Institute and a private investor. Luxury Institute is the world’s most trusted research, training, and elite business solutions partner for luxury and premium goods and services brands. With the largest global network of luxury executives and experts, Luxury Institute has the ability to provide its clients with high-performance, leading-edge business solutions developed by the best, most successful minds in the industry.
Over the last 17 years, Luxury Institute has served over 1,100 luxury and premium goods and services brands. The Institute has conducted more quantitative and qualitative research with affluent, wealthy and uber-wealthy consumers than any other entity. This knowledge has led to the development of its scientifically proven high-performance, neuroscience and emotional intelligence-based education system, Luxcelerate, that dramatically improves brand culture and sales performance.
Milton is a recognized investor and authority on the Personal Data Economy, Privacy and Personalization, and Artificial Intelligence technologies. Prior to founding the Luxury Institute, his successful career at Fortune 100 companies included executive roles at Altria, PepsiCo, Colgate, Citigroup and Wyndham Worldwide.