🎙️ Q&A with Debbie Reynolds

Big Data is Watching: Privacy Expert Sounds Alarm on Runaway Corporate Surveillance

Photo credit: Debbie Reynolds

To commemorate Data Privacy Week in the United States, we interviewed Debbie Reynolds, "The Data Diva". Reynolds is a data privacy and emerging technology expert, a member of the U.S. Department of Commerce's Internet of Things (IoT) Advisory Board, and IEEE Committee Chair for Cyber Security for Next Generation Connectivity Systems for Human Control and Flow, both roles focused on data privacy. With over 20 years of experience navigating the ever-evolving landscape of emerging technologies, Reynolds guides companies on how to responsibly leverage customer data. In this Q&A, Reynolds shared her insights on the field's pressing challenges, the changes she is seeing, the obstacles that lie ahead, best practices to adopt, and how users can collectively build a more ethical digital future.

"Americans use 4,416,720 GB of internet data including 188,000,000 emails, 18,100,000 texts and 4,497,420 Google searches every single minute" - Forbes, 2019

Sudeshna Mukherjee (SM): Looking back on your 20-plus years in emerging technologies and data privacy, how has the landscape evolved, and what challenges do you see looming ahead?

Debbie Reynolds (DR): My career predates the commercial internet, so I have witnessed the explosion of technology and data firsthand. As technology evolved, privacy concerns arose due to the sheer volume of personal data flowing into various systems. Now, we're experiencing exponential data growth and increasingly sophisticated data usage, making privacy even more intricate. Meanwhile, regulators strive to hold companies accountable through laws, but technology outpaces legal frameworks. This "regulation gap" will likely widen as new technologies emerge. While keeping up with innovation is impossible, the rapid evolution of technology and digital transformation will make crafting effective regulations even more challenging.

SM: I agree with you that technology is outpacing law. How will this impact vulnerable communities who lack access to legal recourse, compared to those in the Global North? Do you have any quick thoughts on how existing laws can better protect them?

DR: Absolutely, and it's a pressing issue. We risk creating a digital caste system where some have far greater access to justice than others. Existing inequalities in legal access are only amplified in this digital landscape, where novel forms of harm emerge faster than regulations can adapt. While regulations are crucial, they're not a magic bullet. We need a stronger ethical framework. Just because something is legal doesn't make it ethical. Companies must adopt a more human-centered approach to data and technology use, prioritizing the well-being of individuals and proactively upholding their ethical duties, not waiting for regulations to catch up.

SM: You mentioned companies taking responsibility for data and technology use. As an advocate for responsible data use, what is your advice to companies that are just starting to collect and leverage customer data? What are some best practices, and which companies are already using them? 

DR: Transparency should be foundational in how companies handle individuals' data. Companies can no longer operate solely on a transactional basis, offering goods or services without acknowledging their data responsibility. Customers demand transparency in how their data is handled, and rightly so. Responsible companies go beyond understanding the benefits data offers their business and articulate how it benefits the individual. When they cannot demonstrate these individual benefits, red flags should go up. Many data abuses stem from practices that harm, not help, the individual. By using this as a litmus test, companies can navigate the future of data protection effectively.

SM: Speaking of data abuse: major platforms like Meta and LinkedIn act as both social hubs and virtual "phone books," so when data leaks occur there, protection becomes key. With the rise of generative AI, the potential for these leaks to escalate is immense. What proactive and reactive strategies do you recommend for companies to safeguard against such breaches?

DR: Companies can mitigate data breaches with two kinds of strategies, proactive and reactive. Proactively, companies need to be more selective in what data they collect and why. Often, breaches occur because companies collect unnecessary data or retain data with low business value but high security and privacy risk. The solution? Collect as little data as possible, define the purpose of collection before collecting, and have a clear data lifecycle plan. Modern data privacy practices like data minimization emphasize keeping data only as long as its purpose requires. Previously, companies could store data indefinitely, but recent breaches highlight the risks of storing outdated data. Cybercriminals target such data, knowing it might be less secure. Therefore, developing a strategy for data end-of-life is crucial. This could involve anonymization, deletion, or giving users control over their data. This shift from traditional data practices requires companies to be more mindful of collection and retention.
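To make the data end-of-life idea concrete, here is a minimal sketch of what an automated retention job might look like, assuming a hypothetical SQLite table named customers with purpose and collected_at columns; the names, schema, and retention periods are illustrative, not any particular company's practice:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical policy: how long each collection purpose justifies
# keeping directly identifying data (illustrative values only).
RETENTION = {
    "order_fulfillment": timedelta(days=365),
    "marketing": timedelta(days=90),
}

def enforce_retention(db_path: str) -> None:
    """Anonymize records whose retention period has lapsed."""
    conn = sqlite3.connect(db_path)
    for purpose, max_age in RETENTION.items():
        # Assumes collected_at is stored as an ISO-8601 string, so
        # lexicographic comparison matches chronological order.
        cutoff = (datetime.now() - max_age).isoformat()
        conn.execute(
            "UPDATE customers SET name = NULL, email = NULL, phone = NULL "
            "WHERE purpose = ? AND collected_at < ?",
            (purpose, cutoff),
        )
    conn.commit()
    conn.close()
```

Run on a schedule, a job like this turns a retention policy from a document into an enforced practice; a real system would also need to cover backups, logs, and downstream copies of the data.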

SM: As a member of the U.S. Department of Commerce's Internet of Things (IoT) Advisory Board, you have a unique perspective. What excites you about the potential of IoT, and what concerns must companies address?

DR: The sheer ubiquity of IoT devices is astounding! People often underestimate how many internet-connected devices they own, from smart speakers and thermostats to connected appliances like a washer/dryer or a toaster. These devices are essentially computers without screens, constantly performing updates and collecting data you might not be aware of. This makes them prime targets for cybercriminals, as many users rarely check security settings or internet connections. One alarming statistic suggests that IoT devices can be attacked within five minutes of connecting to a network.

Despite the security concerns, I am excited about the potential of IoT, especially in medical applications. Imagine doctors remotely monitoring patients with devices that provide crucial data for diagnosis and treatment. Such advancements can truly benefit individuals. However, technology is a double-edged sword. While it offers benefits, we must acknowledge and address the risks. Often, the focus is on convenience and features, neglecting potential downsides. Consumers ultimately bear the risks, often unknowingly. I aim to empower people to make informed choices. By understanding the risks and their data's implications, individuals can decide which devices or features they want, or even avoid them altogether.

Ultimately, it's about giving people agency over their data and understanding how these devices operate. This is the key to ensuring a responsible and secure future for the Internet of Things.

SM: Your comment on consumer rights and privacy reminds me of recently published Australian research highlighting how smart speakers can be misused in domestic abuse situations. While researchers and policymakers see the potential for these devices to assist law enforcement and victims, ethical concerns remain. What are your thoughts on using IoT devices as emergency tools, like the American 911 system?

DR: The challenge lies in the fundamental difference between consumer rights and human rights in the context of privacy in the United States. This surfaces in situations like domestic violence investigations. Imagine an abusive husband who owns the smart speaker in a household. The victims, even though directly affected, have no legal claim to the device or its data. This gap in the legal framework presents a significant hurdle to using IoT devices in law enforcement investigations.

For example, an app designed for abuse victims to discreetly seek help might seem promising. However, who controls the device? Who owns it and pays for it? If the abuser holds those rights, the abused user lacks privacy protections on that device. Sending a secret message could be risky, as the owner can access all data. Therefore, using IoT devices in such scenarios is not only challenging but potentially dangerous for abuse victims. The current legal framework needs reevaluation to ensure human rights, particularly in vulnerable situations, are adequately protected within the evolving IoT landscape.

SM: Let's explore emerging technologies like generative AI, voice cloning, and biometrics. Which one holds the most promise but also poses risks regarding data privacy and ethical use? How can we mitigate these risks?

DR: There's a lot to unpack here! Generative AI, while not a revolution, is an evolutionary step forward in computing. Its impact will be horizontal, affecting nearly every industry. From everyday tools to corporate applications, it is forcing companies to rethink training and education on responsibly leveraging these tools. Just like any powerful tool, it is a double-edged sword. Generative AI offers exciting possibilities but also carries potential downsides. Understanding both sides is crucial for maximizing its benefits. The same applies to biometrics like voiceprints and facial recognition.

Privacy concerns with these technologies stem primarily from data collection and retention. Privacy issues are minimized if a technology doesn't collect or retain personal data. However, many of these technologies are designed to remember data, not to forget it, which raises ethical concerns. The onus lies on developers and implementers to consider data retention carefully. Ask yourselves: Why collect this data? Is it necessary? Can we minimize risk to individuals? Some companies are exploring promising solutions, like blurring faces or deleting data once a specific task is complete (e.g., verifying a passport identity at an airport). I believe such approaches to reducing data retention represent best practice for responsible biometric data collection.
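To illustrate the face-blurring approach in code, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector. This is an illustrative stand-in, not how any particular vendor does it; the point is destroying identifying detail before an image is ever stored:

```python
import cv2

def blur_faces(in_path: str, out_path: str) -> None:
    """Detect faces in an image and irreversibly blur them before storage."""
    # OpenCV ships a pretrained frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A heavy Gaussian blur discards the identifying detail in each face.
        img[y:y + h, x:x + w] = cv2.GaussianBlur(
            img[y:y + h, x:x + w], (99, 99), 30
        )
    cv2.imwrite(out_path, img)

blur_faces("street_scene.jpg", "street_scene_blurred.jpg")  # hypothetical files
```

Blurring at capture time means that even if the stored images later leak, the biometric identifiers are already gone.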

SM: You raise a crucial point about the potential misuse of biometric data collection. Government COVID programs, initially intended for contact tracing, became tools for widespread surveillance and highlighted the dangers of unchecked data collection. Echoing Kashmir Hill's concerns about Clearview AI's facial recognition being used by law enforcement in the United States, personal data is circulating online without our knowledge. Given that the US has consumer rights but no GDPR-like framework, how can its residents balance data privacy and utility? What should the government do, considering the 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, to address these concerns?

DR: Absolutely. There's a dire need for both regulation and education. The Federal Trade Commission's recent actions against companies misusing facial recognition data show a growing awareness of potential abuses. Identifying the "whys" and "hows" of data collection is crucial, as many instances demonstrate clear misuse. In my opinion, extensive data collection inherently increases the risk of abuse. Over-collection, lack of data lifecycle strategies, and repurposing data beyond its intended use are highly problematic.

While regulation is necessary, it always lags behind technological advancements. By the time regulations react to a harmful practice, the damage might already be done. What if someone can't afford legal recourse after suffering harm from data misuse? Or if the harm is so severe it destroys their livelihood? We are entering an age where there may be no adequate redress for the harm humans may face due to data abuse or misuse. We need proactive solutions to mitigate these harms before they reach the courts.


SM: Shifting gears, what accomplishment are you most proud of, particularly within the area of data privacy protection? And where do you still hope to make an impact?

DR: While pinpointing a single "proudest moment" is challenging, I feel deeply fulfilled by having an audience that listens to The Data Diva Talks Privacy Podcast, including regulators who can shape privacy practices. Witnessing their response to my insights, whether through policy changes or companies adjusting their data handling, truly motivates me. 

An impactful example was being interviewed and quoted by Kashmir Hill in a New York Times article about child privacy. I highlighted how a specific company collected children's data and potentially matched it across the internet. Remarkably, within a week of that article's publication, the company opted to blur all children's images. This incident demonstrated the power of advocacy and raising awareness, encouraging positive changes even without regulations.

My ultimate goal, particularly in the States, is to establish privacy as a fundamental human right. Currently, unlike in Europe and India, the United States lacks this crucial constitutional safeguard. Having a human right to privacy would address scenarios like the abuser owning the smart device and controlling all the data. With privacy as a human right, even the abused individual would have rights, regardless of the device ownership.


SM: You're a true role model for those seeking career transitions or advancement in data privacy. I want to ask, for early-career professionals, particularly those from underrepresented and diverse groups, what core skills and opportunities can help them stand out in the field of AI governance and responsible technology?

DR: Thank you for highlighting the importance of diversity! It's crucial in data privacy, as it deals with the nuances of the human experience. Different perspectives and values enrich the conversation and lead to more inclusive solutions. For aspiring data privacy professionals, I emphasize staying ahead of the technology curve. Regulations always lag behind technological advancements, so anticipating privacy risks is essential.

If you are in a company implementing new technology, proactively identify and voice concerns—research how similar situations have been handled elsewhere. Become a thought leader within your organization. Additionally, strong communication skills are paramount. You must be able to effectively communicate privacy issues across all levels, from C-suite executives to frontline workers. Unfortunately, many organizations operate in silos, with limited cross-functional understanding. Future leaders must bridge these gaps and ensure everyone understands how privacy impacts their role.

While companies often claim "everyone is responsible" for data privacy and cybersecurity, true accountability requires specific guidance and ownership. As you navigate your career, advocate for clear expectations and responsibilities within your organization. Remember, these emerging issues are not fully addressed in traditional business education. You will stand out and contribute significantly to the field by embracing these challenges and thinking strategically.

SM: With so many data privacy certifications available, how effective are they, especially for those starting? Do you recommend specific ones? Are there other resources to consider?

DR: For those new to the field or seeking more credibility, I recommend two key strategies: volunteering in privacy-related projects and exploring certifications.

Volunteering with companies, organizations, or even local businesses provides valuable hands-on experience and a deeper understanding of practical privacy challenges. This can significantly boost your resume and demonstrate your commitment to the field.

I view certifications as particularly helpful for those without extensive prior experience. While certifications might not be necessary for established professionals like myself, they can be valuable for newcomers looking to build credibility and demonstrate their knowledge base.

These certifications can also help individuals connect their skills to the privacy domain and identify potential career paths. Remember, privacy impacts various departments and roles within organizations. You can enhance your career prospects across different sectors by showcasing your understanding of privacy principles.

SM: What content have you found particularly helpful or insightful, even outside of certifications?

DR: While I don't recommend just one source, there are ways to stay informed. The International Association of Privacy Professionals (IAPP) website is a great starting point. It offers valuable resources, including free ones, even if you're not a member or planning to get certified. However, the key is to be proactive in your learning. I highly recommend setting up a Google Alert for privacy-related articles. This will keep you updated on the latest developments and emerging issues, ensuring your knowledge stays current. Staying ahead of the curve is crucial in this rapidly evolving field.

SM: Thank you! Since it's Privacy Week, do you have a favorite book that explores the themes we've discussed today?

DR: Excellent question! I recently read Your Face Belongs to Us by Kashmir Hill, whom you mentioned earlier. It's a deep dive into facial recognition technology and its privacy implications, which aligns perfectly with our conversation. Hill's reporting style is commendable, going beyond surface-level analysis and delving into the core issues.

Debbie Reynolds can be reached on LinkedIn.

Edited by: Lee Howard
