IoT Security / Privacy: Behind the Tech Curve – Update

White Paper
April 27, 2020

The IoT’s growing range of services and functionalities poses new security and privacy threats.  Laws are being adopted and “best” practices updated.  But AI data inference and facial recognition technologies using freely available information sidestep network security measures and anonymized data, making “consent” and willful disclosure beside the point.  It is up to the public to decide what to do. But first it must exercise its right to know. Can it?

Following the IT development cycle, the rise of the “Internet of Things” (IoT) – devices incorporating sensors, micro-controllers, software and connectivity – is far outpacing legal frameworks, aka “safeguards”, particularly in the consumer sector.  Boosted by FCC approval of 5G spectrum, this technology/safeguards chasm is widening, driven by an exponential increase in device-generated data, outmoded add-on software development practices and, in the start-up space, by priorities, time, money, resources and, in some cases, by values.

Exploding Market

Gartner has estimated that there were 14.2 billion connected things – cars, home security and automation systems, insulin pumps, pacemakers, FitBits – in use in 2019.  It projects 25 billion connected things by 2021, an increase of nearly 5 billion over its prior estimate of 20.4 billion devices in use in 2020.[1]  IDC data estimates that 152,200 IoT devices will be connected every minute by 2025, when nearly 80 billion devices are forecast to be added annually.[2]

Fortune Business Insights forecasts that the global IoT devices market, which was valued at $190 billion in 2018, will reach $1.11 trillion by 2026 at a 24.7% compound annual growth rate.  The North American IoT market alone generated almost $84 billion in 2018.[3]

Ericsson has projected that cellular IoT growth will lead to 3.5 billion cellular IoT connections by 2023, at an anticipated annual growth rate of 30%.  This is almost double the increase of prior estimates – in part because of the prospective 5G release and anticipated demand in China, and in Asia generally.[4]

Security – Early Efforts

Manufacturers have traditionally not emphasized security, and the cyber security industry itself has focused on large, private sector firms such as financial institutions, not on IoT threat vectors into public infrastructure (utility grids, traffic control) and consumer products.

Recognizing that the IoT is making security imperative for all embedded software, the Federal Trade Commission (the first federal agency to do so) issued a detailed report on the IoT’s consumer privacy and security risks in January 2015[5], calling on manufacturers to incorporate secure software in their products. Among the FTC’s recommendations –

  • Build security into devices at the outset, not as an afterthought in the design process[6];
  • Consider a “defense-in-depth” strategy; use multiple layers of security;
  • Monitor connected devices throughout their life cycle; provide remote security patches.

Demonstrating growing concern about the IoT’s exposure of consumers, the FTC’s report was followed in February by a report from Sen. Ed Markey (D-MA) criticizing car makers’ “alarmingly inconsistent and incomplete” security measures.[7]

Growing Vulnerabilities

Even as the IoT began to ramp up, the world-wide average total cost of a data breach (350 companies in 11 countries) in 2014 was $3.8 million, a 23% increase from 2013.  U.S. data breach-related costs began growing rapidly.[8]  And studies began to show that cyber security vulnerabilities were present in newly introduced network-capable consumer products, such as infant monitors.[9]

While IoT devices have many benefits, they also carry many vulnerabilities and weaknesses, which are not limited to insecure passwords, networks and device interfaces.

IoT devices are frequently compromised and added to botnets.  These dangerous networks are frequently used to distribute malware and to launch distributed denial-of-service (DDoS) and brute-force attacks against businesses of all sizes globally.[10]

Notable IoT statistics concerning attacks on or using connected devices include:

  • Fewer than 20% of risk professionals can identify a majority of their organization’s IoT devices. Of the 605 respondents to a combined Ponemon Institute and Shared Assessments study, fewer than 20% could identify the majority of their organization’s IoT devices. Furthermore, 56% report not keeping an inventory of IoT devices, and 64% report not keeping an inventory of IoT applications.
  • 75% of infected devices in IoT attacks are routers. Symantec data indicates that infected routers accounted for 75% of IoT attacks in 2018, and connected cameras accounted for 15% of them.
  • IoT devices typically attacked within 5 minutes. Five minutes. That’s the average amount of time that it takes for an IoT device to be attacked once connected to the Internet, according to NETSCOUT’s Threat Intelligence Report from the second half of 2018.
  • IoT malware attacks skyrocketed in 2018; the trend continued into 1H 2019. SonicWall reports that IoT malware attacks jumped 215.7% to 32.7 million in 2018 (up from 10.3 million in 2017). The first two quarters of 2019 have already outpaced the first two quarters of 2018 by 55%. If this rate continues, it will be another record-setting year for IoT malware attacks.[11]


The California IoT Law, the first in the nation to expressly regulate the security of connected devices, went into effect on January 1, 2020.[12]  In contrast to the California Consumer Privacy Act of 2018 (CCPA)[13], which protects personal information, the new Cal IoT law is aimed at safeguarding the security of both IoT devices and any information contained on them.

Currently, there are no U.S. national security standards for IoT devices; security features and protections are left to the discretion of manufacturers and vendors.  Supporters in Congress have tried repeatedly but unsuccessfully to advance an IoT cybersecurity bill that would set minimum security standards for the connected devices that the federal government purchases for various projects. The most recent attempt is the IoT Cybersecurity Improvement Act of 2019, which was introduced in the Senate on March 11, 2019.[14]

Reducing Risks – the Business Perspective

Breaches of consumer and proprietary business information are very costly for firms, resulting in fines, damage awards, litigation and compliance expenses, forensic audits and consulting fees.[15]

Practical steps and questions businesses should consider to mitigate risks to IoT implementations include:

  • Device-Specific Software. Design security software into a network-capable product, taking into consideration what kind of information the device will collect and how that data will be used.
  • Environment. How the device will work in its environment; e.g., will the device be deployed in a public space or in an area where there is an expectation of privacy; will the device be maintained by a third-party vendor?
  • Change in Risk Profile. How does the IoT device change a company’s traditional business risks?  Consider its industry, the type of information it collects, and whether it is government-regulated or adheres to industry standards.
  • Legal.  Shift data breach risks via contract with IoT vendors; require vendors to meet product security and testing standards, carry insurance, and have information security programs which are “commercially reasonable” in the firm’s industry and consistent with its compliance obligations; require vendor data breach notification and compliance with applicable national laws and regulations; and require that vendor subcontractors conform to the vendor’s data handling obligations.[16]

Societal Implications – the User Perspective

Given the geometric growth rate of IoT device-generated data and communication nodes, the privacy and security implications of the IoT extend orders of magnitude beyond the traditional corporate risk calculus.

The 20th century business risk and product liability models, with their focus on the financial interests of corporate stakeholders (shareholders, employees, suppliers, etc.) and cost/benefit are too limited in scope.  “Cost per stolen record” does not acknowledge, nor can “optimizing” it deal with the potential threat – frequently personalized, and immediate – to the physical safety of masses of users presented by the IoT security / privacy vacuum.

Consider …

The structure of a firm’s IoT implementation involves several components (devices > communication gateways > smartphones > cloud storage or on-site storage), each of which offers multiple axes of attack[17] for private and state-sponsored hackers, and criminal gangs.

Take portable monitors.  There is no single application “location” for wearable medical and fitness trackers.  The device’s functionality is actually distributed – on the wearable itself, in a smartphone app and in the cloud. Similarly, some of the medical and fitness data reside on the device itself, but much of it is stored on the smartphone, and some is stored in the cloud.[18]

A 2016 study at Binghamton University, New York found that smartwatches, fitness trackers and other wearable devices may be giving away bank PINs. Data from sensors embedded in wearable devices can be used to track hand movements which, when combined with computer algorithms, can be used to identify access codes.[19]

Devices which collect, store and transmit personal data via radio-frequency identification (RFID) technology, the next-generation bar code, also present very significant security vulnerabilities.

In 2016, malware clandestinely placed on IoT devices remained undetected for an average of 312 days.  Efforts since then have been directed at developing software to fully automate the security and malware detection lifecycle to reduce the delay to minutes or seconds.[20]

Compounding the situation, legitimate outside experts are being deterred from investigating or disclosing security vulnerabilities in IoT devices by the threat that manufacturers will accuse them of violating their exclusive “Digital Rights Management” rights under the “anti-circumvention” rule, Section 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which makes it illegal to break access controls to copyrighted works, including software.[21]

But beating hackers to the punch makes no difference if there is self-serving delay in informing those affected.  Apart from the many such cases involving private sector businesses, there are federal agency examples, including the U.S. Office of Personnel Management (inordinate, unexplained delay in disclosure of suspected state-sponsored hack of millions of current and former federal employee records, including security clearance information), and the FDIC (cover-up of suspected Chinese government hack of top officials’ personal information and U.S. bank customer data).


How will a “balance” between security and privacy on the IoT be struck?  Articles about technology security and privacy often conclude with such a question.  But AI algorithms fed by big data have now made such questions beside the point and demand the public’s attention, as discussed below.[22]

Your Data Are Not Anonymous

This paper now narrows its focus to the security and privacy of personal identifying information (PII) in connection with devices and services vis-à-vis the explosively growing IoT, a/k/a “the Wild, Wild West of the Internet”[23].  By design these IoT “solutions” amass incomprehensible volumes of data (2.5 quintillion bytes daily, according to one estimate[24]) about device operating environments and users in order to, ostensibly, improve service.  Whether such data is stored locally (on the device) or in the cloud, the driving force is to profit by aggregating and processing the data, and then monetizing the knowledge derived from it.  These processes raise many privacy-related issues[25], including whether PII can be effectively anonymized on the IoT, as discussed below.

It is important to note at the outset that much digitized data which has been anonymized is therefore not considered “personal” data in many countries and is shared and marketed by researchers and data brokers without violating privacy laws.  For example, the EU’s General Data Protection Regulation (GDPR)[26], effective in 2018, attempts to safeguard user data while promoting online commerce. The GDPR allows firms to gather data without consent, provided it has been properly anonymized[27], use it for any purpose, and retain it indefinitely.  Nevertheless, anticipating that the IoT would raise concerns requiring a range of responses, the EU Commission’s 2010 report, The Cluster of European Research Projects on the Internet of Things[28], targeted seventeen security and privacy issues for research.

Similarly, under the California Consumer Privacy Act of 2018 (CCPA)[29], “personal information does not include information that is de-identified or aggregated.”[30]

Data Types

Effective advertising and marketing depend on having detailed and accurate customer data.  When a firm tracks its customers’ behavior on its website, it is gathering “first-party” data.  It may then “share” that information with other companies to create synergies.  Such sharing is being driven by the increasing availability of IoT data (GPS sensors, smart utility meters, fitness devices, etc.); these are examples of “second-party” data. Firms also frequently supplement their own first-party data with additional information (“third-party” data) from data brokers such as Acxiom, which collects up to 1,500 data points on 700 million consumers worldwide.[31]

How PII is Anonymized

Where a dataset such as a credit card file includes individuals’ PII or personally identifying “attributes” (quantitative data about a person or household), the data is “de-identified” in order to protect privacy and meet legal requirements.  Such data may be subjected to a variety of anonymizing techniques (some of which are outlined below) which may retain all or some of the data but which attempt to remove source-identifying information.

  • Data Masking. Data is disguised by substituting one value for another in a manner which prevents subsequent unmasking.
  • Data Fabrication. Some or all identifying attributes are replaced with fictitious data.
  • Generalization. Data is combined or, conversely, limited, to reduce the possibility of identification while preserving the desired degree of accuracy.
  • Data Swapping. Certain personally identifying values in a dataset are rearranged to prevent cross-referencing to data sources.
  • Data Perturbation (or Differential Privacy). A dataset can be anonymized by making slight modifications to certain of its quantitative elements. The size of the changes is in proportion to the size of the database, so that the changed values appear valid although in fact they are not.
  • Homomorphic Encryption. Data is encrypted in a manner which prevents reading but allows manipulation by the “data controller”.[32]  Results can be decrypted upon return to the controller.[33]
  • Synthetic Data. Computer-generated data which is unrelated to the actual records in the subject dataset. Models of a real dataset’s patterns are derived by applying various statistical functions. This avoids modifying or using the original dataset, protecting privacy and security.[34]
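The data perturbation approach in the list above can be illustrated with a short sketch.  The Python fragment below is a minimal illustration, not drawn from any cited source; the salary figures, value bounds and epsilon setting are invented for the example.  It releases a dataset’s mean with Laplace noise calibrated so that adding or removing any single record has only a bounded effect on the published figure:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from the Laplace(0, scale) distribution (inverse CDF)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturbed_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with noise sized so that one record
    changes the output distribution by at most a factor tied to epsilon."""
    clipped = [min(max(v, lower), upper) for v in values]  # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)           # max change from altering one record
    true_mean = sum(clipped) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical salary records; the released value is close to, but not
# exactly, the true mean, so it "appears valid although it is not".
salaries = [52_000, 61_500, 48_250, 75_000, 58_900]
print(perturbed_mean(salaries, lower=0, upper=200_000, epsilon=1.0))
```

A smaller epsilon produces more noise and stronger privacy; the trade-off between the utility of the released statistic and the protection of individual records is the core of the technique.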

Even though the application of these or other techniques may de-identify PII, that does not mean it has been effectively anonymized, as discussed below.

Nothing Digital Is Anonymous

The vulnerability of anonymized PII has been pointed out repeatedly.  In 1996, well before the advent of the IoT, the Massachusetts Group Insurance Commission (MGIC) released anonymized data showing state employees’ hospital visits.  Names, addresses and social security numbers had been deleted.  Then-governor William Weld assured the public that patient privacy had been preserved.  Weld was soon proven wrong.  A researcher located the governor’s medical records in the MGIC dataset by using Weld’s zip code and birth date (obtained from voter rolls), together with the knowledge that he had visited the hospital on a particular day, to identify him.  She sent his medical records to his office.[35]

In 2008 a study by two University of Texas researchers successfully identified Netflix users in a Netflix-supplied dataset of nameless customer records.[36]  A 2013 Harvard study re-identified patient names in a publicly available Washington State hospitalization dataset.[37]  In 2015, as the potential range of the IoT’s applications began to be appreciated and its growth accelerated, one study demonstrated that de-identified credit card metadata could be combined with as few as four random pieces of public information (e.g., Twitter or Instagram posts), using algorithms, to re-identify 90% of shoppers by name.[38]

A 2017 study by University of Melbourne researchers re-identified anonymized Australian health department medical billing data by cross-referencing “mundane facts” such as the birth years of mothers and their children.[39]  As one of the researchers remarked, “It’s convenient to pretend it’s hard to re-identify people, but it’s easy”.[40]
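The mechanics behind these re-identification results are straightforward.  The sketch below uses invented records (the names, ZIP codes, dates and diagnoses are hypothetical, chosen only to echo the Weld anecdote) to show how a de-identified medical dataset can be joined to a public voter roll on nothing more than ZIP code, birth date and sex:

```python
# De-identified medical records: names removed, quasi-identifiers retained.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1982-03-14", "sex": "F", "diagnosis": "asthma"},
]

# Public voter roll: names present, same quasi-identifiers.
voters = [
    {"name": "W. Weld",  "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "J. Smith", "zip": "02144", "dob": "1982-03-14", "sex": "F"},
]

def reidentify(medical, voters):
    """Link records sharing (zip, dob, sex); a unique match defeats 'anonymity'."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    index = {}
    for v in voters:
        index.setdefault(key(v), []).append(v["name"])
    hits = []
    for m in medical:
        names = index.get(key(m), [])
        if len(names) == 1:  # exactly one voter matches: re-identified
            hits.append((names[0], m["diagnosis"]))
    return hits

print(reidentify(medical, voters))  # the first record links to a single voter
```

No encryption is broken and no network is penetrated; the attack consists entirely of an ordinary database join against exogenous public data.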

Data’s Lifecycle  – Multiple Hack Vectors

IoT data is continuously generated by devices, streams into and transits through networks, and resides in the cloud, on call.  In order to route data to the desired destination, the communication interfaces or “nodes” of a network are identified with media access control (MAC) addresses.  These addresses can be used to track data paths.  By combining the MAC addresses of several devices and sensors it is possible to develop unique data profiles from which user location and behavior patterns can be identified and traced.  Consequently, both users’ PII and network paths must be anonymized.  This is difficult because of the great number of devices and services, because data is produced in incompatible formats, and because network technology has limited ability to maintain anonymity.
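As a minimal sketch of the profiling risk just described (the MAC addresses, access-point names and timestamps below are invented), a passive observer who merely logs which addresses appear at which network nodes can reconstruct per-device movement profiles without ever reading payload data:

```python
from collections import defaultdict

# Invented observations: (MAC address, network node, hour of day).
sightings = [
    ("aa:bb:cc:01:02:03", "cafe-ap",   8),
    ("aa:bb:cc:01:02:03", "office-ap", 9),
    ("de:ad:be:ef:00:01", "gym-ap",    7),
    ("aa:bb:cc:01:02:03", "home-ap",  19),
    ("de:ad:be:ef:00:01", "home-ap",  20),
]

def movement_profiles(sightings):
    """Group sightings by MAC address to reconstruct each device's daily path."""
    profiles = defaultdict(list)
    for mac, node, hour in sightings:
        profiles[mac].append((hour, node))
    return {mac: sorted(path) for mac, path in profiles.items()}

for mac, path in movement_profiles(sightings).items():
    print(mac, "->", path)
```

Because the MAC address persists across networks, the resulting time-ordered path is itself a behavioral fingerprint, which is why both the data and the routing metadata must be anonymized.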

Further, to achieve “comprehensive” anonymity, in addition to data, routing and communication paths, data modeling, storage, analytics and aggregation must be secured.[41]

This involves moving “…beyond the deidentification release-and-forget model”.[42]  It requires active involvement by stakeholders, including device manufacturers, IoT cloud services and platform providers, third-party application developers, government and regulatory agencies (standardization and interoperability), and users and non-users (vis-à-vis information captured by devices about persons within devices’ “view”).[43]

But is end-to-end security over the IoT actually possible?  At present the answer is no.  As one commentator put it, “If your systems are digital and connected… to the internet, …they can never be made fully safe.  Period.”[44]

Legal Safeguards?

Unlike the individual case studies cited above, last year researchers at the Université Catholique de Louvain and Imperial College London constructed a model to estimate the degree of difficulty in deanonymizing any given dataset. They found that a dataset with just 15 demographic attributes “would render 99.98% of Americans unique”.  And as the authors point out, data brokers market deidentified datasets containing much more information (scores more data points) per individual.[45]
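The intuition behind the UCLouvain / Imperial College finding can be made concrete with a toy calculation (the five records below are invented): the share of a population uniquely pinned down by a combination of attributes climbs rapidly as attributes are added.

```python
from collections import Counter

# Invented demographic records.
population = [
    {"zip": "10001", "age": 34, "sex": "F", "children": 2},
    {"zip": "10001", "age": 34, "sex": "F", "children": 0},
    {"zip": "10001", "age": 41, "sex": "M", "children": 2},
    {"zip": "94105", "age": 34, "sex": "F", "children": 2},
    {"zip": "94105", "age": 29, "sex": "M", "children": 1},
]

def fraction_unique(population, attributes):
    """Share of people whose attribute combination appears exactly once."""
    combos = Counter(tuple(p[a] for a in attributes) for p in population)
    unique = sum(1 for p in population
                 if combos[tuple(p[a] for a in attributes)] == 1)
    return unique / len(population)

for attrs in (["zip"], ["zip", "age"], ["zip", "age", "sex", "children"]):
    print(attrs, fraction_unique(population, attrs))
```

With one attribute nobody in this toy population is unique; with four, everyone is.  The cited paper’s model extends the same idea to 15 attributes and the full U.S. population.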

So, in addition to problematic network security, the UCLouvain / ICL and comparable studies strengthen the conclusion that companies and data brokers cannot – in reality – meet the security and anonymization requirements of, for example, the GDPR and the CCPA.

Two issues dominate –

  • Under the GDPR[46], CCPA[47] and similar regimes, how can a firm make a “reasonable” judgment that its anonymization protocols are compliant when there are no metrics specifying how difficult it must be to re-identify data?  Consider the studies (drawn from a large and accumulating body of similar literature) noted here.  Legalistic standards such as “reasonableness” do not and cannot, ultimately, provide solid guidance in the face of continuously evolving hacking expertise.  One view of the “guidance” provided by a statutory / regulatory reasonableness standard in this context would be whatever the consensus of experts might be about the state of anonymization technology relative to contemporaneous hacking techniques (of which “experts” often learn only post hoc) during the period at issue.  Another approach courts may use is the “risk/utility” test, judging whether a defendant’s actions were “reasonable” and comparable to others in the same industry, and whether the potential harm outweighs the burden of adopting protocols necessary to prevent such harm.[48]  Still a third “standard” by which to interpret what is “reasonable” is the set of 20 controls in the Center for Internet Security’s Critical Security Controls[49], which then-California Attorney General Kamala Harris stated define the minimum level of information security that all organizations that collect or maintain personal information should meet.  The failure to implement all the controls that apply to an organization’s environment constitutes a lack of “reasonable” security.[50]

Absent precise guidance, counsel advise tech clients to mitigate risk by applying cost / benefit analyses to their network security and data anonymization protocols, i.e., how much is protecting users’ privacy worth to us?[51]  (Off-loading these risks onto third-party vendors and consequently elevating users’ risks is also often recommended.[52])

  • Apart from unauthorized data access and the security of anonymized data itself, studies have repeatedly demonstrated that it is possible to re-identify individuals using external data sources. Where a de-identified dataset is analyzed using exogenous information which is not itself de-identified (the universe of such information is exploding), such sources can function as decryption keys, unlocking the identities of individuals in the anonymized dataset.

PII Security – False Pretenses

These two hard realities, in addition to network vulnerabilities, are why tech company defendants argue truthfully and – richly – that despite their privacy policies’ representations, users and consumers should have no reasonable expectation of privacy.  This in turn raises the issue of whether legislators are misleading the public about the efficacy of privacy laws by promoting the idea that they can actually accomplish their purpose.  Yes, the perfect is the enemy of the good, and no, there is no ethically defensible reason not to attempt to protect what personal privacy still exists.  But government needs to be clear that the notion that personally identifying information can be effectively “anonymized” or “de-identified” is wrong.  The failure to be frank with the public about something as existential as the protection of personal privacy is contributing to the erosion of public trust.[53]  When authorities do not level with the public they corrode social institutions whose declining legitimacy is becoming politically destabilizing.[54]

Public officials’ honesty about privacy and the efficacy of efforts to protect it have never been more important than they are now in the digital age.  Why?  Because “privacy” is a defining element of what it is to be a human being.  It is central to all humane societies.

Harvard Business School Prof. Shoshana Zuboff has argued that the IoT is part of a rapidly expanding tech ecosystem with monopoly power which compromises free will and autonomy, and threatens freedom.

“The age of surveillance capitalism is a titanic struggle between capital and each one of us.  It is a direct intervention into free will, an assault on human autonomy.”  It is the capture of our intimate personal details, even of our faces.  “They have no right to my face, to take it when I walk down the street.”  Such violations threaten our freedom, Zuboff says.  “When we think about free will, philosophers talk about closing the gap between present and future.  We make ourselves a promise:  I’ll do something with that future moment – go to a meeting, make a phone call.  If we are treated as a mass of ‘users’, to be herded and coaxed, then this promise becomes meaningless.  I am a distinctive human.  I have an indelible crucible of power within me…  I should decide if my face becomes data, my home, my car, my voice becomes data.  It should be my choice.”[55]

Although California does not have a specific biometric privacy law regulating the capture, processing, storage and transfer of biometric data by businesses, similar to Illinois’ 2008 Biometric Information Privacy Act[56], it has attempted to address Prof. Zuboff’s concerns.  The CCPA treats biometric information (including facial recognition data)[57] as one of the categories of PII protected by the law, and as such provides consumers the right to obtain information about the collection and sale of that information and to opt out of its sale.  In addition, California recently joined Oregon, New Hampshire and several municipalities in banning law enforcement use of facial recognition technology in body cameras.[58]

Nevertheless, under the CCPA affirmative opt-outs of “personal information” collection are required, and the proposed California Privacy Rights Act of 2020 opt-out or limitation provisions regarding the sale or sharing of “sensitive personal information” do not acknowledge the continuing insecurity of cybersecurity.[59]

On a related matter – vetting the integrity of how PII is analyzed and used by commercial and other entities – the federal Algorithmic Accountability Act[60] (introduced by Democrats in April 2019) proposes to impose federal oversight of AI and data privacy.  The Act would regulate “automated decision systems” that make decisions, or facilitate human decision-making, affecting consumers.  Businesses would be required to screen for implicit AI bias and discrimination and resolve any issues.  The Federal Trade Commission would have oversight responsibility.

Both government and the tech industry have promoted the demonstrable fictions of “privacy” and “anonymity” in order to mollify the public and keep it from demanding an “opt-in”[61] consent regime in which users and consumers affirmatively assume the risks that their private personal information will be compromised – but which would stifle the “sharing” ecosystem and cost businesses huge sums.  Moreover, it has been argued that privacy and cybersecurity are converging, and that with the growth of AI data inference sophistication, individual consent of any type will no longer play a role in privacy and data protection.[62]  If so, can the ability to perform data pattern recognition be constrained?[63]  No, that genie is out of the bottle.

For companies this is about facilitating advertising and marketing efforts.  Studies have shown that consumers are willing to share information with a brand that they trust will protect their information.[64]  Anything that threatens trust, e.g., acknowledgement of the limits of data anonymity and cybersecurity generally, threatens the bottom line.  This concern is summed up by business scholars Sachin Gupta and Matthew Schneider.

The promised benefits of data-driven marketing are at grave risk unless businesses can do a better job of protecting against unwanted data disclosures. The current approach of controlling access to the data or removing personally identifiable information does not control the risk of disclosure adequately. Other approaches, such as aggregation, lead to severe degradation of information. It’s time for businesses to consider using statistical approaches to convert the original data to synthetic data so they remain valuable for data-driven marketing, yet adequately protected.[65]

Notably and typically, the ability to use de-identified data from external sources to identify individuals with AI data inference algorithms is not addressed.[66]  In the interests of maintaining a functioning democracy by promoting a smart, humane capitalism, it must be.  In this writer’s view, there is no chance this will originate from business or government.

A Failure of Will

Regulators’ failure to disrupt the overreaching of Facebook, Amazon, Apple, Netflix and Google through antitrust enforcement has allowed them to dominate online retail offerings, direct consumers to preferred sites, distribute content via a limited number of platforms, and use their algorithms and AI to serve up news according to inferred user preferences[67] – all this while extracting economic rents.  The result?  Consumers seeking online services for which there are no or few realistic alternatives are faced with a choice – agree to dominant actors’ terms of service and privacy policies which do not and cannot ensure security and privacy, or go without.

Government failure to address privacy and public safety extends to threats arising from the startup world as well, a prime example being Clearview AI.  Clearview’s facial recognition software allows the user to take a person’s photo, upload it to Clearview’s platform, and then compare it with the billions of photographs in its database, which have been scraped off the internet, including from Facebook, YouTube and Instagram sites.[68]  Clearview’s biggest clients have been law enforcement agencies.[69]  In February 2020, Clearview disclosed that hackers had broken into its database and stolen its entire client list.[70]

Both an opt-in consent regime and a moratorium on facial recognition technology should be considered by federal and state governments.  But whether or not any action is taken, authorities have the obligation to tell the public that data inference and recognition technologies are out of the bottle, that their use cannot be effectively constrained given their spread, and that they will continue to be available on the dark web.


The vestiges of Americans’ personal privacy are now under assault by “functionalities” and algorithms that are stripping what is left of their right to be let alone.  In the words of one tech legal executive “…privacy is now best described as the ability to control data we cannot stop generating, giving rise to inferences we can’t predict.”[71]  This is incompatible with a democratic society.  Only the public can take back its privacy by demanding[72] that its representatives act.  But first its representatives must tell it the truth.  This is not something which can be addressed through policy or law.  If this is to happen it will only be because the press continues to expose what is happening to Americans’ privacy.[73]

Let us hope that when government[74], with the aid of big tech[75], starts using tracking technology to enforce public health lockdowns, and then inevitably in other “exigent circumstances”[76], the press will shine its light on exactly what is being done.[77]

[1] See C. Crane, “IoT Statistics: An Overview of the IoT Market”, hashedout, Sept. 4, 2019.  Avail. at:

[2] M. Rosen, “Driving the Digital Agenda Requires Strategic Architecture”, IDC Data, 2015.  Avail at:

[3] See “IoT Market Analysis – 2026”, Fortune Business Insights, Jul. 2019.  Avail. at:

[4] Ericsson Monthly Mobility Report, June 2018. Avail at: documents/2018/ericsson-mobility-report-june-2018.pdf.

[5]  “Internet of Things Privacy and Security in a Connected World”, FTC, Jan. 27, 2015.  Avail. at: https://www.ftc.gov/news-events/press-releases/2015/01/ftc-report-internet-things-urges-companies-adopt-best-practices.

[6] Id.  See also “Instead of company investments at the front end, the unstructured work of safety and privacy evaluation has instead been picked up by cybersecurity researchers and hackers at the back-end”, L. Zanolli, “Welcome to Privacy Hell, Also Known as the Internet of Things”, Fast Company, Mar. 23, 2015.  Avail. at: ; J.M. Porup, “‘Internet of Things’ security is hilariously broken and getting worse”, Ars Technica, Jan. 23, 2016.  Avail. at:

[7] Sen. E. Markey (D-MA), “Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk”, Feb. 2, 2015.  Avail. at: 2.pdf.

[8] Ponemon Institute, 2015 Cost of Data Breach Study: Global Analysis (May 2015) (costs rising from $188 per stolen record in FY2013 to $217 per record in FY2015).

[9]  M. Stanislav and T. Beardsley, “Hacking IoT: A Case Study on Baby Monitor Exposures and Vulnerabilities”, Rapid7 (Sept. 2015) (identifying backdoor credentials, direct browsing, authentication bypass, and predictable information leak vulnerabilities in certain baby monitors). Avail. at: /docs/Hacking-IoT-A-Case-Study-on-Baby-Monitor-Exposures-and-Vulnerabilities.pdf

[10] C. Crane, n. 1, supra.

[11] Id.

[12] A manufacturer that sells or offers to sell a connected device in California must equip the device with a reasonable security feature or features that are all of the following: “(1) Appropriate to the nature and function of the device. (2) Appropriate to the information it may collect, contain, or transmit. (3) Designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”  Cal. Civ. Code § 1798.91.04(a).

While the law only vaguely defines the term “security feature,” it provides that, subject to the preceding requirements, a connected device equipped with a means for authentication outside a local area network will be deemed to have a reasonable security feature if either of the following requirements is met: “(1) The preprogrammed password is unique to each device manufactured” or “(2) The device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.”  Cal. Civ. Code § 1798.91.04(b).

The law also has a broad definition of “connected device,” which is defined as “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”  Cal. Civ. Code § 1798.91.05(b).

As such, the law is not limited to mere consumer devices, but potentially includes, to the extent a device is not subject to other federal law or regulations, industrial IoT devices, retail point-of-sale devices, and health-related devices that connect to the internet and that receive an IP address or Bluetooth address.

[13] n. 29, infra.

[14] The list of failed bills includes:  the IoT Cybersecurity Improvement Act of 2017 (which would have set minimum security standards for connected devices purchased by the government, but not electronics in general); the IoT Federal Cybersecurity Improvement Act of 2018; the IoT Consumer TIPS Act of 2017 (which would have directed the FTC to develop educational resources for consumers around connected devices); and the SMART IoT Act (which would have required the Department of Commerce to conduct a study on the state of the industry).

[15] R. Merker, “Buying IoT Technology: How to Contract Securely”, Jun. 17, 2016.  Avail. at:

[16]  Id.

[17] See Data’s Lifecycle – Multiple Hack Vectors, pp. 13-14, infra.

[18] C. Pettey, “Build a Blueprint for the Internet of Things”, May 25, 2016.  Avail. at:

[19] O. Hughes, “Your smartwatch might be giving away your bank PIN, say scientists”, International Business Times, Jul. 7, 2016.  Avail. at:

[20] P. Szoldra, “DARPA’S Cyber Grand Challenge Has Computers Hacking Each Other”, Tech Insider, Jul. 13, 2016.  Avail. at:

[21]  The Electronic Frontier Foundation has sought to have Section 1201 declared unconstitutional.  Avail. at: (Jul. 21, 2016).

[22] See S. Thompson, “One Nation, Tracked:  Twelve Million Phones, One Dataset, Zero Privacy,” New York Times, Dec. 19, 2019 (surveillance omnipresent:  scores of unregulated companies recording and storing millions of mobile phone users’ location data (“your life is an open book”) via software add-ons to mobile phone apps).  Avail. at:

[23] C. Brook, “New IoT Security Bill Passes Another Hurdle”, DigitalGuardian, June 13, 2019.  Avail. at:

[24] B. Marr, “How Much Data Do We Create Every Day? The Mind-Blowing Stats Everyone Should Read”, Forbes, May 21, 2018.  Avail. at:

[25] In addition to the security of anonymized PII, other privacy- and security-related IoT issues include:  user consent acquisition; user control and unrestricted data movement; use of data for purposes outside the scope of consent; patching and upgrading device firmware and software without user consent; and failure to update software security.

[26] General Data Protection Regulation (Regulation (EU) 2016/679), effective May 25, 2018; replaced the EU Data Protection Directive (95/46/EC) and its member state implementing laws.  Avail. at:

[27] GDPR, Recital 26 (Principles of Data Protection Not Applicable to Anonymous Data).

[28] H. Sundmaeker et al., “Vision and Challenges for Realizing the Internet of Things,” Cluster of European Research Projects on the Internet of Things (2010). Avail. at:

[29] Cal. Civ. Code §§ 1798.100-1798.199, effective Jan. 1, 2020.

[30] Cal. Civ. Code § 1798.140(o)(3) (as amended).

[31] S. Gupta and M. Schneider, “Protecting Customers’ Privacy Requires More than Anonymizing Their Data”, Harvard Business Rev., June 1, 2018. Avail. at:

[32] See “Definition of Terms”, GDPR, n. 26, supra.

[33] A. Hern, “ ‘Anonymised’ Can Never Be Totally Anonymous, Study Says”, The Guardian, Jul. 23, 2019. Avail. at:

[34] “What Is Data Anonymization?”, 2020.  Avail. at:

[35] N. Anderson, “ ‘Anonymized’ Data Really Isn’t – And Here’s Why Not”, Ars Technica, Sept. 8, 2009.  Avail. at:

[36] A. Narayanan and V. Shmatikov, “Robust De-anonymization of Large Sparse Datasets”, Sept. 1, 2008.  Avail. at:

[37] L. Sweeney, “Matching Known Patients to Health Records in Washington State Data.” 2013, Data Privacy Lab, IQSS, Harvard University. Avail. at:

[38]  Y. de Montjoye, et al. “Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata”, Science, Vol. 347, Issue 6221, pp. 536-539 (2015).  Avail. at:

[39] C. Culnane, et al., “Health Data In An Open World: A Report on Re-Identifying Patients in the MBS/PBS Dataset and the Implications for Future Releases of Australian Government Data”, Univ. of Melbourne, Dec. 18, 2017.  Avail. at:

[40] O. Solon, “Data Is A ‘Fingerprint’:  Why You Aren’t As Anonymous As You Think Online”, The Guardian, Jul. 13, 2018.  Avail. at:

[41] C. Perera, et al., “Big Data Privacy in the Internet of Things Era”, IT Professional, Vol. 17, No. 3, pp. 32-39 (2015).  Avail. at: Data_Privacy_in_the_Internet_of_Things_Era.

[42] L. Rocher, et al., n. 45, infra.

[43] C. Perera, et al., n. 41, supra.  See also C. Bannon, “The IoT Threat to Privacy”, TechCrunch, Aug. 14, 2016 (arguing that IoT firms’ best practice consumer privacy policies should be modeled on Creative Commons’ licenses: a “layered” consumer privacy policy with legal code, plain language, and machine-readable (code restricting technology access only to consumer-authorized information) layers).  Avail. at:

[44] A Bachman, “Internet Insecurity, Part 5:  The End of Cybersecurity” Series, Harvard Business Rev., May 23, 2018 (“Here’s the brutal truth:  It doesn’t matter how much your organization spends on the latest cybersecurity hardware, software, training, and staff or whether it has segregated its most essential systems from the rest. If your mission-critical systems are digital and connected in some form or fashion to the internet (even if you think they aren’t, it’s highly likely they are), they can never be made fully safe. Period.”).  Avail. at:

[45] L. Rocher, et al., “Estimating the Success of Re-identification in Incomplete Datasets Using Generative Models”, Nature Communications, Vol. 10, 3069 (2019).  Avail. at:

[46] GDPR, n. 26, supra, Recital 26 (“The principles of data protection should apply to any information concerning an identified or identifiable natural person.  Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information should be considered to be information on an identifiable natural person.  To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.  To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.”)  (Emphasis supplied.)

See GDPR Art. 29 Data Protection Working Party, Opin. 05/2014 on Anonymization Techniques.

Avail. at:

“To anonymize any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified.  An important factor is that the processing must be irreversible. The focus is on the outcome: that data should be such as not to allow the data subject to be identified via “all” “likely” and “reasonable” means.” (Emphasis supplied.)

[47] Information which is de-identified is no longer deemed “personal information” under the California Consumer Privacy Act (CCPA). CCPA defines “de-identified information” as:

“Information that cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked, directly or indirectly, to a particular consumer, provided that a business that uses de-identified information:

  1. has implemented technical safeguards that prohibit re-identification of the consumer to whom the information may pertain.
  2. has implemented business processes that specifically prohibit re-identification of the information.
  3. has implemented business processes to prevent inadvertent release of de-identified information.
  4. makes no attempt to re-identify the information”

Cal. Civ. Code § 1798.140(h). (Emphasis supplied.)

[48] R. Lazio & M. Davis, “Cybersecurity Risk:  What Does a ‘Reasonable’ Posture Entail and Who Says So?”, CIO Dive, Jul. 22, 2019.  Avail. at:

[49] Center for Internet Security, “The 20 CIS Controls & Resources”.  Avail. at: /cis-controls-list/.  See Foundational Control 13, Data Protection.

[50] K. Harris, Attorney General, “California Data Breach Report”, California Dept. of Justice, Feb. 2016.  Avail. at:

[51] See, e.g., S. Harrington, et al., “Practical Tips for In-House Counsel From Recent Cybersecurity Decisions”, Orrick Trust Anchor – Cyber, Privacy & Data Innovation, Mar. 3, 2020.  Avail. at: trustanchor/2020/03/05/practical-tips-for-in-house-counsel-from-recent-cybersecurity-decisions/:

Assess the “reasonableness” of your cybersecurity, despite the difficulty of doing so. Arguably, a requirement to have “reasonable” cybersecurity is unconstitutionally vague, because it fails to provide businesses with fair notice of what security measures are “reasonable.”  See LabMD, Inc. v. F.T.C., 894 F.3d 1221 (11th Cir. 2018) (overturning an FTC order requiring a company to implement a “reasonably designed” security system because the order did not specify what measures would comprise such a system or how reasonableness would be determined).  However, both regulators and private plaintiffs continue to assert claims premised on alleged failures to implement “reasonable” security, making it worthwhile for businesses to assess their practices. On this front, companies can—without conceding that doing so is a required component of “reasonable” security—engage in risk assessments that take into account the costs and benefits of potential enhancements to the company’s security posture and seek to comply with emerging cybersecurity standards….

[52] Id. See also, R. Merker, nn. 15, 16, supra.

[53] See “Concern About Tech Companies Is Bipartisan and Widespread, New Gallup-Knight Poll Finds”, Knight Foundation, Mar. 11, 2020 (“…People don’t trust tech companies to police content on their platforms, but they trust the government even less….  Fifty-nine percent say elected officials and political candidates are paying too little attention to technology issues.”)  Avail. at:

[54] See, e.g., T. Steyer, “What I Learned While Running for President”, New York Times, Mar. 8, 2020.  Avail. at:

Before I ran for president, I had the opportunity to meet people across the country….  Many felt disconnected and left behind by the political establishment and elites in New York and Washington.  Most people I met felt that the government was broken and that their vote didn’t count because of a corporate stranglehold on our democracy….  Meeting Americans has reinforced my sense of deep governmental failure… [and] reinforced my deep misgivings about how the elite media, political insiders and big corporations have an impact on our democracy.

[55] J. Kavenna, “Shoshana Zuboff: ‘Surveillance Capitalism Is An Assault On Human Autonomy’ ”, The Guardian, Oct. 4, 2019.  (Emphasis supplied.)  Avail. at:  See also S. Zuboff, The Age of Surveillance Capitalism:  The Fight for a Human Future at the New Frontier of Power (2019).

[56] 740 Ill. Comp. Stat. 14/5 (Biometric Information Privacy Act).

[57] Cal. Civ. Code § 1798.140(b).

[58] The Body Camera Accountability Act (AB 1215).  Avail. at: faces/billTextClient.xhtml?bill_id=201920200AB1215.  See also B. Hodges & K. Menemeier, “The Varying Laws Governing Facial Recognition Technology”, Jan. 28, 2020.  Avail. at: /2020/01/28/varying-laws-governing-facial-recognition-technology/id=118240/.

[59] Proposed California Privacy Rights Act of 2020 (CPRA, aka CCPA 2.0) Amend., Cal. Civ. Code § 1798.121, Consumer’s Right to Limit Use and Disclosure of SPI.  A. McTaggart, Submission of Amendments to The California Privacy Rights and Enforcement Act of 2020, Version 3, No. 19-0021, and Request to Prepare Circulating Title and Summary (Amendment).  Avail. at: Office of California Attorney General, initiatives/pdfs/19-0021A1%20%28Consumer%20Privacy%20-%20Version%203%29_1.pdf.

Among other things, the CPRA:

  • provides user-consumers with the ability to opt out of the sale or sharing of their personal information and to limit the use of their sensitive personal information through an opt-out preference signal sent, with the consumer’s consent, by a platform, technology or mechanism; and
  • requires issuance of regulations to clarify topics including: business purpose; requirements for cybersecurity audits for particularly risky processing; access and opt-out rights for automated decision making and profiling; and opt out by technical preferences.

[60] H.R.2231 — 116th Congress (2019-2020).   Avail. at:

[61] The CCPA prohibits businesses from selling the personal information of California residents under the age of 16 without their opt-in consent (or the consent of their parent or guardian for residents under the age of 13).  Cal. Civ. Code § 1798.120(c).

Businesses that offer services or have websites used by minors in California are already impacted by the FTC’s Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501-6506, and California’s “Online Eraser Law,” Cal. Bus. & Prof. Code §§ 22580-22582.  These laws aim to protect minors and children in certain jurisdictions.  Precisely how they function, however, varies significantly, resulting in a complex set of questions as to which law applies, when, and to whom.  COPPA protects children anywhere in the United States and defines a child as an individual under the age of 13.  See 16 C.F.R. § 312.2.

COPPA operates by, among other things, requiring verifiable parental consent before collecting personal information from children under 13, and giving parents the ability to access and delete that data.  California’s Online Eraser Law protects minors, defined as individuals under the age of 18, Cal. Bus. & Prof. Code § 22580(d), by allowing them to request, and obtain removal of, content or information posted by them on a website, online service, or mobile application.  Businesses, wherever located, must comply with these laws if their website or service is directed to minors, or if the business has actual knowledge that a minor is using its website or service.

[62] A. Burt, “Privacy and Cybersecurity Are Converging. Here’s Why That Matters for People and for Companies”, Harvard Business Rev., Jan. 3, 2019.  Avail. at:

[63] S. Wachter and B. Mittelstadt, “A Right to Reasonable Inferences:  Re-Thinking Data Protection Law in the Age of Big Data and AI”, Colum. Bus. L. Rev. (2019), at pp. 494–620.  Avail. at: https://journals.library.columbia.edu/index.php/CBLR/article/view/342

[64] S. Gupta and M. Schneider, supra, n. 31.

[65] Id.  For a similar view, see also B. Schneier, “Technologists vs. Policy Makers”, IEEE Security & Privacy, Jan./Feb. 2020 (“…we’re building complex socio-technical systems that are literally creating a new world.”)  Avail. at:

[66] See J. Knee, “Review: Competing in the Digital Age”, New York Times, Jan. 17, 2020 (Despite differing perspectives, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World by M. Iansiti and K. Lakhani and The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power by M. Cusumano, A. Gawer and D. Yoffie express similar concerns regarding the ethical and regulatory implications of digital platforms that dominate markets.  The authors suggest that business leaders have fallen short in their efforts to achieve growth without abusing market power.  As a result, “a new kind of managerial wisdom” will be required to realize the full promise of this new technology.)  Avail. at:

[67] H. Cristerna, “How Should Antitrust Regulators Check Silicon Valley’s Ambitions?” New York Times, Jul. 3, 2018 (arguing “old economy” firms must be allowed to merge to compete with tech giants on a level playing field).

Avail. at:

[68] The EU’s chief digital official has said that facial recognition technology violates GDPR consent requirements.

T. Macaulay, “Automated Facial Recognition Breaches GDPR, Says EU Digital Chief”, Feb. 17, 2020.  Avail. at:

[69] K. Hill, “The Secretive Company That Might End Privacy as We Know It”, New York Times, Jan. 18, 2020 (A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says).  Avail. at: technology/clearview-privacy-facial-recognition.html.  See also K. Hill, “Unmasking A Company That Wants to Unmask Us All”, New York Times, Jan. 20, 2020.  Avail. at:

[70] B. Swan, “Facial-Recognition Company That Works With Law Enforcement Says Entire Client List Was Stolen”, The Daily Beast, Feb. 26, 2020. Avail. at:

[71] See “Concern About Tech Companies Is Bipartisan and Widespread, New Gallup-Knight Poll Finds”, n. 53, supra; A. Burt, “Privacy and Cybersecurity Are Converging. Here’s Why That Matters for People and for Companies”, n. 62, supra.

[72] See C. Crane, n. 1, supra.  (“Findings from an Economist Intelligence Unit (EIU) survey of 1,600 consumers in eight countries indicate that 92% of global consumers want to control the types of personal information that is automatically collected about them. The same number of consumers want to increase punishments for companies that violate consumers’ privacy. Data from the EIU survey suggests that the majority of surveyed consumers are concerned that “small privacy invasions may eventually lead to a loss of civil rights.”)

[73] Will enforcement of the CCPA track Europe’s experience with the GDPR? See A. Satariano, “Europe’s Privacy Law Hasn’t Shown Its Teeth, Frustrating Advocates”, New York Times, Apr. 27, 2020 (“Nearly two years in, there has been little enforcement of the General Data Protection Regulation, once seen as ushering in a new era.  Europe’s rules have been a victim of a lack of enforcement, poor funding, limited staff resources and stalling tactics by the tech companies, according to budget and staffing figures and interviews with government officials.”)  Avail. at:

[74] See J. Stanley & J. Granick, “The Limits of Location Tracking in an Epidemic”, ACLU White Paper, Apr. 8, 2020.  Avail. at:

[75] See, e.g., R. Brandom & A. Robertson, “Apple and Google are building a coronavirus tracking system into iOS and Android”, The Verge, Apr. 10, 2020.  Avail. at:

[76] See E. Goitein & A. Boyle, “Trump Has Emergency Powers We Aren’t Allowed to Know About”, New York Times, Apr. 10, 2020 (Discussing presidential emergency powers set forth in classified “emergency action documents”. “…we know of no evidence that the executive branch has ever consulted with Congress — or even informed any of its members — regarding the contents of presidential emergency action documents.”)

[77] See N. Singer & C. Sang-Hun, “As Coronavirus Surveillance Escalates, Personal Privacy Plummets”, New York Times, Mar. 23, 2020 (“Tracking entire populations to combat the pandemic now could open the door to more invasive forms of government snooping later.”)  Avail. at: /technology/coronavirus-surveillance-tracking-privacy.html.

See also, D. Halbfinger, et al., “To Track Coronavirus, Israel Moves to Tap Secret Trove of Cellphone Data”, New York Times, Mar. 16, 2020. (The information, intended for use in counterterrorism, would help identify people who have crossed paths with known patients).  Avail. at: