<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
     xmlns:snf="http://www.smartnews.be/snf"
     xmlns:media="http://search.yahoo.com/mrss/" >

    <channel>
        <title>The Intercept</title>
        <atom:link href="https://theintercept.com/staff/belle-lin/feed/" rel="self" type="application/rss+xml" />
        <link>https://theintercept.com/staff/belle-lin/</link>
        <description></description>
        <lastBuildDate>Thu, 23 Apr 2026 14:35:01 +0000</lastBuildDate>
        <language>en-US</language>
                <sy:updatePeriod>hourly</sy:updatePeriod>
        <sy:updateFrequency>1</sy:updateFrequency>
        <generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">220955519</site>
            <item>
                <title><![CDATA[Uber Patents Reveal Experiments With Predictive Algorithms to Identify Risky Drivers]]></title>
                <link>https://theintercept.com/2021/10/30/uber-patent-driver-risk-algorithms/</link>
                <comments>https://theintercept.com/2021/10/30/uber-patent-driver-risk-algorithms/#respond</comments>
                <pubDate>Sat, 30 Oct 2021 12:00:16 +0000</pubDate>
                                    <dc:creator><![CDATA[Belle Lin]]></dc:creator>
                                		<category><![CDATA[Justice]]></category>
		<category><![CDATA[Technology]]></category>

                <guid isPermaLink="false"></guid>
                                    <description><![CDATA[<p>Surveilling drivers under the guise of safety is a common thread in Uber’s patents. Experts warn the systems described could reinforce existing inequalities.</p>
<p>The post <a href="https://theintercept.com/2021/10/30/uber-patent-driver-risk-algorithms/">Uber Patents Reveal Experiments With Predictive Algorithms to Identify Risky Drivers</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></description>
                                        <content:encoded><![CDATA[<p><u>Safety issues have</u> dogged Uber since its early days as a black-car hailing service. Car accidents and physical altercations have persisted despite Uber’s attempts to monitor its cars and vet drivers; reports of sexual violence in its vehicles eventually led Uber to admit it was “<a href="https://www.uber.com/newsroom/turning-the-lights-on/">not immune</a>” to the problem.</p>
<p>Amid public backlash and calls to address rider safety, Uber rolled out a flashy “<a href="https://www.theatlantic.com/sponsored/uber-2018/safety-first-an-inside-look-at-ubers-new-business-model/1951/">Safety First</a>” initiative in 2018, adding features like 911 assistance to its app, tightening screening of drivers, and for the first time, issuing a <a href="https://www.uber.com/us/en/about/reports/us-safety-report/">safety report</a> that outlined traffic fatalities, fatal physical assaults, and sexual assaults on its platform.</p>
<p>That was before the pandemic. Over the past year and a half, safety has taken on new significance: where it once meant drivers had to make riders feel taken care of, it now means that, in addition to protecting riders from the virus, drivers must figure out how to keep themselves healthy.</p>
<p>And Uber again found itself persuading riders not to abandon its platform over safety fears — requiring drivers to submit a selfie verifying they’re wearing a mask, offering limited amounts of cleaning supplies, and asking riders to complete a safety check before getting into a vehicle.</p>

<p>While Uber’s changes might ease some riders’ concerns, they don’t offer the same level of automation and scale that an algorithmic solution could. It’s a potential path hinted at by a series of Uber patents, granted from 2019 to last summer, which outline algorithmic scoring and risk prediction systems to help decide who is safe enough to drive for Uber.</p>
<p>Taken together, they point to a pattern of experimentation with algorithmic prediction and driver surveillance in the name of rider safety. Similar to widely criticized algorithms that help price insurance and <a href="https://theintercept.com/2020/07/12/risk-assessment-tools-bail-reform/">make decisions</a> on bail, sentencing, and parole, the systems described in the patents would make deeply consequential decisions using digital processes that are difficult or impossible to untangle. While Uber’s quest to make safety programmatic is tantalizing, experts expressed concern that the systems could run afoul of their stated purpose.</p>
<p>An Uber spokesperson wrote in an emailed statement that although the company is “always exploring ways that our technology can help improve the Uber experience,” it does not currently have products tied to the safety scoring and risk assessment patents.</p>
<p>The battle over drivers’ <a href="https://theintercept.com/2021/05/06/pro-act-uber-lyft-doordash-instacart-lobbying/">legal classification</a> continues after Uber and Lyft used their deep pockets to fund an <a href="https://www.nytimes.com/2020/11/04/technology/california-uber-lyft-prop-22.html">election victory</a> that let them keep drivers as contractors in California. Against that backdrop, there are urgent concerns that systems like these could become another means to remove drivers without due process, especially as the pandemic has laid bare the vulnerability of gig workers who lack the safety net employees can lean on.</p>
<!-- BLOCK(photo)[1](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22bleed%22%2C%22bleed%22%3A%22xtra-large%22%2C%22width%22%3A%22auto%22%7D) --><figure class="img-wrap align-bleed xtra-large-bleed width-auto" style="width: auto;"><!-- CONTENT(photo)[1] -->
<img loading="lazy" decoding="async" width="2000" height="1333" class="aligncenter size-large wp-image-374774" src="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg" alt="Close-up of Uniden dashboard camera (dashcam) installed on the interior window of an Uber vehicle in San Ramon, California; dashcams are often used by crowdsourced taxi drivers to increase driver and passenger safety, September 27, 2018. (Photo by Smith Collection/Gado/Getty Images)" srcset="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=2000 2000w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=300 300w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=768 768w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=1024 1024w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=1536 1536w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=540 540w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=1000 1000w" sizes="auto, (max-width: 1200px) 100vw, 1200px" />
<figcaption class="caption source pullright">Close-up of a dashboard camera installed on the interior window of an Uber vehicle in San Ramon, Calif., on Sept. 27, 2018.<br/>Photo: Smith Collection/Gado/Getty Images</figcaption><!-- END-CONTENT(photo)[1] --></figure><!-- END-BLOCK(photo)[1] -->
<h3>Watched at All Times</h3>
<p>One patent for <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10423991.PN.">scoring driver safety risk</a> relies on machine learning and rider feedback and notably suggests a driver’s “heavy accent” corresponds to “low quality” service.</p>
<p>Another aims to <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10720050.PN.">predict safety incidents</a> using machine-learning models that determine the likelihood that a driver will be involved in dangerous driving or interpersonal conflict, utilizing factors like psychometric tests to determine their “trustworthiness,” monitoring their social media networks, and using “official sources” like police reports to overcome biases in rider feedback.</p>
<p class="p1"><!-- BLOCK(pullquote)[2](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[2] -->“The mistake is that a driver loses their livelihood, not that someone gets shown the wrong ad.”<!-- END-CONTENT(pullquote)[2] --></blockquote><!-- END-BLOCK(pullquote)[2] --></p>
<p>Jeremy Gillula, former tech projects director at the Electronic Frontier Foundation who now works as a privacy engineer at Google, said using algorithms to predict a person’s behavior for the purpose of “deciding if they’re going to be a danger or a problem” is deeply concerning.</p>
<p>“Some brilliant engineers realized we can do machine learning based on people’s text, without realizing what we really want to get, and what it actually represents in a real-life application,” he said. “The mistake is that a driver loses their livelihood, not that someone gets shown the wrong ad.”</p>

<p>Surveilling drivers under the guise of safety is a common thread in Uber’s patents. Many evaluate drivers’ performance using information from their phones, including one that <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10402771.PN.">scores their driving ability</a> and suggests tracking their eye and head movements with phone cameras, and another that <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10654411.PN.">detects their behavioral state</a> (angry, intoxicated, or sleepy) and assigns them an “abnormality score.”</p>
<p>Additional patents aim to <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10611304.PN.">monitor drivers’ behavior</a> using in-vehicle cameras and <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10297148.PN.">approximate “distraction level”</a> with an activity log that tracks what else they’re doing on their phones; making a call, looking at a map, or even moving the phone around could indicate distraction.</p>
<p>Jamie Williams, a former staff attorney at EFF focused on civil liberties who now works as a product counselor, said drivers should be aware they’re “being watched at all times.”</p>
<p>The patents also mirror technologies recently <a href="https://www.theverge.com/2021/2/3/22265031/amazon-netradyne-driveri-survelliance-cameras-delivery-monitor-packages">implemented</a> by Amazon in its delivery vans. The company announced plans in February to install video cameras that use AI to track drivers’ hand movements, driving abilities, and facial expressions. Data collected by the cameras <a href="https://www.theinformation.com/articles/how-amazon-is-using-high-tech-cameras-to-rate-driver-safety">determines a “safety score”</a> and could result in a driver being terminated. Drivers have <a href="https://www.reuters.com/article/global-tech-privacy-idUSL8N2KB4D5">told</a> Reuters: “The cameras are just another way to control us.”</p>
<h3>The Promise of Safety</h3>
<p>The algorithm outlined in a <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=10417343.PN.">2019 safety risk scoring patent</a> shows how dangerous these systems can be in real life, experts said, noting that it could mimic riders’ existing biases.</p>
<p>The system described in the patent uses a combination of rider feedback and phone metadata to assign drivers a safety score based on how carefully they drive (“vehicle operation”) and how they interact with passengers (“interpersonal behavior”). A driver’s safety score would be calculated once a rider submits a safety report to Uber.</p>
<p>After a report is submitted, according to the patent, it would be processed by algorithms along with any associated metadata, including the driver’s Uber profile, the trip duration, distance traveled, GPS location, and car speed. With that information, the report would be classified into topics like “physical altercation” or “aggressive driving.”</p>
<p>A driver’s overall safety score would be calculated using weighted risk assessment scores from the interpersonal behavior and vehicle operation categories. This overall score would determine if a driver has a low, medium, or high safety risk, and consequently, if they should face disciplinary action. Drivers with a high safety risk might receive a warning in the app, a temporary account suspension, or an unspecified “intervention” in real time.</p>
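<p>As a rough illustration only: the patent does not publish its actual weights, score scales, or cutoffs, so every number and name in the sketch below is invented. The weighted scoring and banding scheme it describes could look something like this:</p>

```python
# Hypothetical sketch of the patent's weighted risk scoring; all weights,
# scales (0 = safest, 10 = riskiest), and band cutoffs are illustrative
# assumptions, not values from Uber's patent.

def overall_safety_risk(interpersonal_score, vehicle_score,
                        w_interpersonal=0.5, w_vehicle=0.5):
    """Combine the two category risk scores into one weighted overall score."""
    return w_interpersonal * interpersonal_score + w_vehicle * vehicle_score

def risk_band(score, low_cutoff=3.0, high_cutoff=7.0):
    """Map an overall score to the low/medium/high bands that would
    decide whether a driver faces a warning, suspension, or intervention."""
    if score < low_cutoff:
        return "low"
    if score < high_cutoff:
        return "medium"
    return "high"

print(risk_band(overall_safety_risk(8.0, 6.5)))  # prints "high"
```

<p>The point of the sketch is how little it takes for a single weighted number to gate discipline: everything contested in the article (what feeds the category scores, how the weights are chosen, where the cutoffs sit) is hidden inside those parameters.</p>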
<p>Adding a further layer of automation, the patent also describes a system that automatically tweaks driver safety scores based on specific metadata. A driver who has completed a certain number of trips would be marked as safer, while one who has generated more safety incidents would be marked as less safe. According to the patent, a driver who works at night is considered less safe than a driver who works during the day.</p>
<p>While this design may seem straightforward — shouldn’t a more experienced driver who has better road visibility be considered safer? — experts say any automated decision-making requires that developers make meaningful choices to avoid inserting bias into the entire system.</p>
<p>Gillula said Uber’s automated rules could make decisions based on flawed human assumptions. “Race may be correlated with what time of day you’re operating as an Uber driver. If it’s a second job because you have to work during the day, it seems ridiculous to penalize you for that,” he said. “This is exactly the sort of thing that worries me.”</p>
<p>If Uber wants to make its algorithmic scoring fair, it would need to be transparent about how drivers are being evaluated and give them a proper feedback channel, Williams said. “Machine learning algorithms can be wrong; users can be wrong,” she said. “It’s very important to have clear processes, transparency, and awareness about what’s going into the score.”</p>
<!-- BLOCK(photo)[4](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22center%22%2C%22width%22%3A%221024px%22%7D) --><figure class="img-wrap align-center  width-fixed" style="width: 1024px;"><!-- CONTENT(photo)[4] -->
<img loading="lazy" decoding="async" width="4500" height="2995" class="aligncenter size-large wp-image-374776" src="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg" alt="The driver rating screen in an Uber app is seen February 12, 2016 in Washington, DC. Global ridesharing service Uber said February 12, 2016 it had raised $200 million in additional funding to help its push into emerging markets.The latest round comes from Luxembourg-based investment group LetterOne (L1), according to a joint statement.  / AFP PHOTO / Brendan Smialowski        (Photo credit should read BRENDAN SMIALOWSKI/AFP via Getty Images)" srcset="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=4500 4500w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=300 300w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=768 768w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=1024 1024w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=1536 1536w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=2048 2048w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=540 540w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=1000 1000w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=2400 2400w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=3600 3600w" sizes="auto, (max-width: 1200px) 100vw, 1200px" />
<figcaption class="caption source">The driver rating screen in the Uber app is seen on Feb. 12, 2016, in Washington, D.C.<br/>Photo: Brendan Smialowski/AFP via Getty Images</figcaption><!-- END-CONTENT(photo)[4] --></figure><!-- END-BLOCK(photo)[4] -->
<h3>Questions of Accuracy and Fairness</h3>
<p>Risk assessment algorithms have long been used by insurance companies to set policyholder premiums based on indicators like age, occupation, geographical location, and hobbies. Algorithms are also utilized in the criminal justice system, where they’re applied at <a href="https://theintercept.com/2020/07/12/risk-assessment-tools-bail-reform/">nearly every stage</a> of the legal process to help judges and officials make decisions.</p>
<p>Proprietary algorithms like <a href="http://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf">COMPAS</a>, used in states like Florida and Wisconsin, determine an individual’s risk of recidivism on a scale of 1 to 10, with certain numbers corresponding to low, medium, and high risk — the same rubric Uber’s patent follows.</p>
<p>Though Uber aims to predict “safety” risk in its patents, it faces the same fundamental questions of fairness and accuracy leveled at criminal justice algorithms. (The bias inherent in those algorithms has been pointed out <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">again</a> and <a href="https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/">again</a>.) If the design of an algorithm is flawed from the outset, its outcomes and predictions will be too. In the criminal justice context, rearrest is an imperfect proxy for recidivism because arrests are so closely tied to factors like where you live, whether you interact with the police, and what you look like, Gillula said.</p>
<p>Uber’s current rating system, which allows riders and drivers to rate one another on a five-star scale, is similar to the interpersonal behavior category described in Uber’s safety risk scoring patent: Both rely on subjective judgments that are the basis for doling out punishments. Under its current system, Uber “deactivates” or fires drivers whose ratings drop below a certain threshold. The policy has long infuriated drivers who say they have no real way of contesting unfair ratings: They’re funneled through a support system that prioritizes passengers and rarely provides a satisfactory resolution.</p>
<p>Bhairavi Desai, executive director of the New York Taxi Workers Alliance, said drivers are not protected from passengers’ racism, bias, or bigotry. “We’ve talked to drivers who feel like they’ve gotten a lower rating because they’re Muslim,” she said. “I know of African American drivers who stopped working for them because they felt that they would be rated lower.”</p>
<p>Former driver Thomas Liu <a href="https://www.bloomberg.com/news/articles/2020-10-26/uber-sued-for-using-biased-customer-ratings-to-fire-drivers">sued Uber</a> last October, proposing a class-action suit on behalf of nonwhite drivers who were fired based on racially “biased” ratings. Williams said the safety score would be subject to the same concerns: “People could put a safety report in just because they don’t like a driver. It could be racially biased, and there could be a lot of misuse of it.”</p>
<p>Varinder Kumar, a former New York City yellow cab driver, was permanently deactivated by Uber in 2019. He’d been driving for Uber every day for nearly five years, and the deactivation meant the sudden loss of $400 to $500 per week.</p>
<!-- BLOCK(pullquote)[5](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22left%22%7D) --><blockquote class="stylized pull-left" data-shortcode-type="pullquote" data-pull="left"><!-- CONTENT(pullquote)[5] -->“You ask them what happened, they always say it’s a safety issue.”<!-- END-CONTENT(pullquote)[5] --></blockquote><!-- END-BLOCK(pullquote)[5] -->
<p>“I went to the office five times, I emailed them, and they said it was because one customer complained,” Kumar said. “Whenever you go there, you ask them what happened, they always say it’s a safety issue. I’ve been driving in New York City since 1991 and had no accident, no ticket, so I don’t know what kind of safety they’re looking for.”</p>
<p>The kind of safety outlined in Uber’s safety risk scoring patent isn’t clear to Kumar either. He said the interpersonal behavior reporting would cause the same problems as Uber’s rating system: “Customers file a complaint even if they are not 100 percent right.” Meanwhile, the vehicle operation category could unfairly penalize New York City drivers who need to drive more aggressively.</p>
<p>Joshua Welter, an organizer with Teamsters 117 and the affiliated Drivers Union, said algorithmic discipline remains a top issue for drivers. “It’s no wonder Uber and Lyft drivers across the country are rising up and taking action for greater fairness and a voice on the job, like due process to appeal deactivations,” Welter said. “It’s about basic respect and being treated as a human being, not a data experiment.”</p>
<!-- BLOCK(photo)[6](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22bleed%22%2C%22bleed%22%3A%22xtra-large%22%2C%22width%22%3A%22auto%22%7D) --><figure class="img-wrap align-bleed xtra-large-bleed width-auto" style="width: auto;"><!-- CONTENT(photo)[6] -->
<img loading="lazy" decoding="async" width="2000" height="1334" class="aligncenter size-large wp-image-374777" src="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg" alt="A traveler uses a smartphone in front of a vehicle displaying Uber Technologies Inc. signage at the Oakland International Airport in Oakland, California, U.S., on Tuesday, Aug. 6, 2019. Uber Technologies Inc. is scheduled to release earnings figures on August 8. Photographer: David Paul Morris/Bloomberg via Getty Images" srcset="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=2000 2000w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=300 300w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=768 768w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=1024 1024w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=1536 1536w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=540 540w, https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=1000 1000w" sizes="auto, (max-width: 1200px) 100vw, 1200px" />
<figcaption class="caption source pullright">A traveler uses a smartphone in front of a vehicle displaying Uber signage at the Oakland International Airport in California on Aug. 6, 2019.<br/>Photo: David Paul Morris/Bloomberg via Getty Images</figcaption><!-- END-CONTENT(photo)[6] --></figure><!-- END-BLOCK(photo)[6] -->
<h3>An Artificial Trust</h3>
<p>The basis for Uber’s safety experimentation is user data, and Daniel Kahn Gillmor, a senior staff technologist at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, said Uber is “sitting on an ever-growing pile of information on people who have ever ridden on its platform.”</p>
<p>“This is a company that does massive experimentation and has shown little regard for data privacy,” he added.</p>
<p>In addition to a <a href="https://movement.uber.com/">vast trove of data</a> gathered from over 10 billion trips, Uber collects telematics data from drivers, such as their car’s speed, braking, and acceleration, using GPS data from their devices. In 2016, it launched a safety device called the Uber Beacon, a color-changing orb that mounts to a car’s windshield. It was <a href="https://www.uber.com/newsroom/beacon/">announced</a> as a device that assisted with rider pickups, without mention of the fact that it contained sensors for collecting telematics data. In a <a href="https://web.archive.org/web/20190428194127/https:/eng.uber.com/uber-beacon/">now-deleted blog post</a> from 2018, Uber engineers touted the Beacon’s benefit as a device solely managed by Uber for testing algorithms and said it collected better data than drivers’ devices.</p>
<p>Brian Green, director of technology ethics at Santa Clara University’s Markkula Center for Applied Ethics, questioned the motives behind Uber’s data collection. “If the purpose of [Uber’s] surveillance system is to promote trust — if a corporation wants to be trustworthy — they have to allow the public to look at them,” he said. “A lot of tech companies are not transparent. They don’t want the light shone on them.”</p>
<p>Welter said that when companies like Uber experiment with worker discipline based on black box algorithms, “both workers and consumers alike should be deeply concerned whether the reach of big data into our daily lives has gone too far.”</p>
<p>In addition to providing a view into Uber’s safety vision, the patents demonstrate the scope of its machine learning ambitions.</p>
<!-- BLOCK(pullquote)[7](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[7] -->“We’re dealing with so many people we don’t know that tech and surveillance steps in to build an artificial trust.”<!-- END-CONTENT(pullquote)[7] --></blockquote><!-- END-BLOCK(pullquote)[7] -->
<p>Uber considers AI <a href="https://www.uber.com/us/en/uberai/">essential</a> to its business and has made significant investments in it over the past few years. Its internal <a href="https://eng.uber.com/michelangelo/">machine learning platform</a> helps engineering teams apply AI to optimize trip routes, match drivers and riders, mine <a href="https://www.forbes.com/sites/johnkoetsier/2018/08/22/uber-might-be-the-first-ai-first-company-which-is-why-they-dont-even-think-about-it-anymore/?sh=58d862595b62">insights about drivers</a>, and build more safety features. Uber already uses algorithms to process <a href="https://eng.uber.com/cota/">over 90 percent</a> of its rider feedback (the company said that it receives an immense amount of feedback by design, the majority of which is not related to safety).</p>
<p>Algorithmic safety scoring and risk assessment also fit under Uber’s rider safety initiative and its efforts to ensure safe drop-offs for its growing delivery platform. Experts said the systems are not as far from reality as tech companies’ patents sometimes are. In a statement, Uber said that “patent applications are filed on many ideas, but not all of them actually become products or features.”</p>
<p>But some of Uber’s safety-related patents have close parallels with widely utilized features: A patent filed in 2015 for “<a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PALL&amp;s1=9762601.PN.">trip anomaly</a>” detection bears similarities to Uber’s <a href="https://www.uber.com/newsroom/ridecheck/">RideCheck</a> feature; technology for <a href="https://www.uber.com/newsroom/raisingthebar/">anonymizing pickup and drop-off locations</a> is similar to a patent <a href="http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PG01&amp;s1=20200314642.PGNR.">application filed last year</a>; and an application filed in 2015 for <a href="http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;d=PG01&amp;s1=20160300242.PGNR.">verifying drivers’ identity</a> with selfies is similar to Uber’s <a href="https://www.uber.com/newsroom/securityselfies/">security selfie feature</a>.</p>
<p>Green said Uber’s patents reflect a broader trend in which technology is used as a quick fix for deeper societal issues. “We’re dealing with so many people we don’t know that tech and surveillance steps in to build an artificial trust,” he said.</p>
<p>That trust can only extend so far during the pandemic, which has underscored the economic uncertainty drivers face and the limits of technology’s promise of safety. Rolling out such systems now would mean there’s even more at stake.</p>
<p>The post <a href="https://theintercept.com/2021/10/30/uber-patent-driver-risk-algorithms/">Uber Patents Reveal Experiments With Predictive Algorithms to Identify Risky Drivers</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></content:encoded>
                                <wfw:commentRss>https://theintercept.com/2021/10/30/uber-patent-driver-risk-algorithms/feed/</wfw:commentRss>
                <slash:comments>0</slash:comments>
                <media:content url='https://theintercept.com/wp-content/uploads/2021/10/RTS367DG-crop.jpg?fit=2000%2C1000' width='2000' height='1000' /><post-id xmlns="com-wordpress:feed-additions:1">374703</post-id>
		<media:thumbnail url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=440&amp;h=440&amp;crop=1" />
		<media:content url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?fit=2000%2C1333" medium="image">
			<media:title type="html">Dashcam</media:title>
			<media:description type="html">Close-up of dashboard camera (dashcam) installed on the interior window of an Uber vehicle in San Ramon, California on September 27, 2018; dashcams are often used by crowdsourced taxi drivers to increase driver and passenger safety.</media:description>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1047681424-edit.jpg?w=440&amp;h=440&amp;crop=1" />
		</media:content>
		<media:content url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?fit=4500%2C2995" medium="image">
			<media:title type="html">US-IT-LIFESTYLE-TRANSPORT-UBER</media:title>
			<media:description type="html">The driver rating screen in an Uber app is seen February 12, 2016 in Washington, DC.</media:description>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-509765744.jpg?w=440&amp;h=440&amp;crop=1" />
		</media:content>
		<media:content url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?fit=2000%2C1334" medium="image">
			<media:title type="html">Uber Technologies Inc. App-Based Transportation Ahead Of Earnings Figures</media:title>
			<media:description type="html">A traveler uses a smartphone in front of a vehicle displaying Uber Technologies Inc. signage at the Oakland International Airport in Oakland, California, on Aug. 6, 2019.</media:description>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2021/10/GettyImages-1160116390-edit.jpg?w=440&amp;h=440&amp;crop=1" />
		</media:content>
            </item>
        
            <item>
                <title><![CDATA[Citizen App Again Lets Users Report Crimes — and Experts See Big Risks]]></title>
                <link>https://theintercept.com/2020/03/02/citizen-app/</link>
                <comments>https://theintercept.com/2020/03/02/citizen-app/#respond</comments>
                <pubDate>Mon, 02 Mar 2020 13:00:18 +0000</pubDate>
                                    <dc:creator><![CDATA[Belle Lin]]></dc:creator>
                                    <dc:creator><![CDATA[Camille Baker]]></dc:creator>
                                		<category><![CDATA[Technology]]></category>

                <guid isPermaLink="false">https://theintercept.com/?p=291915</guid>
                                    <description><![CDATA[<p>The revived video feature could foment racism, increase invasive surveillance, and stoke panic, the experts say.</p>
<p>The post <a href="https://theintercept.com/2020/03/02/citizen-app/">Citizen App Again Lets Users Report Crimes — and Experts See Big Risks</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></description>
                                        <content:encoded><![CDATA[<p><u>Citizen, a mobile app</u> that alerts people to nearby emergencies, is testing the reintroduction of a controversial <a href="https://support.citizen.com/hc/en-us/articles/115000424533-Can-I-report-an-incident-on-Citizen-">feature</a> that lets users report crimes and incidents on their own by live streaming video.</p>
<p>Created by New York-based startup sp0n, Citizen first launched under the name “Vigilante” in 2016 in New York City, broadcasting alerts of 911 calls to users in the vicinity and allowing those users to send live video from incident scenes, comment on alerts, and report incidents on their own. In a splashy <a href="https://www.youtube.com/watch?v=pe4BrBQxa8g&amp;feature=emb_title">launch video</a> with the hashtag #CrimeNoMore, several young men were depicted rushing to aid a woman who was chased by a menacing stranger; the video instructs users not to “interfere with the crime,” but then adds, “Good luck out there!” Vigilante was met with swift backlash from the public and police departments, and Apple soon pulled the app from its store. At that time, the New York Police Department issued a statement saying, “Crimes in progress should be handled by the NYPD and not a vigilante with a cell phone.”</p>
<p>Several months later, the app rebranded as Citizen, <a href="https://techcrunch.com/2017/03/10/banned-crime-reporting-app-vigilante-returns-as-citizen-says-its-report-incident-feature-will-be-pulled/">removed the incident reporting feature</a>, and said it was shifting its focus to “safety” and “avoiding crime” — a far cry from its prior positioning.</p>
<p>Citizen’s return to public crime reporting has not been publicized, but is <a href="https://support.citizen.com/hc/en-us/articles/115000424533-Can-I-report-an-incident-on-Citizen-">documented </a>on the company’s user support website. The app’s <a href="https://apps.apple.com/us/app/citizen-protect-the-world/id1039889567">latest version</a> in Apple and Google’s app stores also includes the description: “Keep Your Community Safe: Report incidents right when they happen to protect the people around you.”</p>
<p class="p1">Dominic McMullan, a Citizen representative, refused to speak with The Intercept on the record during a phone call. In an emailed statement sent instead, the company said that about five percent of its incidents are reported via the live broadcast feature. They are “merged with 911-reported incidents” and reviewed by Citizen moderators before appearing in the app, according to the statement.</p>
<p>The experiment comes after the company made more attempts to work with law enforcement, including bringing onto its board Bill Bratton (who opposed Vigilante when he was police commissioner in New York) and hiring as an executive someone who oversaw NYPD communications when the department spoke out against the app.</p>
<p>Experts say that, if not addressed with great care, what Citizen does next could set the stage for invasive advertising, greater injustice for vulnerable people, and increased government surveillance. User-powered crime reporting has been rife with racism, panic, and concerns users might bring about personal harm — issues not just for Citizen’s predecessor app Vigilante but also for platforms like Amazon’s Ring, which makes home security cameras and a related social network, and Nextdoor, an app that networks neighbors with one another to communicate about crime and other matters.</p>
<h3>Community Involvement and Crime Reporting</h3>
<p>Citizen’s reintroduction of user reporting is a walk back for the company, which had removed the feature in emphatic terms. In 2017, CEO Andrew Frame told <a href="https://techcrunch.com/2017/03/10/banned-crime-reporting-app-vigilante-returns-as-citizen-says-its-report-incident-feature-will-be-pulled/">TechCrunch</a> user reporting would be removed because it had become a distraction from the app&#8217;s mission, which he said was to “reduce crime, not exploitation of people.” To head off racial bias concerns, Frame had also <a href="https://www.cnn.com/2019/03/12/tech/citizen-crime-app-la/index.html">said</a> that suspicious persons reports would not be included in the app.</p>
<p>Now, on its user support website, Citizen encourages people to report incidents about “protests, lost pets, downed power lines, and other community FYIs.” But its <a href="https://support.citizen.com/hc/en-us/articles/115000603373-What-is-Citizen-s-criteria-for-reporting-incidents-">general criteria</a> for reported incidents from 911 calls are broader, and include alleged crimes in progress such as assaults and thefts, fire, smoke, gas leaks, situations involving hazardous materials, and heightened police activity.</p>
<p>In addition to now accepting incident reports from users through its app, Citizen has experimented with taking reports from outsiders via other channels. In 2018, it began rolling out a tool and community called “GuardianNet” or “GNet.” The tool gave users, or “Guardians,” access to Citizen’s internal feed of real-time police, fire, and emergency radio through a web-based interface, allowing them to comb through dispatches and, until last year, to create safety alerts.</p>
<p>The company has aggressively recruited people, typically police scanner hobbyists, to volunteer as unpaid dispatchers. Its representatives have posted on local classified web pages, popular radio scanning forums, and Facebook groups, and tried to organize in-person meetups.</p>
<p><a href="https://www.reddit.com/r/Newark/comments/9skash/anyone_interested_in_an_advanced_police_scanner/e8xs2q4/?context=3">David Choi</a>, a Citizen operations manager, <a href="https://www.reddit.com/r/shawnee/comments/a1mccb/any_police_scanner_listenersowners_here/">wrote</a> on Reddit that the network was “built exclusively for scanner enthusiasts to listen to police/fire radios and report incidents to keep their communities safe and informed.” In response to concerns from Reddit users that a phone number was needed to sign up for the network, Choi<a href="https://www.reddit.com/r/newjersey/comments/9sicg3/anyone_interested_in_an_advanced_police_scanner/e8pke7l/?context=3"> responded</a>, “We&#8217;re not looking to sell you anything at all. We&#8217;re just trying to build up this community of people who like listening to emergency responder radio and give these people an opportunity to help others and maybe even save lives.”</p>
<p>Citizen has previously highlighted the importance of its Guardians in keeping the public safe: In September 2018, it said one Guardian “hero” <a href="https://twitter.com/CitizenAppNJ/status/1045440094537207808">helped alert</a> thousands of people in real time about a bomb threat in New Jersey, and last year said another Guardian broke the news of a <a href="https://patch.com/new-jersey/jersey-city/classifieds/announcements/69431/police-scanner-site-that-really-came-through-during-the-mall-shooting">shooting</a> at a New Jersey mall. But the “Guardians” are a key component of Citizen’s expansion beyond the 15 American cities it currently operates in; if a city has publicly accessible radio bands and people combing through them, Citizen can gather enough resources to enter the market.</p>
<p>Last March, Choi announced that GuardianNet’s incident reporting feature would be “temporarily paused for the near term future” to prepare for a reboot, but that its scanner feeds and chat rooms would remain open. Now, it seems Citizen is changing course again. In a statement, the company said GuardianNet was “a beta test” that “never launched,” and is no longer accessible to the public. The <a href="https://guardian.citizen.com/">Guardian site</a> now redirects to a page called “<a href="https://protect.staging.sp0n.io/device/b7d938e90770c76361bb312451952bb3">ProtectOS</a>,” which requires a phone number to sign up.<br />
<!-- BLOCK(photo)[1](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22bleed%22%2C%22bleed%22%3A%22large%22%2C%22width%22%3A%22auto%22%7D) --><figure class="img-wrap align-bleed large-bleed width-auto" style="width: auto;"><!-- CONTENT(photo)[1] -->
<img data-recalc-dims="1" height="1024" width="1024" decoding="async" class="aligncenter size-large wp-image-291993" src="https://theintercept.com/wp-content/uploads/2020/02/citizen-theintercept-art-2-embed-01-1582931321.png?fit=1024%2C1024" alt="citizen-theintercept-art-2-embed-01-1582931321" />

<figcaption class="caption source pullright">Illustration: Soohee Cho/The Intercept</figcaption><!-- END-CONTENT(photo)[1] --></figure><!-- END-BLOCK(photo)[1] --></p>
<h3>Location data, privacy, and surveillance</h3>
<p>Experts say Citizen’s location data collection raises questions about privacy and surveillance, the government’s interest in such data, and the lack of oversight into location data tracking.</p>
<p>Citizen’s terms of service state that users must permit the app to access their location data even when it isn’t open. The company says it needs both location and notification permissions to notify users of emergencies, which might include “life-saving” crime and safety alerts.</p>
<p>Metadata associated with content like videos, which are required to initiate an incident report on the app, are also collected, according to the company’s privacy policy. These include unique device identifiers, information on wireless networks, location information, and more, all of which can be tied back to users, who sign up for Citizen using their names, email addresses, and phone numbers.</p>
<h4>Privacy and Advertising</h4>
<p>Florian Schaub, a professor at the University of Michigan’s School of Information who focuses on digital privacy, said the privacy risks of using Citizen are similar to those of any app that relies on location data to work. “The risk is always that these location traces are used in ways that users are not anticipating. Advertising is the most common risk,” he said.</p>
<p>Location data is valuable, and its sale is growing and largely unregulated. Broadly speaking, private companies collect users’ movements through popular cellphone apps and sell the data to advertising firms and data brokers. It is completely legal to collect location data, and the companies often defend their practices by claiming the location pings they collect are anonymized, essentially raw data with no identifying information. But experts say that’s impossible: Location information can be easily tied to someone’s identity.</p>
<p>“Based on where one lives, you can infer their income level, track where they go,” said Schaub. “You might be able to see they are going to an AA meeting, which is not that anonymous anymore, or going to a mental health service provider, or a fertility clinic.”</p>
<p>Though Citizen has repeatedly stated that it does not and never will sell user data, it’s possible the company may change its mind as it did with user crime reporting. In response to questions about sharing user data, Citizen sent the following statement: “Citizen shares personal information with the service providers that provide services to or on behalf of Citizen. We also enable service providers to collect personal information from app users only to provide services to or on behalf of Citizen. We pay these service providers and have agreements in place that prevent them from using the data for their own benefit. There is no service provider paying us for any data or data derivative.”</p>
<!-- BLOCK(pullquote)[2](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[2] -->&#8220;No one is literally exchanging a pile of data for a suitcase of money.”<!-- END-CONTENT(pullquote)[2] --></blockquote><!-- END-BLOCK(pullquote)[2] -->
<p>Generally, tech companies’ claims that they will never sell users’ data are misleading, said Gennie Gebhart, associate director of research at the Electronic Frontier Foundation. “No one is literally exchanging a pile of data for a suitcase of money,” Gebhart told The Intercept.</p>
<p>Even if a company is not selling data to third parties, it may be making data available to other companies for valuable consideration, or otherwise profiting from what it collects. Facebook CEO Mark Zuckerberg, for example, <a href="https://www.vice.com/en_us/article/8xkdz4/does-facebook-sell-data">has long insisted</a> that Facebook does not sell its users’ data, and even testified before the Senate to this effect. But it allows advertisers to target users at a very granular level, reaping the benefit of invasive tracking even if the underlying user data used for targeting is never provided.</p>
<p>Citizen&#8217;s terms of service contain a provision that allows sp0n and third-party providers to advertise based on users’ location and what they do in the app, including what they search for. Ads on Citizen might be targeted based on the photos and videos uploaded by users and other tracked information, according to its terms.</p>
<p>Experts also pointed out Citizen’s lack of transparency when it comes to how long it stores user data. Citizen says it stores user information for “as short a duration as possible” and aims to “lower the accuracy of historical location data where possible.” If Citizen’s main concern were user privacy, said Schaub, it would explicitly state how long location data is kept.</p>
<p>Nate Wessler, a staff attorney with the American Civil Liberties Union’s Speech, Privacy, and Technology Project, questioned the breadth of Citizen’s location data collection. “I very much doubt they need location data, a year’s worth or five years’ worth, or even a month’s worth,” he said. “The more customer data is kept, the more is potentially exposed to law enforcement.”</p>
<p>Citizen’s app once worked without continuous location tracking. But one update last year changed that, forcing users to enable full location data access to use any part of the app. Users were outraged, flooding Citizen’s App Store reviews with negative comments about the “location-hungry,” “creepy,” and “hostile” nature of the company’s attempts to access their data. In response, Citizen “adjusted” the setting, telling users they could use the app without enabling location access by closing a pop-up that asked for permanent location sharing. In other responses, Citizen acknowledged that “some users” had privacy concerns about sharing location data but maintained that if users’ location settings were not “properly configured,” they would continue receiving in-app reminders to change them whenever they opened the app.</p>
<p>A Citizen spokesperson said over email that users can currently “control their own location data access” on the app, though it still prompts users to enable such access.</p>
<p>Schaub offered alternatives for Citizen to work without the need for constant, or even occasional, location data sharing. Instead of revealing their movements to Citizen, users could select specific coverage areas where they’d like to receive safety notifications, for example. Even if the app does continue requiring location data, Schaub said Citizen could be more forthcoming about the data it collects and what it is used for. “This notice shouldn’t just be in the privacy policy, it should be made explicit when you activate the feature or when you see it for the first time. This should be shared with users rather than buried in the terms of service,” he said.</p>
<p>It’s also worth noting that if Citizen did end up selling user data, it would not be the first time the company has breached a user privacy assurance. Last year, the Washington Post <a href="https://www.washingtonpost.com/technology/2019/05/28/its-middle-night-do-you-know-who-your-iphone-is-talking/">found</a> that Citizen sent personally identifying information such as users’ phone numbers, emails, and GPS coordinates to the marketing tracking company Amplitude — a direct violation of its privacy policy. Citizen said it removed the tracker after being informed of it, and J. Peter Donald, Citizen&#8217;s then spokesperson, said the company would do a better job of clarifying its privacy policy.</p>
<h4>Surveillance</h4>
<p>Location data is of interest not only to advertisers but also to the federal government. In early February, the Wall Street Journal reported that the Trump administration <a href="https://www.wsj.com/articles/federal-agencies-use-cellphone-location-data-for-immigration-enforcement-11581078600?mod=hp_lead_pos5">purchased a database</a> containing millions of Americans’ cellphone location data to help identify and deport undocumented immigrants. Though a landmark 2018 <a href="https://www.supremecourt.gov/opinions/17pdf/16-402_h315.pdf">Supreme Court</a> ruling requires court oversight for government access to location data from cellphone providers, the government was still able to purchase the data from private companies. Wessler said that if Citizen location data is ever made available to private entities, it would also effectively become available for purchase by government entities.</p>
<p>The government could also try to get Citizen’s user data without paying for it. The company provides <a href="https://citizen.com/privacy/lawenforcement">guidelines</a> for law enforcement that indicate a search warrant is required to obtain users’ location data, photos, videos, and chat messages, and a court order is required to compel video and chat metadata and IP addresses. Wessler said, based on his reading of the guidelines, that Citizen seems to follow the appropriate legal standard for compelling user data, but the company could better inform the public by issuing transparency reports detailing the number of law enforcement requests for such information, as many large tech companies do regularly.</p>
<h3>Policing and Technology</h3>
<p>Since relaunching its app as Citizen, the company has also made headway with police departments, hinting at possible futures for the product. Frame, Citizen’s CEO, told <a href="https://www.cnn.com/2019/03/12/tech/citizen-crime-app-la/index.html">CNN</a> last year he was aware police departments initially “hated” his app, but declared the perception had improved since then. “I don’t know how much they love us, but they at least don’t hate us anymore,” he said.</p>
<p>That may be true in New York City, where his company is based. Last year, a Citizen employee organized a chummy <a href="https://www.rockawave.com/articles/they-got-game/">basketball game</a> between the NYPD and company employees to help “humanize the app” and “humanize the NYPD.”</p>
<p>Citizen has forged connections to the law enforcement community in other ways, such as through hiring. <a href="https://www.linkedin.com/in/jpeterdonald/">J. Peter Donald</a>, who was <a href="https://www.prweek.com/article/1519256/crime-reporting-app-citizen-hires-nypds-peter-donald-comms-head">Citizen</a>’s head of policy and communications until last summer, confirmed to The Intercept that he issued the NYPD’s anti-Vigilante statement as the department’s director of communications — thus helping pave the way for the app’s removal from Apple’s store. In other words, he was a prominent critic of Citizen&#8217;s predecessor app before he joined the company that makes it. In response to questions about his change of heart, Donald pointed to a statement he gave when he joined Citizen in 2018: “I am eager to join Citizen to continue keeping people informed about their safety and using technology to help people when it matters most.”</p>
<p>Donald was appointed to his NYPD post by Bill Bratton, a former New York Police commissioner and Los Angeles Police Department chief who <a href="https://twitter.com/CommissBratton/status/1151289229839884288">joined</a> Citizen’s board of directors last year. Like Donald, Bratton was also initially opposed to the app, telling <a href="https://www.forbes.com/sites/stevenbertoni/2019/07/15/murder-muggings-mayhem-how-an-ex-hacker-is-trying-to-use-raw-911-data-to-turn-citizen-into-the-next-billion-dollar-app/#7972a0681f8a">Forbes</a> he thought it would “scare people and encourage others to interfere with investigations.” Bratton now welcomes the use of other new technologies in policing too, recently <a href="https://nypost.com/2020/02/02/nypd-pushes-back-against-facial-recognition-ban/">voicing</a> his support for NYPD use of facial recognition tools. Bratton did not immediately respond to questions from The Intercept about the app and his role.</p>
<p>Among police departments, the embrace of new technologies has increased dramatically in the past few years. Clearview AI, a <a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html">controversial facial recognition startup</a> that claims to have over three billion indexed photos, is being sold to police departments across the country. A recent <a href="https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement">data breach</a> exposed the startup’s entire client list, confirming that the majority of them are local and state police departments. Notably, NYPD officers had run more than 11,000 searches using Clearview’s database, the most of any organization using the software.</p>
<p>Ring, a doorbell-camera company acquired by Amazon, works with over 400 police departments and gives law enforcement the ability to directly request camera footage from people’s homes. Its social network, Neighbors, lets users anonymously report crimes, share videos, and talk about suspicious happenings. In November, The Intercept <a href="https://theintercept.com/2019/11/26/amazon-ring-home-security-facial-recognition/">reported</a> Ring’s plans to use facial recognition and its network of home doorbell cameras to create AI-enabled neighborhood watchlists. Of companies like Ring that partner with law enforcement, Wessler said, “It becomes this surveillance industrial complex where people are spending their own money to send surveillance data to the police.”</p>
<!-- BLOCK(pullquote)[3](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22left%22%7D) --><blockquote class="stylized pull-left" data-shortcode-type="pullquote" data-pull="left"><!-- CONTENT(pullquote)[3] -->“When you start trying to make money from both police departments and individual users, the blending of that can be toxic.”<!-- END-CONTENT(pullquote)[3] --></blockquote><!-- END-BLOCK(pullquote)[3] -->
<p>Working with law enforcement would present similar risks for Citizen. “Who knows how Citizen might shift their business model over time to make money,” Wessler said.</p>
<p>Last year, anonymous company sources told <a href="https://www.forbes.com/sites/stevenbertoni/2019/07/15/murder-muggings-mayhem-how-an-ex-hacker-is-trying-to-use-raw-911-data-to-turn-citizen-into-the-next-billion-dollar-app/#20c6eb741f8a">Forbes</a> there was a possibility Citizen could charge “universities, airports, and places with lots of people to allow authorities to send notifications to its users.” The sources also said it was possible users could message officials directly about their safety concerns. “When you start trying to make money from both police departments and individual users, the blending of that can be toxic,” Wessler said.</p>
<p>In response to a question about working with authorities, Citizen said it “does not work with law enforcement in any way, shape or form,” but does work with “advisors with backgrounds in public safety.” This does not denote “a formal relationship with any kind of law enforcement,” the statement continued, emphasizing the company’s “independent mission and vision.” In one recent job posting, however, Citizen describes a role dedicated to researching the needs and motivations of its users and “partners in the public sector,” which includes police departments and governments.</p>
<h3>Racism, Paranoia, and Panic</h3>
<p>Experts say crime-tracking apps like Citizen — especially in light of the possible reintroduction of its crime reporting feature — can legitimize people’s racist perceptions of who commits crimes and looks suspicious.</p>
<p>“The user gets the ability to use their own moral compass to figure out what&#8217;s suspicious and what is worthy of being posted and shot out to the world. A lot of times it&#8217;s based on pretty insidious racial biases about who belongs and who doesn&#8217;t belong, and who&#8217;s suspicious and who&#8217;s not suspicious,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation.</p>
<!-- BLOCK(pullquote)[4](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[4] -->&#8220;The user gets the ability to use their own moral compass to figure out what&#8217;s suspicious. A lot of times it&#8217;s based on pretty insidious racial biases.&#8221;<!-- END-CONTENT(pullquote)[4] --></blockquote><!-- END-BLOCK(pullquote)[4] -->
<p>He added that these biases also exist in communities of color, and that, despite being overpoliced, residents of those communities have understandable motivations to turn to technologies like Citizen. “It could be that some of these apps are a continuation of trying to get the basic civic services out of the government that other communities enjoy,” he said.</p>
<p>Still, apps like Citizen can increase fear of crime even as crime rates hit historic lows across the country, said Sarah Lustbader, senior legal counsel for The Appeal, a publication focused on criminal justice. “It makes people afraid enough to feel like they need it, but it also, it seems to me, reduces your quality of life because it just makes you fearful all the time,” Lustbader said. That fear, in turn, “could exacerbate tensions and maybe even create more actual harm between people.”</p>
<p>In spite of its app&#8217;s rebranding, Lustbader doubted the company had changed its basic motivation and purpose, “which is essentially to say you ought to indulge your worst instinct and your worst fear when it comes to fear and crime.”</p>
<p>Citizen is hardly the only crime-tracking app that has raised concerns. Many have been called out for encouraging racial profiling and increasing paranoia.</p>
<p>Nextdoor, a neighborhood-focused social networking service that allows users to report events — including “suspicious people” — launched in 2011. Since then, it has been repeatedly criticized for giving a platform to insidious racial biases. In response to criticism of posts like one warning of “two young African-Americans, slim, baggy pants, early 20s,” Nextdoor <a href="https://www.nytimes.com/2016/05/19/us/website-nextdoor-hears-racial-profiling-complaints.html">announced</a> that it would modify its system for reporting incidents in 2016.</p>
<p>In 2014, two young white entrepreneurs announced they were preparing to release an app called “SketchFactor” that would draw on public and crowdsourced information to rate the “sketchiness” of areas within a city. SketchFactor eventually shut down after public outcry.</p>
<p>Racism is also a problem on Neighbors, Ring’s social network. According to a <a href="https://www.vice.com/en_us/article/qvyvzd/amazons-home-security-company-is-turning-everyone-into-cops">Motherboard</a> analysis of over 100 reports made on the network over a two-month period, people of color were the majority of those reported as “suspicious.”</p>
<p>The Intercept surveyed comments posted recently on Citizen, finding many that used racist language or evoked racist tropes. On one incident given the title “Tesla Crashed into Tree,” a user commented on the race of the driver: “Probably asian considering he drove into a tree lol.” In another post, a user wrote, “Sanctuary STATE! I’ll probably get thrown off here for sayin it but: CA has become TJ (Tijuana) Im out.” In another post titled “Man Fatally Shot Near Nassau County Border,” another user commented, “Look who lives over there. Not the smartest people in the world I bet. The poor will always behave like the poor&#8230; Not the smartest bunch.”</p>
<p>Citizen says it moderates all content and removes comments and videos that include harassment, discrimination, or hate speech, but many users have complained that a lot seems to slip through the cracks. A Citizen spokesperson said over email that the company is working to improve the comments experience for users, and the moderation team is growing.</p>
<p>The app also continues to raise questions of user safety, even after its rebranding. Though its terms remind users they “should not travel to or remain in any area during, before, or after a crime or other hazardous situation,” in every other respect, Citizen seems to invite users to move toward active emergency areas and film them. Users near emergency incidents receive push notifications and are prompted to livestream video, and notifications show the distance, in feet, to areas where emergencies are taking place. In October 2018, the company <a href="https://twitter.com/CitizenApp/status/1057687902585536513">tweeted</a>, “Broadcasting live on the Citizen app is the best way to inform and protect your community during a crime or emergency event. Here are some tips to help you record great video.”</p>
<h3>Citizen’s Next Move</h3>
<p>Citizen has said it hopes to rapidly expand to many more cities across the globe. It is funded by $60 million in venture capital from 8VC, Peter Thiel’s Founders Fund, Sequoia Capital, and more, according to Crunchbase, a database that tracks startup investments.</p>
<p>The app remains free, and Citizen has thus far not publicly stated how it plans to make money. In its statement to The Intercept, Citizen said its current priority is “growing users on the app and optimizing the user experience,” but “revenue-generating products and services are in development.”</p>
<p>Schaub thinks Citizen may follow a tried-and-true path. “When companies are looking for monetization opportunities, they increasingly look toward the data they are collecting about customers,” he said.</p>
<p>But if Citizen’s incident reporting feature is permanently restored, especially without additional safeguards, it could exacerbate longstanding problems. “The long history of surveillance of the suburbs is people looking out their window and deciding who does and who does not belong,” said EFF’s Guariglia.</p>
<p>The post <a href="https://theintercept.com/2020/03/02/citizen-app/">Citizen App Again Lets Users Report Crimes — and Experts See Big Risks</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></content:encoded>
                                <wfw:commentRss>https://theintercept.com/2020/03/02/citizen-app/feed/</wfw:commentRss>
                <slash:comments>0</slash:comments>
                <media:content url='https://theintercept.com/wp-content/uploads/2020/02/citizen-theintercept-art-2-1582931325.png?fit=2000%2C1000' width='2000' height='1000' /><post-id xmlns="com-wordpress:feed-additions:1">291915</post-id>
		<media:thumbnail url="https://theintercept.com/wp-content/uploads/2020/02/citizen-theintercept-art-2-embed-01-1582931321.png?w=440&amp;h=440&amp;crop=1" />
		<media:content url="https://theintercept.com/wp-content/uploads/2020/02/citizen-theintercept-art-2-embed-01-1582931321.png?fit=2000%2C1000" medium="image">
			<media:title type="html">citizen-theintercept-art-2-embed-01-1582931321</media:title>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2020/02/citizen-theintercept-art-2-embed-01-1582931321.png?w=440&amp;h=440&amp;crop=1" />
		</media:content>
            </item>
        
            <item>
                <title><![CDATA[Amazon's Accent Recognition Technology Could Tell the Government Where You’re From]]></title>
                <link>https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/</link>
                <comments>https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/#respond</comments>
                <pubDate>Thu, 15 Nov 2018 12:00:11 +0000</pubDate>
                                    <dc:creator><![CDATA[Belle Lin]]></dc:creator>
                                		<category><![CDATA[Technology]]></category>

                <guid isPermaLink="false">https://theintercept.com/?p=222535</guid>
                                    <description><![CDATA[<p>A new patent shows how Alexa could derive ethnic origin and emotion by analyzing speech. Experts think the government could come after the resulting data.</p>
<p>The post <a href="https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/">Amazon&#8217;s Accent Recognition Technology Could Tell the Government Where You’re From</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></description>
                                        <content:encoded><![CDATA[<p><u>At the beginning</u> of October, Amazon was quietly issued a <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&amp;Sect2=HITOFF&amp;u=/netahtml/PTO/search-adv.htm&amp;r=1&amp;p=1&amp;f=G&amp;l=50&amp;d=PTXT&amp;S1=10,096,319&amp;OS=10,096,319&amp;RS=10,096,319">patent</a> that would allow its virtual assistant Alexa to decipher a user’s physical characteristics and emotional state based on their voice. Characteristics, or “voice features,” like language accent, ethnic origin, emotion, gender, age, and background noise would be immediately extracted and tagged to the user’s data file to help deliver more targeted advertising.</p>
<p>The algorithm would also consider a customer’s physical location — based on their IP address, primary shipping address, and browser settings — to help determine their accent. Should Amazon’s patent become a reality, or if accent detection is already possible, it would introduce questions of surveillance and privacy violations, as well as possible discriminatory advertising, experts said.</p>
<p>The civil rights issues raised by the patent are similar to those around facial recognition, another technology Amazon has used as an anchor of its artificial intelligence strategy, and one that it controversially marketed to law enforcement. Like facial recognition, voice analysis underlines how existing laws and privacy safeguards simply aren’t capable of protecting users from new categories of data collection — or government spying, for that matter. Unlike facial recognition, voice analysis relies not on cameras in public spaces, but microphones inside smart speakers in our homes. It also raises its own thorny issues around advertising that targets or excludes certain groups of people based on derived characteristics like nationality, native language, and so on (the sort of controversy that Facebook has stumbled into <a href="https://theintercept.com/2018/11/02/facebook-ads-white-supremacy-pittsburgh-shooting/">again</a> and <a href="https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters">again</a>).</p>
<!-- BLOCK(photo)[0](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22center%22%2C%22width%22%3A%22757px%22%7D) --><figure class="img-wrap align-center  width-fixed" style="width: 757px;"><!-- CONTENT(photo)[0] -->
<img data-recalc-dims="1" height="1024" width="1024" decoding="async" class="aligncenter size-large wp-image-223026" src="https://theintercept.com/wp-content/uploads/2018/11/0-5-1542222720.jpg?fit=1024%2C1024" alt="0-5-1542222720" />
<figcaption class="caption source">From Amazon&#8217;s patent, an illustration of a process for determining physical and emotional characteristics from someone’s voice, resulting in tailored audio content like ads.<br/>Document: United States Patent and Trademark Office</figcaption><!-- END-CONTENT(photo)[0] --></figure><!-- END-BLOCK(photo)[0] -->
<h3>Why the Government Might Be Interested in Accent Data</h3>
<p>If voice-based accent detection can determine a person’s ethnic background, it opens up a new category of information that is incredibly interesting to the government, said Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society.</p>
<p>“If you’re a company and you’re creating new classifications of data, and the government is interested in them, you’d be naive to think that law enforcement isn’t going to come after it,” she said.</p>
<p>She described a scenario in which, knowing a user’s purchase history, existing demographic data, and whether they speak Arabic or Arabic-accented English, Amazon could identify the user as belonging to a religious or ethnic group. King said it’s plausible that the FBI would compel the production of such data from Amazon if it could help determine a user&#8217;s membership in a terrorist group. Data demands focused on terrorism are tougher for companies to fight than those that are vague or otherwise overbroad, which companies have pushed back on, she said.</p>
<p>Andrew Crocker, a senior staff attorney at the Electronic Frontier Foundation, said the Foreign Intelligence Surveillance Act, or FISA, makes it possible for the government to covertly demand such data. FISA governs electronic spying conducted to acquire information on foreign powers, allowing such monitoring without a warrant in some circumstances and in others under warrants issued by a court closed to the public, with only the government represented. The communications of U.S. citizens and residents are routinely acquired under the law, in many cases incidentally, but even incidentally collected communications may later be used against Americans in FBI investigations. Under FISA, the government could “get information in secret more easily, and there are mass or bulk surveillance capabilities that don’t exist in domestic law,” said Crocker. “Certainly it could be done in secret with less court oversight.”</p>
<!-- BLOCK(pullquote)[1](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[1] -->“You’d be naive to think that law enforcement isn’t going to come after it.”<!-- END-CONTENT(pullquote)[1] --></blockquote><!-- END-BLOCK(pullquote)[1] -->
<p>Jennifer Granick, a surveillance and cybersecurity lawyer at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, suggested that Amazon’s accent data could also provide the government with information for the purpose of immigration control.</p>
<p>“Let’s say you have ICE go to one of these providers and say, ‘Give us all the subscription information of people who have Spanish accents’ … in order to identify people of a particular race or who theoretically might have relatives who are undocumented,” she said. “So you can see that this type of information can definitely be abused.”</p>
<p>Though King said she hasn’t seen evidence of these types of government requests, she has witnessed “parallel things happen in other contexts.” It’s also possible that if Amazon was sent a National Security Letter by the FBI, a gag order would prevent the company from disclosing much, including the exact number of letters it received. National Security Letters compel the disclosure of certain types of information from communications firms, like a subpoena would, but often in secret. The letters require the companies to hand over select data, like the name of an account owner and the age of an account, but the FBI has <a href="https://theintercept.com/2017/01/31/national-security-letters-demand-data-that-companies-arent-obligated-to-provide/">routinely</a> asked for more, including email headers and internet browsing history.</p>
<p>Compared to some other tech giants, however, Amazon is less detailed in its disclosures about National Security Letters it receives and about data requests in general. For example, in its <a href="https://www.amazon.com/gp/help/customer/display.html?nodeId=GYSDRGWQ2C2CRYEF">information request reports</a>, it does not disclose how many NSLs it has received or how many accounts are affected by national security requests, as Apple and Google do. These more specific disclosures from other companies show a trend: From mid-2016 to the first half of 2017, national security requests sent to Apple, Facebook, and Google <a href="https://www.reuters.com/article/us-apple-security/apple-sees-steep-increase-in-u-s-national-security-requests-idUSKCN1IQ31V">increased significantly</a>.</p>
<p>But even if the government hasn’t yet made such requests of Amazon, we know that it has been paying attention to voice and speech technology for some time. In January, <a href="https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/">The Intercept</a> reported that the National Security Agency had developed technology not just to record and transcribe private conversations, but also to automatically identify speakers. An individual’s “voiceprint” was created, which could be cross-referenced with millions of intercepted and recorded telephone, video, and internet calls.</p>
<p>To create an American citizen’s “voiceprint,” which government documents don’t explicitly indicate has been done, experts said the NSA would need only to tap into Amazon or Google’s existing voice data.</p>
<p>Over the past year, Amazon’s relationship with the government has become increasingly cozy. <a href="https://www.buzzfeednews.com/article/daveyalba/amazon-facial-recognition-orlando-police-department">BuzzFeed</a> recently revealed details about how the Orlando Police Department was piloting Rekognition, Amazon’s facial recognition technology, to identify “persons of interest.” A few months earlier, Amazon was outed by the <a href="https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazon-teams-government-deploy-dangerous-new">ACLU</a> for “marketing Rekognition for government surveillance.” Meanwhile, in June, the company was busy <a href="https://www.thedailybeast.com/amazon-pushes-ice-to-buy-its-face-recognition-surveillance-tech">pitching Immigration and Customs Enforcement</a> officials on its technology.</p>
<p>Though these revelations have set off alarm bells, even for <a href="https://thehill.com/business-a-lobbying/393583-amazon-employees-protest-sale-of-facial-recognition-tech-to-law">Amazon employees</a>, experts said that speech recognition presents similar concerns that are equally pressing, if not more so. Amazon’s voice processing patent dates to March of last year. The company, in response to questions from The Intercept, described the patent as exploratory and pledged to abide by its privacy policy when collecting and using data.</p>
<h3>Privacy Law Lags Behind Technology</h3>
<p>Weak privacy laws in the U.S. are one reason consumers are vulnerable when tech companies start gathering new types of data about them. There is nothing in the law that protects data collected about a person’s mood or accent, said Granick.</p>
<p>In the absence of strong legal protections, consumers are forced to make their own decisions about trade-offs between their privacy and the convenience of virtual assistants. “Being able to use really robust voice control would be great if it meant you weren’t just being put into a giant AI algorithm and being used to improve your pitchability for new products, especially when you’re paying for these systems,” said King.</p>
<p>The Electronic Communications Privacy Act, or ECPA, first passed in 1986, was a major step forward in privacy protection at the time. But now, over 30 years later, it has yet to catch up with the pace of technological innovation. Generally, under ECPA, government agencies need a subpoena, court order, or search warrant to compel companies to disclose protected user information. Unlike court orders and search warrants, subpoenas don&#8217;t necessarily require judicial review.</p>
<p>Amazon&#8217;s most recent <a href="https://d1.awsstatic.com/certifications/Information_Request_Report_June_2018.pdf">information request report</a>, which covers the first half of this year, reveals that the company received 1,736 subpoenas during that time; in response to 1,283 of them, it turned over some or all of the information requested. Since Amazon began publishing these reports three years ago, the number of data requests it receives has steadily increased, with a huge jump between 2015 and 2016. Echo, its Alexa-enabled speaker for use at home, was released in 2015, and Amazon <a href="https://www.fastcompany.com/40517128/the-government-is-knocking-on-amazons-door-a-lot-more-often">has not said</a> whether the increase is related to the growing popularity of its speaker.</p>
<p>While ECPA protects the data associated with our digital conversations, Granick said its application to information collected by providers like Amazon is “anemic.” “The government could say, ‘Give us a list of everyone who you think is Chinese, Latino’ and the provider has to argue why they shouldn’t. That kind of conclusory data isn’t protected by ECPA, and it means the government can compel its disclosure with a subpoena,” she said.</p>
<!-- BLOCK(pullquote)[2](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22left%22%7D) --><blockquote class="stylized pull-left" data-shortcode-type="pullquote" data-pull="left"><!-- CONTENT(pullquote)[2] -->“The government could say, ‘Give us a list of everyone who you think is Chinese, Latino.’”<!-- END-CONTENT(pullquote)[2] --></blockquote><!-- END-BLOCK(pullquote)[2] -->
<p>Since ECPA is not explicit, there’s a legal question of whether Amazon could voluntarily turn over the conversations its users have with Alexa. Granick said Amazon could argue that such data is a protected electronic communication under ECPA — and require that the government get a warrant to access it — but as a party to the communication, Amazon also has the right to divulge it. There have been no court cases addressing the issue so far, she said.</p>
<p>Crocker, of the EFF, argued that communications with Alexa — voice searches and commands, for example — are protected by ECPA, but agreed that the government could obtain the data through a warrant or other legal process.</p>
<p>He added that the way Amazon stores accent information could impact the government’s ability to access it. If Amazon keeps it stored long term in customer profiles in the cloud, those profiles are easier to obtain than real-time voice communications, the interception of which invokes protections under the Wiretap Act, a federal law governing the interception and disclosure of communications. (ECPA amended the Wiretap Act to include electronic communications.) And if it’s stored in metadata associated with Alexa voice searches, the government has obtained those in past cases and could get to it that way.</p>
<p>In 2016, a <a href="https://www.theinformation.com/articles/amazon-echo-and-the-hot-tub-murder">hot tub murder in Arkansas</a> put the defendant’s Echo in the spotlight. Police seized the device and tried to obtain its recordings, prompting Amazon to argue that any communications with Alexa, and its responses, were <a href="http://fortune.com/2017/02/23/amazon-free-speech-alexa-murder/">protected as a form of free speech</a> under the First Amendment. Amazon eventually turned over the records after the defendant gave his permission to do so. Last week, a New Hampshire judge <a href="https://www.washingtonpost.com/nation/2018/11/14/police-think-alexa-may-have-witnessed-new-hampshire-double-slaying-now-they-want-amazon-turn-her-over/?utm_term=.a8ff3ac7a4bb">ordered Amazon</a> to turn over Echo recordings in a new murder case, again in the hope that they could provide criminal evidence.</p>
<h3>Dynamic, Targeted, and Discriminatory Ads</h3>
<p>Beyond government surveillance, experts also expressed concern that Amazon’s new classification of data increases the likelihood of discriminatory advertising. When demographic profiling — the basis of traditional advertising — is layered with additional, algorithmically assessed information from Alexa, ads can quickly become invasive or offensive.</p>
<p>Granick said that if Amazon “gave one set of [housing] ads to people who had Chinese accents and a different set of ads to people who have a Finnish accent … highlighting primarily Chinese neighborhoods for one and European neighbors for another,” it would be discriminating on the basis of national origin or race.</p>
<p>King said Amazon also opens itself to charges of price discrimination, and even racism, if it allows advertisers to show and hide ads from certain ethnic or gender groups. “If you live in the O.C. and you have a Chinese accent and are upper middle-class, it could show you things that are higher price. [Alexa] might say, ‘I’m gonna send you Louis Vuitton bags based on those things,&#8217;” she said.</p>
<p>Selling products based on emotions also offers opportunities for advertisers to manipulate consumers. “If you’re a woman in a certain demographic and you’re depressed, and we know that binge shopping is something you do … knowing that you’re in kind of a vulnerable state, there’s no regulation preventing them from doing something like this,&#8221; King said.</p>
<p>An example from the patent envisions marketing to Chinese speakers, albeit in a more innocuous context, describing how ads might be targeted to “middle-aged users who speak Mandarin or have a Chinese accent and live in the United States.” If the user asks, “Alexa, what’s the news today?” Alexa might reply, “Before your news brief, you might be interested in the Xiaomi TV box, which allows you to watch over 1,000 real-time Chinese TV channels for just $49.99. Do you want to buy it?&#8221;</p>
<p>According to the patent, the ads may be presented in response to user voice input, but could also be “presented at any time.” They could even be injected into existing audio streams, such as a news briefing or playback of tracks from a music playlist.</p>
<!-- BLOCK(photo)[3](%7B%22componentName%22%3A%22PHOTO%22%2C%22entityType%22%3A%22RESOURCE%22%7D)(%7B%22scroll%22%3Afalse%2C%22align%22%3A%22bleed%22%2C%22bleed%22%3A%22large%22%2C%22width%22%3A%22auto%22%7D) --><figure class="img-wrap align-bleed large-bleed width-auto" style="width: auto;"><!-- CONTENT(photo)[3] -->
<img loading="lazy" decoding="async" width="5000" height="3333" class="aligncenter size-large wp-image-223203" src="https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg" alt="Andrew DeVore, vice president and associate general counsel with Amazon.com Inc., right, listens as Len Cali, senior vice president of global public policy with AT&amp;T Inc., speaks during a Senate Commerce Committee hearing on consumer data privacy in Washington, D.C., U.S., on Wednesday, Sept. 26, 2018. Facing growing pressure to protect their customers' privacy, some of the biggest technology companies told Congress that they favor new federal consumer safeguards but diverged on some of the details. Photographer: Andrew Harrer/Bloomberg via Getty Images" srcset="https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=5000 5000w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=300 300w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=768 768w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=1024 1024w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=1536 1536w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=2048 2048w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=540 540w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=1000 1000w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=2400 2400w, https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=3600 3600w" sizes="auto, (max-width: 1200px) 100vw, 1200px" />
<figcaption class="caption source pullright">Andrew DeVore, vice president and associate general counsel with Amazon.com Inc., right, listens as Len Cali, senior vice president of global public policy with AT&amp;T Inc., speaks during a Senate Commerce Committee hearing on consumer data privacy in Washington, D.C., on Sept. 26, 2018.<br/>Photo: Andrew Harrer/Bloomberg via Getty Images</figcaption><!-- END-CONTENT(photo)[3] --></figure><!-- END-BLOCK(photo)[3] -->
<h3>New Rules Emerge for Data Privacy</h3>
<p>In the wake of <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Facebook’s Cambridge Analytica scandal</a>, lawmakers have grown increasingly wary of tech companies and their privacy practices. In <a href="https://www.commerce.senate.gov/public/index.cfm/2018/9/committee-to-hold-hearing-examining-consumer-privacy-protections">late September</a>, the Senate Commerce Committee held a fresh round of hearings with tech executives on the issue — also giving them an opportunity to explain how they’re addressing the new, stringent data privacy laws in the <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&amp;uri=CELEX:02016R0679-20160504">European Union</a> and <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375">California</a>.</p>
<p>California’s regulation, which passed in June and goes into effect in 2020, sets a new precedent for consumer privacy law in the country. It expands the definition of personal information and gives state residents greater control over the sharing and sale of their data to third parties.</p>
<p>Unsurprisingly, Amazon and other big tech companies pushed back forcefully on the new reforms, citing excessive penalties, compliance costs, and data collection restrictions — and each spent <a href="http://cal-access.sos.ca.gov/Campaign/Committees/Detail.aspx?id=1401518&amp;view=late1">nearly $200,000 to defeat it</a>. During the Senate hearing, Amazon Vice President and Associate General Counsel <a href="https://www.commerce.senate.gov/public/_cache/files/7c30e97b-e5fb-49cc-806e-5cd126ee91dc/48369EAB81D0F112CEDC5672C9AF24AB.09-24-2018devore-testimony.pdf">Andrew DeVore</a> asked the committee to consider the “unintended consequences” of California’s law, which he called “confusing and difficult to comply with.”</p>
<p>Now that the midterm elections have passed, privacy advocates hope that congressional interest in privacy issues will turn into legislative action; thus far, it has not. The Federal Trade Commission is also considering updates to its consumer protection enforcement priorities. In September, it kicked off a <a href="https://www.ftc.gov/news-events/events-calendar/2018/09/ftc-hearing-1-competition-consumer-protection-21st-century">series of hearings</a> examining the impact of “new technologies” and other economic changes.</p>
<!-- BLOCK(pullquote)[4](%7B%22componentName%22%3A%22PULLQUOTE%22%2C%22entityType%22%3A%22SHORTCODE%22%2C%22optional%22%3Atrue%7D)(%7B%22pull%22%3A%22right%22%7D) --><blockquote class="stylized pull-right" data-shortcode-type="pullquote" data-pull="right"><!-- CONTENT(pullquote)[4] -->Advocates hope congressional interest in privacy issues will turn into legislative action; thus far, it has not.<!-- END-CONTENT(pullquote)[4] --></blockquote><!-- END-BLOCK(pullquote)[4] -->
<p>Granick said that as states move to protect consumers where the federal government has not, California could serve as a model for the rest of the country. In August, California also became the first state to pass an <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB327">internet of things cybersecurity law</a>, requiring that manufacturers add a “reasonable security feature” to protect the information it collects from unauthorized access, modification, or disclosure.</p>
<p>In 2008, Illinois became the first state to pass a <a href="http://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&amp;ChapterID=57">law regulating biometric data</a>, placing restrictions on the collection and storing of iris scan, fingerprint, voiceprint, hand scan, and face geometry data. (Granick says it’s unclear if accent data is covered under the law.) Being the first state to pass landmark legislation, Illinois presents a cautionary tale for California. Though its bill was once considered a model law, only two other states — Texas and Washington — have passed biometric privacy laws over the past 10 years. Similar efforts elsewhere were largely killed by corporate lobbying.</p>
<h3>A Growing and Global Problem</h3>
<p>Activists have looked to other countries as examples of what could go wrong if tech companies and government agencies become too friendly, and voice accent data gets misused.</p>
<p><a href="https://www.hrw.org/news/2017/10/22/china-voice-biometric-collection-threatens-privacy">Human Rights Watch</a> reported last year that the Chinese government was creating a national voice biometric database using data from Chinese tech company iFlyTek, which provides its consumer voice recognition apps for free and claims its system can support 22 Chinese dialects. On its English website, iFlyTek said that its technology has been &#8220;inspected and praised&#8221; by &#8220;many party and state leaders,&#8221; including President Xi Jinping and Premier Li Keqiang.</p>
<p>The company is also the supplier of voice pattern collection systems used by regional police bureaus and runs a lab that develops voice surveillance technology for the Ministry of Public Security. Its technology has “helped solve cases” for law enforcement in Anhui, Gansu, Tibet, and Xinjiang, according to a state press report cited by Human Rights Watch. Activists warn that one possible use of the government’s voice database, which could contain dialect and accent-rich voice data from minority groups, is the surveillance of Tibetans and Uighurs.</p>

<p>Last year, <a href="https://www.welt.de/newsticker/news2/article162929384/Bundesamt-testet-Software-zur-Dialekterkennung-von-Fluechtlingen.html">Die Welt</a> reported that the German government was testing voice analysis software to help verify <a href="https://www.theverge.com/2017/3/17/14956532/germany-refugee-voice-analysis-dialect-speech-software">where refugees are coming from</a>. Officials hoped it would determine the dialects of people seeking asylum in Germany, which migration officers would use as one of several “indicators” when reviewing applications. The test was met with skepticism, as speech experts questioned the ability of software to make such a complex determination.</p>
<p>The amount of information people voluntarily give tech companies through smart speakers is growing, along with the range of purchases users can make through them. A <a href="https://news.gallup.com/poll/228497/americans-already-using-artificial-intelligence-products.aspx">Gallup survey</a> conducted last year found that 22 percent of Americans currently use smart home personal assistants like Echo — placing them in living rooms, kitchens, and other intimate spaces. And 44 percent of U.S. adult internet users are planning to buy one, according to a <a href="https://www.cta.tech/News/Blog/Articles/2017/December/44-Percent-of-U-S-Online-Adults-Plan-to-Purchase.aspx">Consumer Technology Association study</a>.</p>
<p>Amazon’s move into the home with more sophisticated voice abilities for Alexa has been a long time coming. In <a href="https://www.technologyreview.com/s/601654/amazon-working-on-making-alexa-recognize-your-emotions/">2016</a>, it was already discussing emotion detection as a way to stay ahead of competitors Google and Apple. Also that year, it filed a <a href="http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PG01&amp;p=1&amp;u=/netahtml/PTO/srchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;s1=20180174595.PGNR.">patent application</a> for a real-time language accent translator — a blend of accent detection and translation technologies. When its emotion and accent patent was issued last month, Alexa’s potential ability to read emotions and detect if customers are sick was called out as <a href="https://www.cnet.com/news/got-the-sniffles-alexa-may-some-day-notice-and-offer-cough-drops/">creepy</a>.</p>
<h3>Amazon Grapples With an “Accent Gap”</h3>
<p>Amazon’s current accent handling capabilities are lackluster. In July, the <a href="https://www.washingtonpost.com/graphics/2018/business/alexa-does-not-understand-your-accent/?utm_term=.312dc4af8123">Washington Post</a> charged that Amazon and Google had created an “accent gap,” leaving non-native English speakers behind in the voice-activated technology revolution. Both Alexa and Google Assistant had the most difficulty understanding Chinese- and Spanish-accented English.</p>
<p>Since the advent of speech recognition technology, picking up on dialects, speech impediments, and accents has been a persistent challenge. Even if the technology in Amazon’s patent were available today, natural language processing experts said, its accent and emotion detection would not be able to draw precise conclusions. The training data that teaches artificial intelligence lacks diversity in the first place, and because language itself is constantly changing, any AI would have a hard time keeping up.</p>
<p>Though Amazon’s new patent is a sign that it’s paying attention to the “accent gap,” it may be doing so for the wrong reasons. Improved accent detection makes voice technology more equitable and accessible, but it may come at a cost to users’ privacy.</p>
<h3>Regarding Patents</h3>
<p>Patents are not a surefire sign of what tech companies have built, or what is even possible for them to build. Tech companies in particular submit a dizzying number of patent applications.</p>
<p>In an emailed statement, Amazon said that it filed “a number of forward-looking patent applications that explore the full possibilities of new technology. Patents take multiple years to receive and do not necessarily reflect current developments to products and services.” The company also said that it “will only collect and use data in accordance with our privacy policy,” and did not elaborate on other uses of its technology or data.</p>
<p>But King, who has also reviewed numerous Facebook patents, said that they can be used to infer the direction a company is headed.</p>
<p>“You’re seeing a future where the interactions with people and their interior spaces is getting a lot more aggressive,” she said. “That’s the next frontier for companies. Not just tracking your behavior, where you’ve gone, what they think you might buy. Now it’s what you’re thinking, feeling, and that is what makes people deeply uncomfortable.”</p>
<p>For now, people who want to hold onto their privacy and minimize surveillance risk shouldn’t buy a speaker at all, recommended Granick. “You’re basically installing a microphone for the government to listen in to you in your home,” she said.</p>
<p>The post <a href="https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/">Amazon&#8217;s Accent Recognition Technology Could Tell the Government Where You’re From</a> appeared first on <a href="https://theintercept.com">The Intercept</a>.</p>
]]></content:encoded>
                                <wfw:commentRss>https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/feed/</wfw:commentRss>
                <slash:comments>0</slash:comments>
                <media:content url='https://theintercept.com/wp-content/uploads/2018/11/Intercept_Echo_v2-3.5MB-2-1542062294.gif?fit=1439%2C720' width='1439' height='720' /><post-id xmlns="com-wordpress:feed-additions:1">222535</post-id>
		<media:thumbnail url="https://theintercept.com/wp-content/uploads/2018/11/0-5-1542222720.jpg?w=440&amp;h=440&amp;crop=1" />
		<media:content url="https://theintercept.com/wp-content/uploads/2018/11/0-5-1542222720.jpg?fit=1000%2C1353" medium="image">
			<media:title type="html">0-5-1542222720</media:title>
			<media:description type="html">Caption TK.</media:description>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2018/11/0-5-1542222720.jpg?w=440&amp;h=440&amp;crop=1" />
		</media:content>
		<media:content url="https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?fit=5000%2C3333" medium="image">
			<media:title type="html">Senate Commerce Committee Hearing On Consumer Data Privacy</media:title>
			<media:description type="html">Andrew DeVore, vice president and associate general counsel with Amazon.com Inc., right, listens as Len Cali, senior vice president of global public policy with AT&#38;T Inc., speaks during a Senate Commerce Committee hearing on consumer data privacy in Washington, D.C., on Sept. 26, 2018.</media:description>
			<media:thumbnail url="https://theintercept.com/wp-content/uploads/2018/11/GettyImages-1040873518-1542239586.jpg?w=440&amp;h=440&amp;crop=1" />
		</media:content>
            </item>
            </channel>
</rss>
