
    Should this creepy search engine exist?

    May 10, 2024

    Podcast Summary

    • Discovering Dubious Search Engines and Sustainable Apparel: A search engine using facial recognition technology reveals personal information, raising ethical concerns about privacy and anonymity in public spaces. Be aware of advancements and their potential consequences, and support sustainable apparel brands like Vuori that offset carbon footprints and use better materials.

      Technology, and search engines in particular, continues to evolve and blur the lines of privacy in our daily lives. Kashmir Hill, a technology reporter at The New York Times, shared her experience discovering a morally dubious search engine that uses facial recognition technology to uncover personal information. The tool, which can reveal names, addresses, and even compromising content, raises ethical concerns and challenges our assumptions of anonymity in public spaces. Hill emphasized the importance of being aware of these advancements and the consequences they may have for our privacy and personal relationships. She also called for ongoing discussion about the ethics and limits of such technologies, and encouraged individuals to be cautious and informed about the digital tools they use. Additionally, the episode introduced Vuori, a sustainable, comfortable performance-apparel brand that offers versatile clothing for various activities, including lounging. Vuori is committed to offsetting its carbon footprint and using better, more sustainable materials for its products. Listeners can enjoy 20% off their first purchase, free shipping on US orders over $75, and free returns by visiting vuori.com/pjsearch.

    • Facial Recognition Technology Raises Privacy Concerns: Clearview AI collects and sells billions of images from the public web to law enforcement without clear consent, raising significant privacy concerns.

      The use of facial recognition technology, specifically Clearview AI, raises significant privacy concerns as it collects and sells billions of images from the public web to law enforcement agencies without clear consent. This was first brought to light in a memo written by a former Solicitor General, Paul Clement, in 2019. The technology, which returns accurate results with high efficiency, has been adopted by hundreds of law enforcement agencies and operates beyond the reach of traditional tech giants like Google and Facebook. The existence of such a powerful tool, which can de-anonymize individuals, was a topic of concern during a 2011 workshop organized by the Federal Trade Commission. The consensus at the time was to prevent the use of facial recognition apps for identifying strangers. Despite this, Clearview AI has managed to bypass these concerns and gain widespread adoption among law enforcement agencies.

    • Lack of Transparency Surrounding Use of Clearview AI by Law Enforcement: Clearview AI's facial recognition technology, used by law enforcement for crime solving, raises concerns due to lack of transparency and accountability in its usage.

      Clearview AI, a powerful facial recognition technology, had been used by various law enforcement agencies without much transparency. The technology, which had scraped billions of photos from the public web, was reportedly being used to solve crimes and find suspects. However, when a journalist investigating the company's origins tried to verify information and even test the technology herself, she encountered resistance from law enforcement agencies. Despite initial promises to discuss the tool and share results, detectives she contacted suddenly became unresponsive. This raised concerns about the lack of transparency and accountability surrounding law enforcement's use of such advanced technologies.

    • Clearview AI's Surveillance Capability: Clearview AI holds significant power to monitor individuals, raising privacy concerns, despite keeping its own operations private.

      Clearview AI, a facial recognition company, holds significant power to monitor individuals, including law enforcement officers. The detective's experience of having his photo flagged, and the subsequent call from Clearview's tech support, illustrates this surveillance capability. Kashmir Hill, a reporter, faced challenges in investigating the company: no physical address could be found on its website, and its investors were unresponsive. However, she eventually succeeded in meeting the investors, who were reluctant to discuss Clearview AI. This incident highlights the privacy concerns surrounding Clearview AI's immense power to monitor individuals while keeping its own operations private.

    • The Reach and Impact of Clearview AI's Facial Recognition Technology: Clearview AI's facial recognition technology can be both impressive and concerning, highlighting the importance of transparency and ethical considerations in its development and use, as well as the significance of individuals' digital footprints and the potential consequences of an online presence.

      The power and reach of technology, as demonstrated by Clearview AI, can be both impressive and concerning. Kashmir Hill, a journalist, uncovered the existence of Clearview AI's facial recognition technology and was intrigued by its potential, but also alarmed by its possible misuse. She was able to identify the key figures behind the company, Hoan Ton-That and Richard Schwartz, through their online presence. Ton-That's Internet trail included YouTube videos of him playing guitar and Twitter posts with an eclectic mix of identities. This case illustrates the importance of transparency and ethical considerations in the development and use of advanced technologies. Additionally, it highlights the significance of individuals' digital footprints and the potential consequences of one's online presence.

    • From struggling online to tech CEO: Self-taught AI developer Hoan Ton-That built Clearview AI's facial search engine using open-source research on the Internet.

      Hoan Ton-That, the founder of Clearview AI, came to America with dreams of making it big on the Internet but struggled to find success with various apps and ideas. His online presence was limited to a few social media accounts, which raised intrigue when Kashmir Hill, a journalist, discovered an old archived Twitter page. When they eventually met, Ton-That was quite different from what she expected, appearing more like a tech CEO than the character portrayed online. He taught himself the basics of AI-assisted facial recognition from open-source research on the Internet, which allowed him to build Clearview AI's facial search engine. Despite his unconventional background and online persona, Ton-That proved charismatic and open during their interview, sharing insights into his technology. His self-taught approach to AI and facial recognition highlights the transformative power of open-source research and the Internet in advancing technological innovation.

    • Collecting Data from Various Sources for Facial Recognition: Extensive data collection, including from social media, is essential for improving facial recognition technology, but ethical concerns and potential privacy invasions remain.

      The collection and use of large datasets were crucial factors in the success of facial recognition technology, as demonstrated by Ton-That's innovation in scraping images from the internet. This included photos from sources like Venmo, Facebook, LinkedIn, and Instagram. The legality of such data collection methods is still a gray area and has not been fully tested in court. Despite the ethical concerns and potential privacy invasions, the importance of extensive data for improving the accuracy and performance of AI systems, particularly in areas like facial recognition, cannot be ignored.

    • Clearview AI's Controversial Use of Publicly Available Faces: Clearview AI's facial recognition technology uses publicly available information to create a searchable database of faces, sparking debate over privacy versus security in the digital age.

      Clearview AI, a facial recognition technology company, is using publicly available information on the Internet to create a searchable database of faces. The argument made by the company's founder, Hoan Ton-That, is that there is a First Amendment right to publicly available information on the Internet, including people's faces. This technology, while controversial, has already fundamentally changed how privacy works. The implications of this technology are significant, as it can potentially reveal personal information such as names, addresses, and sensitive sources. The founder acknowledges the potential for misuse and copycats, but argues that the tool is necessary for law enforcement to prevent crimes against children. While the founder has since stated that he will ensure the tool remains in the hands of law enforcement and banks, the conversation raises important questions about the balance between privacy and security in the digital age.

    • The Debate on Facial Recognition Technology Continues: While Clearview AI intended to prevent child predation, privacy concerns and potential misuse cannot be ignored. Businesses use integrated systems for cost reduction and efficiency, while facial recognition technology remains a topic of ethical debate.

      While technology like Clearview AI may have noble intentions, such as preventing child predation, the potential for misuse and privacy concerns cannot be ignored. The conversation around Clearview AI was set to be a major story in early 2020 until the pandemic shifted the focus to health and safety concerns. However, the debate on the ethical use of facial recognition technology continues. In the business world, companies are turning to integrated systems like NetSuite to reduce costs and improve efficiency. Meanwhile, June's Journey offers an engaging game experience for those seeking a good mystery. The interview subject was taken aback by the public reaction to the Clearview AI story, highlighting the importance of transparency and consent in technological innovation. Despite the controversy, the use of facial recognition technology continues to be a topic of debate and exploration.

    • Facial recognition technology: Controversial uses and ethical dilemmas. Facial recognition technology, like Clearview AI, raises ethical concerns due to its ability to scrape public data for both positive and negative purposes. Despite regulatory investigations, the debate around its use continues, highlighting the need for clear guidelines and consent.

      Facial recognition technology, specifically search engines like Clearview AI and its copycats, continues to be a contentious issue despite regulatory investigations and concerns over privacy. These tools, which have the ability to scrape and analyze public data, have been used for both positive and negative purposes, such as identifying individuals involved in controversial events or protecting children from online exposure. However, the questionable upsides and ethical dilemmas surrounding their use remain complicated, with some individuals and organizations justifying their use on a case-by-case basis. The debate around these search engines was overshadowed by the COVID-19 pandemic, but their availability and usage continue to raise important questions about privacy, consent, and the potential misuse of technology.

    • Unexpected privacy invasions through facial recognition: Facial recognition technology can invade privacy in unexpected ways, leading to severe consequences such as job loss. Regulations are needed to prevent misuse and ethical concerns.

      Technology, specifically facial recognition, can easily invade people's privacy in unexpected ways. This was evident in the story of someone being denied entry to Madison Square Garden based on facial recognition. The misuse of this technology is not limited to corporations but also individuals, as seen in the case of a TikTok account exposing people's identities. The consequences of such invasions of privacy can be severe, leading to further privacy violations and even job loss. The potential solution to this issue could be limiting access to facial recognition technology to only the government and law enforcement with proper authorization. However, there is a risk of government abuse of power. It's essential to consider the ethical implications of these technologies and establish regulations to prevent privacy invasions.

    • Facial recognition technology raises privacy concerns: Facial recognition technology can misidentify individuals, leading to wrongful arrests and smear campaigns. It challenges our sense of anonymity and raises concerns about privacy in everyday life situations.

      Facial recognition technology raises significant privacy concerns. Students at protests wear masks to avoid identification, but high-resolution images can still reveal their identities. Misidentifications can lead to wrongful arrests and smear campaigns. Creators of these systems argue they're not making mistakes but ranking candidates, leaving the ultimate decision to humans. However, the potential misuse of this technology by both governments and corporations can lead to loss of privacy and even wrongful accusations. The risk is heightened in everyday life situations, where people may not expect their conversations or actions to be recorded and shared. This technology challenges our sense of anonymity and forces us to consider the potential consequences of our actions in the digital age.

    • Facial recognition technology raises privacy concerns: Facial recognition technology collects and stores billions of images without consent, raising complex privacy concerns. It's important for individuals, companies, and governments to address these issues as we navigate the future of this technology.

      With the widespread use of facial recognition technology, our actions and appearances in public spaces can potentially be recorded and stored in databases, making us all feel like celebrities with the downsides but without the upsides. This technology raises complex privacy concerns, and while some jurisdictions offer ways to opt-out, most people don't have that protection. Clearview AI, a prominent facial recognition company, has faced controversy over its collection and storage of billions of images without consent. The question of who should have control over our facial data and how it's used is a significant one, and it's important for individuals, companies, and governments to grapple with these issues as we navigate the future of this technology. For more in-depth analysis, tune in to the upcoming board meeting for Search Engine, where we'll discuss these topics and answer your questions in questionably transparent detail.

    • Join the Search Engine podcast meeting and gain access to benefits: Sign up for the Search Engine podcast meeting and receive access to exclusive content and upcoming episodes.

      If you're interested in joining the Search Engine podcast meeting, you'll need to sign up first at searchengine.show. As a bonus, you'll also gain access to other benefits. For those who are already subscribed, mark your calendars for May 31, 2024, for an upcoming episode. The Search Engine podcast is a production of Odyssey and Jigsaw Productions, created by PJ Vogt and Sruthi Pinnamaneni, and produced by Garrett Graham and Noah John. Fact-checking is handled by Holly Patton, and the theme music is composed and mixed by Armen Bazarian. Executive producers include Jenna Weiss-Berman and Leah Reis-Dennis, with support from the teams at Jigsaw and Odyssey. To listen to the podcast for free, download it on the Odyssey app or wherever you get your podcasts. The team thanks their agents and various team members for their contributions. Stay tuned for a double episode in two weeks.

    Recent Episodes from Search Engine

    Why didn’t Chris and Dan get into Berghain? (Part 2)

    We travel to Germany to trace techno's history from Detroit to Berlin. The story of how, after the Wall fell, Berlin exorcised its brutal past with a very strange, decade-long party. A mission that takes us all the way to the gates of Berghain.  Music Credits: Original composition in this episode by Armen Bazarian. Additional Tracks: Game One - Infiniti, Dead Man Watches The Clock - Marcel Dettmann & Ben Klock, The Call - Marcel Dettmann & Norman Nodge, Quicksand - Marcel Dettmann. Full playlist here. Sven von Thülen: https://soundcloud.com/svt // Der Klang der Familie Gesine Kühne: https://soundcloud.com/wannadosomething Support the show at searchengine.show! To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
    Search Engine
    June 26, 2024

    Why didn’t Chris and Dan get into Berghain? (Part 1)

    Two Americans embark on a quest: fly across an ocean to try to get into the most exclusive nightclub in the world – Berghain. A German techno palace where the line outside can last 8 hours, and the bouncers are merciless in their judgments. The club does not explain how it makes its decisions about who can enter, but one foolish podcaster will try to explain anyway.  Support the show: searchengine.show
    Search Engine
    June 21, 2024

    What does it feel like to believe in God?

    This week, we try to understand an experience that 74% of Americans routinely report having. The first of many conversations (perhaps?). This one, an interview with Zvika Krieger. Support the show: searchengine.show
    Search Engine
    June 14, 2024

    How much glue should you put in your pizza?

    An internet-breaking news story. As we told you last week, Google has begun offering AI-generated answers to search questions. But some answers, it turns out, are strange. Users were told, for instance, that glue was an appropriate ingredient for homemade pizza. We talk to reporter Katie Notopoulos, who baked and ate her own homemade glue pizza. Support the show: searchengine.show
    Search Engine
    May 31, 2024

    How do we survive the media apocalypse? (Part 2)

    Last week, Google announced a fundamental change to how the site will work, which will likely have dire effects for the news industry. When you use Google now, the site will often offer AI-generated summaries to you, instead of favoring human-written articles. We talk to Platformer’s Casey Newton about why this is happening, why publishers are nervous, and about a secret new internet you may not have heard of, a paradise to which we may all yet escape.   Support the show at searchengine.show! Search Engine - How do we survive the media apocalypse? (Part 1) Platformer - Google's broken link to the web 404 Media - Why Google is shit now
    Search Engine
    May 22, 2024

    Should this creepy search engine exist?

    After stumbling on a new kind of search engine for faces, we called privacy journalist Kashmir Hill. She’s been reporting on the very sudden and unregulated rise of these facial search engines. Here’s the story of the very first one, the mysterious person who made it, and the copycats it helped spawn. Support the show: searchengine.show

    What do trigger warnings actually do?

    A listener’s brother dies by suicide, and afterwards, she finds herself angered by trigger warnings about suicide. She wants to know — are these actually helping other people? Or is it just something we do because we think we’re supposed to? Support the show: searchengine.show

    Where's my flying car?

    Since not long after the car was invented, we have wanted to stick wings on them and fly them through the sky. This week, we interview writer Gideon Lewis-Kraus about the surprisingly long history of actual, working flying cars in America. Plus, what it's like to actually fly in a modern flying car. Read Gideon's article! Support the show!

    Do political yard signs actually do anything?

    It’s an election year and so Search Engine’s campaign desk is answering the questions you really want answers to: all the political yard signs in your neighbors’ yards … do they do anything besides make everyone like each other less? An experiment that definitively answers this question. Support the show!

    Why are there so many illegal weed stores in New York City? (Part 2)

    In part two of our story, we watch the state of New York try to pull off something we rarely see in America: a kind of reparations. A very ambitious dream encounters a thicket of details and complications. The whole time, cameras roll, broadcasting the meetings on YouTube. Help support the show!

    Related Episodes

    EP17 | Technology brings us closer together: how can I remind my children what to watch out for when making friends online? feat. Instructor 陳茵嵐

    [Episode highlights] Q1. Technology brings us closer together: The Internet brings many wonderful things into our lives. We can browse timely, novel, and interesting information; stay in touch with classmates and relatives, share our lives, and connect emotionally; and encounter many new people and things, making relationships more diverse and quicker to deepen, unconstrained by time and distance. Adults with mature judgment and more social experience may be cautious and know to "keep their distance," but children may treat online acquaintances as real or even close friends. The immediacy and location features of the Internet and digital devices make it easy to believe you understand someone you actually barely know, to meet in person right away, and to rush into the intimacy stage of a relationship (relationships generally develop through stages: contact, involvement/testing, intimacy, and deterioration). Q2. How can I remind my children what to watch out for when making friends online? For online friendships: "Stop, Look, Listen." Stop: 1. Think about and understand the nature of the Internet. 2. Don't casually disclose personal information. 3. Avoid accepting gifts or exchanging money. 4. Avoid meeting alone or on short notice. 5. Avoid sharing private photos with online friends (you have autonomy over your own body). Look: 1. Search for more relevant information. 2. Learn more about online-safety settings. 3. Practice critical thinking. Listen: 1. You can always find someone to talk to. 2. Keep up with new information on the topic. [Guest] Instructor 陳茵嵐, General Education Center, National Taiwan University of Arts. 📢 Want to learn more about information literacy and ethics? Come find us on the eTeacher website! 👉 eTeacher link: https://linktr.ee/funsurfing

    The importance of values in regulating emerging technology to protect human rights with Ed Santow


    In today’s episode no. 25, Edward Santow, Australia’s Human Rights Commissioner speaks to Reimagining Justice about one of many projects he is responsible for, namely the Commission’s Human Rights and Technology project.

    Whether you know a little or a lot about human rights or artificial intelligence, you will gain something from listening to our conversation about the most extensive consultation into AI and Human Rights anywhere in the world. Ed explains exactly what human rights are and why they should be protected, how technology is both enhancing and detracting from human rights and the best approach to take in regulating emerging technology in the future.

    We talked about protecting the rights of the most marginalized people, about automated decision-making and how to combat bias, and about something I found particularly fascinating: the tension between the universality of human rights, ubiquitous technology, and the differing cultural contexts and historical experiences that are shaping the principles that will guide both the development and application of technology.

    Ed Santow has been Human Rights Commissioner at the Australian Human Rights Commission since August 2016 and leads the Commission’s work on technology and human rights; refugees and migration; human rights issues affecting LGBTI people; counter-terrorism and national security; freedom of expression; freedom of religion; and implementing the Optional Protocol to the Convention Against Torture (OPCAT).

    Andrea Perry-Petersen – LinkedIn - Twitter @winkiepp – andreaperrypetersen.com.au

    Twitter - @ReimaginingJ

    Facebook – Reimagining Justice group