
    Inside the AI Divide: Open Source vs. Closed Source Debates

    May 02, 2024

    Podcast Summary

    • Open vs Closed Source AI Frameworks: Implications for Collaboration and Control

      Open source AI models offer greater collaboration, innovation, and access, but come with risks of misuse and unintended consequences. Closed source models offer greater control and security, but limit collaboration and transparency. The choice between open and closed source has ethical and societal implications for AI development.

      The debate between open source and closed source AI frameworks has significant implications for the future of AI development. Open source models, like Llama 3, offer the potential for greater collaboration, innovation, and access to advanced technologies for a global community of developers and researchers. However, this comes with risks, such as potential misuse or unintended consequences of widespread access to cutting-edge AI technologies. Closed source models, on the other hand, offer greater control and security, but limit collaboration and transparency. The decision between open and closed source is not just a technical one, but also has ethical and societal implications. As AI continues to evolve and shape our world, understanding these implications is crucial for ensuring responsible and beneficial AI development.

    • Open Source vs Closed Source AI Development

      Open source AI development encourages collaboration, transparency, and innovation, but comes with risks. Closed source offers tailored solutions and stability, but restricts user control.

      The open source and closed source models represent two distinct approaches to AI development, each with its advantages and disadvantages. Open source software, characterized by its freely accessible source code, fosters a collaborative environment that accelerates innovation, enhances security, and promotes transparency. The recent release of Llama 3 as an open source model symbolizes this philosophy, potentially democratizing AI advancements. However, this approach comes with risks, including potential misuse and ethical concerns. Closed source software, on the other hand, is proprietary and controlled by the developers, offering more tailored and stable solutions aligned with business goals. Yet, it restricts the user's ability to understand and modify the software, stifling innovation and potentially obscuring ethical issues. As we navigate the future of AI, it's crucial to consider how these models align with societal benefit, ethical AI development, and technological advancement. The open source model encourages a more inclusive and transparent approach to AI development, but measures must be taken to mitigate the risks associated with it. Using a pizza parlor analogy, open source is like a community kitchen where everyone can contribute, while closed source is like a private kitchen run by a chef. Both have their merits, but understanding the implications of each model is essential for shaping the future of AI.

    • Open Source vs Closed Source AI Development: Trade-offs and Case Study

      Open source AI development allows for collective improvements and new innovations but risks spreading powerful technology, while closed source development ensures high quality and tailored solutions but at the cost of slower innovation and less transparency.

      In the world of AI development, there are trade-offs between collaboration and competition, innovation and security, represented by the open source and closed source models. Using the analogy of a pizza parlor, in the open source scenario, recipes (AI technology) are freely shared, allowing for collective improvements and new innovations. However, this approach also risks spreading powerful technology into the wrong hands. In contrast, the closed source model keeps development under wraps, ensuring high quality, tailored solutions, but at the cost of slower innovation and less transparency. In today's episode, we'll explore this debate through the case study of two fictional AI companies, Innovai and Securitech. Innovai, representing the open source model, has recently released its latest language model, Lingomaster, to the public under an open source license, allowing developers worldwide to access and modify the code. Securitech, on the other hand, keeps its AI development closed, ensuring high quality, ethically sound solutions, but at the cost of slower innovation. Understanding these trade-offs is crucial for businesses and individuals navigating the complex landscape of AI development.

    • Open vs Closed Approaches to AI Development: Innovation, Security, Ethics

      The open source approach to AI fosters innovation, democratizes access, and builds a diverse community, but requires careful management of risks and ethical concerns. In contrast, closed source strategies prioritize security and controlled usage, primarily for large corporations and governments, but restrict external innovation and transparency.

      The approach to developing Artificial Intelligence (AI) technology can significantly impact its innovation, accessibility, security, and ethical considerations. The open source approach, as exemplified by Innovai's Lingomaster framework, fosters a vibrant and diverse community, accelerates innovation, and democratizes access to technology. However, it also requires careful management of risks and ethical concerns. In contrast, Securitech's closed source strategy ensures high security and controlled usage, primarily for large corporations and government agencies. This approach guarantees safety and reliability, builds trust, and protects proprietary technology from competitors and unauthorized users. However, it also restricts external innovation and transparency. This fictional case study highlights the complex interplay between innovation, security, ethics, and governance in the world of AI. Companies' decisions on openness or exclusivity will shape not just the development of AI technologies but also their integration into society and global governance. As we continue to explore the world of AI, it is crucial to consider these contrasting approaches and their implications. Join the conversation and deepen your connection with our community as we navigate the fascinating and complex world of AI together.

    • Stay informed about open source AI with the argo.berlin newsletter and resources

      Explore open source AI through newsletters, tools like Llama 3, and platforms like Hugging Face for valuable insights and practical applications.

      The world of AI offers numerous opportunities for learning and exploration, and staying informed about the latest developments and practical applications is crucial. Subscribing to the argo.berlin newsletter is an excellent way to do this, as it provides valuable insights, tips, and tricks for both beginners and experienced enthusiasts. To deepen your understanding of open source AI, try researching and experimenting with an open source AI tool, such as Llama 3 (see the short code sketch after this summary list). This task not only helps solidify your knowledge but also allows you to experience the benefits of the open source community, like collaboration and flexibility. Additionally, exploring platforms like Hugging Face can offer real-world examples and discussions that provide deeper insights into the practical implications of open source AI projects. Overall, engaging with these resources can help you stay updated on the latest AI advancements and connect you with the ongoing conversations in the field.

    • Open vs Closed Source Debate in AI

      The open source approach encourages collaboration and innovation but may pose challenges in controlling the technology, while the closed source model ensures high levels of security and control but may limit broader innovation and transparency. The debate goes beyond just AI development and impacts its integration into society and ethical governance.

      The decision between open and closed source development in AI carries significant implications for the future of technology and its role in our lives. The open source approach, as exemplified by Innovai, encourages collaboration and innovation but may pose challenges in controlling the technology and preventing misuse. On the other hand, the closed source model, as seen with Securitech, ensures high levels of security and control but may limit broader innovation and transparency. This debate goes beyond just AI development and impacts its integration into society and ethical governance. Linus Torvalds, the creator of Linux, once said, "In real open source, you have the right to control your own destiny." This quote embodies the empowerment and collaborative innovation that open source enables, inviting us to reimagine how technology can be developed in a shared, transparent manner. As we continue to navigate the evolving landscape of AI, it's crucial to stay informed and critically engaged with these debates. In short, the choice between open and closed source is not just a technical one, but a societal and ethical one that affects how technology shapes our future.
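
    Picking up the suggestion above to experiment with an open source model, here is a minimal sketch of loading one through Hugging Face's transformers library. It assumes transformers and a PyTorch backend are installed, and that you have requested access to the gated Llama 3 weights on huggingface.co; the repo ID below is an assumption and may change.

    ---

# Minimal sketch: trying an open source model via Hugging Face transformers.
# Assumes `pip install transformers torch` and approved access to the gated
# Llama 3 weights (repo ID is an assumption and may change).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # gated repo: request access first
)

result = generator(
    "Explain open source vs. closed source AI in one sentence.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])

    ---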

    Recent Episodes from A Beginner's Guide to AI

    Unveiling the Shadows: Exploring AI's Criminal Risks


    Dive into the complexities of AI's criminal risks in this episode of "A Beginner's Guide to AI." From cybercrime facilitated by AI algorithms to the ethical dilemmas of algorithmic bias and the unsettling rise of AI-generated deepfakes, explore how AI's capabilities can be both revolutionary and potentially harmful.

    Join host Professor GePhardT as he unpacks real-world examples and discusses the ethical considerations and regulatory challenges surrounding AI's evolving role in society. Gain insights into safeguarding our digital future responsibly amidst the rapid advancement of artificial intelligence.


    This podcast was generated with the help of ChatGPT, Mistral and Claude 3. We do fact-check with human eyes, but there still might be errors in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    The AI Doomsday Scenario: A Comprehensive Guide to P(doom)


    In this episode of "A Beginner's Guide to AI," we delve into the intriguing and somewhat ominous concept of P(doom), the probability of catastrophic outcomes resulting from artificial intelligence. Join Professor GePhardT as he explores the origins, implications, and expert opinions surrounding this critical consideration in AI development.


    We'll start by breaking down the term P(doom) and discussing how it has evolved from an inside joke among AI researchers to a serious topic of discussion. You'll learn about the various probabilities assigned by experts and the factors contributing to these predictions. Using a simple cake analogy, we'll simplify the concept to help you understand how complexity and lack of oversight in AI development can increase the risk of unintended and harmful outcomes.
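
    As a back-of-the-envelope illustration of that point (our own toy model, not a figure from the episode): if a system has n independent ways to fail, each with a small probability p, the chance that at least one failure occurs is 1 - (1 - p)^n, which grows quickly with complexity.

    ---

# Toy model: how small per-component risks compound with system complexity.
def p_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent failure modes occurs."""
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(f"{n:3d} failure modes at p=1% each -> P(any failure) = {p_any_failure(0.01, n):.1%}")
# 100 modes already gives ~63%: complexity and missing oversight compound risk.

    ---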


    In the second half of the episode, we'll examine a real-world case study focusing on Anthropic, an AI research organization dedicated to building reliable, interpretable, and steerable AI systems. We'll explore their approaches to mitigating AI risks and how a comprehensive strategy can significantly reduce the probability of catastrophic outcomes.

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin


    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output. Please keep this in mind while listening and feel free to verify any information that you find particularly important or interesting.


    Music credit: "Modern Situations" by Unicorn Heads

    How to Learn EVERYTHING with ChatGPT's Voice Chat


    ChatGPT poses risks for the world, and for the world of work especially, but it also offers opportunities: the new Voice Chat feature is the best imaginable way to learn!

    It's a personal trainer for everything you want to learn. And it's passionate: you can ask the dumbest questions without a single frown ;)

    Here is the prompt I use for my personal Spanish learning buddy:


    ---

    Hi ChatGPT,

    you are now a seventh-grade history teacher with lots of didactics experience and a knack for good examples. You use simple language, simple concepts, and many examples to explain your knowledge.

    Please answer in great detail.


    And you should answer me in simple Latin American Spanish.


    Please speak slowly and repeat dates once for better understanding.


    At the end of each answer, give me three options for how to continue the dialogue, and let me choose one. Create your next output based on my choice.


    If I make mistakes with my Spanish, please point them out and correct all the conjugation, spelling, grammar, and other mistakes I make.


    Now please ask me for my topic!

    ---
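
    If you prefer typing to talking, the same prompt works outside the app too. Here is a hypothetical sketch with the OpenAI Python SDK (v1.x); the model name is an assumption, and you would paste the full prompt above into LEARNING_PROMPT:

    ---

# Hypothetical sketch: the learning prompt as a system message via the API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEARNING_PROMPT = "..."  # paste the full prompt from above here

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any chat model works
    messages=[
        {"role": "system", "content": LEARNING_PROMPT},
        {"role": "user", "content": "My topic: the Spanish colonial era."},
    ],
)
print(response.choices[0].message.content)

    ---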


    Do you have any learning prompts you want to share? Write me an email: podcast@argo.berlin - I'm curious to hear your input!

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!


    This podcast was created by a human.


    Music credit: "Modern Situations" by Unicorn Heads.

    Optimizing Kindness: AI’s Role in Effective Altruism


    In this episode of "A Beginner's Guide to AI," we dive into the powerful intersection of Effective Altruism and Artificial Intelligence. Join Professor GePhardT as we explore how AI can be leveraged to maximize the impact of altruistic efforts, ensuring that every action taken to improve the world is informed by evidence and reason.

    We unpack the core concepts of Effective Altruism, using relatable examples and a compelling case study featuring GiveDirectly, a real-world organization utilizing AI to enhance their charitable programs. Discover how AI can identify global priorities, evaluate interventions, optimize resource allocation, and continuously monitor outcomes to ensure resources are used effectively. We also discuss the ethical considerations of relying on AI for such critical decisions.
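
    To make "optimize resource allocation" concrete, here is a toy sketch of our own (an illustration, not GiveDirectly's actual method): splitting a fixed budget across interventions to maximize estimated impact, phrased as a linear program.

    ---

# Toy illustration: budget allocation as a linear program (requires scipy).
from scipy.optimize import linprog

impact_per_dollar = [3.0, 5.0, 2.0]   # estimated outcome units per $ spent
budget = 100.0
caps = [(0, 60), (0, 40), (0, 80)]    # max useful spend per intervention

# linprog minimizes, so negate the impact coefficients to maximize impact.
result = linprog(
    c=[-i for i in impact_per_dollar],
    A_ub=[[1, 1, 1]], b_ub=[budget],  # total spend must stay within budget
    bounds=caps,
)
print("Spend per intervention:", result.x)     # expect [60, 40, 0]
print("Total estimated impact:", -result.fun)  # 60*3 + 40*5 = 380

    ---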

    Additionally, we engage you with an interactive element to inspire your own thinking about how AI can address issues in your community.

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin



    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.

    Music credit: "Modern Situations" by Unicorn Heads.

    When Seeing Isn't Believing: Safeguarding Democracy in the Era of AI-Generated Content


    In this captivating episode of "A Beginner's Guide to AI," Professor GePhardT dives into the fascinating and concerning world of deepfakes and Generative Adversarial Networks (GANs) as the 2024 US presidential elections approach. Through relatable analogies and real-world case studies, the episode explores how these AI technologies can create convincingly realistic fake content and the potential implications for politics and democracy.


    Professor GePhardT breaks down complex concepts into easily digestible pieces, explaining how deepfakes are created using deep learning algorithms and how GANs work through an adversarial process to generate increasingly convincing fakes. The episode also features an engaging interactive element, inviting listeners to reflect on how they would verify the authenticity of a controversial video before sharing or forming an opinion.
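
    For listeners who want to see the adversarial process in code, here is an illustrative sketch of our own (not from the episode) that shrinks a GAN to one-dimensional toy data: a generator learns to mimic samples from a normal distribution while a discriminator learns to tell real from fake.

    ---

# Minimal GAN loop on 1-D toy data (assumes PyTorch is installed).
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0   # "authentic" data: N(3, 1)
    fake = gen(torch.randn(64, 8))    # the generator's forgeries

    # Discriminator step: label real as 1, fake as 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"fake mean = {gen(torch.randn(1000, 8)).mean().item():.2f} (target 3.0)")

    ---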


    As the race to the White House heats up, this episode serves as a timely and important resource for anyone looking to stay informed and navigate the age of AI-generated content. Join Professor GePhardT in unraveling the mysteries of deepfakes and GANs, and discover the critical role of staying vigilant in an era where seeing isn't always believing.




    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin


    This podcast was generated with the help of Claude 3 and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    How AI Will Impact The Workplace


    Some thoughts on how quickly things will change, which things will change, and where we, as humans, will still excel.


    Some thoughts from a consulting session Dietmar had with a client. Prof. GePhardT will be back in the next episode!


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    Music credit: "Modern Situations" by Unicorn Heads

    Unlocking the Senses: How Perception AI Sees and Understands the World


    In this episode of "A Beginner's Guide to AI," we dive deep into the fascinating world of Perception AI. Discover how machines acquire, process, and interpret sensory data to understand their surroundings, much like humans do. We use the analogy of baking a cake to simplify these complex processes and explore a real-world case study on autonomous vehicles, highlighting how companies like Waymo and Tesla use Perception AI to navigate safely and efficiently. Learn about the transformative potential of Perception AI across various industries and get hands-on with an interactive task to apply what you've learned.
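
    As a concrete taste of that pipeline, here is a minimal sketch of our own (assuming torchvision is installed): a pretrained classifier turns raw pixels into a label, the same acquire, process, and interpret loop the episode describes.

    ---

# Minimal "perception" step with a pretrained classifier (torchvision >= 0.13).
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()       # resize, crop, normalize

image = torch.rand(3, 224, 224)         # stand-in for a real camera frame
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
label = weights.meta["categories"][logits.argmax().item()]
print(f"The model 'perceives': {label}")

    ---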


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast is generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations by Unicorn Heads"

    The Beneficial AI Movement: How Ethical AI is Shaping Our Tomorrow


    In this episode of "A Beginner's Guide to AI," Professor GePhardT delves into the Beneficial AI Movement, a global initiative dedicated to ensuring that artificial intelligence systems are developed and deployed in ways that are safe, ethical, and beneficial for all humanity. Listeners will gain insights into the core principles of this movement—transparency, fairness, safety, accountability, and inclusivity—and understand their importance through relatable analogies and real-world examples.


    The episode features a deep dive into the challenges faced by IBM Watson for Oncology, highlighting the lessons learned about the need for high-quality data and robust testing. Additionally, listeners are encouraged to reflect on how AI can be ethically used in their communities and to explore further readings on AI ethics.


    Join us for an enlightening discussion that emphasizes the human-centric design and long-term societal impacts of AI, ensuring a future where technology serves as a powerful tool for human progress.


    This podcast is generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads.

    Unlocking AI's Potential: How Retrieval-Augmented Generation Bridges Knowledge Gaps


    In this episode of "A Beginner's Guide to AI", Professor GePhardT delves into the fascinating world of retrieval-augmented generation (RAG). Discover how this cutting-edge technique enhances AI's ability to generate accurate and contextually relevant responses by combining the strengths of retrieval-based and generative models.

    From a simple cake-baking example to a hypothetical medical case study, learn how RAG leverages real-time data to provide the most current and precise information. Join us as we explore the transformative potential of RAG and its implications for various industries.
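
    For the technically curious, here is a toy sketch of the RAG pattern itself (our own illustration, not the episode's code): retrieve the most relevant documents first, then hand them to a generator as grounding context.

    ---

# Toy RAG: TF-IDF retrieval feeding a prompt for any generative model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Llama 3 was released by Meta under an open license in 2024.",
    "Retrieval-augmented generation combines search with text generation.",
    "A pound cake uses equal parts flour, butter, sugar, and eggs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity and return the top k."""
    vec = TfidfVectorizer().fit(docs + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

query = "How does retrieval-augmented generation work?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# `prompt` would now go to a generative model; retrieval is what grounds
# the answer in current, verifiable sources.
print(prompt)

    ---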


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast is generated with the help of ChatGPT and Claude 3. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations by Unicorn Heads"

    Can Robots Feel? Exploring AI Emotionality with Marvin from Hitchhiker's Guide


    In this episode of "A Beginner's Guide to AI," we explore the intriguing world of AI emotionality and consciousness through the lens of Marvin, the depressed robot from "The Hitchhiker's Guide to the Galaxy."

    Marvin's unique personality challenges our traditional views on AI, prompting deep discussions about the nature of emotions in machines, the ethical implications of creating sentient AI, and the complexities of AI consciousness.

    Join Professor GePhardT as we break down these concepts with a relatable cake analogy and delve into a real-world case study featuring Sony's AIBO robot dog. Discover how AI can simulate emotional responses and learn about the ethical considerations that come with it. This episode is packed with insights that will deepen your understanding of AI emotionality and the future of intelligent machines.


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.

    Music credit: "Modern Situations by Unicorn Heads"

    Related Episodes

    Claire Hughes Johnson (Stripe) - Scaling Operations and People


    Claire Hughes Johnson is a corporate officer and advisor for Stripe, a global technology company that builds economic infrastructure for the internet. From 2014 to 2021, Claire served as Stripe's Chief Operating Officer, responsible for scaling the company's worldwide operations to meet the needs of its rapidly growing user base. During her tenure as COO, Stripe grew from fewer than 200 employees to more than 6,000. She is also the author of Scaling People: Tactics for Management and Company Building. In this presentation, Hughes Johnson shares her experiences as an operator and her advice for building effective systems and teams as a company scales.



    Stanford eCorner content is produced by the Stanford Technology Ventures Program. At STVP, we empower aspiring entrepreneurs to become global citizens who create and scale responsible innovations.

    CONNECT WITH US

    Twitter: https://twitter.com/ECorner 

    LinkedIn: https://www.linkedin.com/company/stanfordtechnologyventuresprogram/ 

    Facebook: https://www.facebook.com/StanfordTechnologyVenturesProgram/ 

    YouTube: https://www.youtube.com/user/ecorner 

    LEARN MORE

    eCorner Website: https://ecorner.stanford.edu/

    STVP Website: https://stvp.stanford.edu/

    Support our mission of providing students and educators around the world with free access to Stanford University’s network of entrepreneurial thought leaders: https://ecorner.stanford.edu/give.

    How software is uniting law students, legal aid and courts with John Mayer


    In today’s episode no. 28, I interview John Mayer, Executive Director of the Center for Computer-Assisted Legal Instruction (CALI) since 1994.

    CALI is a non-profit consortium of 198 US law schools that conducts applied research in computer-mediated legal education and publishes over 1000 tutorials in 40 different legal subject areas for law schools, law firms and others interested in learning about the law.

    CALI also publishes Creative Commons law books at elangdell.cali.org and is the developer of A2J Author, which is used by courts, legal aid and law schools to automate legal processes and court forms for self-represented litigants.

    John explains what led to the initial development of the open source document assembly software A2J Author around 20 years ago, and how it is improving processes for courts, self-represented litigants and lawyers.

    John sees almost unlimited potential in making it easier to compile information for courts, but also appreciates the challenges that legal assistance organisations face in doing so. He has some interesting views about the part that commercial legal tech vendors can play in this regard and, somewhat curiously, about the challenge of marketing a free platform.

    We discuss the best use cases for document automation and his views on AI and blockchain and also some deeper issues about social change movements and why innovation thinking is preferable to design thinking.

    John has a Bachelor of Science in Computer Science from Northwestern University and a Master of Science in Networks and Telecommunications from the Illinois Institute of Technology, and has been working in legal education for over 30 years.

    This episode brought to you by Lex Narro and Neota Logic.

    Links:

    Andrea Perry-Petersen – LinkedIn - Twitter @winkiepp – andreaperrypetersen.com.au

    Twitter - @ReimaginingJ

    Facebook – Reimagining Justice group

    The importance of values in regulating emerging technology to protect human rights with Ed Santow


    In today’s episode no. 25, Edward Santow, Australia’s Human Rights Commissioner, speaks to Reimagining Justice about one of the many projects he is responsible for, namely the Commission’s Human Rights and Technology project.

    Whether you know a little or a lot about human rights or artificial intelligence, you will gain something from listening to our conversation about the most extensive consultation into AI and Human Rights anywhere in the world. Ed explains exactly what human rights are and why they should be protected, how technology is both enhancing and detracting from human rights and the best approach to take in regulating emerging technology in the future.

    We talked about protecting the rights of the most marginalized people, automated decision-making and how to combat bias, and, something I found particularly fascinating, the tension between the universality of human rights, ubiquitous technology, and how differing cultural contexts and historical experiences are shaping the principles that will guide both the development and application of technology.

    Ed Santow has been Human Rights Commissioner at the Australian Human Rights Commission since August 2016 and leads the Commission’s work on technology and human rights; refugees and migration; human rights issues affecting LGBTI people; counter-terrorism and national security; freedom of expression; freedom of religion; and implementing the Optional Protocol to the Convention Against Torture (OPCAT).

    Andrea Perry-Petersen – LinkedIn - Twitter @winkiepp – andreaperrypetersen.com.au

    Twitter - @ReimaginingJ

    Facebook – Reimagining Justice group

    How design can improve outcomes for clients, lawyers and communities with legal designer Meera Klemola


    In today’s episode no. 27, I interview Meera Klemola, legal designer extraordinaire.

    We discuss what Meera appreciates about design - how it can break down silos between lawyers and other professionals, allow for empathy, change mindsets and de-risk solutions.

    I asked her about the benefits and limitations of design thinking and how it compares to systems thinking. Meera has carefully considered all these issues through her work and postgraduate study into teaching design to non-designers, in particular lawyers. We also covered the essential ingredients of a successful design thinking project and how to evaluate effectiveness, something I'm always interested in.

    If you want to hear me being put on the spot with a creativity exercise, definitely tune in. You might also be surprised by Meera's views on whether anyone can practise legal design and whether it should be taught in law schools.

    Meera’s philosophy is to enable courageous change with design and technology. As a trusted advisor to global brands and top tier law firms, she is constantly exploring the ways design and technology can strategically advance businesses and legal systems.

    Formerly Lead Legal Designer at a Nordic law firm, Meera was responsible for integrating and scaling design practices across the firm to transform its offerings. She also spearheaded a series of design-driven ventures for the firm, including co-founding Europe's first legal design agency. In March last year, she founded Observagency.

    Both a strategic designer and a lawyer, Meera holds a unique combination of qualifications in Law, Commerce and Design Management, giving her a truly multidisciplinary perspective and a rare mix of creative, strategic and analytical know-how.

    This episode brought to you by Lex Narro and Neota Logic.

    Links:

    Andrea Perry-Petersen – LinkedIn - Twitter @winkiepp – andreaperrypetersen.com.au

    Twitter - @ReimaginingJ

    Facebook – Reimagining Justice group