    fairness-in-ai

Explore "fairness-in-ai" with insightful episodes like "Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References", "FaaS Architecture and Verifiable Fairness for ML Systems", and "Fairness in AI: Navigating Complex Ethical AI Dilemmas with Beena Ammanath" from podcasts like "Tech Stories Tech Brief By HackerNoon" and "Machine Learning Tech Brief By HackerNoon", and more!

    Episodes (3)

Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References

    This story was originally published on HackerNoon at: https://hackernoon.com/privacy-preserving-computation-of-fairness-for-ml-systems-acknowledgement-and-references.
    Discover Fairness as a Service (FaaS), an architecture and protocol ensuring algorithmic fairness without exposing the original dataset or model details.
    Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #ml-systems, #ml-fairness, #faas, #fairness-in-ai, #fairness-as-a-service, #fair-machine-learning, #fairness-computation, #cryptograms, and more.

    This story was written by: @ashumerie. Learn more about this writer by checking @ashumerie's about page, and for more stories, please visit hackernoon.com.

    Fairness as a Service (FaaS) revolutionizes algorithmic fairness audits by preserving privacy without accessing original datasets or model specifics. This paper presents FaaS as a trustworthy framework employing encrypted cryptograms and Zero Knowledge Proofs. Security guarantees, a proof-of-concept implementation, and performance experiments showcase FaaS as a promising avenue for calculating and verifying fairness in AI algorithms, addressing challenges in privacy, trust, and performance.
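The fairness value that a FaaS audit ultimately computes is a standard group-fairness metric. As a loose illustration (the names and the choice of metric are assumptions, not from the paper), demographic parity difference compares positive-prediction rates across two groups:

```python
# Hypothetical sketch of the kind of group-fairness value a FaaS-style
# audit computes: demographic parity difference, i.e. the absolute gap
# in positive-prediction rates between groups.
def demographic_parity_difference(predictions, groups):
    """Return |P(pred=1 | group a) - P(pred=1 | group b)|."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5 (0.75 vs 0.25)
```

The point of FaaS is that this value can be computed and verified from cryptograms without the auditor ever seeing `preds` or `grps` in the clear.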

FaaS Architecture and Verifiable Fairness for ML Systems

    This story was originally published on HackerNoon at: https://hackernoon.com/faas-architecture-and-verifiable-fairness-for-ml-systems.
    Discover the robust architecture of Fairness as a Service (FaaS), a groundbreaking system for trustworthy fairness audits in machine learning.
    Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #ml-systems, #ml-fairness, #fairness-as-a-service, #fair-machine-learning, #fairness-in-ai, #faas-architecture, #fairness-computation, #hackernoon-top-story, #hackernoon-es, #hackernoon-hi, #hackernoon-zh, #hackernoon-fr, #hackernoon-bn, #hackernoon-ru, #hackernoon-vi, #hackernoon-pt, #hackernoon-ja, #hackernoon-de, #hackernoon-ko, #hackernoon-tr, and more.

    This story was written by: @escholar. Learn more about this writer by checking @escholar's about page, and for more stories, please visit hackernoon.com.

    This section unfolds the architecture of Fairness as a Service (FaaS), a revolutionary system for ensuring trust in fairness audits within machine learning. The discussion encompasses the threat model, protocol overview, and the essential phases: setup, cryptogram generation, and fairness evaluation. FaaS introduces a robust approach, incorporating cryptographic proofs and verifiable steps, offering a secure foundation for fair evaluations in the ML landscape.
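The three phases can be sketched end to end. This is a deliberately loose stand-in, assuming hash commitments in place of the paper's encrypted cryptograms and zero-knowledge proofs; all function names are illustrative:

```python
import hashlib
import secrets

def setup():
    """Setup phase: agree on public parameters (here, just a session salt)."""
    return secrets.token_hex(16)

def generate_cryptogram(outcome, group, salt):
    """Cryptogram generation: commit to one (outcome, group) record.
    A hash commitment stands in for the paper's encrypted cryptogram."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{outcome}|{group}|{nonce}|{salt}".encode()).hexdigest()
    return {"commitment": digest, "opening": (outcome, group, nonce)}

def evaluate_fairness(cryptograms, salt):
    """Fairness evaluation: check each opening against its commitment
    (the verifiable step), then compute positive rates per group."""
    counts, positives = {}, {}
    for c in cryptograms:
        outcome, group, nonce = c["opening"]
        expected = hashlib.sha256(f"{outcome}|{group}|{nonce}|{salt}".encode()).hexdigest()
        assert expected == c["commitment"], "tampered record"
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / counts[g] for g in counts}

salt = setup()
records = [(1, "a"), (0, "a"), (1, "b"), (1, "b")]
crypts = [generate_cryptogram(o, g, salt) for o, g in records]
print(evaluate_fairness(crypts, salt))  # {'a': 0.5, 'b': 1.0}
```

Unlike this sketch, the real protocol never reveals the plaintext openings to the auditor; zero-knowledge proofs let each phase be verified without them.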

Fairness in AI: Navigating Complex Ethical AI Dilemmas with Beena Ammanath

    This story was originally published on HackerNoon at: https://hackernoon.com/fairness-in-ai-navigating-complex-ethical-ai-dilemmas-with-beena-ammanath.
Trustworthy AI aims to provide a holistic framework to identify important questions to ask when developing or using AI.
    Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #ai, #ai-ethics, #responsible-ai, #stakeholder-framework, #longview, #fairness-in-ai, #trustworthy-ai, #ethical-ai, and more.

    This story was written by: @linked_do. Learn more about this writer by checking @linked_do's about page, and for more stories, please visit hackernoon.com.

Trustworthy AI aims to provide a holistic framework to identify important questions to ask when developing or using AI. Trustworthy AI should be fair and impartial, robust and reliable, transparent, explainable, secure, safe, accountable, responsible, and privacy-preserving.