
    fairness-as-a-service

    Explore "fairness-as-a-service" with insightful episodes like "Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References" and "FaaS Architecture and Verifiable Fairness for ML Systems" from podcasts like "Tech Stories Tech Brief By HackerNoon" and more!

    Episodes (2)

    Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References

    This story was originally published on HackerNoon at: https://hackernoon.com/privacy-preserving-computation-of-fairness-for-ml-systems-acknowledgement-and-references.
    Discover Fairness as a Service (FaaS), an architecture and protocol ensuring algorithmic fairness without exposing the original dataset or model details.
    Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #ml-systems, #ml-fairness, #faas, #fairness-in-ai, #fairness-as-a-service, #fair-machine-learning, #fairness-computation, #cryptograms, and more.

    This story was written by: @ashumerie. Learn more about this writer by checking @ashumerie's about page, and for more stories, please visit hackernoon.com.

    Fairness as a Service (FaaS) revolutionizes algorithmic fairness audits by preserving privacy without accessing original datasets or model specifics. This paper presents FaaS as a trustworthy framework employing encrypted cryptograms and Zero Knowledge Proofs. Security guarantees, a proof-of-concept implementation, and performance experiments showcase FaaS as a promising avenue for calculating and verifying fairness in AI algorithms, addressing challenges in privacy, trust, and performance.
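The metrics FaaS computes and verifies are standard group-fairness measures. As a plain-data illustration only (FaaS performs the equivalent calculation over encrypted cryptograms rather than raw predictions), demographic parity compares positive-outcome rates across protected groups:

```python
# Illustrative sketch: demographic parity on raw predictions.
# In FaaS itself this computation happens over cryptograms, never raw data.

def demographic_parity_gap(outcomes):
    """outcomes maps each protected group to a list of binary predictions.

    Returns the gap between the highest and lowest positive-outcome rate;
    0.0 means perfect demographic parity across groups.
    """
    rates = {g: sum(preds) / len(preds) for g, preds in outcomes.items()}
    return max(rates.values()) - min(rates.values())

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1],  # 50% positive rate
})
print(gap)  # 0.25
```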

    FaaS Architecture and Verifiable Fairness for ML Systems

    This story was originally published on HackerNoon at: https://hackernoon.com/faas-architecture-and-verifiable-fairness-for-ml-systems.
    Discover the robust architecture of Fairness as a Service (FaaS), a groundbreaking system for trustworthy fairness audits in machine learning.
    Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #ml-systems, #ml-fairness, #fairness-as-a-service, #fair-machine-learning, #fairness-in-ai, #faas-architecture, #fairness-computation, #hackernoon-top-story, #hackernoon-es, #hackernoon-hi, #hackernoon-zh, #hackernoon-fr, #hackernoon-bn, #hackernoon-ru, #hackernoon-vi, #hackernoon-pt, #hackernoon-ja, #hackernoon-de, #hackernoon-ko, #hackernoon-tr, and more.

    This story was written by: @escholar. Learn more about this writer by checking @escholar's about page, and for more stories, please visit hackernoon.com.

    This section presents the architecture of Fairness as a Service (FaaS), a revolutionary system for ensuring trust in fairness audits within machine learning. The discussion covers the threat model, a protocol overview, and the three essential phases: setup, cryptogram generation, and fairness evaluation. FaaS introduces a robust approach that incorporates cryptographic proofs and verifiable steps, offering a secure foundation for fair evaluations in the ML landscape.
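The three phases can be sketched roughly as follows. This is a minimal stand-in, not the paper's construction: a SHA-256 hash commitment substitutes for FaaS's encrypted cryptograms and zero-knowledge proofs, and the variable names are assumptions for illustration.

```python
import hashlib
import json
import secrets

# Phase 1 -- setup: the model owner holds a private outcome table
# (protected group, binary prediction); the auditor never sees it directly.
outcome_table = [("group_a", 1), ("group_a", 0), ("group_b", 1), ("group_b", 1)]

def commit(record, nonce):
    """Hash commitment over one record (stand-in for a FaaS cryptogram)."""
    return hashlib.sha256(json.dumps(record).encode() + nonce).hexdigest()

# Phase 2 -- cryptogram generation: the owner commits to every record
# and publishes the commitments before any evaluation takes place.
nonces = [secrets.token_bytes(16) for _ in outcome_table]
cryptograms = [commit(r, n) for r, n in zip(outcome_table, nonces)]

# Phase 3 -- fairness evaluation: when records are opened for auditing,
# each one is checked against its published commitment, so a table that
# was altered after commitment is detected.
for record, nonce, cryptogram in zip(outcome_table, nonces, cryptograms):
    assert commit(record, nonce) == cryptogram, "tampered record"
```

The real protocol goes further: zero-knowledge proofs let the auditor verify well-formedness and compute the fairness metric without the records ever being opened in the clear.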