    cryptograms

    Explore "cryptograms" with insightful episodes like "Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References" and "Special Guest: Mary Seifert" from podcasts like ""Tech Stories Tech Brief By HackerNoon" and "Baseball, Books, and Banter: A Podcast by Nicole Asselin"" and more!

    Episodes (2)

    Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References

    This story was originally published on HackerNoon at: https://hackernoon.com/privacy-preserving-computation-of-fairness-for-ml-systems-acknowledgement-and-references.
    Discover Fairness as a Service (FaaS), an architecture and protocol ensuring algorithmic fairness without exposing the original dataset or model details.
    Check out more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also find exclusive content about #ml-systems, #ml-fairness, #faas, #fairness-in-ai, #fairness-as-a-service, #fair-machine-learning, #fairness-computation, #cryptograms, and more.

    This story was written by: @ashumerie. Learn more about this writer by checking @ashumerie's about page, and for more stories, please visit hackernoon.com.

    Fairness as a Service (FaaS) enables algorithmic fairness audits that preserve privacy: the auditor never accesses the original dataset or the model's internals. The paper presents FaaS as a trustworthy architecture and protocol built on encrypted cryptograms and Zero-Knowledge Proofs, and supports it with security guarantees, a proof-of-concept implementation, and performance experiments, showing FaaS to be a promising avenue for calculating and verifying the fairness of AI algorithms while addressing challenges in privacy, trust, and performance.
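    As a rough, hypothetical illustration of the underlying idea (not the protocol described in the paper), the Python sketch below has a model owner share only aggregate per-group outcome counts with an auditor, who computes a demographic parity gap from them. All names, groups, and numbers here are made up, and the actual FaaS design works with encrypted cryptograms plus Zero-Knowledge Proofs rather than the plaintext aggregates used in this sketch.

    ```python
    from typing import Dict, Tuple

    # Hypothetical aggregate report a model owner could hand to an auditor instead
    # of the raw dataset or model: for each protected group, how many instances
    # received a positive prediction and how many instances there were in total.
    # (In FaaS these values would be carried inside encrypted cryptograms with
    # Zero-Knowledge Proofs of well-formedness; plaintext counts are used here
    # only to keep the sketch self-contained.)
    GroupCounts = Dict[str, Tuple[int, int]]  # group -> (positive_predictions, total)


    def demographic_parity_gap(report: GroupCounts) -> float:
        """Largest difference in positive-prediction rate between any two groups."""
        rates = [positives / total for positives, total in report.values() if total > 0]
        return max(rates) - min(rates)


    if __name__ == "__main__":
        # Made-up aggregates for two illustrative groups.
        report = {
            "group_a": (480, 1000),  # 48% positive-prediction rate
            "group_b": (350, 1000),  # 35% positive-prediction rate
        }
        print(f"Demographic parity gap: {demographic_parity_gap(report):.2f}")
    ```

    The point this is meant to echo is that a group fairness metric of this kind only needs group-level tallies, which is what makes it plausible to compute and verify it over encrypted or otherwise privacy-protected summaries instead of raw records.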

    Special Guest: Mary Seifert