Welcome!

I'm Sean. Welcome to my personal website and blog. You can find my latest articles below. If you have any questions, feedback, or corrections, please let me know at sean.firstcloud [at] gmail.com. Thanks!

Welcome! (ようこそ!)

My name is Saitō Shoun (齊藤初雲; "Shoun" is read like "Sean"). Thank you for visiting this site. My latest articles are below. For questions, feedback, or corrections, please contact me at sean.firstcloud [at] gmail.com.

Posts

Making explainability algorithms more robust with GANs

Explainability algorithms like LIME can be fooled into reporting that machine learning models exhibiting biased, harmful behavior are innocuous. We attempt to improve LIME's robustness to defend against these attacks.
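For a rough sense of the kind of attack being defended against, here is a minimal sketch of a "scaffolding" adversary: a wrapper that shows its biased behavior on realistic inputs but routes the off-manifold samples LIME generates during perturbation to an innocuous surrogate. All function names and the toy out-of-distribution test below are illustrative assumptions, not the post's actual setup.

```python
import numpy as np

def biased_model(x):
    # Stand-in for a harmful model, e.g. one keying on a sensitive feature.
    return (x[:, 0] > 0).astype(float)

def innocuous_model(x):
    # Stand-in for a harmless-looking model shown to the explainer.
    return (x[:, 1] > 0).astype(float)

def is_off_manifold(x, train_mean, train_std, z_thresh=3.0):
    # Toy out-of-distribution check: LIME's perturbed samples often fall
    # outside the training distribution, so a simple z-score test suffices
    # for this illustration.
    z = np.abs((x - train_mean) / train_std)
    return (z > z_thresh).any(axis=1)

def adversarial_model(x, train_mean, train_std):
    # Route realistic inputs to the biased model and LIME's synthetic
    # perturbations to the innocuous one, hiding the bias from the explainer.
    ood = is_off_manifold(x, train_mean, train_std)
    return np.where(ood, innocuous_model(x), biased_model(x))
```

Making LIME robust then amounts to generating perturbation samples the adversary cannot distinguish from real data, which is where a GAN can come in.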

What happens when we mask out the "unimportant" parts of an image?

Integrated Gradients determines which pixels are important for the prediction of a deep neural network. When we keep only the most important regions of an image based on these attributions, can the model still perform well?
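As a rough sketch of what such an experiment can look like (not necessarily the post's exact setup): compute Integrated Gradients attributions with Captum against a pretrained ResNet-18, keep only the top 10% of pixels by attribution magnitude, and re-run the classifier. The model choice, placeholder input, and 90th-percentile threshold are all assumptions for illustration.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights
from captum.attr import IntegratedGradients

# Pretrained classifier; `image` is assumed to be a normalized
# (1, 3, 224, 224) tensor you supply (random here as a placeholder).
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
image = torch.randn(1, 3, 224, 224)

# Attribute the predicted class to input pixels with Integrated Gradients,
# using an all-zero baseline.
pred_class = model(image).argmax(dim=1).item()
ig = IntegratedGradients(model)
attributions = ig.attribute(image, baselines=torch.zeros_like(image),
                            target=pred_class, n_steps=50)

# Keep only the top 10% most important pixels (attribution magnitude summed
# over channels) and zero out the rest.
importance = attributions.abs().sum(dim=1, keepdim=True)  # (1, 1, H, W)
threshold = importance.flatten().quantile(0.9)
masked_image = image * (importance >= threshold).float()

# Does the prediction survive masking?
masked_pred = model(masked_image).argmax(dim=1).item()
print(pred_class, masked_pred)
```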

Explaining Away Attacks Against Neural Networks

In this article, we will demonstrate how to fool a neural network into classifying an image of an elephant as a ping-pong ball. We will also see whether we can defend against such attacks by explaining the model's decisions.
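The article walks through the details; as a taste, here is a minimal targeted FGSM-style sketch, one of the simplest attacks of this kind and not necessarily the one used in the post. The placeholder input, the perturbation budget, and the assumption that index 722 is "ping-pong ball" in the ImageNet-1k label mapping are all illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
image = torch.randn(1, 3, 224, 224)  # placeholder for a normalized elephant image
target_class = torch.tensor([722])   # assumed "ping-pong ball" index in ImageNet-1k

# Step the image *down* the loss gradient for the target class so the
# model's prediction moves toward "ping-pong ball".
image.requires_grad_(True)
loss = F.cross_entropy(model(image), target_class)
loss.backward()

epsilon = 0.03  # perturbation budget; tune for your input scaling
adversarial = (image - epsilon * image.grad.sign()).detach()
print(model(adversarial).argmax(dim=1).item())  # ideally prints 722
```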

Explaining Away Attacks Against Neural Networks (Poster)

The poster I presented at SEA MLS 2019 and MLSys 2020 for the blog post titled "Explaining Away Attacks Against Neural Networks".

KonnichiWorld!

An introduction to this blog.