Epsilon Noising Method
A type of adversarial attack
Generating an adversarial image
import numpy as np
import torch

def enm_attack(image, epsilon):
    # Generate a noise matrix with the same shape as the image,
    # with values drawn uniformly at random from [-epsilon, epsilon]
    epsilon_mat = np.random.uniform(-epsilon, epsilon, size=image.shape)
    # Create the attack image by perturbing each pixel of the input image
    eps_image = image.detach().numpy() + epsilon_mat
    # Clip eps_image to keep pixel values in the [0, 1] range
    eps_image = torch.from_numpy(eps_image).float()
    eps_image = torch.clamp(eps_image, 0, 1)
    return eps_image
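A minimal usage sketch, assuming a trained PyTorch classifier `model` and a [0, 1]-scaled input tensor `image` of shape (C, H, W) (both names are illustrative, not part of the method):

# `model` and `image` are assumed: a trained classifier and a [0, 1] tensor
adv_image = enm_attack(image, epsilon=0.1)
with torch.no_grad():
    clean_pred = model(image.unsqueeze(0)).argmax(dim=1).item()
    adv_pred = model(adv_image.unsqueeze(0)).argmax(dim=1).item()
print("clean prediction:", clean_pred, "adversarial prediction:", adv_pred)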
Clipping a value
Clipping (clamping) restricts a value to a given range: values below the lower bound are replaced by the lower bound, values above the upper bound by the upper bound, and values inside the range are left unchanged.
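For instance, torch.clamp used on the perturbed pixel values behaves as follows:

import torch

x = torch.tensor([-0.3, 0.5, 1.2])
print(torch.clamp(x, 0, 1))  # values clipped into [0, 1]: 0.0, 0.5, 1.0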
Effect on accuracy
Adding noise to an image tends to degrade a model's predictions: the larger the epsilon, the lower the classification accuracy. Eventually, with full noise (large epsilon), the perturbed image is dominated by the noise and accuracy drops toward chance level. The sketch below shows one way to measure this effect.
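A sketch of measuring accuracy as a function of epsilon, assuming a hypothetical trained `model` and a PyTorch `test_loader` yielding (images, labels) batches (both names are assumptions for illustration):

import torch

def accuracy_under_noise(model, test_loader, epsilon):
    # Classification accuracy when every test image is perturbed
    # with the Epsilon Noising Method before prediction
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in test_loader:
            # Attack each image in the batch individually
            adv = torch.stack([enm_attack(img, epsilon) for img in images])
            preds = model(adv).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total

# Accuracy is expected to decrease as epsilon grows
for eps in [0.0, 0.05, 0.1, 0.3]:
    print(eps, accuracy_under_noise(model, test_loader, eps))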
Plausibility
To determine whether an adversarial sample is "close enough" to the original sample, we can measure the distance between the two images. With this method, each pixel is shifted by at most epsilon, so the L-infinity distance between the adversarial and original image is bounded by epsilon (clamping to [0, 1] can only shrink the difference).
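A minimal sketch of this check, assuming `image` is the [0, 1]-scaled tensor from the earlier example:

import torch

def linf_distance(original, adversarial):
    # Largest absolute per-pixel difference between the two images
    return (adversarial - original).abs().max().item()

adv_image = enm_attack(image, epsilon=0.1)
# By construction the perturbation is bounded by epsilon
assert linf_distance(image, adv_image) <= 0.1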