apns-218.mp4

Topic: Adversarial machine learning, specifically targeting semantic segmentation networks (e.g., PSPNet, ICNet).

Context of the paper: This paper explores the vulnerability of deep learning-based image segmentation models (like those used in autonomous driving) to adversarial patches—small, intentionally designed images that can cause a model to misclassify specific objects or entire regions of a scene. The authors demonstrate that a small patch placed in a scene can cause a segmentation model to fail globally or ignore critical objects (like pedestrians or traffic signs).

What the video likely shows: Files like "apns-218.mp4" typically show a side-by-side comparison of:
- The original input video.
- The adversarial patch being applied to the scene.
- The resulting segmentation map produced by the neural network.

The number usually denotes a specific test case, scene, or figure number referenced within the study.

Where to find it: You can often find these supplementary videos on platforms like arXiv (under the "Ancillary files" section) or the researchers' project GitHub repositories.
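To make the attack setup concrete, here is a minimal sketch of how such a comparison could be reproduced: paste a patch into an input frame and diff the segmentation maps before and after. This is not the paper's code; torchvision's DeepLabV3 stands in for PSPNet/ICNet (which torchvision does not ship), the tensors are dummy data rather than driving-scene frames, and the patch is assumed to have already been optimized against the target model.

```python
# Sketch: apply a (pre-optimized) adversarial patch to an image and compare
# segmentation predictions before and after. DeepLabV3 is a stand-in for the
# PSPNet/ICNet models discussed in the paper. Requires torchvision >= 0.13.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

def apply_patch(image, patch, top, left):
    """Paste a patch tensor (C, h, w) into a copy of an image tensor (C, H, W)."""
    patched = image.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch
    return patched

@torch.no_grad()
def segment(image):
    """Return the per-pixel class map (H, W) for a single image (C, H, W)."""
    logits = model(image.unsqueeze(0))["out"]  # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)

# Dummy inputs for illustration; real use would load a video frame and
# normalize it with ImageNet statistics before inference.
image = torch.rand(3, 512, 512)
patch = torch.rand(3, 64, 64)  # stands in for an optimized adversarial patch

clean_pred = segment(image)
adv_pred = segment(apply_patch(image, patch, top=200, left=200))

# Fraction of pixels whose predicted class changed after adding the patch --
# a crude proxy for the "global failure" effect the videos visualize.
changed = (clean_pred != adv_pred).float().mean().item()
print(f"{changed:.1%} of pixels changed class")
```

With a random patch the changed-pixel fraction stays near the patch's own footprint; the striking effect shown in the videos comes from patches optimized end-to-end against the segmentation loss, which can corrupt predictions far outside the patched region.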