Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
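To give a flavor of what the paper's title refers to, the following is a minimal, self-contained sketch of a gradient-based adversarial attack — a hand-rolled FGSM (fast gradient sign method) step against a toy fixed linear classifier in plain NumPy. It deliberately does not use Foolbox's actual API; the weights, input, and epsilon are illustrative values chosen so the attack flips the prediction.

```python
import numpy as np

# Hypothetical toy example (not Foolbox's API): one FGSM step against a
# fixed linear classifier, illustrating the kind of attack such
# libraries benchmark.
w = np.array([1.0, -2.0, 3.0, 0.5])   # fixed weights of a linear model
x = np.array([0.2, -0.1, 0.3, 1.0])   # clean input

score = w @ x                          # 1.8 > 0 -> predicted class +1
y = np.sign(score)                     # treat the prediction as the label

# Gradient of the logistic loss log(1 + exp(-y * (w @ x))) w.r.t. x:
grad_x = -y * w / (1.0 + np.exp(y * score))

# FGSM step: perturb x by eps in the sign direction of the gradient,
# i.e. the L-infinity-bounded step that maximally increases the loss.
eps = 1.0
x_adv = x + eps * np.sign(grad_x)

print(np.sign(w @ x))      # 1.0  (clean input scored as class +1)
print(np.sign(w @ x_adv))  # -1.0 (adversarial input flips the prediction)
```

The perturbation is bounded in the L-infinity norm by `eps`, which is the standard threat model FGSM targets.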

Submitted 10 August 2020 · Published 27 September 2020
Review

Editor: @terrytangyuan
Reviewers: @GregaVrbancic, @ethanwharris

Authors

Jonas Rauber (0000-0001-6795-9441), Roland Zimmermann, Matthias Bethge, Wieland Brendel

Citation

Rauber et al., (2020). Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. Journal of Open Source Software, 5(53), 2607, https://doi.org/10.21105/joss.02607

@article{Rauber2020,
  doi       = {10.21105/joss.02607},
  url       = {https://doi.org/10.21105/joss.02607},
  year      = {2020},
  publisher = {The Open Journal},
  volume    = {5},
  number    = {53},
  pages     = {2607},
  author    = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
  title     = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
  journal   = {Journal of Open Source Software}
}
Tags

python, machine learning, adversarial attacks, neural networks, pytorch, tensorflow, jax, keras, eagerpy

License

Authors of JOSS papers retain copyright.

This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN 2475-9066