Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
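
As context for the title, the snippet below sketches the kind of robustness benchmark Foolbox is built for, using its PyTorch backend: wrap a trained model, run an attack at several perturbation budgets, and read off robust accuracy per budget. It is a minimal sketch following the usage pattern in the Foolbox documentation; the model (ResNet-18), attack (LinfPGD), and epsilon values are illustrative assumptions, and call signatures should be checked against the installed Foolbox release.

    # Minimal robustness benchmark with Foolbox's PyTorch backend (illustrative sketch).
    import torchvision.models as models
    import foolbox as fb

    # Wrap an off-the-shelf ImageNet classifier; the model choice is an assumption.
    model = models.resnet18(pretrained=True).eval()
    preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

    # A small batch of sample images shipped with Foolbox, plus clean accuracy.
    images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
    print("clean accuracy:", fb.utils.accuracy(fmodel, images, labels))

    # L-infinity PGD at several perturbation budgets (epsilon values are arbitrary).
    attack = fb.attacks.LinfPGD()
    epsilons = [0.0, 0.002, 0.01, 0.03]
    raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=epsilons)

    # Robust accuracy per epsilon: fraction of inputs the attack failed to flip.
    robust_accuracy = 1 - is_adv.float32().mean(axis=-1)
    for eps, acc in zip(epsilons, robust_accuracy):
        print(f"Linf eps = {eps}: robust accuracy = {acc.item():.3f}")

The TensorFlow and JAX backends follow the same pattern through their respective model wrappers (fb.TensorFlowModel, fb.JAXModel).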

Languages: Python, JavaScript, Jupyter Notebook
Submitted: 10 August 2020 · Published: 27 September 2020
Review

Editor: @terrytangyuan
Reviewers: @GregaVrbancic, @ethanwharris

Authors

Jonas Rauber (ORCID: 0000-0001-6795-9441), Roland Zimmermann, Matthias Bethge, Wieland Brendel

Citation

Rauber et al. (2020). Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. Journal of Open Source Software, 5(53), 2607. https://doi.org/10.21105/joss.02607

Tags

python · machine learning · adversarial attacks · neural networks · pytorch · tensorflow · jax · keras · eagerpy

License

Authors of JOSS papers retain copyright.

This work is licensed under a Creative Commons Attribution 4.0 International License.
