Journal of Open Source Software — papers tagged "explainable AI"
Feed ID: tag:joss.theoj.org,2005:/papers/tagged/explainable%20AI
Site: https://joss.theoj.org
Feed updated: 2024-01-09T15:42:20Z

Scarlet: Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic
  Entry: tag:joss.theoj.org,2005:Paper/3810 (published 2024-01-09, updated 2024-01-10)
  Status: accepted, v0.0.1; submitted 2022-08-10 18:27:58 UTC, accepted 2024-01-09 15:42:20 UTC
  Published in: JOSS Volume 9, Issue 93, page 5052 (2024)
  Authors:
    Ritam Raha — University of Antwerp, Antwerp, Belgium; CNRS, LaBRI and Université de Bordeaux, France (ORCID 0000-0003-1467-1182)
    Rajarshi Roy — Max Planck Institute for Software Systems, Kaiserslautern, Germany (ORCID 0000-0002-0202-1169)
    Nathanaël Fijalkow — CNRS, LaBRI and Université de Bordeaux, France (ORCID 0000-0002-6576-4680)
    Daniel Neider — TU Dortmund University, Dortmund, Germany; Center for Trustworthy Data Science and Security, University Alliance Ruhr, Germany (ORCID 0000-0001-9276-6342)
  DOI: 10.21105/joss.05052
  Software archive: https://doi.org/10.5281/zenodo.10419514
  Languages: Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.05052.pdf
  Keywords: linear temporal logic (LTL), explainable AI (XAI), specification mining, formal methods

DIANNA: Deep Insight And Neural Network Analysis
  Entry: tag:joss.theoj.org,2005:Paper/3433 (published 2022-12-15, updated 2022-12-16)
  Status: accepted, v0.4.0; submitted 2022-03-22 16:05:29 UTC, accepted 2022-12-15 20:02:32 UTC
  Published in: JOSS Volume 7, Issue 80, page 4493 (2022)
  Authors:
    Elena Ranguelova — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-9834-1756)
    Christiaan Meijer — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-5529-5761)
    Leon Oostrum — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0001-8724-8372)
    Yang Liu — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-1966-8460)
    Patrick Bos — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-6033-960X)
    Giulia Crocioni — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-0823-0121)
    Matthieu Laneuville — SURF, Amsterdam, the Netherlands (ORCID 0000-0001-6022-0046)
    Bryan Cardenas Guevara — SURF, Amsterdam, the Netherlands (ORCID 0000-0001-9793-910X)
    Rena Bakhshi — Netherlands eScience Center, Amsterdam, the Netherlands (ORCID 0000-0002-2932-3028)
    Damian Podareanu — SURF, Amsterdam, the Netherlands (ORCID 0000-0002-4207-8725)
  DOI: 10.21105/joss.04493
  Software archive: https://doi.org/10.5281/zenodo.7387004
  Languages: Python, Jupyter Notebook
  PDF: https://joss.theoj.org/papers/10.21105/joss.04493.pdf
  Keywords: explainable AI, deep neural networks, ONNX, benchmark datasets

GSAreport: Easy to Use Global Sensitivity Reporting
  Entry: tag:joss.theoj.org,2005:Paper/3819 (published 2022-10-14, updated 2022-10-15)
  Status: accepted, v1.0.0; submitted 2022-08-15 10:46:50 UTC, accepted 2022-10-14 16:13:22 UTC
  Published in: JOSS Volume 7, Issue 78, page 4721 (2022)
  Authors:
    Bas van Stein — LIACS, Leiden University, The Netherlands (ORCID 0000-0002-0013-7969)
    Elena Raponi — Technical University of Munich, Germany (ORCID 0000-0001-6841-7409)
  DOI: 10.21105/joss.04721
  Software archive: https://doi.org/10.5281/zenodo.7191341
  Languages: Python, Jupyter Notebook
  PDF: https://joss.theoj.org/papers/10.21105/joss.04721.pdf
  Keywords: global sensitivity analysis, explainable AI

shapr: An R-package for explaining machine learning models with dependence-aware Shapley values
  Entry: tag:joss.theoj.org,2005:Paper/1397 (published 2020-02-05, updated 2021-02-15)
  Status: accepted, v0.1.0; submitted 2019-12-10 14:11:31 UTC, accepted 2020-02-05 18:14:54 UTC
  Published in: JOSS Volume 5, Issue 46, page 2027
  Authors:
    Nikolai Sellereite — Norwegian Computing Center (ORCID 0000-0002-4671-0337)
    Martin Jullum — Norwegian Computing Center (ORCID 0000-0003-3908-5155)
  DOI: 10.21105/joss.02027
  Software archive: https://doi.org/10.5281/zenodo.3641831
  Languages: R, Python, C++
  PDF: https://joss.theoj.org/papers/10.21105/joss.02027.pdf
  Keywords: explainable AI, interpretable machine learning, Shapley values, feature dependence