Journal of Open Source Software: papers tagged "Reinforcement Learning"
Feed: https://joss.theoj.org (updated 2023-09-02)

ElectricGrid.jl - A Julia-based modeling and simulation tool for power electronics-driven electric energy grids
  Status: accepted, v1.0.0 (submitted 2023-04-28; published 2023-09-02)
  Authors: Oliver Wallscheid (0000-0001-9362-8777), Daniel Weber (0000-0003-3367-5998), Septimus Boshoff, Marvin Meyer (0009-0008-2879-7118), Oliver Schweins (Chair of Power Electronics and Electrical Drives, Paderborn University, Paderborn, Germany); Sebastian Peitz (0000-0002-3389-793X), Jan Stenner, Vikas Chidananda (Chair of Data Science for Engineering, Paderborn University, Paderborn, Germany)
  DOI: 10.21105/joss.05616 (archive: https://doi.org/10.5281/zenodo.8297533)
  Languages: Julia
  PDF: https://joss.theoj.org/papers/10.21105/joss.05616.pdf
  Keywords: Electric Grids, Microgrids, Reinforcement Learning, Energy Systems, Simulation, Testing, Control

Ethical Smart Grid: a Gym environment for learning ethical behaviours
  Status: accepted, v1.0.0 (submitted 2023-04-05; published 2023-08-25)
  Authors: Clément Scheirlinck, Rémy Chaput (0000-0002-2233-7566), Salima Hassas (Univ Lyon, UCBL, CNRS, INSA Lyon, Centrale Lyon, Univ Lyon 2, LIRIS, UMR5205, F-69622 Villeurbanne, France)
  DOI: 10.21105/joss.05410 (archive: https://doi.org/10.5281/zenodo.8239411)
  Languages: Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.05410.pdf
  Keywords: Reinforcement Learning, Machine Ethics, Smart Grid, Multi-Agent System, OpenAI Gym

graphenv: a Python library for reinforcement learning on graph search spaces
  Status: accepted, v0.0.5 (submitted 2022-07-21; published 2022-09-05)
  Authors: David Biagioni (0000-0001-6140-1957), Charles Edison Tripp (0000-0002-5867-3561), Struan Clark (0000-0003-0078-6560), Dmitry Duplyakin (0000-0001-5132-0168) (Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA); Jeffrey Law (0000-0003-2828-1273), Peter C. St. John (0000-0002-7928-3722) (Biosciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA)
  DOI: 10.21105/joss.04621 (archive: https://doi.org/10.5281/zenodo.7030161)
  Languages: Python, Jupyter Notebook
  PDF: https://joss.theoj.org/papers/10.21105/joss.04621.pdf
  Keywords: reinforcement learning, graph search, combinatorial optimization

OTTO: A Python package to simulate, solve and visualize the source-tracking POMDP
  Status: accepted, v1.0 (submitted 2022-03-07; published 2022-06-16)
  Authors: Aurore Loisy (0000-0002-8089-8636), Christophe Eloy (0000-0003-4114-7263) (Aix Marseille Univ, CNRS, Centrale Marseille, IRPHE, Marseille, France)
  DOI: 10.21105/joss.04266 (archive: https://doi.org/10.5281/zenodo.6651884)
  Languages: Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.04266.pdf
  Keywords: olfactory search, source tracking, POMDP, reinforcement learning

pymdp: A Python library for active inference in discrete state spaces
  Status: accepted, v0.0.4 (submitted 2022-01-14; published 2022-05-04)
  Authors: Conor Heins (Max Planck Institute of Animal Behavior, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, Konstanz, Germany; Department of Biology, University of Konstanz, Germany; VERSES Research Lab, Los Angeles, California, USA); Beren Millidge (VERSES Research Lab; MRC Brain Networks Dynamics Unit, University of Oxford, UK); Daphne Demekas (Department of Computing, Imperial College London, UK); Brennan Klein (0000-0001-8326-5044) (VERSES Research Lab; Network Science Institute, Northeastern University, Boston, MA, USA; Laboratory for the Modeling of Biological and Socio-Technical Systems, Northeastern University, Boston, USA); Karl Friston (Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK); Iain D. Couzin (Max Planck Institute of Animal Behavior; Centre for the Advanced Study of Collective Behaviour; University of Konstanz); Alexander Tschantz (VERSES Research Lab; Sussex AI Group, Department of Informatics, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK)
  DOI: 10.21105/joss.04098 (archive: https://doi.org/10.5281/zenodo.6484849)
  Languages: Python, MATLAB
  PDF: https://joss.theoj.org/papers/10.21105/joss.04098.pdf
  Keywords: active inference, Markov Decision Process, POMDP, MDP, Reinforcement Learning, Artificial Intelligence, Bayesian inference, free energy principle

gym-saturation: an OpenAI Gym environment for saturation provers
  Status: accepted, v0.1.0 (submitted 2021-10-01; published 2022-03-03)
  Authors: Boris Shminke (0000-0002-1291-9896) (Laboratoire J.A. Dieudonné, CNRS and Université Côte d'Azur, France)
  DOI: 10.21105/joss.03849 (archive: https://doi.org/10.5281/zenodo.6324282)
  Languages: Python, OpenEdge ABL
  PDF: https://joss.theoj.org/papers/10.21105/joss.03849.pdf
  Keywords: OpenAI Gym, automated theorem prover, saturation prover, reinforcement learning

Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning
  Status: accepted, 0.1.2 (submitted 2021-06-10; published 2021-08-23)
  Authors: Edward Rusu (0000-0003-1033-439X), Ruben Glatt (Lawrence Livermore National Laboratory)
  DOI: 10.21105/joss.03424 (archive: https://doi.org/10.5281/zenodo.5196791)
  Languages: Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.03424.pdf
  Keywords: agent-based simulation, multi-agent reinforcement learning, machine learning, agent-based modeling

Learning Simulator: A simulation software for animal and human learning
  Status: accepted, 1.0.0 (submitted 2020-12-04; published 2021-02-24)
  Authors: Markus Jonsson (0000-0003-1242-3599) (Centre for Cultural Evolution, Stockholm University, Stockholm, Sweden); Stefano Ghirlanda (0000-0002-7270-9612) (Centre for Cultural Evolution, Stockholm University; Department of Psychology, Brooklyn College and Graduate Center, CUNY, New York, NY, USA); Johan Lind (0000-0002-4159-6926) (Centre for Cultural Evolution, Stockholm University); Vera Vinken (Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom); Magnus Enquist (Centre for Cultural Evolution and Department of Zoology, Stockholm University, Sweden)
  DOI: 10.21105/joss.02891 (archive: https://doi.org/10.5281/zenodo.4544535)
  Languages: Emacs Lisp, Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.02891.pdf
  Keywords: associative learning, reinforcement learning, behavior, mathematical model, simulation, gui

gym-electric-motor (GEM): A Python toolbox for the simulation of electric drive systems
  Status: accepted, v0.2.1 (submitted 2020-05-29; published 2021-02-07)
  Authors: Praneeth Balakrishna, Gerrit Book, Wilhelm Kirchgässner (0000-0001-9490-1843), Maximilian Schenke (0000-0001-5427-9527), Arne Traue, Oliver Wallscheid (0000-0001-9362-8777) (Department of Power Electronics and Electrical Drives, Paderborn University, Germany)
  DOI: 10.21105/joss.02498 (archive: https://doi.org/10.5281/zenodo.4355691)
  Languages: Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.02498.pdf
  Keywords: electric drive control, electric motors, OpenAI Gym, power electronics, reinforcement learning

OMG: A Scalable and Flexible Simulation and Testing Environment Toolbox for Intelligent Microgrid Control
  Status: accepted, v0.1.3 (submitted 2020-05-26; published 2020-10-05)
  Authors: Stefan Heid, Eyke Hüllermeier (Chair of Intelligent Systems and Machine Learning, Paderborn University, Paderborn, Germany); Daniel Weber, Henrik Bode, Oliver Wallscheid (0000-0001-9362-8777) (Chair of Power Electronics and Electrical Drives, Paderborn University, Paderborn, Germany)
  DOI: 10.21105/joss.02435 (archive: https://doi.org/10.5281/zenodo.4041278)
  Languages: Modelica, Python
  PDF: https://joss.theoj.org/papers/10.21105/joss.02435.pdf
  Keywords: OpenModelica, Microgrids, Reinforcement Learning, Energy Systems, Simulation, Testing, Control