A Regulation Enforcement Solution for Multi-agent Reinforcement Learning
Published In
- General Chairs: Edith Elkind, Manuela Veloso
- Program Chairs: Noa Agmon, Matthew E. Taylor
- Publisher: International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC
Qualifiers
- Research-article