Abstract
Automated negotiation is a crucial component for establishing cooperation and collaboration within multi-agent systems. While reinforcement learning (RL)-based negotiating agents have achieved remarkable success in various scenarios, they remain limited by the restrictive assumptions on which they are built. In this work, we propose a novel approach called ANOTO that improves negotiating agents' abilities via offline-to-online RL. ANOTO enables a negotiating agent (1) to communicate with opponents using an end-to-end strategy that covers all negotiation actions, (2) to learn negotiation strategies from historical offline data without requiring active interactions, and (3) to enhance the optimization process during the online phase, facilitating rapid and stable performance improvements over the strategies learned offline. We provide experimental results on a range of negotiation scenarios against recent winning agents from the Automated Negotiating Agents Competition (ANAC).
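The following is a minimal, hypothetical sketch of the offline-to-online RL pipeline described above. ANOTO's actual architecture, action encoding, and learning algorithms are not specified in this abstract; all names (`offline_pretrain`, `online_finetune`), the tabular Q-learning, and the toy environment below are illustrative assumptions only.

```python
import random
from collections import defaultdict

# Negotiation actions are abstracted to discrete offer indices (0..9);
# states are abstracted to the negotiation round bucketed into 5 phases.
N_ACTIONS, N_PHASES = 10, 5

def offline_pretrain(dataset, alpha=0.1, gamma=0.95):
    """Fit a tabular Q-function from logged negotiation traces
    (state, action, reward, next_state) without any live interaction."""
    q = defaultdict(lambda: [0.0] * N_ACTIONS)
    for state, action, reward, next_state in dataset:
        target = reward + gamma * max(q[next_state])
        q[state][action] += alpha * (target - q[state][action])
    return q

def online_finetune(q, env_step, episodes=50, alpha=0.05, gamma=0.95, eps=0.1):
    """Continue improving the offline-learned Q-function with live
    epsilon-greedy interaction against an opponent simulator."""
    for _ in range(episodes):
        state = 0
        for _ in range(N_PHASES):
            if random.random() < eps:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            reward, next_state = env_step(state, action)
            target = reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    # Synthetic logged data and a toy opponent model, purely for illustration.
    logged = [(random.randrange(N_PHASES), random.randrange(N_ACTIONS),
               random.random(), random.randrange(N_PHASES)) for _ in range(1000)]
    toy_env = lambda s, a: (a / N_ACTIONS, min(s + 1, N_PHASES - 1))
    q = online_finetune(offline_pretrain(logged), toy_env)
```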
| Original language | English |
| --- | --- |
| Pages (from-to) | 2195-2197 |
| Number of pages | 3 |
| Journal | Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS |
| Volume | 2024-May |
| Publication status | Published - 6 May 2024 |
| Event | 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024 - Auckland, New Zealand. Duration: 6 May 2024 → 10 May 2024. https://www.aamas2024-conference.auckland.ac.nz/ |
Keywords
- Automated negotiation
- E-commerce
- Reinforcement learning