ANOTO: Improving Automated Negotiation via Offline-to-Online Reinforcement Learning

Siqi Chen*, Jianing Zhao, Kai Zhao, Gerhard Weiss, Fengyun Zhang, Ran Su, Yang Dong, Daqian Li, Kaiyou Lei

*Corresponding author for this work

Research output: Contribution to journal › Conference article in journal › Academic › peer-review

Abstract

Automated negotiation is a crucial component for establishing cooperation and collaboration within multi-agent systems. While reinforcement learning (RL)-based negotiating agents have achieved remarkable success in various scenarios, they still face limitations due to certain assumptions on which they are built. In this work, we propose a novel approach called ANOTO that improves negotiating agents' capabilities via offline-to-online RL. ANOTO enables a negotiating agent (1) to communicate with opponents using an end-to-end strategy that covers all negotiation actions, (2) to learn negotiation strategies from historical offline data without requiring active interaction, and (3) to improve the learned offline strategies rapidly and stably during the online optimization phase. Experimental results are reported for a range of negotiation scenarios against recent winning agents from the Automated Negotiating Agents Competition (ANAC).
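
To illustrate the offline-to-online pattern the abstract refers to, the following is a minimal, hypothetical sketch and not the authors' ANOTO implementation: it assumes a toy tabular Q-learning agent, a logged dataset of negotiation transitions (offline_transitions), and a generic NegotiationEnv-style environment, all of which are illustrative assumptions rather than details taken from the paper.

    # Hypothetical sketch of offline pretraining followed by online fine-tuning.
    # All names (QAgent, offline_transitions, env) are illustrative assumptions.
    import random
    import numpy as np

    class QAgent:
        """Toy Q-learning agent over discretized negotiation states."""
        def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99):
            self.q = np.zeros((n_states, n_actions))
            self.lr, self.gamma = lr, gamma

        def update(self, s, a, r, s_next, done):
            # Standard one-step temporal-difference update.
            target = r if done else r + self.gamma * self.q[s_next].max()
            self.q[s, a] += self.lr * (target - self.q[s, a])

        def act(self, s, eps=0.1):
            # Epsilon-greedy action selection over all negotiation actions.
            if random.random() < eps:
                return random.randrange(self.q.shape[1])
            return int(self.q[s].argmax())

    def offline_pretrain(agent, offline_transitions, epochs=10):
        # Phase 1: learn from logged negotiation data, no live interaction.
        for _ in range(epochs):
            for s, a, r, s_next, done in offline_transitions:
                agent.update(s, a, r, s_next, done)

    def online_finetune(agent, env, episodes=100):
        # Phase 2: continue improving the pretrained strategy through
        # live negotiations against an opponent.
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                a = agent.act(s)
                s_next, r, done = env.step(a)
                agent.update(s, a, r, s_next, done)
                s = s_next

The design point is only that the same value function is first fitted to historical data and then updated further online, which is the general offline-to-online workflow; ANOTO's actual representation, objective, and stabilization details are described in the paper itself.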
Original language: English
Pages (from-to): 2195-2197
Number of pages: 3
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 2024-May
Publication status: Published - 6 May 2024
Event: 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024 - Auckland, New Zealand
Duration: 6 May 2024 - 10 May 2024
https://www.aamas2024-conference.auckland.ac.nz/

Keywords

  • Automated negotiation
  • E-commerce
  • Reinforcement learning
