Smart City Gnosys

Smart city article details

Title: Bidding Strategy For Two-Sided Electricity Markets: A Reinforcement Learning Based Framework
ID_Doc: 11968
Authors: Pedasingu B.S.; Subramanian E.; Bichpuriya Y.; Sarangan V.; Mahilong N.
Year: 2020
Published: BuildSys 2020 - Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
DOI: http://dx.doi.org/10.1145/3408308.3427976
Abstract: We aim to increase the revenue or reduce the purchase cost of a given market participant in a double-sided, day-ahead, wholesale electricity market serving a smart city. Using an operations-research-based market clearing mechanism and an attention-based time series forecaster as sub-modules, we build a holistic interactive system. Through this system, we discover better bidding strategies for a market participant using reinforcement learning (RL). We relax several assumptions made in the existing literature in order to make the problem setting more relevant to real life. Our Markov Decision Process (MDP) formulation enables us to tackle action space explosion and also to compute optimal actions across time-steps in parallel. Our RL framework is generic enough to be used by either a generator or a consumer participating in the electricity market. We study the efficacy of the proposed RL-based bidding framework from the perspective of a generator as well as a buyer on real-world day-ahead electricity market data obtained from the European Power Exchange (EPEX). We compare the performance of our RL-based bidding framework against three baselines: (a) an ideal but unrealizable bidding strategy; (b) a realizable approximate version of the ideal strategy; and (c) historical performance as found in the logs. Under both perspectives, we find that our RL-based framework is closer to the ideal strategy than the other baselines. Further, the RL-based framework improves the average daily revenue of the generator by nearly €7,200 (€2.64 M per year) and €9,000 (€3.28 M per year) over the realizable ideal and historical strategies, respectively. When used on behalf of a buyer, it reduces the average daily procurement cost by nearly €2,700 (€0.97 M per year) and €57,200 (€52.63 M per year) over the realizable ideal and historical strategies, respectively. We also observe that our RL-based framework automatically adapts its actions to changes in the market power of the participant.
© 2020 ACM.
Author Keywords: Bidding; Electricity Markets; Forecasting; Optimization; Reinforcement Learning
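
The abstract describes learning a separate bidding decision for each hourly time-step of the day-ahead market, which is what lets the optimal actions be computed across time-steps in parallel. A minimal sketch of that idea is below, using simple epsilon-greedy Q-learning over a discretized bid-price grid against a toy clearing-price model. The price curve, action grid, and capacity figures are illustrative assumptions, not the paper's actual forecaster, clearing mechanism, or data.

```python
import random

random.seed(0)

BID_PRICES = [20.0, 30.0, 40.0, 50.0, 60.0]  # EUR/MWh action grid (assumed)
CAPACITY_MWH = 100.0                         # generator's offered volume (assumed)

def clearing_price(hour):
    """Toy stand-in for the market-clearing sub-module: a fixed
    peak/off-peak daily price curve with small random noise."""
    base = 42.5 if 8 <= hour < 20 else 27.5
    return base + random.uniform(-2.0, 2.0)

def revenue(bid, price):
    """Uniform-price auction: the offer is accepted only if the bid does
    not exceed the clearing price; accepted volume earns the clearing price."""
    return CAPACITY_MWH * price if bid <= price else 0.0

def train(episodes=2000, eps=0.1, alpha=0.1):
    # One independent value table per hourly time-step, so the 24 hourly
    # bidding decisions can be learned independently (and hence in parallel).
    q = [[0.0] * len(BID_PRICES) for _ in range(24)]
    for _ in range(episodes):
        for hour in range(24):
            if random.random() < eps:                       # explore
                a = random.randrange(len(BID_PRICES))
            else:                                           # exploit
                a = max(range(len(BID_PRICES)), key=lambda i: q[hour][i])
            r = revenue(BID_PRICES[a], clearing_price(hour))
            # Incremental update of the estimated expected revenue.
            q[hour][a] += alpha * (r - q[hour][a])
    return q

q = train()
# Greedy policy: the learned bid for each of the 24 hours.
policy = [BID_PRICES[max(range(len(BID_PRICES)), key=lambda i: q[h][i])]
          for h in range(24)]
```

In this price-taker toy setting the learner converges toward low bids that are always accepted; the paper's framework additionally handles the buyer's side and adapts to the participant's market power, which this sketch does not attempt to model.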


Similar Articles


Id | Similarity | Authors | Title | Published
41557 | 0.891 | Alsolami M.; Alferidi A.; Lami B.; Ben Slama S. | Peer-To-Peer Trading In Smart Grid With Demand Response And Grid Outage Using Deep Reinforcement Learning | Ain Shams Engineering Journal, 14, 12 (2023)
29194 | 0.860 | Hamouda M.R.; Amara F.; Oviedo J.C.; Rueda L.; Toquica D. | Historical-Based Bidding Strategy In Transactive Energy Market For Household: Data Driven Analysis | Ain Shams Engineering Journal, 14, 12 (2023)
35921 | 0.859 | Tang Z.; Xie H.; Du C.; Liu Y.; Khalaf O.I.; Allimuthu U.K. | Machine Learning Assisted Energy Optimization In Smart Grid For Smart City Applications | Journal of Interconnection Networks, 22 (2022)
7676 | 0.852 | Bousia A.; Daskalopulu A.; Papageorgiou E.I. | An Auction Pricing Model For Energy Trading In Electric Vehicle Networks | Electronics (Switzerland), 12, 14 (2023)