Self-Adaptation Through Reinforcement Learning Using a Feature Model

dc.contributor.author: Boulehouache, Soufiane
dc.contributor.author: Mazouzi, Smaine
dc.contributor.author: Ouareth, Selma
dc.date.accessioned: 2024-05-26T08:56:56Z
dc.date.available: 2024-05-26T08:56:56Z
dc.date.issued: 2022
dc.description.abstract: Typically, self-adaptation is achieved by implementing the MAPE-K control loop. Researchers agree that multiple control loops should be assigned when the system is complex and large-scale. Hierarchical control has proven to be a good compromise for achieving SAS goals, as it retains some degree of decentralization while also preserving a degree of centralization. The decentralized entities must be coordinated to ensure the consistency of the adaptation processes. The high cost of data transfer between coordinating entities may be an obstacle to system scalability. To resolve this problem, coordination should be defined only between entities that actually need to communicate. However, most current SASs rely on a static MAPE-K loop. In this article, the authors present a new method that allows changing the structure and behavior of the loops. The approach builds on exploration strategies for online reinforcement learning, using a feature model to define the adaptation space.
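The abstract's core idea, an adaptation space enumerated from a feature model and searched with an online RL exploration strategy, can be illustrated with a minimal sketch. The feature names, their alternatives, the epsilon-greedy strategy, and the stub reward below are all illustrative assumptions, not details taken from the article:

```python
import itertools
import random

# Hypothetical feature model (illustrative only): each feature maps
# to its set of mutually exclusive alternatives.
FEATURE_MODEL = {
    "monitoring": ["local", "global"],
    "planning": ["reactive", "proactive"],
    "loop_topology": ["centralized", "hierarchical"],
}

def adaptation_space(model):
    """Enumerate every configuration as the cross-product of feature choices."""
    names = sorted(model)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(model[n] for n in names))]

def epsilon_greedy(q_values, configs, epsilon, rng):
    """Explore a random configuration with probability epsilon, else exploit
    the configuration with the highest estimated value so far."""
    if rng.random() < epsilon:
        return rng.choice(configs)
    return max(configs,
               key=lambda c: q_values.get(tuple(sorted(c.items())), 0.0))

rng = random.Random(0)
configs = adaptation_space(FEATURE_MODEL)
q = {}
for step in range(100):
    cfg = epsilon_greedy(q, configs, epsilon=0.2, rng=rng)
    # Stub reward standing in for a measured adaptation outcome.
    reward = 1.0 if cfg["loop_topology"] == "hierarchical" else 0.0
    key = tuple(sorted(cfg.items()))
    q[key] = q.get(key, 0.0) + 0.1 * (reward - q.get(key, 0.0))
```

In a real SAS the reward would come from observed quality metrics after an adaptation, and the feature model would constrain which configurations are valid; the cross-product here assumes all combinations are allowed.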
dc.identifier.uri: http://dspace.univ-skikda.dz:4000/handle/123456789/1857
dc.language.iso: en
dc.publisher: International Journal of Organizational and Collective Intelligence, Volume 12, Issue 4
dc.title: Self-Adaptation Through Reinforcement Learning Using a Feature Model
dc.type: Article
Files
Original bundle
Name: Self-Adaptation_Through_Reinforcement_Learning_Usi.pdf
Size: 1.13 MB
Format: Adobe Portable Document Format