Self-Adaptation Through Reinforcement Learning Using a Feature Model
dc.contributor.author | Boulehouache, Soufiane | |
dc.contributor.author | Mazouzi, Smaine | |
dc.contributor.author | Ouareth, Selma | |
dc.date.accessioned | 2024-05-26T08:56:56Z | |
dc.date.available | 2024-05-26T08:56:56Z | |
dc.date.issued | 2022 | |
dc.description.abstract | Typically, self-adaptation is achieved by implementing the MAPE-K control loop. Researchers agree that multiple control loops should be used when the system is complex and large-scale. Hierarchical control has proven to be a good compromise for achieving self-adaptive system (SAS) goals, as it provides some degree of decentralization while retaining a degree of centralization. The decentralized entities must be coordinated to ensure the consistency of adaptation processes, but the high cost of data transfer between coordinating entities can be an obstacle to system scalability. To resolve this problem, coordination should be defined only between entities that actually need to communicate. However, most current SASs rely on static MAPE-K loops. In this article, the authors present a new method that allows changing the structure and behavior of the loops. Their approach builds on exploration strategies for online reinforcement learning, using a feature model to define the adaptation space. | |
dc.identifier.uri | http://dspace.univ-skikda.dz:4000/handle/123456789/1857 | |
dc.language.iso | en | |
dc.publisher | International Journal of Organizational and Collective Intelligence Volume 12 • Issue 4 | |
dc.title | Self-Adaptation Through Reinforcement Learning Using a Feature Model | |
dc.type | Article |
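The abstract describes online reinforcement learning whose action space is the set of valid configurations of a feature model. As a minimal sketch of that idea (not the article's actual algorithm), the snippet below enumerates configurations from a small hypothetical feature model and applies epsilon-greedy exploration with an incremental value update; all feature names and parameters are illustrative assumptions.

```python
import random

# Hypothetical feature model: each feature offers alternative values.
# These names are illustrative only, not taken from the article.
FEATURE_MODEL = {
    "monitoring_rate": ["low", "high"],
    "planner": ["reactive", "proactive"],
}


def configurations(model):
    """Enumerate all configurations of the model (the adaptation space)."""
    configs = [{}]
    for feature, options in model.items():
        configs = [dict(c, **{feature: o}) for c in configs for o in options]
    return configs


class EpsilonGreedyAdapter:
    """Online RL over the adaptation space with epsilon-greedy exploration."""

    def __init__(self, model, epsilon=0.1, alpha=0.5, seed=0):
        self.space = configurations(model)
        self.q = {i: 0.0 for i in range(len(self.space))}  # value estimates
        self.epsilon = epsilon  # exploration probability
        self.alpha = alpha      # learning rate
        self.rng = random.Random(seed)

    def select(self):
        """Pick a configuration index: explore with prob. epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.space))
        return max(self.q, key=self.q.get)

    def update(self, idx, reward):
        """Move the value estimate of configuration idx toward the observed reward."""
        self.q[idx] += self.alpha * (reward - self.q[idx])
```

With two binary features the adaptation space has four configurations; each adaptation step would select one, apply it to the managed system, observe a reward (e.g. a quality metric), and update the estimate.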
Files
Original bundle
- Name:
- Self-Adaptation_Through_Reinforcement_Learning_Usi.pdf
- Size:
- 1.13 MB
- Format:
- Adobe Portable Document Format
License bundle
- Name:
- license.txt
- Size:
- 1.71 KB
- Format:
- Description:
- Item-specific license agreed to upon submission