Please use this identifier to cite or link to this item: https://doi.org/10.1017/jfm.2022.476
DC Field                    Value
dc.title                    Optimizing low-Reynolds-number predation via optimal control and reinforcement learning
dc.contributor.author       Guangpu Zhu
dc.contributor.author       Wen-Zhen Fang
dc.contributor.author       Lailai Zhu
dc.date.accessioned         2024-09-09T00:18:33Z
dc.date.available           2024-09-09T00:18:33Z
dc.date.issued              2022-06-22
dc.identifier.citation      Guangpu Zhu, Wen-Zhen Fang, Lailai Zhu (2022-06-22). Optimizing low-Reynolds-number predation via optimal control and reinforcement learning. Journal of Fluid Mechanics 944: A3, 1-22. ScholarBank@NUS Repository. https://doi.org/10.1017/jfm.2022.476
dc.identifier.issn          0022-1120
dc.identifier.issn          1469-7645
dc.identifier.uri           https://scholarbank.nus.edu.sg/handle/10635/249687
dc.description.abstract     We seek the best stroke sequences of a finite-size swimming predator chasing a non-motile point or finite-size prey at low Reynolds number. We use optimal control to find the globally optimal solutions for the former and reinforcement learning (RL) for general situations. The predator is represented by a squirmer model that can translate forward and laterally, rotate and generate a stresslet flow. We identify the predator’s best squirming sequences for time-optimal (TO) and efficiency-optimal (EO) predation. For a point prey, the TO squirmer executing translational motions favours a two-fold L-shaped trajectory that enables it to exploit the disturbance flow for accelerated predation; using a stresslet mode significantly expedites the EO predation, allowing the predator to catch the prey faster yet with lower energy consumption and higher predatory efficiency; the predator can harness its stresslet disturbance flow to suck the prey towards itself; compared to a translating predator, its counterpart combining translation and rotation is less time-efficient, and the latter occasionally achieves TO predation by retreating in order to advance. We also adopt RL to reproduce the globally optimal predatory strategy of chasing a point prey, qualitatively capturing the crucial two-fold attribute of a TO path. Using a numerically emulated RL environment, we explore the dependence of the optimal predatory path on the size of the prey. Our results might provide useful information for the design of synthetic microswimmers, such as in vivo medical microrobots capable of capturing and approaching objects in viscous flows.
dc.rights                   Attribution-NonCommercial 4.0 International
dc.rights.uri               http://creativecommons.org/licenses/by-nc/4.0/
dc.subject                  micro-organism dynamics
dc.subject                  swimming/flying
dc.subject                  microscale transport
dc.type                     Article
dc.contributor.department   MECHANICAL ENGINEERING
dc.description.doi          10.1017/jfm.2022.476
dc.description.sourcetitle  Journal of Fluid Mechanics
dc.description.volume       944
dc.description.page         A3 1-22
dc.published.state          Published
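
Note on the methods summarized in the abstract above: the predator is modelled as a squirmer, whose standard two-mode form prescribes a tangential surface slip velocity u_theta = B1 sin(theta) + B2 sin(theta) cos(theta) and swims at speed U = 2*B1/3 in Stokes flow; the B2 mode sets the strength of the stresslet (force-dipole) far field. As a minimal illustrative sketch only, and not the paper's implementation, the Python snippet below advects a point prey in the far-field flow of a puller-type force dipole while the predator translates toward it, demonstrating how a stresslet disturbance flow can draw the prey in. All parameter values (p, mu, U, positions, capture radius) are hypothetical.

    import numpy as np

    def stresslet_flow(x, x0, d, p, mu):
        # Far-field velocity at x induced by a point force dipole (stresslet)
        # of strength p located at x0 and oriented along the unit vector d:
        #   u(r) = p / (8*pi*mu*r^2) * (3*(d . rhat)^2 - 1) * rhat,
        # the standard Stokes-dipole far field; p > 0 is a pusher, p < 0 a puller.
        r = x - x0
        rn = np.linalg.norm(r)
        rhat = r / rn
        return p / (8.0 * np.pi * mu * rn**2) * (3.0 * (d @ rhat)**2 - 1.0) * rhat

    # Hypothetical parameters, chosen only for demonstration.
    mu = 1.0                      # fluid viscosity
    p = -1.0                      # puller-type dipole: inflow along its axis
    d = np.array([1.0, 0.0])      # predator orientation (pointing at the prey)
    predator = np.array([0.0, 0.0])
    prey = np.array([3.0, 0.0])   # non-motile point prey on the predator's axis
    U, dt = 0.1, 1e-3             # predator swimming speed; Euler time step

    for step in range(50_000):
        # The point prey is a passive tracer advected by the disturbance flow.
        prey = prey + dt * stresslet_flow(prey, predator, d, p, mu)
        gap = prey - predator
        dist = np.linalg.norm(gap)
        if dist < 0.1:            # illustrative capture radius
            print(f"prey captured at t = {step * dt:.2f}")
            break
        # The predator swims straight at the prey at constant speed U.
        predator = predator + dt * U * gap / dist

With a puller-type dipole (p < 0) the axial inflow pulls the prey toward the predator, shortening the chase relative to pure translation; flipping the sign of p (a pusher) would instead push an on-axis prey away.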
Appears in Collections: Staff Publications, Elements

Files in This Item:
optimizing-low-reynolds-number-predation-via-optimal-control-and-reinforcement-learning.pdf | 1.65 MB | Adobe PDF | Access: CLOSED | Version: None
predator_prey.pdf | 8.26 MB | Adobe PDF | Access: OPEN | Version: Post-print

This item is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.