Please use this identifier to cite or link to this item:
https://doi.org/10.1017/jfm.2022.476
Title: Optimizing low-Reynolds-number predation via optimal control and reinforcement learning
Authors: Guangpu Zhu; Wen-Zhen Fang; Lailai Zhu
Keywords: micro-organism dynamics; swimming/flying; microscale transport
Issue Date: 22-Jun-2022
Citation: Guangpu Zhu, Wen-Zhen Fang, Lailai Zhu (2022-06-22). Optimizing low-Reynolds-number predation via optimal control and reinforcement learning. Journal of Fluid Mechanics 944: A3, 1-22. ScholarBank@NUS Repository. https://doi.org/10.1017/jfm.2022.476
Rights: Attribution-NonCommercial 4.0 International
Abstract: We seek the best stroke sequences of a finite-size swimming predator chasing a non-motile point or finite-size prey at low Reynolds number. We use optimal control to seek the globally optimal solutions for the point prey and reinforcement learning (RL) for general situations. The predator is represented by a squirmer model that can translate forward and laterally, rotate and generate a stresslet flow. We identify the predator's best squirming sequences to achieve time-optimal (TO) and efficiency-optimal (EO) predation. For a point prey, the TO squirmer executing translational motions favours a two-fold L-shaped trajectory that enables it to exploit the disturbance flow for accelerated predation; using a stresslet mode significantly expedites the EO predation, allowing the predator to catch the prey faster yet with lower energy consumption and higher predatory efficiency; the predator can harness its stresslet disturbance flow to suck the prey towards itself; compared to a translating predator, its counterpart combining translation and rotation is less time-efficient, and the latter occasionally achieves TO predation by retreating in order to advance. We also adopt RL to reproduce the globally optimal predatory strategy of chasing a point prey, qualitatively capturing the crucial two-fold attribute of a TO path. Using a numerically emulated RL environment, we explore the dependence of the optimal predatory path on the size of the prey. Our results might provide useful information for the design of synthetic microswimmers, such as in vivo medical microrobots, capable of capturing and approaching objects in viscous flows.
Source Title: Journal of Fluid Mechanics
URI: https://scholarbank.nus.edu.sg/handle/10635/249687
ISSN: 0022-1120; 1469-7645
DOI: 10.1017/jfm.2022.476
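As a rough illustration of the reinforcement-learning component mentioned in the abstract, the sketch below runs a minimal tabular Q-learning loop in which a pursuer learns to reach a fixed point target in the fewest steps. It is an assumption-laden toy, not the authors' code: it omits the Stokes-flow hydrodynamics, the squirmer's stroke modes and the continuous state space of the actual study, and every name, discretisation and hyper-parameter below is chosen purely for illustration.

```python
# Minimal sketch (assumptions only): tabular Q-learning for a pursuer chasing a
# fixed point target on a grid. The paper's environment is a continuous
# Stokes-flow squirmer model; none of the values here come from the paper.
import numpy as np

rng = np.random.default_rng(0)

GRID = 21                                       # discretised positions per axis (assumption)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # simple translational moves (assumption)
TARGET = (GRID // 2, GRID // 2)                 # non-motile point "prey"

def step(state, a):
    """Move the pursuer one cell; reward is -1 per step until the target is reached."""
    x = min(max(state[0] + ACTIONS[a][0], 0), GRID - 1)
    y = min(max(state[1] + ACTIONS[a][1], 0), GRID - 1)
    done = (x, y) == TARGET
    return (x, y), (0.0 if done else -1.0), done

Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.98, 0.1              # learning rate, discount, exploration

for episode in range(5000):
    s = (int(rng.integers(GRID)), int(rng.integers(GRID)))
    for _ in range(200):
        # Epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s[0], s[1]]))
        s2, r, done = step(s, a)
        # Standard Q-learning update; the -1-per-step reward drives a fewest-steps policy
        Q[s[0], s[1], a] += alpha * (r + gamma * np.max(Q[s2[0], s2[1]]) - Q[s[0], s[1], a])
        s = s2
        if done:
            break
```

In the study itself, the reward and dynamics are mediated by the predator's self-generated disturbance flow, which is what makes the learned time-optimal paths non-trivial (e.g. the two-fold L-shaped trajectory); this toy only conveys the generic learning loop.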
Appears in Collections: Staff Publications; Elements
Files in This Item:
| File | Description | Size | Format | Access Settings | Version | |
| --- | --- | --- | --- | --- | --- | --- |
| optimizing-low-reynolds-number-predation-via-optimal-control-and-reinforcement-learning.pdf | | 1.65 MB | Adobe PDF | CLOSED | None | |
| predator_prey.pdf | | 8.26 MB | Adobe PDF | OPEN | Post-print | View/Download |
This item is licensed under a Creative Commons License (Attribution-NonCommercial 4.0 International).