The rapid progress in microprocessors, batteries, additive manufacturing, and new materials has allowed us to build smaller yet smarter mobile robots. As promising environmental perception instruments, small autonomous robots, including unmanned aerial vehicles (UAVs) and autonomous underwater vehicles (AUVs), are often severely constrained in actuation capability and navigation accuracy. Ubiquitous geophysical flows (e.g., hurricanes or ocean currents) tend to exacerbate the challenges of accurate control and state estimation for these mobile platforms, hindering their large-scale, long-term deployment. Conventionally, background flows are considered an adversarial factor for the mobility and navigation of compact mobile robots. We advocate two new perspectives on their roles: as ubiquitous navigation references and as transportation “highways” for both independent and networked autonomous robots. We believe that properly understanding, learning, and utilizing this ambient information is the key to addressing the life-long autonomy challenges faced by future mobile robots. From unmanned submarines to self-driving vehicles, from better understanding Earth to exploring extraterrestrial habitats, we aim to bring autonomous robots to all frontiers of new discovery.
Accurate localization is the foundation of efficient robot guidance. It is also critical for correctly georeferencing environmental data collected by autonomous mobile platforms. Nonetheless, long-term, mid-ocean navigation is notoriously challenging due to the lack of conventional navigation references and reliable communication means. To address this, we introduced a novel flow-aided navigation approach (Figure 1) that improves the navigation accuracy of long-term, mid-depth AUVs when all conventional techniques fail. This method leverages the dynamics of spatiotemporally varying ocean currents as navigation references to mitigate the accumulated drift of inertial navigation. Another research effort in robot localization focuses on the cooperative localization of marine robots. The objective is to design a hierarchical information fusion system that allows autonomous surface vessels and shallow-water underwater vehicles to assist the navigation of deep-water robots. Such a system is critical to extending the footprint of low-cost marine robots. The ultimate objective of this research direction is to allow robots to simultaneously learn feature dynamics and improve their localization accuracy. We are currently investigating a strategy called Fluid-SLAM, i.e., simultaneous localization and flow-field mapping, which will allow a mobile robot to concurrently map a dynamic fluid environment and navigate accurately within it.
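The core idea of using current dynamics as a navigation reference can be illustrated with a minimal particle-filter sketch. This is not the published algorithm: the flow field `flow`, the noise levels, and all parameters are illustrative assumptions; a real system would use an ocean-model forecast for the flow map and onboard current measurements (e.g., from an ADCP).

```python
import numpy as np

rng = np.random.default_rng(0)

def flow(pos):
    """Illustrative 2-D current field (m/s); stands in for a real
    ocean-model forecast (assumption for this sketch)."""
    x, y = pos[..., 0], pos[..., 1]
    return np.stack([0.5 * np.sin(0.1 * y), 0.3 * np.cos(0.1 * x)], axis=-1)

def flow_aided_pf(true_path, n_particles=500, odom_sigma=0.5, meas_sigma=0.05):
    """Correct dead-reckoning drift by matching onboard current
    measurements against a known flow-field map."""
    particles = true_path[0] + rng.normal(0.0, 1.0, (n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for k in range(1, len(true_path)):
        # Predict: noisy dead-reckoned displacement (e.g., DVL/INS)
        step = true_path[k] - true_path[k - 1]
        particles = particles + step + rng.normal(0.0, odom_sigma, particles.shape)
        # Measure the local current, with sensor noise
        z = flow(true_path[k]) + rng.normal(0.0, meas_sigma, 2)
        # Update: up-weight particles whose map-predicted flow
        # agrees with the measurement
        err = np.linalg.norm(flow(particles) - z, axis=1)
        weights = weights * np.exp(-0.5 * (err / meas_sigma) ** 2) + 1e-300
        weights /= weights.sum()
        # Resample when the effective sample size drops too low
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
        estimates.append(weights @ particles)
    return np.asarray(estimates)
```

Because the flow varies in space and time, the measured current constrains where the vehicle can be, so position error stays bounded instead of growing without limit as in pure inertial navigation.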
We study the adaptive and optimized control and guidance of mobile robot swarms within strong background flows. We consider the emergent robot swarm and the underlying geophysical flow as two components of an integrated dynamical system. Modeling robot swarms as fluids makes it much easier to manipulate the macroscopic swarm dynamics in dynamic background flows, as well as to evaluate and predict the performance of robot swarms within flow environments. We design distributed swarm control laws based on smoothed particle hydrodynamics (SPH), a numerical fluid simulation method. All three essential properties of well-behaved flocks, namely separation, cohesion, and alignment, are naturally satisfied. Flock guidance and obstacle avoidance are handled gracefully and conveniently through the introduction of virtual attractors and repellers. For instance, through dimensional analysis, we discovered that swarm compressibility and velocity consensus can be characterized and controlled by the Mach number and the Reynolds number, respectively, of the fluid emulated by the robot swarm. We also demonstrated that nearly optimal guidance of SPH flocks within geophysical flows can be simplified to optimal path planning for a (virtual) flock leader (Figure 2).
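An SPH-style flocking law of this kind can be sketched as follows. The kernel, equation of state, and gains are illustrative assumptions, not our published controller: the pressure term yields separation (repulsion when crowded) and cohesion (attraction when sparse), the viscosity term yields alignment, and a virtual attractor provides guidance.

```python
import numpy as np

def sph_flock_step(pos, vel, dt=0.05, h=1.5, rho0=1.0,
                   c=2.0, nu=0.5, attractor=None, k_att=0.3):
    """One step of an SPH-style flocking law (illustrative sketch).
    pos, vel: (n, 2) arrays of agent positions and velocities."""
    d = pos[:, None, :] - pos[None, :, :]            # pairwise offsets
    r = np.linalg.norm(d, axis=-1) + 1e-9
    # Compactly supported smoothing kernel with radius h (sketch)
    w = np.clip(1.0 - r / h, 0.0, None) ** 3
    rho = w.sum(axis=1)                              # local "density"
    p = c ** 2 * (rho - rho0)                        # equation of state
    # Pressure term: repels crowded agents (separation) and pulls
    # sparse neighbors together (cohesion)
    grad_w = -3.0 * np.clip(1.0 - r / h, 0.0, None) ** 2 / h
    coef = p[:, None] / rho[:, None] ** 2 + p[None, :] / rho[None, :] ** 2
    acc = -((coef * grad_w)[:, :, None] * d / r[:, :, None]).sum(axis=1)
    # Viscosity term: relaxes each agent toward its neighbors'
    # velocities (alignment)
    dv = vel[None, :, :] - vel[:, None, :]
    acc += nu * (w[:, :, None] * dv).sum(axis=1) / rho[:, None]
    # Virtual attractor steers the whole flock (guidance)
    if attractor is not None:
        acc += k_att * (attractor - pos)
    vel = vel + dt * acc
    return pos + dt * vel, vel
```

At this sketch level, raising the pseudo sound speed `c` stiffens the emulated fluid against compression (a lower effective Mach number), while raising `nu` strengthens velocity consensus, mirroring the dimensional-analysis result described above.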