Research

My research interests include, but are not limited to:

  • artificial intelligence;
  • intelligent robotics;
  • AI planning;
  • path/motion planning;
  • heuristic search;
  • multi-agent systems;
  • vision-based navigation;
  • cognitive agents.

I’m passionate about developing methods and algorithms for controlling intelligent agents (mobile robots, driverless cars, drones, computer game characters, etc.) in such a way that these agents are:

  • autonomous, i.e. can behave appropriately in complex, dynamic environments without being fully controlled by an operator;
  • adaptive, i.e. can behave well in changing environments;
  • collaborative, i.e. can interact with each other and humans to safely and effectively accomplish their missions.

Creating intelligent control systems for mobile agents is a challenging problem. Solving it requires a variety of methods from AI, control theory, cognitive modelling, etc.

Currently, I’m involved in the research and development of methods and algorithms for path and motion planning and, to a lesser extent, vision-based navigation.


Videos || Results || In plain English


Videos

The following videos provide an insight into my research activities.

Results

Below are several examples of the algorithms, methods, and models that were developed with my active involvement. More details can be found in my publications.

  1. Methods and algorithms for centralized multi-agent path finding (MAPF).
    • CCBS – an optimal MAPF planner that supports continuous time. An extended (journal) version of this work is available here. (This is one of my most cited works.)
    • ICCBS – an improved version of CCBS that significantly increases its computational efficiency without sacrificing theoretical guarantees.
    • AA-SIPP(m) – a prioritized multi-agent path planning algorithm that is capable of handling any-angle moves (see the toy prioritized-planning sketch after this list).
    • An enhanced version of AA-SIPP(m) that supports agents of arbitrary size and different moving velocities, and takes rotations into account.
  2. Methods and algorithms for decentralized multi-agent path finding.
    • Follower – a method based on the combination of heuristic search (for long-term planning) and reinforcement learning (for local collision avoidance).
    • Switcher – another variant of combining search and reinforcement learning for decentralized multi-agent pathfinding, based on switching mechanisms that alternate between learnable and non-learnable behavior policies.
    • MATS-LP – decentralized multi-agent pathfinding based on Monte Carlo Tree Search (MCTS). One more paper on this topic (with preliminary results) is also available.
  3. Methods and algorithms for single-agent path finding that utilize state-of-the-art machine learning techniques.
    • POLAMP – an algorithm for planning a path in a dynamic environment that is based on the combination of reinforcement learning with search-based and sampling-based planning.
    • TransPath – an algorithm for grid-based pathfinding that leverages a modern neural network (a transformer) tailored and trained to (significantly) reduce the search space. A preliminary paper on this topic is available here.
    • GridPathRL – one of our first works on solving the classical pathfinding problem with reinforcement learning. (This is one of my most cited works.)
  4. Methods and algorithms for single-agent path finding that are based on heuristic search (a minimal grid-based A* sketch is given after this list).
    • SIPP-IP – an algorithm for pathfinding in dynamic environments that is based on the safe interval path planning methodology (for the sake of computational efficiency) and takes kinodynamic constraints into account (e.g. the agent cannot stop instantaneously).
    • TO-AA-SIPP – an algorithm for pathfinding in dynamic environments that supports any-angle moves and guarantees the optimality of the constructed solutions.
    • LIAN – an algorithm for finding a path that does not contain sharp turns – and its modifications, eLIAN and LP-LIAN.
  5. Methods and algorithms for vision-based navigation.
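
For readers who would like a concrete picture of what “heuristic search for pathfinding” means, here is a minimal, purely illustrative Python sketch of A* on a 4-connected grid. It is not an implementation of any of the algorithms listed above; all names and parameters are made up for illustration.

import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.

    grid[r][c] == 0 means a free cell, 1 means an obstacle.
    Returns a list of cells from start to goal, or None if no path exists.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_list = [(h(start), 0, start)]   # (f = g + h, g, cell)
    parents = {start: None}
    g_cost = {start: 0}

    while open_list:
        _, g, cell = heapq.heappop(open_list)
        if cell == goal:
            # Reconstruct the path by walking back through the parents.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        if g > g_cost[cell]:
            continue                     # stale queue entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    parents[nxt] = cell
                    heapq.heappush(open_list, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Tiny usage example: a 4x4 map with a small wall in the middle.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (3, 3)))       # e.g. [(0, 0), (1, 0), (2, 0), ...]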
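
The prioritized approach to multi-agent pathfinding (the general idea behind planners such as AA-SIPP(m)) can be shown in the same toy setting: agents are planned one after another, and every computed path is turned into space-time reservations that all subsequently planned agents must respect. The sketch below is, again, only a simplified illustration written for this page (discrete time steps, unit moves, made-up names), not the actual algorithm.

import heapq

def plan_prioritized(grid, starts, goals, horizon=64):
    """Toy prioritized multi-agent planner on a 4-connected grid.

    Agents are planned one by one in the given order; each planned path is
    stored as space-time reservations (vertex and edge-swap constraints)
    that later agents must avoid.  Returns one path per agent (a list of
    cells, one per time step) or None if some agent cannot be routed.
    """
    rows, cols = len(grid), len(grid[0])
    vertex_res = set()   # (cell, t): cell occupied by an earlier agent at time t
    edge_res = set()     # (cell_from, cell_to, t): move taken by an earlier agent

    def neighbours(cell):
        r, c = cell
        cand = [(r, c), (r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]  # wait + 4 moves
        return [(nr, nc) for nr, nc in cand
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0]

    def space_time_astar(start, goal):
        h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
        open_list = [(h(start), 0, start)]           # (f, t, cell); g equals t
        parents = {(start, 0): None}
        while open_list:
            _, t, cell = heapq.heappop(open_list)
            # The goal is accepted only if the agent can safely stay there forever.
            if cell == goal and all((cell, k) not in vertex_res
                                    for k in range(t, horizon + 1)):
                path, node = [], (cell, t)
                while node is not None:
                    path.append(node[0])
                    node = parents[node]
                return path[::-1]
            if t >= horizon:
                continue
            for nxt in neighbours(cell):
                if (nxt, t + 1) in vertex_res:       # cell already taken at t + 1
                    continue
                if (nxt, cell, t) in edge_res:       # head-on swap with an earlier agent
                    continue
                if (nxt, t + 1) not in parents:      # g of a state is fixed by its time
                    parents[(nxt, t + 1)] = (cell, t)
                    heapq.heappush(open_list, (t + 1 + h(nxt), t + 1, nxt))
        return None

    paths = []
    for start, goal in zip(starts, goals):
        path = space_time_astar(start, goal)
        if path is None:
            return None
        for t, cell in enumerate(path):              # reserve the path for later agents
            vertex_res.add((cell, t))
            if t + 1 < len(path):
                edge_res.add((cell, path[t + 1], t))
        for t in range(len(path), horizon + 1):      # the agent stays at its goal
            vertex_res.add((path[-1], t))
        paths.append(path)
    return paths

# Tiny usage example: two agents swapping corners around a central obstacle.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_prioritized(grid, starts=[(0, 0), (0, 2)], goals=[(2, 2), (2, 0)]))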

In plain English

My peers and I aim to make mobile agents, e.g. mobile robots, smarter. Creating an intelligent, versatile robot is indeed a very challenging problem, which is itself composed of dozens of smaller problems. We are focused on the software part of this puzzle. We create methods and algorithms, implement them as software, and evaluate them either in simulation (many of our experiments look pretty much like “jumping dots on the screen”) or on off-the-shelf robotic platforms (wheeled robots, drones, robotic arms, etc.). We do not build the robots themselves, as we are more the software guys than the hardware guys. Moreover, some of our methods are general enough to be applied not only to physical robots but also to, say, video game characters. Generally, like most of the ‘science guys’, we solve challenging puzzles that advance our understanding of how intelligent decision making should be implemented within artificial systems.