- About Us
KURI runs a number of projects on developing autonomous capabilities for unmanned robot vehicles, with research focused on modeling, control, navigation, mapping, and sensor-based tracking, and with applications in sampling, environmental monitoring, and intervention.
The project, code-named Go2-Sample&Return, addresses the technical and scientific challenges of a "go to sample and return" strategy for robotic intervention in hazardous environments such as chemical or nuclear incident zones. The strategy will be applied to heterogeneous robotic platforms: one ground vehicle (UGV) cooperating with several micro aerial vehicles (UAVs) for searching, mapping, and path planning across the incident zone. The UGV has the capacity for physical intervention in the operational theatre through a 5-DOF robotic arm equipped with a hand carrying advanced tactile sensors. The project draws on expertise from the aerospace, electronics, computer, nuclear, and mechanical engineering fields to advance this robotic technology.
A distributed consensus protocol for formation coordination will be designed, developed, and tested. This technique will increase the autonomy of unmanned robotic vehicles, providing them with the capacity to navigate, measure, and map cooperatively and to localize incident sites (e.g. strong nuclear radiation sources). Computationally low-cost path-planning algorithms will be developed so that the unmanned systems (UGV, UAVs) can cooperatively establish paths towards the incident zone while retaining the ability to correct and adjust for unexpected obstacles. The UGV will conduct incident-source searching and establish optimal trajectories towards the incident zone (e.g. a source of radiation). For the intervention phase, and since the source of the incident is unknown a priori, an advanced robot grasping model will be developed based on a limited set of feature points in the scene. Multi-sensor information will be integrated for reactive grasping, and a robot-hand motion model will be proposed for interactive manipulation to characterize objects in the incident zone. This will enable the system to sample the source (e.g. sand) and return it autonomously along an optimal path. ROS (Robot Operating System) will be used to integrate and interoperate all the heterogeneous platforms and to integrate algorithms that will be validated and benchmarked in specific scenarios such as the one shown in Figure 1.
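As an illustration of the consensus idea behind such coordination (our own minimal sketch, not the project's protocol), each vehicle can repeatedly nudge its local estimate toward those of its communication neighbors; all estimates then converge to a common value. The graph and measurements below are hypothetical:

```python
import numpy as np

def consensus_step(x, neighbors, eps=0.2):
    """One discrete-time consensus update: each agent moves toward
    the states of its neighbors (illustrative sketch)."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
    return x_new

# Four agents on a ring graph, e.g. scalar radiation estimates.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([10.0, 2.0, 6.0, 4.0])
for _ in range(100):
    x = consensus_step(x, neighbors)
# All states converge to the average of the initial estimates (5.5),
# so the vehicles agree on a common value using only local exchanges.
```

Because each exchange is symmetric, the team average is preserved at every step, which is why the agreed value is exactly the initial mean.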
This project is supported by a Level 2 Khalifa University Internal Research Fund (KUIRF) with a total amount of 2,075,000 AED. Collaborators include researchers from a variety of departments at Khalifa University, including the Robotics Institute and the Electrical and Computer Engineering, Mechanical Engineering, Aerospace Engineering, and Nuclear Engineering Departments.
People: Dr. Guowei Cai, Prof. Jorge Dias, Prof. Lakmal Seneviratne, Dr. Hassan Al Muhairi, Dr. Tarek Taha, Dr. Dongming Gan, Dr. Alexander Solodov, Mr. Bara Emran, Mr. Tiago Caldeira, Mr. Rui De Figueiredo
In this work, a frequency-domain flight dynamics modeling framework that can be applied universally to multi-rotor aerial vehicles is introduced. The primary contribution of the proposed method is a systematic integration of first-principles modeling and system identification to generate high-accuracy flight dynamics models for multi-rotor aerial vehicles. First-principles modeling and model linearization are adopted to obtain an appropriate baseline model for the subsequent system identification. Next, a four-step parameter identification process, consisting of 1) baseline model determination, 2) data collection and preprocessing, 3) mode-wise parameter identification, and 4) model fidelity validation, is conducted in the frequency domain to identify the uncertain parameters. The overall framework is shown in Figure 1 for convenient reference. Our method has been applied to multiple in-house-built multi-rotor aircraft, and high-accuracy results (two samples are shown in Figures 2 and 3) have been obtained.
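To convey the flavor of frequency-domain parameter identification (a toy sketch under simplifying assumptions, not the project's actual tooling), consider fitting the parameters of a hypothetical first-order roll-rate model to frequency-response data by rearranging the model into a form that is linear in the unknowns:

```python
import numpy as np

# Hypothetical first-order roll-rate model  p_dot = -a*p + b*delta,
# with frequency response H(jw) = b / (jw + a).
# Synthetic "measured" data for the demo, generated from known values:
a_true, b_true = 2.0, 30.0
w = np.logspace(-1, 2, 50)              # excitation frequencies (rad/s)
H = b_true / (1j * w + a_true)          # measured frequency response

# Rearranged model:  b - a*H(jw) = jw*H(jw)  is linear in (a, b).
# Stack real and imaginary parts and solve by least squares.
A = np.column_stack([-H, np.ones_like(H)])
y = 1j * w * H
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])
a_hat, b_hat = np.linalg.lstsq(A_ri, y_ri, rcond=None)[0]
# a_hat ~ 2.0, b_hat ~ 30.0: parameters recovered from frequency data.
```

With real flight data, H would come from the collected and preprocessed input-output records rather than a known model, and the fit quality would be checked in the model fidelity validation step.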
EXPO 2020 in Dubai is set to bring approximately 25 million visitors to the country. To cater to such an influx, the government has announced a number of mega projects within the master plan for the city. During the first half of 2014, there were more than 50 contracts, worth a total of more than $5.4 billion, for newly announced or revived residential projects. Health and safety on the numerous construction sites has therefore emerged as a critically important issue. According to Dubai Municipality statistics, workers falling from high-rise buildings accounted for 45% of the 865 construction industry accidents in Dubai from 2004 to 2007. In addition, in 2011, Abu Dhabi Municipality recorded 29 accidents at construction sites, 10 of them fatal.
In this project, the use of drones to monitor and inspect new construction sites is proposed. Currently, all site inspections are carried out in person. Drone site inspection is an innovative approach that allows sites to be monitored from a distance and has not yet been deployed in the field. Drone site inspection will focus on two aspects: monitoring compliance with summer working hours, and tracking construction progress and evolution. The drones are equipped with cameras that capture images and real-time video of construction sites in progress. The captured images are sent to a base station for further analysis and safety checks, which reduces the number of physical on-site visits. If the inspector at the base station detects any rule violations, the construction company will be warned. Moreover, images of cables, structures, and piping will be studied at the base station by civil or site engineers, who can review them even on an iPad. The drone will also carry an infrared thermal imaging camera to enable easy inspection for irregularities. Drone site inspection will not only reduce the cost of physical inspection but also provide an efficient way to cover more construction areas within a given time span. In addition, sending inspectors to high altitudes to inspect a building is not a safe task; the drone is both faster and more practical. With the drone, construction progress can be recorded quarterly, monthly, weekly, or daily from the air, providing historical data for the government.
An SID prototype (shown in Figure 1) has been developed at KURI. Its capability was demonstrated conceptually in the 2014 Drones for Good Competition, organized by the Dubai Government. Its excellent performance under fully autonomous control, recorded in the following video demonstration, clearly shows SID's high potential for a variety of practical applications. KURI is currently working with Abu Dhabi Municipality and Dubai Municipality to complete SID's functionality and push for its deployment in the near future.
The use of robotic systems in hazardous situations continues to expand, and research effort is accordingly being devoted to enhancing the efficiency and performance of these systems. Robot involvement in hazardous situations not only saves lives by reducing human exposure to dangerous environments, but also increases the efficiency of the response to such incidents. However, to deal with emergencies, particularly large-scale ones, systems that allow seamless collaboration between teams of robots and humans are highly desirable. Robust decision making and coordination among agents are required so that they can interact, collaborate, and form one team based on shared observations and actions. Unfortunately, in search and rescue scenarios, information can be limited and uncertain due to communication issues. This project proposes using a Dec-MPOMDP formulation to handle this uncertainty in decision making. The thesis argues that decision making is one of the main capabilities that must be achieved in order to enable collaboration between heterogeneous teams, and that collaboration is then enhanced over time through a feedback loop.
Project Fund: This project is funded by the Buhooth Program
Mapping previously unknown, large environments while exploring them is an essential skill that modern robotic systems should possess. In search and rescue scenarios, this task becomes particularly challenging due to time criticality and the possible presence of risk and danger.
Our research focuses on rapid response to search and rescue scenarios by effectively deploying robots that can: collaborate to explore and map previously unknown areas; identify sources of danger; and retrieve samples that can be analyzed at a later stage. In this context, coordinating a group of heterogeneous robots with various capabilities while performing SLAM, and efficiently assigning tasks to individual robots, is essential to successfully accomplishing the requirements of the search and rescue mission.
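The task-assignment part of such a mission can be sketched, under strong simplifications, as a greedy allocation of exploration targets to the nearest free robots. The robot names, target names, and geometry below are hypothetical, and a real system would use path costs from the SLAM map rather than straight-line distance:

```python
from itertools import product

def greedy_assign(robots, tasks):
    """Greedy allocation: repeatedly give the globally closest
    (robot, task) pair to each other until one side runs out."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    assignment = {}
    free_robots = dict(robots)
    open_tasks = dict(tasks)
    while free_robots and open_tasks:
        r, t = min(product(free_robots, open_tasks),
                   key=lambda rt: dist(free_robots[rt[0]], open_tasks[rt[1]]))
        assignment[r] = t
        del free_robots[r], open_tasks[t]
    return assignment

robots = {"ugv": (0, 0), "uav1": (10, 0)}
tasks = {"frontierA": (1, 1), "frontierB": (9, 2)}
print(greedy_assign(robots, tasks))  # {'ugv': 'frontierA', 'uav1': 'frontierB'}
```

Greedy assignment is not optimal in general; an optimal alternative for small teams is the Hungarian algorithm, at higher computational cost.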
Fund: Level 2 funding, Go2-Sample&Return
Interest in micro-scale multi-rotor unmanned aerial vehicles (UAVs) is growing in both military and civilian domains. This popularity stems from the many potential uses of multi-rotor UAVs, such as surveillance; inspection and monitoring of borders, power grid lines, oil/gas pipelines, and highways; search and rescue missions; and security applications. Modeling and autonomous flight control for micro-scale multi-rotor UAVs is, however, challenging because of their light weight, underactuation, coupling between translational and rotational dynamics, inherent nonlinearities associated with aerodynamics and gyroscopic effects, and external disturbances associated with unpredictable changes in the flying environment. In this project, we focus on the modeling and tracking control of multi-rotor UAV systems in the presence of uncertainty arising from modeling errors, varying payload mass and moment of inertia, aerodynamic friction, gyroscopic effects, changing flying environments, and other external disturbances. The proposed method combines classical proportional-derivative (PD) control with robust adaptive control theory, where the adaptive component is employed to learn and compensate for the uncertain changes in the dynamics caused by the uncertainties mentioned above.
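A minimal sketch of the PD-plus-adaptive idea, assuming a simplified one-dimensional altitude model with an unknown constant disturbance (our own illustration with made-up gains, not the project's controller):

```python
# 1-D altitude model  z'' = u + d  with an unknown constant disturbance d
# (e.g. an unmodeled payload).  A PD term tracks the reference, while an
# adaptive term d_hat learns and cancels d online.
kp, kd, lam, gamma = 4.0, 4.0, 1.0, 5.0   # illustrative gains
z_ref, d_true = 1.0, -3.0                 # setpoint and hidden disturbance
z, zd, d_hat, dt = 0.0, 0.0, 0.0, 0.001

for _ in range(20000):                    # 20 s of simulated flight
    e, ed = z - z_ref, zd
    s = ed + lam * e                      # sliding-style error variable
    u = -kp * e - kd * ed - d_hat         # PD + adaptive compensation
    d_hat += gamma * s * dt               # gradient adaptation law
    zdd = u + d_true                      # plant dynamics (Euler step)
    zd += zdd * dt
    z += zd * dt
# z converges to 1.0 and d_hat to -3.0: tracking is achieved even though
# the disturbance was never measured, only estimated.
```

The same structure extends to the full multi-rotor case, where the adaptive terms compensate vector-valued uncertainties in mass, inertia, and aerodynamic effects.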
The motivation behind distributed cooperative control of multi-agent systems comes from its applications in various fields, such as formation flying, flocking, rendezvous, and unmanned aerial vehicles. In this project, we are developing distributed, finite-time-consensus-based, low-cost, cloud-connected multiple aerial vehicle platforms capable of working autonomously and collaboratively with other agents in order to perform specific tasks in specific formations. The project studies distributed consensus algorithms for coordinating a swarm of distributed aerial vehicles in order to maximize the efficiency of information gathering and sensing of multiple targets on the ground, which can be either stationary or in motion. The project investigates the formation geometry of multiple small aerial vehicles to optimize sensing, multi-sensor fusion, and distributed data processing, maximizing the information gathered during swarm sensing missions while adapting to environmental changes such as obstacles or uncertain flying conditions such as wind perturbations. The consensus technique is employed to develop distributed nonlinear protocols based on local neighboring information, such that the states of all the unmanned aerial vehicles in a team reach an agreement in order to perform the desired tasks. The proposed strategy yields rapidly deployable, scalable, adaptive, cost-effective, and robust networks of autonomous distributed agents. The project has high potential for impact in a number of fields, including environmental monitoring, safety, security, surveillance, and disaster recovery, which often take place in difficult environmental conditions such as fog, haze, and low light.
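One standard way consensus yields a formation (a hypothetical sketch with made-up positions, not the project's finite-time protocol) is to run agreement on shifted states x_i - r_i, where r_i is agent i's slot in the desired geometry; once the shifted states agree, the team sits in formation:

```python
import numpy as np

# Desired square formation: each agent's offset r_i from a common point.
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # ring topology
x = np.array([[5.0, 1.0], [-3.0, 4.0], [0.0, 0.0], [7.0, -2.0]])  # start
eps = 0.2

for _ in range(200):
    y = x - offsets                       # shifted states
    x = x + eps * np.array([sum(y[j] - y[i] for j in nbrs)
                            for i, nbrs in neighbors.items()])
# Relative positions now match the square: x[1]-x[0] ~ [2, 0],
# x[3]-x[0] ~ [2, 2], using only neighbor-to-neighbor information.
```

A finite-time variant would replace the linear update with a nonlinear one (e.g. a signed fractional power of the disagreement), trading smoothness for a guaranteed settling time.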
Collaboration: Kings College London, University of Coimbra, Carleton University, University of Ottawa, New York University
Telepresence technology can extend human sensing and manipulation capabilities to isolated remote environments. Such technology has many potential applications, including space and undersea exploration, telesurgery and telemedicine, search and rescue missions, monitoring, surveillance, and inspection, as well as remote handling of hazardous materials in dangerous or awkward environments that are inaccessible to human intervention. One open problem in existing remote presence technologies is the lack of a bilateral flow of the operator's motion/force commands and camera vision data between the local and remote sides, together with the absence of haptic feedback. In addition, current remote presence technologies use either direct or dedicated transmission lines to exchange information between local and remote sites in order to ensure reliable, low-latency communication. With recent advances in haptics and network technologies, telemanipulation can instead be performed over widely available distributed communication networks, making remote presence technology easily accessible and cost effective. In this project, we are developing a novel haptic- and augmented-reality-based control and interaction interface for a bilateral shared autonomous telemanipulation system operating over distributed communication networks. The proposed interface allows the operator to navigate the remote robotic system to control and interact with uncertain environments. A shared input interface for the local and remote platforms is developed by combining position and position-velocity signals with the reflected remote interaction forces, mapped through an artificial force field and a virtual impedance force field. A robust adaptive control algorithm is used locally to estimate the interaction properties between the human and the master manipulator and between the slave and the remote environment.
The control interface for the local and remote systems also combines robust control with adaptive control theory to deal with the uncertainty associated with modeling errors and other external disturbances. The proposed control and interface scheme can be applied to unmanned aerial vehicles, unmanned underwater vehicles, unmanned ground vehicles, and manipulators, including industrial and medical robotic systems, for remote navigation, control, and interaction with uncertain environments.
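A simple way to picture the virtual-impedance force mapping (our own minimal sketch with made-up stiffness and damping values, not the project's force-field design) is a spring-damper that activates inside an influence radius around a remote obstacle, pushing the operator's hand away before contact:

```python
import numpy as np

def reflected_force(p, v, obstacle, r_influence=1.0, k=50.0, b=5.0):
    """Virtual-impedance force field (illustrative): inside the influence
    radius of an obstacle, a spring-damper generates a repulsive force
    to be rendered on the haptic master device."""
    d_vec = p - obstacle
    d = float(np.linalg.norm(d_vec))
    if d >= r_influence or d == 0.0:
        return np.zeros_like(p)          # outside the field: no feedback
    n = d_vec / d                        # unit vector away from the obstacle
    penetration = r_influence - d
    f_spring = k * penetration * n       # stiffness term
    f_damp = -b * (v @ n) * n            # damping along the approach direction
    return f_spring + f_damp

# Slave tool 0.5 m from an obstacle, approaching it at 0.2 m/s:
f = reflected_force(np.array([0.5, 0.0]), np.array([-0.2, 0.0]),
                    np.array([0.0, 0.0]))
# f points along +x: the operator feels a force repelling the tool
# from the obstacle, growing with proximity and approach speed.
```

In the bilateral loop, this reflected force is blended with the measured slave-environment interaction forces before being sent back to the master device.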
Collaboration: Stanford University, Kings College London, University of Coimbra, Carleton University, University of Ottawa, New York University
In this project, we intend to develop a novel bird-like robot, "RoboPrey", that can contribute to the art of falconry. Unlike the RC ornithopter platforms currently available on the market, the robotic prey bird to be developed is fully autonomous, instrumented with sensors and telemetry that allow velocity and trajectory control for adapting to specific falcon development programs. By applying this creative concept to the field of falconry, we aim to establish a new methodology for falcon training and the development of hunting skills. The robotic prey bird can adjust automatically to individual falcons by using their identifiers. Currently, we are focusing on the design and instrumentation of the first RoboPrey prototype, which is based on an RC ornithopter or a ducted-fan bird-like aircraft (shown in Figure 1). Significant modifications will be conducted to achieve wing flapping and full autonomy. The potential environmental impact of this project is high, since it enables falcon training without wild prey and supports rehabilitation programs at any time of year. The project also has a positive impact on houbara bustard conservation by reducing illegal trade for falcon training.