Emre Ugur

Ph.D.

Active projects

Title: Robots Understanding Their Actions by Imagining Their Effects
Acronym: IMAGINE
Duration: 01.2017 - 12.2020
Funded by: European Union, H2020-ICT
Code: 731761
Budget: 365,000 Euro
Today's robots are good at executing programmed motions, but they do not understand their actions in the sense that they could automatically generalize them to novel situations or recover from failures. IMAGINE seeks to enable robots to understand the structure of their environment and how it is affected by their actions. The core functional element is a generative model based on an association engine and a physics simulator. "Understanding" is given by the robot's ability to predict the effects of its actions, before and during their execution. This scientific objective is pursued in the context of recycling of electromechanical appliances. Current recycling practices do not automate disassembly, which exposes humans to hazardous materials, encourages illegal disposal, and creates significant threats to environment and health, often in third countries. IMAGINE will develop a TRL-5 prototype that can autonomously disassemble prototypical classes of devices, generate and execute disassembly actions for unseen instances of similar devices, and recover from certain failures.

Title: Imagining Other's Goals in Cognitive Robots
Acronym: IMAGINE-COG++
Duration: 06.2018 - 06.2019
Funded by: Bogazici University Research Fund
Budget: 36,000 TL
Code: 18A01P5
In this research project, we aim to design and implement an effective robotic system that can infer others' goals from their incomplete action executions, and that can help others achieve these goals. Our approach is inspired by the helping behavior observed in infants; it exploits the robot's own sensorimotor control and affordance detection mechanisms in understanding demonstrators' actions and goals, and has similarities with human brain processing related to the control and understanding of motor programs. Our system will follow a developmental progression similar to that of infants, whose performance in inferring the goals of others' actions is closely linked to the development of their own sensorimotor skills. At the end of this project, we plan to verify whether our developmental goal inference and helping strategy is effective through human-robot interaction experiments using the upper-body Baxter robot in different tasks.

Title: Affordance-guided complex manipulation learning framework (Sağlarlık güdümlü karmaşık manipülasyon öğrenme çerçevesi)
Duration: 14.03.2017 - 01.02.2019
Funded by: TUBITAK 2232, Return Fellowship Award
Code: 117C016
Budget: 108,000 TL
With this project, we aim to build an advanced manipulation skill system supported by affordances and sensory feedback, by learning and modeling the affordances that the environment offers to the robot. Since actions such as 'grasping', 'carrying', and 'placing' are typical in such environments, we plan to transfer the corresponding movements to the robot through learning by demonstration. After the manipulation skills required for semi-structured environments have been learned in this way, the robot should learn how the visual and other affordances offered by the environment affect the execution of these skills.

Title: Learning in Cognitive Robots
Duration: 08.2016 - 08.2017
Funded by: Bogazici University Research Fund
Budget: 55,000 Euro
The aim of this project is to start forming a new cognitive and developmental robotics research group at Bogazici University, with a special emphasis on intelligent and adaptive manipulation. This start-up fund will be used to equip the laboratory with the essential setup: a human-friendly robotic system for manipulation (a Baxter robot), a number of sensors for perception, and a workstation for computation and control.



Open student projects for CMPE 492

  • The topics are not limited to the ones below. You are free to suggest your own project with the state-of-the-art robots (Baxter, Sawyer, NAO) in our lab!
  • SERVE: See-Listen-Plan-Act: In this project, you will integrate state-of-the-art DNN-based object detection and classification systems for perception, existing libraries for speech recognition, a grounded conceptual knowledge base for language interpretation, a planner for reasoning, and robot actuators for achieving the given goals. Initially, we plan to investigate the use of the YOLO real-time object detection system, PRAXICON for translating natural language instructions into the robot knowledge base, the PRADA engine for probabilistic planning, and CaffeNet deep convolutional neural networks fine-tuned for robotic table-top settings. A minimal YOLO detection sketch appears after this list.


  • NAO Robot Avatar: In this project, you will implement a system that enables seeing through NAO's eyes and moving with NAO's body. NAO's motions will be copied by adapting a whole-body tracking system, and the robot's camera images will be displayed on a head-mounted display. This system will enable full embodiment, and will be used for a very fruitful research direction: utilizing robot avatars to understand the underlying mechanisms of human sensorimotor processes by changing different aspects of the embodiment. A minimal NAOqi camera and head-control sketch appears after this list.

  • Graphical User Interface for Baxter Robot: The aim of this project is to implement a GUI to control the Baxter robot. Through its user interface, we expect to move the joints separately, move the hand to a specific position, open and close the grippers, and display sensor data such as force/torque readings, camera images, and depth data. A minimal gripper and arm control sketch appears after this list.


  • HRI: Robot control and communication through speech: The aim of this project is to integrate existing speech processing and synthesis tools to communicate with the Baxter robot. English and Turkish will be used in communication, both for setting tasks and for getting information from the robot. The robot's voice-based communication skills will be reinforced with various interfaces, including emotional faces displayed on the tablet. A minimal speech-recognition sketch appears after this list.



  • Robot simulation and motion control for Sawyer: Operating a real robot can be cumbersome, risky, and slow. Therefore, it is often helpful to be able to simulate the robot. Moreover, if a robot needs to move its hand to a desired target, it should not simply follow any path from its current position, because it may hit an obstacle. Instead, the robot needs to plan a path from its current pose to the target pose. The objective of this project is to create a realistic kinematic, volumetric, and dynamic model of the Sawyer robot platform, to adapt a number of motion planning packages for Sawyer, and finally to implement a benchmark task such as a pick-and-place operation across an obstacle. A minimal MoveIt planning sketch appears after this list.
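
For SERVE, the following is a minimal sketch of the perception step only, assuming OpenCV's DNN module and locally available YOLOv3 files; the file names, image name, and the 0.5 confidence threshold are illustrative, not part of the project specification.

    # Run YOLO object detection on a table-top image with OpenCV's DNN module.
    # Assumptions: OpenCV >= 3.4 with the dnn module, and yolov3.cfg /
    # yolov3.weights downloaded locally (illustrative file names).
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    img = cv2.imread("tabletop.jpg")
    h, w = img.shape[:2]

    # YOLO expects a 416x416 RGB blob with pixel values scaled to [0, 1].
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    for output in outputs:
        for det in output:              # det = [cx, cy, w, h, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > 0.5:  # illustrative confidence threshold
                cx, cy = int(det[0] * w), int(det[1] * h)
                print("class %d detected at pixel (%d, %d)" % (class_id, cx, cy))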
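
For the NAO Robot Avatar, a minimal sketch of grabbing one camera frame and commanding the head joints through the NAOqi Python SDK; the robot address, subscriber name, and joint targets are placeholders, and the tracking and display sides are not shown.

    # Read one camera frame from NAO and command its head joints via NAOqi.
    # Assumptions: the naoqi Python SDK is installed and the robot is
    # reachable at NAO_IP (placeholder address below).
    from naoqi import ALProxy

    NAO_IP, NAO_PORT = "192.168.1.10", 9559  # placeholder address

    video = ALProxy("ALVideoDevice", NAO_IP, NAO_PORT)
    # subscribeCamera(name, camera, resolution, colorSpace, fps):
    # camera 0 = top camera, resolution 2 = 640x480, colorSpace 13 = BGR.
    handle = video.subscribeCamera("avatar_view", 0, 2, 13, 30)
    frame = video.getImageRemote(handle)   # [width, height, ..., raw bytes, ...]
    width, height, raw = frame[0], frame[1], frame[6]
    video.unsubscribe(handle)
    print("got a %dx%d BGR frame for the head-mounted display" % (width, height))

    motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
    motion.setStiffnesses("Head", 1.0)
    # setAngles(names, angles, fractionMaxSpeed): mirror the tracked head motion.
    motion.setAngles(["HeadYaw", "HeadPitch"], [0.2, -0.1], 0.2)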
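
For the Baxter GUI, a minimal sketch that wraps a few baxter_interface calls behind Tkinter buttons, assuming a sourced ROS and Baxter SDK environment (Python 2, which the SDK targets); the full interface described above would add joint sliders and sensor displays.

    # Minimal Tkinter front-end for a few baxter_interface calls.
    # Assumptions: ROS + Baxter SDK sourced and the robot enabled;
    # Python 2, hence the Tkinter module name.
    import Tkinter as tk
    import rospy
    import baxter_interface

    rospy.init_node("baxter_gui_sketch")
    limb = baxter_interface.Limb("right")        # right-arm joint control
    gripper = baxter_interface.Gripper("right")  # right electric gripper
    gripper.calibrate()                          # required once after startup

    root = tk.Tk()
    root.title("Baxter control sketch")
    tk.Button(root, text="Open gripper", command=gripper.open).pack(fill="x")
    tk.Button(root, text="Close gripper", command=gripper.close).pack(fill="x")
    # move_to_neutral blocks until the arm reaches Baxter's neutral pose.
    tk.Button(root, text="Neutral pose",
              command=limb.move_to_neutral).pack(fill="x")
    root.mainloop()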
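
For speech-based HRI, a minimal sketch of the recognition side, assuming the third-party SpeechRecognition Python package with its Google Web Speech backend; the command-to-action mapping at the end is a placeholder.

    # Recognize an English or Turkish spoken command from the microphone.
    # Assumptions: the speech_recognition package (pip install SpeechRecognition)
    # with PyAudio, and network access for the Google Web Speech backend.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Speak a command...")
        audio = recognizer.listen(source)

    try:
        # language="tr-TR" for Turkish, "en-US" for English
        text = recognizer.recognize_google(audio, language="en-US")
    except sr.UnknownValueError:
        text = None

    # Placeholder mapping from recognized text to a robot action.
    if text and "open" in text.lower():
        print("would send an open-gripper command to Baxter")
    else:
        print("heard: %r" % text)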
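
For Sawyer motion control, a minimal sketch that plans and executes a collision-aware motion with moveit_commander, assuming Sawyer's MoveIt configuration is already running (e.g., in simulation); the target pose is illustrative.

    # Plan and execute a collision-aware motion to a Cartesian pose for Sawyer.
    # Assumptions: ROS with Sawyer's MoveIt config launched; "right_arm" is the
    # planning group name in that config; the target pose is illustrative.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("sawyer_planning_sketch")
    group = moveit_commander.MoveGroupCommander("right_arm")

    target = Pose()
    target.position.x, target.position.y, target.position.z = 0.6, 0.0, 0.3
    target.orientation.w = 1.0  # identity orientation, for illustration

    group.set_pose_target(target)
    group.go(wait=True)         # plan around known obstacles, then execute
    group.stop()
    group.clear_pose_targets()
    moveit_commander.roscpp_shutdown()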