Project name: National Competence Center – Cybernetics and Artificial Intelligence (abbrev. NCK KUI)
ID code: TN01000024
Supported by: Technology Agency of the Czech Republic
Programme: National Centres of Competence 1
Project duration: 01/2019 – 12/2020
Main investigator: prof. Ing. Vladimír Mařík, DrSc., dr. h. c.
Subproject T1.b: Machine perception, intelligent human-machine interface
Responsible: prof. Ing. Václav Hlaváč, CSc.

Subtasks of T1.b

We outlined the following ten subtasks of Subtask T1.b during proposal writing in 2018.

1. Force/torque compliant robotic manipulators

Force/torque compliant robots are necessary and useful in many robotic tasks. They are considered an important tool for safe human-robot collaboration; this capability is what establishes the collaborative robot (cobot). The research in this subtask will advance the state of the art in collaborative robotics and make it implementable on a wider range of robots (in contrast to our current ability to work with the two KUKA LBR iiwa arms we have in the lab). ROS (Robot Operating System, likely its coming version 2) is the middleware used. The current obstacle is that the feedback loop is not fast enough (about 20 Hz). We will also build on top of the background knowledge the R4I project (led by R. Babuška) keeps providing to the CTU-CIIRC team, and reciprocally give it industrial relevance.
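
As one concrete illustration, below is a minimal sketch of an admittance-style compliance loop, where the measured wrench is mapped to a Cartesian velocity command. The names read_wrench and send_velocity are hypothetical placeholders for a robot/ROS interface, and the gains are illustrative, not tuned values.

```python
import numpy as np

# Admittance-control sketch: the measured wrench (force/torque) is
# mapped to a Cartesian velocity command, yielding compliant behavior.
# read_wrench() and send_velocity() are hypothetical placeholders for
# a robot/ROS interface; all gains are illustrative.

DT = 0.05                                      # 20 Hz loop, the rate above
ADMITTANCE = np.diag([0.01] * 3 + [0.05] * 3)  # wrench -> twist gains

def compliance_step(read_wrench, send_velocity, deadband=2.0):
    """One cycle of the compliance loop, to be called every DT seconds."""
    wrench = read_wrench()                     # [fx, fy, fz, tx, ty, tz]
    wrench[np.abs(wrench) < deadband] = 0.0    # suppress sensor noise
    twist = ADMITTANCE @ wrench                # compliant velocity response
    send_velocity(twist)                       # 6-vector twist command
```

A faster loop (hundreds of Hz) would allow stiffer and more responsive compliant behavior, which is why the 20 Hz limit mentioned above is an obstacle.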

2. Vision-based robot calibration

Vision-based robot calibration estimates/identifies parameters of robots and mechanisms, e.g. dimensions, angles, and coordinate transforms, from indirect measurements taken by cameras and 3D scanners. The problem is highly non-linear and often leads to solving hard polynomial problems, or to polynomial optimization of very non-linear functions with many local extrema. Vision and laser range-finding technology makes it possible to simplify and speed up traditional calibration, which measures mechanism poses directly with expensive laser-based measuring devices. Moreover, vision-based robot calibration can work online and be used to monitor the state of robots during their life cycle. We will investigate and implement new methods improving vision-based calibration using modern tools from algebraic geometry and polynomial optimization in combination with machine learning techniques. Our results will also find application in task T2.d in the area of autonomous driving. We will also build on top of the background knowledge provided by the IMPACT project (led by J. Šivic).
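
One standard building block of such calibration is hand-eye calibration: estimating the camera-to-gripper transform from paired robot and camera poses. A minimal sketch using OpenCV's calibrateHandEye follows; collecting the pose pairs (gripper-to-base from the robot controller, target-to-camera from e.g. cv2.solvePnP on a chessboard) is assumed to happen beforehand.

```python
import cv2

def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Estimate the camera-to-gripper transform (eye-in-hand setup).

    Each argument is a list of 3x3 rotation matrices or 3x1 translation
    vectors, one entry per robot pose: gripper-to-base poses come from
    the robot controller, target-to-camera poses from e.g. cv2.solvePnP
    on a calibration target observed by the wrist-mounted camera.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,  # Tsai's classical linear method
    )
    return R_cam2gripper, t_cam2gripper
```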

3. Tactile/vision-based servoing for robots

Perceptually rich servoing makes up the first half of robot intelligence. The second half usually lies in the perception/knowledge/representation of the outside world and uncertain reasoning over it. This subtask will add new research to our rich experience with both halves, mainly in vision. We have recently gained experience with tactile feedback as well. The design and development of a gripper with sensors tuned for the particular application is also needed and will be dealt with. The research should provide modules that will serve as building blocks for potential industrial applications, both for robotic manipulators and for autonomous vehicles on the factory shop floor. The fusion of information from different sensory sources will be dealt with too, in cooperation with the UTIA-CAS partner, among others.
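
As an illustration of the vision half, here is a minimal sketch of classical image-based visual servoing over point features, i.e. the control law v = -λ L⁺ e. Feature tracking, depth estimation, and the robot interface are assumed to be provided elsewhere; the gain is illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    relating the camera twist to the image-point velocity (classical IBVS)."""
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_step(points, desired, depths, gain=0.5):
    """One visual-servoing step: camera twist v = -gain * pinv(L) @ e."""
    e = (points - desired).ravel()           # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -gain * np.linalg.pinv(L) @ e     # 6-vector camera twist
```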

4. Human-robot collaboration

Assuming force/torque compliant robots (which we already have and will improve), we will endow robots with other perceptually based feedback/servo loops such as vision and speech (see a separate subtask for speech). The task will research how human-robot collaboration can be applied in industrial scenarios and boost them. This subtask will also provide situation-awareness tools concerning humans and other participants on the factory shop floor, e.g. autonomous vehicles delivering tools/materials. This situation awareness will be achieved through vision, range finding, touch, sound, etc. We will create a representation of the robot's surroundings, maintain it, and update it perceptually. We will use the representation for mission planning, safety considerations, etc.
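
A minimal sketch of one possible form of such a representation follows: a 2D occupancy grid over the shop floor, updated from planar positions of detected humans/vehicles, with exponential decay so that stale evidence fades. The grid resolution, decay rate, and interface are illustrative assumptions, not a settled design.

```python
import numpy as np

class WorkspaceGrid:
    """Perceptually updated 2D occupancy grid of the robot's surroundings."""

    def __init__(self, size_m=10.0, cell_m=0.1, decay=0.9):
        n = int(size_m / cell_m)
        self.occ = np.zeros((n, n))   # occupancy evidence per cell
        self.cell = cell_m
        self.decay = decay            # forgetting factor per update

    def update(self, detections_xy):
        """Fade old evidence, then mark currently detected positions (m)."""
        self.occ *= self.decay
        for x, y in detections_xy:
            i, j = int(y / self.cell), int(x / self.cell)
            if 0 <= i < self.occ.shape[0] and 0 <= j < self.occ.shape[1]:
                self.occ[i, j] = 1.0

    def is_clear(self, x, y, radius_m=0.5, threshold=0.5):
        """Safety query: is a disc around (x, y) free of recent evidence?"""
        r = int(radius_m / self.cell)
        i, j = int(y / self.cell), int(x / self.cell)
        patch = self.occ[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
        return bool((patch < threshold).all())
```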

5. Teaching robots more complicated manipulation skills/behaviors using imitation learning

The desired skills/behaviors could be shown by a human instructor or learned from instruction videos, e.g. from the rich sources on YouTube. Machine vision and computer vision tools will be used to extract the relevant information and represent it in the industrial context. We will build on the background knowledge the IMPACT project (led by J. Šivic) keeps bringing to the CTU-CIIRC team. Interplay with the subtask dealing with speech is assumed too.
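
For concreteness, a minimal behavior-cloning sketch is given below; it is the simplest instance of imitation learning, regressing a policy from (state, action) pairs. Extracting such pairs from demonstrations or instruction videos is the actual research content here, so random stand-in data are used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Behavior cloning: fit a policy mapping observed states to the
# instructor's actions. States/actions are random stand-ins for
# features and commands extracted from demonstration videos.
rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 16))              # stand-in visual features
actions = 0.5 * states[:, :4] + rng.normal(scale=0.01, size=(1000, 4))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(states, actions)                       # imitate the demonstrations
print(policy.predict(states[:1]))                 # action for a new state
```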

6. Industrial vision and robot-based quality check

The subtask explores computer vision and machine learning methods for quality inspection in industry, including the use of embedded systems. The subtask is led by the VUT-FEKT team. Demands on visual industrial inspection systems grow significantly year by year as new technologies emerge. Standard methods, which define the problem explicitly and solve it by a series of image processing steps and classifiers, are suitable only for a limited set of clearly defined tasks. One newly emerged class of tasks is the one in which only OK objects are known and available, and nobody can exactly define an NOK object. Such tasks are called anomaly detection, and machine learning approaches are often employed to solve them. In this subtask, we will focus on approaches that solve such ill-conditioned problems effectively, especially their extremely time-demanding training phase. Our aim is to tune relevant use cases for Škoda Auto, which is our biggest industrial partner in this task.
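
A minimal sketch of the OK-only setting follows, using a one-class model from scikit-learn: the model is fit on feature vectors of known-good parts only, and outliers are flagged as NOK candidates. Feature extraction from inspection images (e.g. texture descriptors or CNN embeddings) is assumed to happen upstream; the data and hyperparameters here are illustrative stand-ins.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
ok_features = rng.normal(size=(500, 32))   # stand-in features of OK parts

# Train on OK samples only; nu bounds the fraction of OK samples
# allowed to fall outside the learned boundary.
model = OneClassSVM(nu=0.05, gamma="scale").fit(ok_features)

test = rng.normal(size=(10, 32))
test[0] += 6.0                             # injected anomaly
print(model.predict(test))                 # -1 marks NOK candidates, +1 OK
```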

7. Thermal imaging in industrial context

The subtask is led by the UTIA-CAS team. It is devoted to automatic visual inspection, recognition, and quality control of industrial components based on IR thermal imaging. Two application domains are considered. The first is the usual conveyor-belt inspection in a factory. The second relates to contactless inspection in the electricity distribution network domain, e.g. of power lines (wires) and circuit breakers. The methodology will be based on the expertise of the UTIA-CAS team in multimodal image processing and data fusion. The functionality of the proposed solution will be enriched by database-oriented tools. These will enable the integration of diagnosed elements into asset management and risk-oriented maintenance concepts. Database tools will also allow significantly better fault trending and more reliable use of the residual life of the tested elements.
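
As a small illustration of the thermal modality, here is a hotspot-detection sketch assuming a radiometrically calibrated frame in kelvin; a real system would rely on learned models and the database tools described above, and the margin and frame here are stand-in assumptions.

```python
import numpy as np

# Flag pixels whose temperature exceeds the scene median by a margin.
rng = np.random.default_rng(1)
frame_kelvin = rng.normal(300.0, 2.0, size=(240, 320))  # stand-in IR frame
frame_kelvin[100:110, 150:160] += 25.0                  # injected hot spot

margin = 15.0                                           # assumed threshold (K)
mask = frame_kelvin > (np.median(frame_kelvin) + margin)
ys, xs = np.nonzero(mask)
if ys.size:
    print(f"hotspot pixels: {ys.size}, "
          f"centroid: ({xs.mean():.0f}, {ys.mean():.0f})")
```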

8. Perceptually rich application scenarios

This subtask will build application-relevant scenarios and implement them. A demonstration will be the outcome of each scenario. We will present the demonstrations at the end of each project year. The feedback will serve to improve existing scenarios. Each year, a new scenario (or scenarios) will be added. Scenarios will serve as the interface to other tasks of this project and, mainly, to potential industrial partners. Demonstrations will also help us understand safety issues, including the ability to embed them into industrial realizations.

9. Speech and language feedback

The subtask, developed mainly by the UWB team, uses machine learning and artificial intelligence tools for human-machine communication in natural spoken and sign languages. Towards this general goal, we will deal with the following: (i) automatic conversion of spoken language to text; (ii) speech and/or sign language synthesis; (iii) voice or multimodal human-machine dialog; (iv) analysis of audio/audiovisual data and their intelligent search. These areas of speech and language technologies will be explored and developed for subsequent use in industrial and social practice, according to customer requirements. The subtask will co-play with other subtasks.
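
As an illustration of item (i) only, a minimal sketch using the open-source SpeechRecognition package as a stand-in recognizer is shown below; the subtask develops UWB's own engines. The file name is hypothetical, and the Czech language code is an assumption.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("command.wav") as source:   # hypothetical recording
    audio = recognizer.record(source)

try:
    # recognize_google uses a free web API; language cs-CZ selects Czech
    text = recognizer.recognize_google(audio, language="cs-CZ")
    print("Recognized:", text)
except sr.UnknownValueError:
    print("Speech was not intelligible")
```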

10. Autonomous driving application area

The background knowledge gained in several projects by CTU-CIIRC and CTU-FTS, together with the knowledge/skills to be developed in this subtask T1.b, has direct applications to autonomously driven cars (with focus mainly on passenger cars and small utility vehicles). Subtask T1.b will contribute to the wider effort of CTU-FTS and Škoda Auto in this direction, both in other subtasks of this project and in other projects.


Responsible: Václav Hlaváč; Last modification 04.12.2019 01:35