CN111216133A - Robot demonstration programming method based on fingertip identification and hand motion tracking - Google Patents

Robot demonstration programming method based on fingertip identification and hand motion tracking

Info

Publication number
CN111216133A
Authority
CN
China
Prior art keywords
image
robot
teaching
histogram
demonstration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010080638.6A
Other languages
Chinese (zh)
Other versions
CN111216133B (en)
Inventor
雷渠江
徐杰
李秀昊
梁波
刘纪
刘俊豪
李致豪
王卫军
韩彰秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Institute of Advanced Technology of CAS
Original Assignee
Guangzhou Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Institute of Advanced Technology of CAS filed Critical Guangzhou Institute of Advanced Technology of CAS
Priority to CN202010080638.6A priority Critical patent/CN111216133B/en
Publication of CN111216133A publication Critical patent/CN111216133A/en
Application granted granted Critical
Publication of CN111216133B publication Critical patent/CN111216133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671 Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot demonstration programming method based on fingertip identification and hand motion tracking, which relates to the technical field of machine learning and comprises the following steps. Step 1: completing the establishment of a hardware environment of the demonstration programming system, and completing a teaching task by human-hand demonstration; step 2: recognizing the position and posture of the human hand during teaching through machine vision, calculating the teaching motion trajectory of the hand in the robot coordinate system through coordinate transformation, and smoothing the teaching path by a particle filter algorithm; step 3: recognizing human gestures through machine vision, and planning different actions according to the gestures or using them to indicate the beginning and end of the demonstration process; step 4: the robot reproduces the teaching path and human gestures obtained through visual recognition to complete the teaching task with the same posture. The invention provides a simple and efficient method for teaching industrial robots, lowers the barrier to using industrial robots, and greatly improves programming efficiency.

Description

Robot demonstration programming method based on fingertip identification and hand motion tracking
Technical Field
The invention relates to the field of machine learning, in particular to a robot demonstration programming method based on fingertip identification and hand motion tracking.
Background
Industrial robots now perform tasks such as loading and unloading, handling, welding, painting and grinding in factories all over the world. Compared with human workers, robots offer higher precision, lower error rates and better repeatability, and can run continuously in harsh environments, so industrial robots are increasingly becoming an indispensable part of industrial production. In a factory, because the initial position and the target position of each part are fixed, a robot can complete its task quickly and reliably; however, once it leaves the preset environment, most robots cannot work normally. In addition, an operator needs certain programming skills to operate an industrial robot, which makes it difficult for ordinary workers to use robots and hinders their application in small and medium-sized enterprises.
The inventor finds that the industrial robot programming method in the prior art is mainly teach-pendant programming. Teaching programming has the advantages of a low programming threshold, simplicity, convenience and no need for an environment model, but the online teaching process is tedious and inefficient, its precision depends entirely on the visual judgment of the demonstrator, and it is difficult to achieve satisfactory results when teaching complicated paths online. In addition, accidents easily occur during teaching: minor ones damage equipment, and serious ones injure people.
In view of the above, in order to lower the barrier to using industrial robots and to improve programming efficiency, it is necessary to provide a new robot demonstration programming method based on fingertip recognition and hand motion tracking.
Disclosure of Invention
In view of the above, it is necessary to provide a robot demonstration programming method based on fingertip identification and hand motion tracking, in which a teaching task is completed by human-hand demonstration, the teaching path is obtained by an algorithm, and the robot reproduces the visually recognized teaching path to complete the task. In this way the robot gains the ability to learn new skills from a very small number of trials, safety during learning is ensured, a demonstrator can easily add new tasks, and the system can quickly adapt to changes in the environment, the task and the robot.
In order to realize the purpose, the invention is realized according to the following technical scheme:
a robot demonstration programming method based on fingertip identification and hand motion tracking comprises the following steps:
step 1: completing the establishment of a hardware environment of a demonstration programming system, and manually demonstrating to complete a teaching task;
step 2: recognizing the position and the posture of a human hand in the teaching process through machine vision, calculating a teaching motion track of the human hand in a robot coordinate system through coordinate transformation, and smoothing a teaching path by utilizing a particle filter algorithm;
step 3: recognizing human gestures through machine vision, and planning different actions according to the gestures or using them to indicate the beginning and end of the demonstration process;
step 4: the robot reproduces the teaching path and human gestures obtained through visual recognition to complete the teaching task with the same posture.
Further, in step 1, the hardware environment building comprises:
constructing the demonstration programming system, wherein the demonstration programming system comprises a Kinect depth camera, a notebook computer running ROS on Ubuntu 16.04, and a UR5 robot;
the Kinect depth camera is used as a visual sensor to collect visual data;
the notebook computer is connected with the Kinect camera and the UR5 robot to complete the training of the finger motion track capture algorithm.
Further, the step 2 specifically includes the following steps:
step 21: after acquiring a gesture image of a demonstrator, finishing finger segmentation by adopting a segmentation method based on a skin color histogram;
step 22: tracking the position of a hand in an image in the teaching process through a particle filter algorithm to obtain an ROI (region of interest) of the hand image;
step 23: unifying the position and posture information of the teaching points, namely the fingertip points, into the robot coordinate system.
Further, in step 21, the segmentation method of the skin color histogram specifically includes:
step 211: converting skin color from RGB color space to HSV color space, wherein the formula adopted by the color space conversion is as follows:
h = \begin{cases} 0^{\circ}, & \max=\min \\ 60^{\circ}\times\frac{g-b}{\max-\min}, & \max=r,\ g\ge b \\ 60^{\circ}\times\frac{g-b}{\max-\min}+360^{\circ}, & \max=r,\ g<b \\ 60^{\circ}\times\frac{b-r}{\max-\min}+120^{\circ}, & \max=g \\ 60^{\circ}\times\frac{r-g}{\max-\min}+240^{\circ}, & \max=b \end{cases}    (1)
s = \begin{cases} 0, & \max=0 \\ \frac{\max-\min}{\max}, & \text{otherwise} \end{cases}    (2)
v = \max    (3)
wherein h represents the hue of the color, s represents the saturation of the color, v represents the value (brightness) of the color, (r, g, b) are the red, green and blue coordinates of the color, respectively, and max and min are the maximum and minimum values of r, g and b, respectively;
step 212: the method comprises the steps of acquiring a skin color sample from a hand of a demonstrator, creating a histogram in a skin color area as a sample of a detection target, and carrying out equalization processing on the histogram;
wherein the histogram equalization process comprises the steps of:
calculating an accumulated histogram;
carrying out interval conversion on the accumulated histogram;
the calculation formula of the histogram equalization processing is as follows:
s_k = \frac{L-1}{N}\sum_{j=0}^{k} n_j, \quad k = 0, 1, \dots, L-1    (4)
in the formula, s_k represents the new gray level after equalization, L represents the number of gray levels of the image, n_j represents the number of pixels of the j-th gray level in the image, and N represents the total number of pixels;
step 213: using the equalized histogram, image segmentation is performed on a histogram region having the same expression as the sample through a histogram back projection function, and the gray histogram of the finger image shows two peaks: one is a finger as a foreground, the other is a background, and a trough gray value is taken as a segmentation threshold value to effectively segment the foreground and the background; the calculation formula is as follows:
f(x, y) = \begin{cases} F(x, y), & F(x, y) \ge Th_f \\ 0, & F(x, y) < Th_f \end{cases}    (5)
wherein F(x, y) is the gray value of the finger image at pixel (x, y), f(x, y) represents the segmented finger image, and Th_f is the segmentation threshold.
Further, in step 22, the particle filtering algorithm specifically includes:
step 221: calculating the probability density of the target area:
selecting a block of area in the image as the target by a manual labeling method, wherein the size of the target range is consistent with that of the tracking area, and the width and height are h_x and h_y respectively (referred to as the kernel bandwidth h); the region contains n pixels, denoted {z_i(x_i, y_i)}, i = 1, 2, ..., n;
respectively calculating three color component histograms of a target image HSV color space, wherein each histogram has 8 intervals;
the probability density function of the target region {q_u}_{u=1,\dots,m} can be expressed as:
q_u = C \sum_{i=1}^{n} K\!\left(\left\|\frac{z_i - z_0}{h}\right\|^{2}\right)\delta\left[b(z_i)-u\right]    (6)
wherein C is a normalization coefficient, z_0 is the central pixel coordinate vector of the target region, \left\|\frac{z_i - z_0}{h}\right\| represents the normalized distance of the pixel point z_i(x_i, y_i) from the target center z_0(x_0, y_0), b(z_i) denotes the histogram bin associated with the pixel value of z_i, u is the color index of the histogram, K(\cdot) represents the Epanechnikov kernel, and \delta\left[b(z_i)-u\right] can be defined as:
\delta\left[b(z_i)-u\right] = \begin{cases} 1, & b(z_i)=u \\ 0, & b(z_i)\ne u \end{cases}    (7)
step 222: particle set description and system state transition:
the particle set model is defined as:
s = \left[x,\ y,\ \dot{x},\ \dot{y},\ H_x,\ H_y,\ a\right]^{T}    (8)
wherein x and y are the central point position of the target, \dot{x} and \dot{y} are respectively the velocities of the target in the x and y directions, H_x and H_y are respectively the width and height of the target, a is a scale factor, and T represents transposition;
the system state transition refers to an updating process of a target state changing along with time, and a system state transition equation is expressed as follows:
s_k = A s_{k-1} + e_{k-1}    (9)
wherein A is a state transition matrix, ek-1Is the noise of the system;
step 223: system observation and state estimation:
the system state, i.e., the position output of the target, is expressed as follows:
E(s_k) = \sum_{i=1}^{N} w_k^{i}\, s_k^{i}    (10)
in the formula, E(s_k) represents the state of the system at time k estimated from the particle set \{s_k^{i}\}_{i=1,\dots,N}, and w_k^{i}
represents the weight of each particle at time k, which is defined as follows:
w_k^{i} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{1-\rho\left(p^{i},q\right)}{2\sigma^{2}}\right), \qquad \rho\left(p^{i},q\right) = \sum_{u=1}^{m}\sqrt{p_u^{i}\,q_u}    (11)
in the formula, p^{i} = \{p_u^{i}\}_{u=1,\dots,m} is the region probability density function represented by each particle, defined analogously to equation (6) over the candidate region of that particle:
p_u^{i} = C \sum_{j=1}^{n} K\!\left(\left\|\frac{z_j - s_k^{i}}{h}\right\|^{2}\right)\delta\left[b(z_j)-u\right]    (12)
step 224: target update and resampling: when the particle filter is updated in an iteration mode, the phenomenon of particle degradation is easy to occur, and resampling is needed;
the resampling process is as follows:
if the weights w_{k+1}^{i} of some particles at time k+1 are too small and their number reaches a specified threshold, particles with large weights are duplicated to replace those small-weight particles;
each frame of the video is then processed offline through color segmentation in the HSV color space to obtain the position of the actual target region in the image coordinate system.
Further, the step 23 specifically includes the following steps:
step 231: and completing the conversion between the image plane coordinate system and the fingertip pixel coordinate system, wherein the conversion equation is as follows:
x = (u - C_x)\, z / f_x,\qquad y = (v - C_y)\, z / f_y,\qquad z = d / s    (13)
where (x, y, z) represents the spatial point corresponding to the image point, (u, v, d) represents the fingertip pixel coordinate point with its depth value d, f_x and f_y denote the focal lengths of the camera along the x-axis and y-axis, C_x and C_y are the aperture center of the camera, and s is the scale factor of the depth image;
step 232: and completing the conversion between the camera coordinate system and the image plane coordinate system, wherein the conversion equation is as follows:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}    (14)
wherein [u, v] are the coordinates of the point in the image coordinate system, [x, y] are the coordinates of the point in the camera coordinate system, (d_x, d_y) is the actual size of a pixel on the camera, which links the pixel coordinate system with the actual coordinate system, and (u_0, v_0) represents the center of the image plane;
step 233: and completing the conversion between the robot coordinate system and the camera coordinate system, wherein the conversion equation is as follows:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},\qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = M_{ex}\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}    (15)
wherein [u, v] are the coordinates of the point in the image coordinate system, [X_c, Y_c, Z_c] are the coordinates of the point in the camera coordinate system, [X_r, Y_r, Z_r] are the coordinates of the point in the robot coordinate system, M_{in} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} denotes the camera intrinsic parameters, and M_{ex} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} denotes the camera extrinsic parameters, i.e. the rigid transformation from the robot coordinate system to the camera coordinate system.
Further, step 3 specifically includes the following steps:
step 31: acquisition of gesture data sets: carrying out skin-color filtering on the pixels in a specific gesture region captured by the Kinect camera, obtaining 300 gray-scale images with an image size of 32 x 32 px for each gesture, and storing them, with the current time as the file name, into a folder named after the gesture command; for each acquired gesture picture, generating 20 transformed pictures by image transformation so as to improve the generalization capability of the model;
step 32: structural design and training of a convolutional neural network: training the collected gesture data set by adopting a LeNet-5 convolutional neural network structure;
step 33: integration with ROS framework:
after the training of the convolutional neural network is completed, integrating a training result with an ROS framework to enable the network to exist in the ROS framework in an ROS node mode so as to control the motion behavior of the robot in the step 4 and change the running state of the system;
converting the gesture image into an OpenCV image format through cv_bridge, and performing skin color filtering and Gaussian blur processing on the image in a specific area to obtain a picture with the same format as the gesture data set;
predicting the processed gesture image by using a prediction function, and determining the gesture of the demonstrator by repeating the same gesture;
the start and stop of trajectory teaching, as well as trajectory planning and execution, are controlled through gestures; this removes the need to click a mouse and at the same time greatly improves the friendliness to the demonstrator and the efficiency of teaching programming.
Preferably, in step 31, the gestures total 6 types: start, pause, end, plan, execute, restart; the image transformation modes include rotation, translation, shearing, flipping and symmetry.
Preferably, in step 32, the LeNet-5 convolutional neural network structure includes an input layer, a convolutional layer C1, a pooling layer S2, a convolutional layer C3, a pooling layer S4, a convolutional layer C5, a fully-connected layer F6, and a fully-connected layer OUTPUT; wherein:
the input layer of the LeNet-5 convolutional neural network structure: the input image is a single-channel 32 x 32 size binary image, represented by a matrix as [1,32,32 ];
the convolution layer C1 of the LeNet-5 convolutional neural network structure: the convolution kernel size used by convolution layer C1 is 5 x 5, the sliding step is 1, the number of convolution kernels is 6, the image size after passing this layer becomes 28 x 28, and the output matrix is [6,28,28];
the pooling layer of the LeNet-5 convolutional neural network structure S2: pooling layer S2 kernel size was 2 × 2, step 2, after pooling operation, image size was halved to 14 × 14, output matrix was [6,14,14 ];
the convolution layer C3 of the LeNet-5 convolution neural network structure: using 60 filters of 5 × 5, 16 sets of feature maps with size 10 × 10 are obtained, and the output matrix is [16,10,10 ];
the pooling layer of the LeNet-5 convolutional neural network structure S4: pooling layer S4 kernel size was 2 × 2, step 2, after pooling operation, image size was halved to 5 × 5, output matrix was [16,5,5 ];
the convolution layer C5 of the LeNet-5 convolutional neural network structure: using 120 × 16 = 1920 filters of 5 × 5, 120 feature maps with the size of 1 × 1 are obtained, and the output matrix is [120,1,1];
the full-connection layer F6 of the LeNet-5 convolutional neural network structure: there are 84 neurons, and 84 groups of feature maps with the size of 1 × 1 are obtained, and the output matrix is [84,1,1 ];
and the full connection layer OUTPUT of the LeNet-5 convolutional neural network structure is as follows: the probability of the classification result is obtained by 10 Euclidean radial basis functions.
Preferably, in step 33, the ROS node processes the received gesture image in real time, identifies the image through a convolutional neural network, and issues the identification result to the ROS framework in the form of an ROS topic.
Further, the step 4 specifically includes the following steps:
step 41: filtering the position of the teaching path by adopting a median filter to obtain a smooth teaching path;
step 42: demonstrating a programming track by a demonstrator through fingertips in a robot working area, and shooting the whole teaching process by a camera;
step 43: after the demonstration of the teaching process is completed, the acquired teaching point information is converted, and the robot completes corresponding tasks in the same angle posture through control software.
Preferably, in step 41, the median filtering process includes:
step 411: for the sequence of N discrete points {p_i(x_i, y_i, z_i)}_{i=1,\dots,N} on the teaching path, the length of the filtering window is selected as L = 2l + 1, where l is a positive integer; the teaching points in a window on the teaching path are p_{i-l}(x_{i-l}, y_{i-l}, z_{i-l}), \dots, p_i(x_i, y_i, z_i), \dots, p_{i+l}(x_{i+l}, y_{i+l}, z_{i+l}), wherein p_i(x_i, y_i, z_i) is the teaching point at the central position of the window;
step 412: the x, y and z coordinate values of the teaching points in the window are each sorted from small to large to obtain the median values (x_{med}, y_{med}, z_{med}); the median-filtered output value p_i'(x_i, y_i, z_i) corresponding to p_i is defined as follows:
p_i' = (x_{med},\ y_{med},\ z_{med})    (16)
Compared with the prior art, the invention has at least the following advantages and positive effects:
in the teaching process, the position and posture information of the human fingertip is acquired by machine vision and converted into a robot program capable of reproducing the task, so that the robot is controlled to reproduce the task; offline teaching of the robot is realized through coordinate transformation and path filtering and smoothing, which provides a simple and efficient method for teaching industrial robots and lowers the barrier to their use; robot demonstration programming replaces the traditional programming process with task demonstration, which greatly improves programming efficiency, simplifies robot reprogramming, shortens the time from reprogramming to productive use, and is of great significance for the popularization and application of robots.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a robot demonstration programming method based on fingertip identification and hand motion tracking according to the present invention;
FIG. 2 is a robotic demonstration programming system provided in accordance with one embodiment of the present invention;
FIG. 3 is a flow chart of a particle filter target tracking algorithm according to an embodiment of the present invention;
FIG. 4 is a diagram of a gesture set according to an embodiment of the present invention;
fig. 5 is a diagram of a LeNet-5 model according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It should be noted that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art without any inventive work based on the embodiments of the present invention belong to the protection scope of the present invention.
Examples
FIG. 1 provides a flow chart of a robot demonstration programming method of the invention based on fingertip identification and hand motion tracking; FIG. 2 provides a robotic demonstration programming system of the present invention; the programming principle and hardware basis of the present invention can be understood on the basis of fig. 1 and 2.
As shown in fig. 1, the robot demonstration programming method based on fingertip identification and hand motion tracking of the present invention comprises the following steps:
step 1: completing the establishment of a hardware environment of a demonstration programming system, and manually demonstrating to complete a teaching task;
step 2: recognizing the position and the posture of a human hand in the teaching process through machine vision, calculating a teaching motion track of the human hand in a robot coordinate system through coordinate transformation, and smoothing a teaching path by utilizing a particle filter algorithm;
step 3: recognizing human gestures through machine vision, and planning different actions according to the gestures or using them to indicate the beginning and end of the demonstration process;
step 4: the robot reproduces the teaching path and human gestures obtained through visual recognition to complete the teaching task with the same posture.
As shown in fig. 1, the specific operation flow of the present invention can be described as follows: after the hardware environment is built, demonstrating a teaching task by hands in a real scene, and acquiring a color image by a depth camera; after acquiring a gesture image of a demonstrator, completing finger segmentation by adopting a segmentation method based on a skin color histogram, tracking the position of a hand in the image in the teaching process by a particle filter algorithm, acquiring an ROI (region of interest) of the hand image, and then converting a coordinate system of teaching points, namely the positions and posture information of fingertips, into a robot coordinate system; and then, filtering the position of the teaching path by adopting a median filter to obtain a smooth teaching path, demonstrating a programming track by using fingertips in a robot working area and shooting and recording the programming track by using a camera by a demonstrator, and controlling the robot to complete corresponding tasks in the same-angle posture after the demonstration is completed.
It should be noted that other method flows or operation modes obtained without inventive work on the basis of the present invention are also included in the protection scope of the present invention without departing from the principle and spirit of the present invention.
As shown in fig. 2, further, in step 1, the hardware environment building includes:
constructing the demonstration programming system, wherein the demonstration programming system comprises a Kinect depth camera, a notebook computer running ROS on Ubuntu 16.04, and a UR5 robot;
the Kinect depth camera is used as a visual sensor to collect visual data;
the notebook computer is connected with the Kinect camera and the UR5 robot to complete the training of the finger motion track capture algorithm.
Further, the step 2 specifically includes the following steps:
step 21: after acquiring a gesture image of a demonstrator, finishing finger segmentation by adopting a segmentation method based on a skin color histogram;
step 22: tracking the position of a hand in an image in the teaching process through a particle filter algorithm to obtain an ROI (region of interest) of the hand image;
step 23: unifying the position and posture information of the teaching points, namely the fingertip points, into the robot coordinate system.
Further, in step 21, the segmentation method of the skin color histogram specifically includes:
step 211: converting skin color from RGB color space to HSV color space, wherein the formula adopted by the color space conversion is as follows:
h = \begin{cases} 0^{\circ}, & \max=\min \\ 60^{\circ}\times\frac{g-b}{\max-\min}, & \max=r,\ g\ge b \\ 60^{\circ}\times\frac{g-b}{\max-\min}+360^{\circ}, & \max=r,\ g<b \\ 60^{\circ}\times\frac{b-r}{\max-\min}+120^{\circ}, & \max=g \\ 60^{\circ}\times\frac{r-g}{\max-\min}+240^{\circ}, & \max=b \end{cases}    (1)
s = \begin{cases} 0, & \max=0 \\ \frac{\max-\min}{\max}, & \text{otherwise} \end{cases}    (2)
v = \max    (3)
wherein h represents the hue of the color, s represents the saturation of the color, v represents the value (brightness) of the color, (r, g, b) are the red, green and blue coordinates of the color, respectively, and max and min are the maximum and minimum values of r, g and b, respectively;
step 212: the method comprises the steps of acquiring a skin color sample from a hand of a demonstrator, creating a histogram in a skin color area as a sample of a detection target, and carrying out equalization processing on the histogram;
wherein the histogram equalization process comprises the steps of:
calculating an accumulated histogram;
carrying out interval conversion on the accumulated histogram;
the calculation formula of the histogram equalization processing is as follows:
s_k = \frac{L-1}{N}\sum_{j=0}^{k} n_j, \quad k = 0, 1, \dots, L-1    (4)
in the formula, s_k represents the new gray level after equalization, L represents the number of gray levels of the image, n_j represents the number of pixels of the j-th gray level in the image, and N represents the total number of pixels;
step 213: using the equalized histogram, image segmentation is performed on a histogram region having the same expression as the sample through a histogram back projection function, and the gray histogram of the finger image shows two peaks: one is a finger as a foreground, the other is a background, and a trough gray value is taken as a segmentation threshold value to effectively segment the foreground and the background; the calculation formula is as follows:
f(x, y) = \begin{cases} F(x, y), & F(x, y) \ge Th_f \\ 0, & F(x, y) < Th_f \end{cases}    (5)
wherein F(x, y) is the gray value of the finger image at pixel (x, y), f(x, y) represents the segmented finger image, and Th_f is the segmentation threshold.
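By way of illustration only, the skin-colour histogram segmentation of steps 211 to 213 could be sketched in Python with OpenCV roughly as follows. This is a minimal sketch and not the claimed method itself: the sample-region coordinates and the threshold value are hypothetical placeholders, and histogram normalization is used here as a stand-in for the equalization step described above.

    import cv2
    import numpy as np

    def segment_finger(frame_bgr, skin_sample_rect=(100, 100, 50, 50), thresh=50):
        """Segment the finger region via a skin-colour histogram and back projection
        (sketch of steps 211-213)."""
        # Step 211: convert the image from the RGB/BGR colour space to HSV.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

        # Step 212: take a skin-colour sample region from the demonstrator's hand
        # and build its hue-saturation histogram as the detection-target sample.
        x, y, w, h = skin_sample_rect
        sample = hsv[y:y + h, x:x + w]
        hist = cv2.calcHist([sample], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

        # Step 213: back-project the histogram onto the whole image and threshold
        # the resulting gray image to separate the finger from the background.
        backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
        backproj = cv2.GaussianBlur(backproj, (5, 5), 0)
        _, mask = cv2.threshold(backproj, thresh, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)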
As shown in fig. 3, further, in step 22, the particle filtering algorithm specifically includes:
step 221: calculating the probability density of the target area:
selecting a block of area in the image as the target by a manual labeling method, wherein the size of the target range is consistent with that of the tracking area, and the width and height are h_x and h_y respectively (referred to as the kernel bandwidth h); the region contains n pixels, denoted {z_i(x_i, y_i)}, i = 1, 2, ..., n;
respectively calculating three color component histograms of a target image HSV color space, wherein each histogram has 8 intervals;
the probability density function of the target region {q_u}_{u=1,\dots,m} can be expressed as:
q_u = C \sum_{i=1}^{n} K\!\left(\left\|\frac{z_i - z_0}{h}\right\|^{2}\right)\delta\left[b(z_i)-u\right]    (6)
wherein C is a normalization coefficient, z_0 is the central pixel coordinate vector of the target region, \left\|\frac{z_i - z_0}{h}\right\| represents the normalized distance of the pixel point z_i(x_i, y_i) from the target center z_0(x_0, y_0), b(z_i) denotes the histogram bin associated with the pixel value of z_i, u is the color index of the histogram, K(\cdot) represents the Epanechnikov kernel, and \delta\left[b(z_i)-u\right] can be defined as:
\delta\left[b(z_i)-u\right] = \begin{cases} 1, & b(z_i)=u \\ 0, & b(z_i)\ne u \end{cases}    (7)
step 222: particle set description and system state transition:
the particle set model is defined as:
s = \left[x,\ y,\ \dot{x},\ \dot{y},\ H_x,\ H_y,\ a\right]^{T}    (8)
wherein x and y are the central point position of the target, \dot{x} and \dot{y} are respectively the velocities of the target in the x and y directions, H_x and H_y are respectively the width and height of the target, a is a scale factor, and T represents transposition;
the system state transition refers to an updating process of a target state changing along with time, and a system state transition equation is expressed as follows:
s_k = A s_{k-1} + e_{k-1}    (9)
wherein A is a state transition matrix, ek-1Is the noise of the system;
step 223: system observation and state estimation:
the system state, i.e., the position output of the target, is expressed as follows:
E(s_k) = \sum_{i=1}^{N} w_k^{i}\, s_k^{i}    (10)
in the formula, E(s_k) represents the state of the system at time k estimated from the particle set \{s_k^{i}\}_{i=1,\dots,N}, and w_k^{i}
represents the weight of each particle at time k, which is defined as follows:
w_k^{i} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{1-\rho\left(p^{i},q\right)}{2\sigma^{2}}\right), \qquad \rho\left(p^{i},q\right) = \sum_{u=1}^{m}\sqrt{p_u^{i}\,q_u}    (11)
in the formula, p^{i} = \{p_u^{i}\}_{u=1,\dots,m} is the region probability density function represented by each particle, defined analogously to equation (6) over the candidate region of that particle:
p_u^{i} = C \sum_{j=1}^{n} K\!\left(\left\|\frac{z_j - s_k^{i}}{h}\right\|^{2}\right)\delta\left[b(z_j)-u\right]    (12)
step 224: target update and resampling: when the particle filter is updated in an iteration mode, the phenomenon of particle degradation is easy to occur, and resampling is needed;
the resampling process is as follows:
if the weights w_{k+1}^{i} of some particles at time k+1 are too small and their number reaches a specified threshold, particles with large weights are duplicated to replace those small-weight particles;
each frame of the video is then processed offline through color segmentation in the HSV color space to obtain the position of the actual target region in the image coordinate system.
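For illustration, a highly simplified sketch of the particle-filter tracking loop of steps 221 to 224 is given below. It assumes a hue histogram as the colour feature and a Gaussian weighting of the Bhattacharyya distance, the particle count and noise levels are hypothetical, and the target model is not updated over time, so it is only a sketch of the general technique rather than the patented implementation.

    import cv2
    import numpy as np

    def hue_hist(hsv, cx, cy, hx, hy, bins=8):
        """Hue histogram of the region centred at (cx, cy) with half-size (hx, hy)."""
        h, w = hsv.shape[:2]
        cx = float(np.clip(cx, hx, w - hx - 1))
        cy = float(np.clip(cy, hy, h - hy - 1))
        region = hsv[int(cy - hy):int(cy + hy), int(cx - hx):int(cx + hx)]
        hist = cv2.calcHist([region], [0], None, [bins], [0, 180])
        return cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1).flatten()

    def track(frames, init_box, n_particles=200, sigma=0.2):
        """Track the hand region over a sequence of BGR frames (steps 221-224)."""
        cx, cy, hx, hy = init_box
        q = hue_hist(cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV), cx, cy, hx, hy)
        particles = np.tile([cx, cy, 0.0, 0.0], (n_particles, 1))  # state [x, y, vx, vy]
        A = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                      [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        for frame in frames[1:]:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # state transition, cf. equation (9): s_k = A s_{k-1} + e_{k-1}
            particles = particles @ A.T + np.random.normal(0, 5, particles.shape)
            # particle weights from colour similarity, cf. equations (11)-(12)
            w = np.empty(n_particles)
            for i, (x, y, _, _) in enumerate(particles):
                p = hue_hist(hsv, x, y, hx, hy)
                rho = np.sum(np.sqrt(p * q))              # Bhattacharyya coefficient
                w[i] = np.exp(-(1.0 - rho) / (2 * sigma ** 2))
            w /= w.sum()
            # state estimate, cf. equation (10), then resampling (step 224)
            estimate = w @ particles
            idx = np.random.choice(n_particles, n_particles, p=w)
            particles = particles[idx]
            yield estimate[:2]  # tracked hand centre in the image coordinate system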
Further, the step 23 specifically includes the following steps:
step 231: and completing the conversion between the image plane coordinate system and the fingertip pixel coordinate system, wherein the conversion equation is as follows:
x = (u - C_x)\, z / f_x,\qquad y = (v - C_y)\, z / f_y,\qquad z = d / s    (13)
where (x, y, z) represents the spatial point corresponding to the image point, (u, v, d) represents the fingertip pixel coordinate point with its depth value d, f_x and f_y denote the focal lengths of the camera along the x-axis and y-axis, C_x and C_y are the aperture center of the camera, and s is the scale factor of the depth image;
step 232: and completing the conversion between the camera coordinate system and the image plane coordinate system, wherein the conversion equation is as follows:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}    (14)
wherein [u, v] are the coordinates of the point in the image coordinate system, [x, y] are the coordinates of the point in the camera coordinate system, (d_x, d_y) is the actual size of a pixel on the camera, which links the pixel coordinate system with the actual coordinate system, and (u_0, v_0) represents the center of the image plane;
step 233: and completing the conversion between the robot coordinate system and the camera coordinate system, wherein the conversion equation is as follows:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},\qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = M_{ex}\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}    (15)
wherein [u, v] are the coordinates of the point in the image coordinate system, [X_c, Y_c, Z_c] are the coordinates of the point in the camera coordinate system, [X_r, Y_r, Z_r] are the coordinates of the point in the robot coordinate system, M_{in} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} denotes the camera intrinsic parameters, and M_{ex} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} denotes the camera extrinsic parameters, i.e. the rigid transformation from the robot coordinate system to the camera coordinate system.
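The chain of coordinate transformations in steps 231 to 233 may be sketched as follows. The intrinsic values and the extrinsic (hand-eye) matrix below are hypothetical placeholders that would normally be obtained from camera calibration and hand-eye calibration; the sketch only illustrates the use of equations (13) to (15).

    import numpy as np

    # Hypothetical Kinect-like intrinsics: focal lengths, aperture centre, depth scale.
    FX, FY, CX, CY, DEPTH_SCALE = 525.0, 525.0, 319.5, 239.5, 1000.0

    # Hypothetical extrinsic matrix mapping camera-frame points to the robot base frame.
    T_ROBOT_CAM = np.array([[0.0, -1.0, 0.0, 0.40],
                            [-1.0, 0.0, 0.0, 0.00],
                            [0.0, 0.0, -1.0, 0.90],
                            [0.0, 0.0, 0.0, 1.00]])

    def pixel_to_camera(u, v, d):
        """Equation (13): fingertip pixel (u, v) with raw depth d -> camera-frame point."""
        z = d / DEPTH_SCALE
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        return np.array([x, y, z, 1.0])

    def camera_to_robot(p_cam):
        """Equation (15): homogeneous camera-frame point -> robot coordinate system."""
        return T_ROBOT_CAM @ p_cam

    # Example: a fingertip detected at pixel (320, 240) with raw depth value 850.
    p_robot = camera_to_robot(pixel_to_camera(320, 240, 850))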
Further, step 3 specifically includes the following steps:
step 31: acquisition of gesture data sets: carrying out skin-color filtering on the pixels in a specific gesture region captured by the Kinect camera, obtaining 300 gray-scale images with an image size of 32 x 32 px for each gesture, and storing them, with the current time as the file name, into a folder named after the gesture command; for each acquired gesture picture, generating 20 transformed pictures by image transformation so as to improve the generalization capability of the model;
step 32: structural design and training of a convolutional neural network: training the collected gesture data set by adopting a LeNet-5 convolutional neural network structure;
step 33: integration with ROS framework:
after the training of the convolutional neural network is completed, integrating a training result with an ROS framework to enable the network to exist in the ROS framework in an ROS node mode so as to control the motion behavior of the robot in the step 4 and change the running state of the system;
converting the gesture image into an OpenCV image format through cv_bridge, and performing skin color filtering and Gaussian blur processing on the image in a specific area to obtain a picture with the same format as the gesture data set;
predicting the processed gesture image by using a prediction function, and determining the gesture of the demonstrator by repeating the same gesture;
the start and stop of trajectory teaching, as well as trajectory planning and execution, are controlled through gestures; this removes the need to click a mouse and at the same time greatly improves the friendliness to the demonstrator and the efficiency of teaching programming.
As shown in fig. 4, in step 31 there are preferably 6 gestures: start, pause, end, plan, execute and restart (i.e. Start/Stop/Finish/Plan/Execute/Restart in the drawings); the image transformation modes include rotation, translation, shearing, flipping and symmetry.
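For illustration, the image transformations mentioned above (rotation, translation and flipping) could be generated with OpenCV roughly as follows; the parameter ranges are hypothetical, and the shearing and symmetry variants are omitted for brevity.

    import cv2
    import numpy as np

    def augment(gray32, n=20):
        """Generate n transformed copies of a 32x32 gesture image (cf. step 31)."""
        out = []
        h, w = gray32.shape
        for _ in range(n):
            angle = np.random.uniform(-15, 15)           # small random rotation
            tx, ty = np.random.randint(-3, 4, size=2)    # small random translation
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            M[:, 2] += (tx, ty)
            img = cv2.warpAffine(gray32, M, (w, h))
            if np.random.rand() < 0.5:                   # horizontal flip
                img = cv2.flip(img, 1)
            out.append(img)
        return out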
As shown in fig. 5, preferably, in step 32, the LeNet-5 convolutional neural network structure includes an input layer, a convolutional layer C1, a pooling layer S2, a convolutional layer C3, a pooling layer S4, a convolutional layer C5, a fully-connected layer F6, and a fully-connected layer OUTPUT; wherein:
the input layer of the LeNet-5 convolutional neural network structure: the input image is a single-channel 32 x 32 size binary image, represented by a matrix as [1,32,32 ];
the convolution layer C1 of the LeNet-5 convolutional neural network structure: the convolution kernel size used by convolution layer C1 is 5 x 5, the sliding step is 1, the number of convolution kernels is 6, the image size after passing this layer becomes 28 x 28, and the output matrix is [6,28,28];
the pooling layer of the LeNet-5 convolutional neural network structure S2: pooling layer S2 kernel size was 2 × 2, step 2, after pooling operation, image size was halved to 14 × 14, output matrix was [6,14,14 ];
the convolution layer C3 of the LeNet-5 convolution neural network structure: using 60 filters of 5 × 5, 16 sets of feature maps with size 10 × 10 are obtained, and the output matrix is [16,10,10 ];
the pooling layer of the LeNet-5 convolutional neural network structure S4: pooling layer S4 kernel size was 2 × 2, step 2, after pooling operation, image size was halved to 5 × 5, output matrix was [16,5,5 ];
the convolution layer C5 of the LeNet-5 convolutional neural network structure: using 120 × 16 = 1920 filters of 5 × 5, 120 feature maps with the size of 1 × 1 are obtained, and the output matrix is [120,1,1];
the full-connection layer F6 of the LeNet-5 convolutional neural network structure: there are 84 neurons, and 84 groups of feature maps with the size of 1 × 1 are obtained, and the output matrix is [84,1,1 ];
and the full connection layer OUTPUT of the LeNet-5 convolutional neural network structure is as follows: the probability of the classification result is obtained by 10 Euclidean radial basis functions.
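A sketch of a LeNet-5-style network with the layer sizes listed above is given below for illustration. It assumes TensorFlow/Keras and a channels-last layout, and it replaces the Euclidean radial basis output of the classic LeNet-5 with a standard softmax layer; it is a simplified stand-in, not the exact patented configuration.

    import tensorflow as tf

    def build_lenet5(num_classes=10):
        """LeNet-5-style network matching the layer sizes described in step 32."""
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(32, 32, 1)),                  # input [1,32,32]
            tf.keras.layers.Conv2D(6, 5, activation="tanh"),           # C1 -> [6,28,28]
            tf.keras.layers.AveragePooling2D(pool_size=2, strides=2),  # S2 -> [6,14,14]
            tf.keras.layers.Conv2D(16, 5, activation="tanh"),          # C3 -> [16,10,10]
            tf.keras.layers.AveragePooling2D(pool_size=2, strides=2),  # S4 -> [16,5,5]
            tf.keras.layers.Conv2D(120, 5, activation="tanh"),         # C5 -> [120,1,1]
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(84, activation="tanh"),              # F6
            tf.keras.layers.Dense(num_classes, activation="softmax"),  # OUTPUT
        ])

    model = build_lenet5()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])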
Preferably, in step 33, the ROS node processes the received gesture image in real time, identifies the image through a convolutional neural network, and issues the identification result to the ROS framework in the form of an ROS topic.
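The integration with the ROS framework in step 33 could be organized as an ROS node along the following lines. The topic names (/camera/rgb/image_raw, /gesture_cmd) and the preprocessing function are hypothetical placeholders, and the sketch assumes rospy, cv_bridge and a trained classifier are available; it only illustrates the subscribe-classify-publish pattern described above.

    import cv2
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image
    from std_msgs.msg import String

    class GestureNode:
        """Minimal sketch of the gesture-recognition ROS node of step 33."""

        def __init__(self, model, class_names):
            self.model = model              # trained LeNet-5-style classifier
            self.classes = class_names      # e.g. ["start", "pause", "end", ...]
            self.bridge = CvBridge()
            self.pub = rospy.Publisher("/gesture_cmd", String, queue_size=1)
            rospy.Subscriber("/camera/rgb/image_raw", Image, self.callback, queue_size=1)

        def preprocess(self, bgr):
            # Hypothetical stand-in for the skin-colour filtering and Gaussian blur
            # described in the text, producing a 32x32 single-channel input image.
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (5, 5), 0)
            return cv2.resize(gray, (32, 32)).reshape(1, 32, 32, 1) / 255.0

        def callback(self, msg):
            bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            probs = self.model.predict(self.preprocess(bgr))
            self.pub.publish(self.classes[int(probs.argmax())])

    if __name__ == "__main__":
        rospy.init_node("gesture_recognition")
        # A GestureNode(model, class_names) would be constructed here with a loaded model.
        rospy.spin()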
Further, the step 4 specifically includes the following steps:
step 41: filtering the position of the teaching path by adopting a median filter to obtain a smooth teaching path;
step 42: demonstrating a programming track by a demonstrator through fingertips in a robot working area, and shooting the whole teaching process by a camera;
step 43: after the demonstration of the teaching process is completed, the acquired teaching point information is converted, and the robot completes corresponding tasks in the same angle posture through control software.
Preferably, in step 41, the median filtering process includes:
step 411: for the sequence of N discrete points {p_i(x_i, y_i, z_i)}_{i=1,\dots,N} on the teaching path, the length of the filtering window is selected as L = 2l + 1, where l is a positive integer; the teaching points in a window on the teaching path are p_{i-l}(x_{i-l}, y_{i-l}, z_{i-l}), \dots, p_i(x_i, y_i, z_i), \dots, p_{i+l}(x_{i+l}, y_{i+l}, z_{i+l}), wherein p_i(x_i, y_i, z_i) is the teaching point at the central position of the window;
step 412: the x, y and z coordinate values of the teaching points in the window are each sorted from small to large to obtain the median values (x_{med}, y_{med}, z_{med}); the median-filtered output value p_i'(x_i, y_i, z_i) corresponding to p_i is defined as follows:
p_i' = (x_{med},\ y_{med},\ z_{med})    (16)
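A minimal sketch of the median filtering of steps 411 and 412 follows; it filters the x, y and z coordinates of the teaching points independently over a window of length 2l + 1, and the window half-length used below is a hypothetical choice.

    import numpy as np

    def median_filter_path(points, half_window=2):
        """Median-filter an (N, 3) array of teaching points, cf. equation (16)."""
        points = np.asarray(points, dtype=float)
        n = len(points)
        smoothed = points.copy()
        for i in range(n):
            lo = max(0, i - half_window)
            hi = min(n, i + half_window + 1)
            # per-coordinate median of the (possibly truncated) window
            smoothed[i] = np.median(points[lo:hi], axis=0)
        return smoothed

    # Example: smooth a noisy fingertip path expressed in the robot coordinate system.
    noisy_path = np.cumsum(np.random.normal(0, 0.01, size=(100, 3)), axis=0)
    smooth_path = median_filter_path(noisy_path)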
the above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the appended claims.

Claims (14)

1. A robot demonstration programming method based on fingertip identification and hand motion tracking is characterized by comprising the following steps:
step 1: completing the establishment of a hardware environment of a demonstration programming system, and manually demonstrating to complete a teaching task;
step 2: recognizing the position and the posture of a human hand in the teaching process through machine vision, calculating a teaching motion track of the human hand in a robot coordinate system through coordinate transformation, and smoothing a teaching path by utilizing a particle filter algorithm;
step 3: recognizing human gestures through machine vision, and planning different actions according to the gestures or using them to indicate the beginning and end of the demonstration process;
step 4: the robot reproduces the teaching path and human gestures obtained through visual recognition to complete the teaching task with the same posture.
2. A robot demonstration programming method based on fingertip identification and hand motion tracking according to claim 1, characterized in that in step 1, said hardware environment construction comprises:
constructing the demonstration programming system, wherein the demonstration programming system comprises a Kinect depth camera, a notebook computer running ROS on Ubuntu 16.04, and a UR5 robot;
the Kinect depth camera is used as a visual sensor to collect visual data;
the notebook computer is connected with the Kinect camera and the UR5 robot to complete the training of the finger motion track capture algorithm.
3. A robot demonstration programming method based on fingertip identification and hand motion tracking according to claim 2, characterized in that said step 2 specifically comprises the following steps:
step 21: after acquiring a gesture image of a demonstrator, finishing finger segmentation by adopting a segmentation method based on a skin color histogram;
step 22: tracking the position of a hand in an image in the teaching process through a particle filter algorithm to obtain an ROI (region of interest) of the hand image;
step 23: unifying the position and posture information of the teaching points, namely the fingertip points, into the robot coordinate system.
4. The programming method for demonstration of a robot based on fingertip identification and hand motion tracking according to claim 3, wherein in step 21, the segmentation method of the skin color histogram is specifically:
step 211: converting the skin color from an RGB color space to an HSV color space;
step 212: the method comprises the steps of acquiring a skin color sample from a hand of a demonstrator, creating a histogram in a skin color area as a sample of a detection target, and carrying out equalization processing on the histogram;
step 213: using the equalized histogram, image segmentation is performed on a histogram region having the same expression as the sample through a histogram back projection function, and the gray histogram of the finger image shows two peaks: one is a finger as a foreground, the other is a background, and a trough gray value is taken as a segmentation threshold value to effectively segment the foreground and the background; the calculation formula is as follows:
f(x, y) = \begin{cases} F(x, y), & F(x, y) \ge Th_f \\ 0, & F(x, y) < Th_f \end{cases}    (5)
wherein F(x, y) is the gray value of the finger image at pixel (x, y), f(x, y) represents the segmented finger image, and Th_f is the segmentation threshold.
5. A robotic presentation programming method based on fingertip identification and hand motion tracking according to claim 4, characterized in that said histogram equalization process comprises the following steps:
calculating an accumulated histogram;
carrying out interval conversion on the accumulated histogram;
the calculation formula of the histogram equalization processing is as follows:
s_k = \frac{L-1}{N}\sum_{j=0}^{k} n_j, \quad k = 0, 1, \dots, L-1    (4)
in the formula, s_k represents the new gray level after equalization, L represents the number of gray levels of the image, n_j represents the number of pixels of the j-th gray level in the image, and N represents the total number of pixels.
6. A robot demonstration programming method based on fingertip identification and hand motion tracking according to claim 3, characterized in that in step 22, said particle filtering algorithm comprises in particular the following steps:
step 221: calculating the probability density of the target area:
selecting a block of area in the image as the target by a manual labeling method, wherein the size of the target range is consistent with that of the tracking area, and the width and height are h_x and h_y respectively (referred to as the kernel bandwidth h); the region contains n pixels, denoted {z_i(x_i, y_i)}, i = 1, 2, ..., n;
respectively calculating three color component histograms of a target image HSV color space, wherein each histogram has 8 intervals;
the probability density function of the target region {q_u}_{u=1,\dots,m} can be expressed as:
q_u = C \sum_{i=1}^{n} K\!\left(\left\|\frac{z_i - z_0}{h}\right\|^{2}\right)\delta\left[b(z_i)-u\right]    (6)
wherein C is a normalization coefficient, z_0 is the central pixel coordinate vector of the target region, \left\|\frac{z_i - z_0}{h}\right\| represents the normalized distance of the pixel point z_i(x_i, y_i) from the target center z_0(x_0, y_0), b(z_i) denotes the histogram bin associated with the pixel value of z_i, u is the color index of the histogram, K(\cdot) represents the Epanechnikov kernel, and \delta\left[b(z_i)-u\right] can be defined as:
\delta\left[b(z_i)-u\right] = \begin{cases} 1, & b(z_i)=u \\ 0, & b(z_i)\ne u \end{cases}    (7)
step 222: particle set description and system state transition:
the particle set model is defined as:
s = \left[x,\ y,\ \dot{x},\ \dot{y},\ H_x,\ H_y,\ a\right]^{T}    (8)
wherein x and y are the central point position of the target, \dot{x} and \dot{y} are respectively the velocities of the target in the x and y directions, H_x and H_y are respectively the width and height of the target, and a is a scale factor;
the system state transition refers to an updating process of a target state changing along with time, and a system state transition equation is expressed as follows:
s_k = A s_{k-1} + e_{k-1}    (9)
wherein A is a state transition matrix, ek-1Is the noise of the system;
step 223: system observation and state estimation:
the system state, i.e., the position output of the target, is expressed as follows:
E(s_k) = \sum_{i=1}^{N} w_k^{i}\, s_k^{i}    (10)
in the formula, E(s_k) represents the state of the system at time k estimated from the particle set \{s_k^{i}\}_{i=1,\dots,N}, and w_k^{i} represents the weight of each particle at time k, which is defined as follows:
w_k^{i} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{1-\rho\left(p^{i},q\right)}{2\sigma^{2}}\right), \qquad \rho\left(p^{i},q\right) = \sum_{u=1}^{m}\sqrt{p_u^{i}\,q_u}    (11)
in the formula, p^{i} = \{p_u^{i}\}_{u=1,\dots,m} is the region probability density function represented by each particle, which is defined as follows:
p_u^{i} = C \sum_{j=1}^{n} K\!\left(\left\|\frac{z_j - s_k^{i}}{h}\right\|^{2}\right)\delta\left[b(z_j)-u\right]    (12)
step 224: target update and resampling: when the particle filter is updated iteratively, the phenomenon of particle degradation is easy to occur, and resampling is needed.
7. A method for programming a robotic presentation based on fingertip identification and hand motion tracking according to claim 6, wherein in step 224, said resampling process is as follows:
if the weights w_{k+1}^{i} of some particles at time k+1 are too small and their number reaches a specified threshold, particles with large weights are duplicated to replace those small-weight particles;
each frame of the video is then processed offline through color segmentation in the HSV color space to obtain the position of the actual target region in the image coordinate system.
8. A method for programming a robotic presentation based on fingertip identification and hand motion tracking according to claim 3, wherein said step 23 comprises the following steps:
step 231: and completing the conversion between the image plane coordinate system and the fingertip pixel coordinate system, wherein the conversion equation is as follows:
x = (u - C_x)\, z / f_x,\qquad y = (v - C_y)\, z / f_y,\qquad z = d / s    (13)
where (x, y, z) represents the spatial point corresponding to the image point, (u, v, d) represents the fingertip pixel coordinate point with its depth value d, f_x and f_y denote the focal lengths of the camera along the x-axis and y-axis, C_x and C_y are the aperture center of the camera, and s is the scale factor of the depth image;
step 232: and completing the conversion between the camera coordinate system and the image plane coordinate system, wherein the conversion equation is as follows:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}    (14)
wherein [u, v] are the coordinates of the point in the image coordinate system, [x, y] are the coordinates of the point in the camera coordinate system, (d_x, d_y) is the actual size of a pixel on the camera, which links the pixel coordinate system with the actual coordinate system, and (u_0, v_0) represents the center of the image plane;
step 233: and completing the conversion between the robot coordinate system and the camera coordinate system, wherein the conversion equation is as follows:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},\qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = M_{ex}\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}    (15)
wherein [u, v] are the coordinates of the point in the image coordinate system, [X_c, Y_c, Z_c] are the coordinates of the point in the camera coordinate system, [X_r, Y_r, Z_r] are the coordinates of the point in the robot coordinate system, M_{in} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} denotes the camera intrinsic parameters, and M_{ex} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} denotes the camera extrinsic parameters, i.e. the rigid transformation from the robot coordinate system to the camera coordinate system.
9. The programming method for demonstration of a robot based on fingertip identification and hand motion tracking according to claim 1, wherein the step 3 specifically comprises the following steps:
step 31: acquisition of gesture data sets: carrying out skin-color filtering on the pixels in a specific gesture region captured by the Kinect camera, obtaining 300 gray-scale images with an image size of 32 x 32 px for each gesture, and storing them, with the current time as the file name, into a folder named after the gesture command; for each acquired gesture picture, generating 20 transformed pictures by image transformation so as to improve the generalization capability of the model;
step 32: structural design and training of a convolutional neural network: training the collected gesture data set by adopting a LeNet-5 convolutional neural network structure;
step 33: integration with ROS framework:
after the training of the convolutional neural network is completed, integrating a training result with an ROS framework to enable the network to exist in the ROS framework in an ROS node mode;
converting the gesture image into an OpenCV image format through cv_bridge, and performing skin color filtering and Gaussian blur processing on the image in a specific area to obtain a picture with the same format as the gesture data set;
predicting the processed gesture image by using a prediction function, and determining the gesture of the demonstrator by repeating the same gesture;
and controlling the start and stop of the teaching of the track through gestures, and controlling the planning and operation of the track.
10. A method for programming a robotic presentation based on fingertip recognition and hand motion tracking according to claim 9, wherein in step 31 said gestures total 6 types: start, pause, end, plan, execute, restart; and the image transformation modes include rotation, translation, shearing, flipping and symmetry.
11. The fingertip identification and hand motion tracking based robot demonstration programming method of claim 9, wherein in step 32, the LeNet-5 convolutional neural network structure comprises an input layer, a convolutional layer C1, a pooling layer S2, a convolutional layer C3, a pooling layer S4, a convolutional layer C5, a fully-connected layer F6, and a fully-connected layer OUTPUT.
12. The programming method for demonstration programming of a robot based on fingertip identification and hand movement tracking according to claim 9, characterized in that, in step 33, the ROS node processes the received gesture images in real time, identifies the images through a convolutional neural network, and distributes the identification result to the ROS frame in the form of ROS topic.
13. A robot demonstration programming method based on fingertip identification and hand motion tracking according to claim 1, characterized in that said step 4 specifically comprises the following steps:
step 41: filtering the position of the teaching path by adopting a median filter to obtain a smooth teaching path;
step 42: demonstrating a programming track by a demonstrator through fingertips in a robot working area, and shooting the whole teaching process by a camera;
step 43: after the demonstration of the teaching process is completed, the acquired teaching point information is converted, and the robot completes corresponding tasks in the same angle posture through control software.
14. A method for programming a robot demonstration based on fingertip identification and hand movement tracking according to claim 13, wherein in step 41, said median filtering process comprises:
step 411: for the sequence of N discrete points {p_i(x_i, y_i, z_i)}_{i=1,\dots,N} on the teaching path, the length of the filtering window is selected as L = 2l + 1, where l is a positive integer; the teaching points in a window on the teaching path are p_{i-l}(x_{i-l}, y_{i-l}, z_{i-l}), \dots, p_i(x_i, y_i, z_i), \dots, p_{i+l}(x_{i+l}, y_{i+l}, z_{i+l}), wherein p_i(x_i, y_i, z_i) is the teaching point at the central position of the window;
step 412: the x, y and z coordinate values of the teaching points in the window are each sorted from small to large to obtain the median values (x_{med}, y_{med}, z_{med}); the median-filtered output value p_i'(x_i, y_i, z_i) corresponding to p_i is defined as follows:
p_i' = (x_{med},\ y_{med},\ z_{med})    (16)
CN202010080638.6A 2020-02-05 2020-02-05 Robot demonstration programming method based on fingertip identification and hand motion tracking Active CN111216133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010080638.6A CN111216133B (en) 2020-02-05 2020-02-05 Robot demonstration programming method based on fingertip identification and hand motion tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010080638.6A CN111216133B (en) 2020-02-05 2020-02-05 Robot demonstration programming method based on fingertip identification and hand motion tracking

Publications (2)

Publication Number Publication Date
CN111216133A true CN111216133A (en) 2020-06-02
CN111216133B CN111216133B (en) 2022-11-22

Family

ID=70831613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010080638.6A Active CN111216133B (en) 2020-02-05 2020-02-05 Robot demonstration programming method based on fingertip identification and hand motion tracking

Country Status (1)

Country Link
CN (1) CN111216133B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111890357A (en) * 2020-07-01 2020-11-06 广州中国科学院先进技术研究所 Intelligent robot grabbing method based on action demonstration teaching
CN112045680A (en) * 2020-09-02 2020-12-08 山东大学 Cloth stacking robot control system and control method based on behavior cloning
CN113319854A (en) * 2021-06-25 2021-08-31 河北工业大学 Visual demonstration method and system for bath robot
CN113822251A (en) * 2021-11-23 2021-12-21 齐鲁工业大学 Ground reconnaissance robot gesture control system and control method based on binocular vision
CN115013386A (en) * 2022-05-30 2022-09-06 燕山大学 Hydraulic system protection device control method based on visual identification and control device thereof
CN115990891A (en) * 2023-03-23 2023-04-21 湖南大学 Robot reinforcement learning assembly method based on visual teaching and virtual-actual migration

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834922A (en) * 2015-05-27 2015-08-12 电子科技大学 Hybrid neural network-based gesture recognition method
CN105739702A (en) * 2016-01-29 2016-07-06 电子科技大学 Multi-posture fingertip tracking method for natural man-machine interaction
CN107813310A (en) * 2017-11-22 2018-03-20 浙江优迈德智能装备有限公司 One kind is based on the more gesture robot control methods of binocular vision
CN108563995A (en) * 2018-03-15 2018-09-21 西安理工大学 Human computer cooperation system gesture identification control method based on deep learning
CN108983980A (en) * 2018-07-27 2018-12-11 河南科技大学 A kind of mobile robot basic exercise gestural control method
CN110147162A (en) * 2019-04-17 2019-08-20 江苏大学 A kind of reinforced assembly teaching system and its control method based on fingertip characteristic

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111890357A (en) * 2020-07-01 2020-11-06 广州中国科学院先进技术研究所 Intelligent robot grabbing method based on action demonstration teaching
CN111890357B (en) * 2020-07-01 2023-07-04 广州中国科学院先进技术研究所 Intelligent robot grabbing method based on action demonstration teaching
CN112045680A (en) * 2020-09-02 2020-12-08 山东大学 Cloth stacking robot control system and control method based on behavior cloning
CN112045680B (en) * 2020-09-02 2022-03-04 山东大学 Cloth stacking robot control system and control method based on behavior cloning
CN113319854A (en) * 2021-06-25 2021-08-31 河北工业大学 Visual demonstration method and system for bath robot
CN113822251A (en) * 2021-11-23 2021-12-21 齐鲁工业大学 Ground reconnaissance robot gesture control system and control method based on binocular vision
CN115013386A (en) * 2022-05-30 2022-09-06 燕山大学 Hydraulic system protection device control method based on visual identification and control device thereof
CN115990891A (en) * 2023-03-23 2023-04-21 湖南大学 Robot reinforcement learning assembly method based on visual teaching and virtual-actual migration

Also Published As

Publication number Publication date
CN111216133B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN111216133B (en) Robot demonstration programming method based on fingertip identification and hand motion tracking
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
CN109993073B (en) Leap Motion-based complex dynamic gesture recognition method
CN104463191A (en) Robot visual processing method based on attention mechanism
JP2018514036A (en) Machine vision with dimensional data reduction
Liu et al. Using unsupervised deep learning technique for monocular visual odometry
Huang et al. Deepfinger: A cascade convolutional neuron network approach to finger key point detection in egocentric vision with mobile camera
Perimal et al. Hand-gesture recognition-algorithm based on finger counting
Gourob et al. A robotic hand: Controlled with vision based hand gesture recognition system
CN110807391A (en) Human body posture instruction identification method for human-unmanned aerial vehicle interaction based on vision
Lou Crawling robot manipulator tracking based on gaussian mixture model of machine vision
CN113681552B (en) Five-dimensional grabbing method for robot hybrid object based on cascade neural network
Lin et al. Robot grasping based on object shape approximation and LightGBM
Ikram et al. Real time hand gesture recognition using leap motion controller based on CNN-SVM architechture
Gadhiya et al. Analysis of deep learning based pose estimation techniques for locating landmarks on human body parts
Vysocky et al. Generating synthetic depth image dataset for industrial applications of hand localization
Pradhan et al. Design of intangible interface for mouseless computer handling using hand gestures
Hussain et al. Real-time robot-human interaction by tracking hand movement & orientation based on morphology
Deherkar et al. Gesture controlled virtual reality based conferencing
Beknazarova et al. Machine learning algorithms are used to detect and track objects on video images
Srividya et al. Hand Recognition and Motion Analysis using Faster RCNN
Guarneri Hand Gesture Recognition for Home Robotics
Ovchar et al. Automated recognition and sorting of agricultural objects using multi-agent approach.
Jha et al. Real Time Hand Gesture Recognition for Robotic Control
Madane et al. Traffic surveillance: theoretical survey of video motion detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant