CN107315410B - Automatic obstacle removing method for robot

Info

Publication number
CN107315410B
Authority
CN
China
Legal status: Active
Application number
CN201710455042.8A
Other languages
Chinese (zh)
Other versions
CN107315410A (en)
Inventor
顾金凤
刘祥勇
纪亚强
唐炜
章玮滨
刘操
张玮文
Current Assignee
Linus Intelligent Equipment Hubei Co ltd
Original Assignee
Jiangsu University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN201710455042.8A
Publication of CN107315410A
Application granted
Publication of CN107315410B

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means


Abstract

The invention discloses an automatic obstacle removing method for a robot. The robot comprises an omnidirectional moving chassis, a manipulator obstacle removing device and an image acquisition device, the manipulator obstacle removing device comprising a tail end executing mechanism and a six-degree-of-freedom mechanical arm. The method comprises the following steps: 1. start the control system, select a working mode, and determine the initial position of the six-degree-of-freedom mechanical arm using visual feedback; 2. the ARM processor performs primary processing on the image from step 1 to obtain the grabbing point coordinates, and transmits the image to a PC in real time for further processing to obtain a judgment result; 3. the control system carries out the next step of work according to the judgment result; 4. detect the corner points of the visual marker on the tail end executing mechanism, and control the six-degree-of-freedom mechanical arm to approach and grab the target object. The robot can remove obstacles automatically or manually as selected, has a high degree of intelligence and diversified functions, can be applied to obstacle removal on the ground and in underground detection, reduces labor intensity, and improves working efficiency.

Description

Automatic obstacle removing method for robot
Technical Field
The invention belongs to the technical field of robots, and particularly relates to an automatic obstacle removing method for a robot.
Background Art
With the development of science and technology, robots are becoming an ever larger family, replacing human beings in various kinds of work, with wide application prospects in production and daily life, particularly for operations in dangerous and extreme environments. In particular, applications such as robot obstacle removal and robot rescue liberate the labor force, improve working efficiency, and safeguard life safety.
For robot obstacle removing work, different requirements and purposes place different demands on functional complexity. On some important occasions, manual real-time participation is needed to ensure accuracy and safety during obstacle removal, so the functional requirements on the robot are not high. On ordinary occasions, the robot works unattended, which places high demands on its obstacle analysis capability. How to make one robot compatible with both situations is a problem modern robots must solve. At present, most robots in the prior art have a single obstacle removing function, an insufficient degree of intelligence, and low flexibility.
Chinese patent application No. CN201420569430.0 discloses a robot-arm obstacle-removing fire-extinguishing robot which has real-time monitoring capability but no image processing and analysis capability; it cannot identify obstacles, cannot ensure the control accuracy of the robot arm, and requires manual participation throughout. Chinese patent application No. CN201610131450.3 discloses an autonomous obstacle-removing intelligent vehicle system which needs no manual participation; however, the laser ranging module it uses can only obtain the distance to the obstacle directly ahead and cannot obtain the obstacle's three-dimensional coordinates, which affects the grabbing precision of the arm, and the obstacle removing target is also fixed as the obstacle directly ahead; meanwhile, the crawler-type structure it adopts limits the steering range, so the obstacle removing area is small.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automatic obstacle removing method for a robot. The method can work autonomously or manually as selected, intelligently judges obstacles through a vision system, moves omnidirectionally through 360 degrees according to instructions, and uses visual feedback to ensure the control precision of the robot arm; it has a high degree of intelligence and good flexibility.
In order to solve the technical problems, the invention adopts the following technical scheme.
The invention relates to an automatic obstacle removing method for a robot, wherein the robot comprises an omnidirectional moving chassis, a manipulator obstacle removing device, an image acquisition device and a robot control system; the omnidirectional moving chassis comprises Mecanum wheels, a speed reducing motor, an H-shaped suspension, a vehicle body and a shock absorber;
the car body is as follows: the robot control system, the embedded display screen and the distance measuring sensors are arranged around the robot control system; the vehicle body is connected with the Mecanum wheel and the speed reducing motor through an H-shaped suspension;
the manipulator obstacle removing device comprises a tail end executing mechanism and a six-degree-of-freedom mechanical arm, wherein the six-degree-of-freedom mechanical arm is arranged at the front end of the top of the vehicle body; a visual marker is arranged on the tail end executing mechanism;
the image acquisition device comprises a bracket fixed below the vehicle body and a binocular camera arranged on the bracket; the image acquisition device can adjust the position of the binocular camera through the bracket and transmit the image to the control system for processing and analysis in real time;
the robot control system is provided with an ARM processor and is communicated with the PC through a wireless network;
the method is characterized in that:
the robot control system comprises an under-voltage alarm module, an image acquisition module, a distance measurement module and a driving module, wherein the under-voltage alarm device is used for detecting a power supply and performing under-voltage prompt, and the image acquisition module is used for transmitting image signals acquired by a binocular camera to the control system and a PC (personal computer) for processing; the distance measuring module is used for detecting an object in the manipulator operating space range and triggering a signal to stop advancing for the control system, and the driving module is used for driving the motor to enable the robot to realize omnidirectional movement;
the method comprises the following steps:
step 1, starting the robot control system, selecting its working mode, and determining the initialization position of the six-degree-of-freedom mechanical arm by using visual feedback through detecting the corner points of the visual marker on the tail end executing mechanism;
step 2, firstly, selecting the obstacle removing working mode, which comprises an autonomous control mode and a manual mode; then the ARM processor performs a first-step processing analysis on the image acquired in step 1 to obtain the grabbing point coordinates, and transmits the image to a PC in real time for further processing, analysis and identification to obtain a judgment result;
step 3, the PC communicates with the ARM processor to transmit the judgment result, and the robot control system carries out the next work according to the judgment result;
and step 4, in the obstacle removing operation process, using visual feedback from corner point detection of the visual marker on the tail end executing mechanism, controlling the six-degree-of-freedom mechanical arm to approach and grab the target object.
The step 1 specifically comprises:
S11: firstly, calibrating the binocular camera, acquiring the internal and external parameters of the binocular camera, establishing the conversion relation between the image coordinate system and the world coordinate system, and determining the initialization position O(x1, y1, z1) of the six-degree-of-freedom mechanical arm;
S12: starting the robot control system, carrying out corner point detection on the visual marker on the tail end execution mechanism, calculating the current position M(x2, y2, z2), and feeding the deviation (Δx, Δy, Δz) back to the robot control system to form a visual closed-loop control circuit, so as to control the six-degree-of-freedom mechanical arm to accurately reach the initial position.
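The closed-loop positioning of S11 and S12 can be sketched as follows. This is a minimal, hypothetical Python illustration, not the patent's implementation: the detected marker position M is compared with the target initialization position O, and the deviation (Δx, Δy, Δz) is fed back until it falls below a tolerance. `detect_marker` and `move_arm` are assumed callbacks standing in for the corner-detection step and the arm controller.

```python
def servo_to_initial_position(target, detect_marker, move_arm,
                              tol=1e-3, max_iter=100):
    """Drive the arm until the detected marker position matches `target`.

    target        -- O(x1, y1, z1), the desired initialization position
    detect_marker -- callable returning the measured position M(x2, y2, z2)
    move_arm      -- callable taking the deviation (dx, dy, dz) and moving the arm
    """
    for _ in range(max_iter):
        measured = detect_marker()                    # M from corner detection
        deviation = tuple(t - m for t, m in zip(target, measured))
        if max(abs(d) for d in deviation) < tol:
            return measured                           # arm has reached O
        move_arm(deviation)                           # feed deviation back
    raise RuntimeError("visual servo did not converge")
```

A controller that corrects a fraction of the deviation each cycle converges geometrically under this loop.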
In step 2, if the obstacle removing working mode selected is manual, the process includes:
S22, the ARM processor performs the first-step processing analysis on the image to obtain the grabbing point coordinates, as follows:
S221, obtaining a rough outline by using the DP algorithm;
S222, performing binarization segmentation on the parallax image, performing a closing operation, calculating the convex hull of each obstacle, solving the convex hull's area, and removing as noise any obstacle whose area is smaller than a threshold value;
S223, weighting the horizontal and vertical coordinates of the outline to obtain the two-dimensional coordinates of the image point;
S224, converting the two-dimensional coordinates of the image point into the three-dimensional coordinates T(x, y, z) of the mechanical arm;
S225, traversing all the y values, and selecting the point with the minimum y value as the grabbing point coordinate;
S226, setting a threshold Ny for the y value: if y < Ny, the chassis moves laterally to the right for a time given by [formula image not reproduced in the source]; if y > Ny, it moves laterally to the left for a time given by [formula image not reproduced in the source];
wherein W1, W2, W3 and W4 are the rotating speeds of the four Mecanum wheels (12) of the omnidirectional moving chassis (10);
S227, the judgment result is directly set to 1, namely the grabbing point coordinate position is the three-dimensional coordinate of a point determined manually on the image.
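Steps S225 and S226 (select the minimum-y point as the grabbing point, then threshold y to decide the lateral move direction) can be sketched in Python. The function names are hypothetical and the sketch is illustrative only; the actual move durations depend on the wheel-speed formulas that appear only as images in the source:

```python
def choose_grab_point(points):
    """S225: among candidate 3-D points T(x, y, z), pick the one with minimum y."""
    return min(points, key=lambda p: p[1])

def lateral_move_direction(y, n_y):
    """S226: threshold test on y: 'right' if y < Ny, 'left' if y > Ny."""
    if y < n_y:
        return "right"
    if y > n_y:
        return "left"
    return "stay"
```

For example, `choose_grab_point([(0, 3, 0), (1, 1, 2), (2, 2, 2)])` selects the point with y = 1.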
Further, in step 2, if the obstacle removing working mode selected is autonomous control, the process includes:
S22, the ARM processor performs the first-step processing analysis on the image to obtain the grabbing point coordinates, as follows:
S221, obtaining a rough outline by using the DP algorithm;
S222, performing binarization segmentation on the parallax image, performing a closing operation, calculating the convex hull of each obstacle, solving the convex hull's area, and removing as noise any obstacle whose area is smaller than a threshold value;
S223, weighting the horizontal and vertical coordinates of the outline to obtain the two-dimensional coordinates of the image point;
S224, converting the two-dimensional coordinates of the image point into the three-dimensional coordinates T(x, y, z) of the mechanical arm;
S225, traversing all the y values, and selecting the point with the minimum y value as the grabbing point coordinate;
S226, setting a threshold Ny for the y value: if y < Ny, the chassis moves laterally to the right for a time given by [formula image not reproduced in the source]; if y > Ny, it moves laterally to the left for a time given by [formula image not reproduced in the source];
wherein W1, W2, W3 and W4 are the rotating speeds of the four Mecanum wheels (12) of the omnidirectional moving chassis (10);
S23, after the ARM processor transmits the image to the PC in real time, the PC performs further processing, analysis and identification to obtain an identification result. The process is as follows: an improved combined Canny and SIFT algorithm is adopted for feature point extraction, and the feature points are then fed as input into a fuzzy neural network for image recognition. The improved combined Canny and SIFT algorithm comprises the following steps:
(1) Detection of extreme values in the scale space: obtained by convolving the image with the difference of Gaussian kernel functions at adjacent scale factors; the calculation formula is:
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)
wherein D(x, y, σ) is the Gaussian difference scale space,
G(x, y, σ) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²))
is the variable-scale Gaussian function, L(x, y, σ) = G(x, y, σ) * I(x, y) is the scale space of the image, I(x, y) is the image data, * denotes convolution, k is the constant scale multiple between adjacent scale layers, and σ is the scale-space factor;
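As a concrete illustration of the formulas above, the difference of Gaussians can be sketched with NumPy. This is a naive, illustrative implementation (not the patent's code); kernel radius and the scale multiple k = √2 are common defaults assumed here:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2),
    sampled on a grid and normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()   # normalized: a constant image is left unchanged

def blur(image, sigma):
    """L(x, y, sigma) = G * I, naive convolution with edge padding."""
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    padded = np.pad(image, r, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + 2*r + 1, j:j + 2*r + 1] * k).sum()
    return out

def difference_of_gaussians(image, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k sigma) - L(x, y, sigma)."""
    return blur(image, k * sigma) - blur(image, sigma)
```

Because both kernels are normalized, the DoG response of a constant image is zero, which is a quick sanity check on the formula.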
(2) Detecting key points in the scale space: each candidate point in the scale space is compared with its 26 neighborhood points (8 in the same layer and 9 in each of the layers above and below); if the point is the maximum or minimum among them, it is determined to be a key point, otherwise it is discarded;
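The 26-neighbor extremum test of step (2) can be sketched as a minimal NumPy check, with `dog` assumed to be a stacked (scale, row, col) array of DoG layers (an illustrative sketch, not the patent's code):

```python
import numpy as np

def is_scale_space_extremum(dog, s, i, j):
    """True if dog[s, i, j] is strictly greater or strictly smaller than all
    26 neighbors: 8 in its own layer plus 9 in each adjacent scale layer.
    (s, i, j) must be interior indices of the 3-D array `dog`."""
    cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]   # 3x3x3 neighborhood
    center = dog[s, i, j]
    others = np.delete(cube.ravel(), 13)                # index 13 is the center
    return bool(center > others.max() or center < others.min())
```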
(3) The edge detection algorithm obtains edge points: an anisotropic Gaussian filter is adopted to denoise and smooth the image, the gradient amplitude and direction are calculated in the same 5×5 neighborhood as used for key point detection, non-maximum suppression is performed through linear interpolation, the high and low thresholds are set adaptively by Otsu's method, and edge points are obtained by detecting and connecting edges on the gradient image after non-maximum suppression;
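The adaptive thresholding in step (3) relies on Otsu's method, which is standard: choose the threshold that maximizes the between-class variance of the two classes it creates. A minimal NumPy sketch (illustrative, not the patent's code):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: return the histogram-bin center that maximizes the
    between-class variance w0 * w1 * (mu0 - mu1)^2."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_thresh, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[:t] * centers[:t]).sum() / w0   # class means
        mu1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_thresh = var_between, centers[t]
    return best_thresh
```

On a clearly bimodal gradient-magnitude distribution, the returned threshold falls between the two modes; a lower Canny threshold is then often derived from it (e.g. half the high threshold, an assumption not stated in the patent).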
(4) Refining the key points: the key points and edge points from the two steps below are compared to judge whether each key point should be removed;
S231, filtering out some edge response points from the detected candidate key points by using a Gaussian function, and then calculating the position of each feature point in the original image;
S232, calculating the position points in the 3×3 neighborhood of each detected edge point;
S233, comparing the key points and edge points obtained in steps S231 and S232 and judging whether their position coordinates are equal; if equal, the key point is discarded; if not, the key point is further compared with the edge point neighborhood set and discarded if equal; if still not equal, it is compared with the positions of the other edge points detected in step S232, and discarded if equal, otherwise retained;
S234, the obtained feature point set is taken as input into the fuzzy neural network for image recognition to obtain the judgment result.
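Steps S231 to S233 amount to discarding any candidate key point whose position coincides with an edge point or falls inside an edge point's neighborhood. A simplified sketch (the flat set-based test stands in for the patent's exact comparison order and is illustrative only):

```python
def filter_keypoints(keypoints, edge_points, radius=1):
    """Drop key points that coincide with an edge point or lie within the
    square neighborhood of half-width `radius` around one (radius=1 gives
    the 3x3 neighborhood of S232)."""
    forbidden = set()
    for (ex, ey) in edge_points:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                forbidden.add((ex + dx, ey + dy))
    return [kp for kp in keypoints if kp not in forbidden]
```

For example, a key point at (5, 5) is discarded when an edge point sits at (5, 6), since (5, 5) lies in that edge point's 3×3 neighborhood.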
Step 3 specifically includes:
S31, the PC transmits the judgment result to the robot control system; if the received judgment result is '0', the robot control system returns to step 2, and if it is '1', step S32 is carried out;
S32, the robot control system solves using the inverse kinematics principle to obtain the command for each joint, so as to control the operation of the six-degree-of-freedom mechanical arm.
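The inverse-kinematics solve of S32 is only named, not specified, in the text. As an illustration of the principle, here is the closed-form elbow solution for a planar two-link arm, a deliberate simplification: the patent's arm has six degrees of freedom and needs a full 6-DOF solver, which is not reproduced here.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (theta1, theta2) placing the tip of a planar two-link
    arm (link lengths l1, l2) at (x, y); elbow-down solution via the
    law of cosines."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def two_link_fk(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Round-tripping a target through IK then FK recovers the target position, the usual correctness check for such solvers.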
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The automatic obstacle removing method for the robot can select the working mode as required and has diversified functions. In the autonomous control mode, the manipulator is controlled autonomously using visual feedback, improving the motion control efficiency of the robot and the repeated positioning precision of the manipulator. In image acquisition, the interference of complex background images with target identification is effectively overcome, improving identification precision and ensuring the accuracy of the obstacle information and of the manipulator control during obstacle removal. Obstacles can be judged intelligently, improving working efficiency, with a high degree of intelligence. The omnidirectional movement characteristic enables the robot to work in narrow spaces with good flexibility.
2. In the image analysis processing, feature point extraction is performed with an improved combined Canny and SIFT algorithm, and the feature points then enter a fuzzy neural network as input for image recognition. Compared with traditional detection algorithms, this algorithm is better at detecting targets in complex environments, improves the noise resistance of feature extraction, and has a good detection effect.
Drawings
Fig. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention, in which: 10, omnidirectional moving chassis; 11, H-shaped suspension; 12, Mecanum wheel; 13, shock absorber; 14, speed reducing motor; 15, distance measuring sensor; 16, display screen; 17, vehicle body; 20, manipulator obstacle removing device; 21, tail end executing mechanism; 22, six-degree-of-freedom mechanical arm; 30, image acquisition device; 31, bracket; 32, binocular camera.
Fig. 2 is a flowchart of an automatic obstacle clearance method for a robot according to an embodiment of the present invention.
Fig. 3 is a block diagram of an automatic obstacle clearance control system of a robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Fig. 1 shows the structure of the robot according to an embodiment of the present invention. The robot comprises an omnidirectional moving chassis 10, a manipulator obstacle removing device 20, an image acquisition device 30 and a robot control system;
the omnidirectional moving chassis 10 comprises the Mecanum wheels 12, the speed reducing motor 14, the H-shaped suspension 11, the vehicle body 17 and related components; the omnidirectional moving chassis 10 is provided with shock absorbers 13 to ensure stable running of the robot on uneven ground;
a robot control system adopting an ARM processor is arranged inside the vehicle body 17; a display screen 16 is embedded in the vehicle body to display the motion state of the robot, and distance measuring sensors 15 are arranged around the vehicle body 17 to detect the robot's surroundings and prevent collisions; the vehicle body 17 is connected with the Mecanum wheels 12 and the speed reducing motor 14 through the H-shaped suspension 11;
the manipulator obstacle removing device 20 comprises a tail end executing mechanism 21 and a six-degree-of-freedom mechanical arm 22, wherein the six-degree-of-freedom mechanical arm 22 is installed at the front end of the top of the vehicle body 17, and a visual marker is arranged on the tail end executing mechanism 21; the manipulator obstacle-removing device 20 solves the three-dimensional coordinates by using the reverse kinematics principle of the control system, so as to obtain the instructions of each joint of the six-degree-of-freedom manipulator 22. And then, the terminal actuating mechanism 21 is used for detecting the corner points of the visual markers, so that visual feedback is realized, and the manipulator obstacle removing device 20 is controlled in real time to complete obstacle removing operation.
The image acquisition device 30 comprises a bracket 31 and a binocular camera 32, the bracket 31 being fixed below the vehicle body 17 and the binocular camera 32 being arranged on the bracket 31; the image acquisition device 30 can horizontally adjust the position of the binocular camera 32 through the bracket 31 and transmits images to the control system in real time for processing and analysis. The invention can remove obstacles automatically or manually as selected, improves the degree of intelligence of robot obstacle removal, increases functional diversity, can be applied to obstacle removal on the ground and in underground detection, reduces labor intensity, and improves working efficiency.
The robot control system is provided with an ARM processor and communicates with a PC through a wireless network. The control system comprises an under-voltage alarm module, an image acquisition module, a distance measuring module and a driving module: the under-voltage alarm module detects the power supply and gives an under-voltage prompt; the image acquisition module transmits the image signals acquired by the binocular camera to the control system and the PC for processing; the distance measuring module detects objects within the manipulator's operating space and gives a trigger signal to the control system to stop advancing; and the driving module drives the motors to achieve the omnidirectional movement of the robot.
Fig. 2 is a flowchart of an automatic obstacle clearance method for a robot according to an embodiment of the present invention. Fig. 3 is a block diagram of an automatic obstacle clearance control system of a robot according to an embodiment of the present invention. As shown in fig. 2 and 3, the method of the present embodiment includes the following four steps:
step 1: starting a robot control system, selecting a working mode, and determining the initialized position of the six-freedom-degree mechanical arm 22 by using visual feedback through detecting the corner of the visual marker of the tail end executing mechanism 21, wherein the specific process comprises the following steps:
s11: calibrating the binocular camera 32 to obtain the internal and external parameters of the binocular camera 32, establishing the conversion relation between the image coordinate system and the world coordinate system, and determining the initialized position O (x) of the six-degree-of-freedom mechanical arm 221,y1,z1)。
S12: starting the system, detecting the corner points of the visual marker on the tail end executing mechanism 21, calculating the current position M(x2, y2, z2), and feeding the deviation (Δx, Δy, Δz) back to the robot control system to form a visual closed-loop control loop, so as to control the six-degree-of-freedom mechanical arm 22 to accurately reach the initial position.
Step 2: the ARM processor carries out first-step processing analysis on the image to obtain a grabbing point coordinate, and the image is transmitted to the PC in real time to be further processed, analyzed and identified to obtain an identification result. The specific process of the step 2 is as follows:
S21: firstly, selecting the obstacle removing working mode;
if the operation mode is the autonomous control, the following steps S22 and S23 are sequentially performed;
if the working mode is manual, only step S22 is performed, and the judgment result is directly set as follows: the grabbing point coordinate position is the three-dimensional coordinate of a point manually determined on the image.
S22: the ARM processor carries out first-step processing analysis on the image to obtain a grabbing point coordinate, and the process is as follows:
S221: obtaining an approximate contour by using the DP algorithm. The DP (dynamic programming) algorithm is a common method for solving optimization problems over multi-stage decision processes. Its basic idea is to decompose the problem to be solved into a number of interconnected sub-problems, solve the sub-problems, and obtain the solution of the original problem from the solutions of the sub-problems. A repeated sub-problem is solved only when it is first encountered and its answer is stored, so that the answer is referenced directly when the sub-problem appears again rather than recomputed. The dynamic programming algorithm treats the solution to the problem as the result of a series of decisions and examines whether each optimal decision sequence contains an optimal decision subsequence, i.e., whether the problem has the optimal substructure property.
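The memoization idea described above (solve each sub-problem once, store the answer, re-use it) can be illustrated with a small generic example: a minimum-cost path over a grid, moving only right or down. The grid and cost function are illustrative only and are not part of the patent's contour step:

```python
from functools import lru_cache

def min_path_cost(grid):
    """Minimum total cost of a path from the top-left to the bottom-right
    cell, moving only right or down. Each sub-problem cost(i, j) is solved
    once and cached, exactly the memoization scheme described above."""
    rows, cols = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == rows - 1 and j == cols - 1:
            return grid[i][j]                 # reached the goal cell
        candidates = []
        if i + 1 < rows:
            candidates.append(cost(i + 1, j)) # move down
        if j + 1 < cols:
            candidates.append(cost(i, j + 1)) # move right
        return grid[i][j] + min(candidates)   # optimal substructure

    return cost(0, 0)
```

The optimal substructure property is visible in the last line: the best path through (i, j) extends the best path from one of its two successors.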
S222: performing binarization segmentation on the parallax image, performing closed operation, calculating a convex hull of the obstacle, solving the area of the convex hull, and denoising the obstacle with the area smaller than a threshold value;
s223: weighting the horizontal and vertical coordinates of the outline to obtain a two-dimensional coordinate of an image point;
s224: converting the two-dimensional coordinates of the image points into three-dimensional coordinates T (x, y, z) of the mechanical arm;
s225: traversing all the y values, and selecting the minimum value as a capture point coordinate;
s225: a threshold value N is set for the value of yyIf y < NyTransverse, moving time
Figure BDA0001323423210000063
If Y > NyThen move laterally to the left, move time
Figure BDA0001323423210000062
Wherein W1、W2、W3、W4Four Mecanum wheels (12) rotating speed of the omnidirectional mobile chassis (10);
S23: after the ARM processor transmits the image to the PC in real time, the PC performs further processing, analysis and identification to obtain an identification result. The process is as follows: an improved combined Canny and SIFT algorithm is adopted for feature point extraction, and the feature points are then fed as input into a fuzzy neural network for image recognition. The improved combined Canny and SIFT algorithm comprises the following steps:
(1) Detection of extreme values in the scale space: obtained by convolving the image with the difference of Gaussian kernel functions at adjacent scale factors; the calculation formula is:
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)
wherein D(x, y, σ) is the Gaussian difference scale space,
G(x, y, σ) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²))
is the variable-scale Gaussian function, L(x, y, σ) = G(x, y, σ) * I(x, y) is the scale space of the image, I(x, y) is the image data, * denotes convolution, k is the constant scale multiple between adjacent scale layers, and σ is the scale-space factor;
(2) Keypoints in the scale space are detected: each candidate point is compared with its 26 neighborhood points (8 in the same layer and 9 in each of the layers above and below); if it is the maximum or minimum among them, it is determined to be a key point, otherwise it is discarded.
(3) The edge detection algorithm obtains edge points: an anisotropic Gaussian filter is adopted to denoise and smooth the image, the gradient amplitude and direction are calculated in the same 5×5 neighborhood as used for key point detection, non-maximum suppression is performed through linear interpolation, the high and low thresholds are set adaptively by Otsu's method, and edge points are obtained by detecting and connecting edges on the gradient image after non-maximum suppression.
(4) Refining the key points: the key points and edge points from the previous two steps are compared to judge whether each key point should be removed.
S231: filtering out some edge response points from the detected candidate key points by using a Gaussian function, and then calculating the position of each feature point in the original image;
S232: calculating the position points in the 3×3 neighborhood of each detected edge point;
S233: comparing the key points and edge points from the first two steps and judging whether their position coordinates are equal; if equal, the key point is discarded; if not, it is compared with the edge point neighborhood set and discarded if equal; otherwise it is compared with the positions of the other edge points detected in step S232, and discarded if equal, otherwise kept;
s234: and the obtained feature point set is used as input to enter a fuzzy neural network for image recognition so as to obtain a judgment result.
Step 3: the PC communicates with the ARM to transmit the judgment result, and the robot control system carries out the next work according to the result. The specific process of step 3 is as follows:
S31: the PC transmits the judgment result to the robot control system; if the received judgment result is the character '0', the robot control system returns to step 2, and if '1' is received, step S32 is carried out;
S32: the controller performs the inverse kinematics solution to obtain the command for each joint, so as to control the operation of the six-degree-of-freedom mechanical arm;
Step 4: during the obstacle removing operation, the six-degree-of-freedom mechanical arm is controlled, using visual feedback from corner point detection of the visual marker on the tail end executing mechanism, to approach and grab the target object.

Claims (1)

1. An automatic obstacle removing method for a robot, wherein the robot comprises an omnidirectional moving chassis (10), a manipulator obstacle removing device (20), an image acquisition device (30) and a robot control system; the omnidirectional moving chassis (10) comprises Mecanum wheels (12), a speed reducing motor (14), an H-shaped suspension (11), a vehicle body (17) and a shock absorber (13);
the vehicle body (17): internally provided with the robot control system, with an embedded display screen (16), and with distance measuring sensors (15) arranged around it; the vehicle body (17) is connected with the Mecanum wheels (12) and the speed reducing motor (14) through the H-shaped suspension (11);
the manipulator obstacle removing device (20) comprises a tail end executing mechanism (21) and a six-degree-of-freedom mechanical arm (22), wherein the six-degree-of-freedom mechanical arm (22) is installed at the front end of the top of the vehicle body (17); a visual marker is arranged on the tail end executing mechanism (21);
the image acquisition device (30) comprises a bracket (31) fixed below the vehicle body (17) and a binocular camera (32) arranged on the bracket (31); the image acquisition device (30) can adjust the position of the binocular camera (32) through the bracket (31) and transmit the image to the control system for processing and analysis in real time;
the robot control system is provided with an ARM processor and is communicated with the PC through a wireless network;
the robot control system comprises an under-voltage alarm module, an image acquisition module, a distance measurement module and a driving module; the under-voltage alarm module is used for monitoring the power supply and giving an under-voltage prompt; the distance measurement module is used for detecting objects within the operation space of the manipulator and giving the control system a trigger signal to stop advancing,
the method is characterized in that:
the image acquisition module is used for transmitting image signals acquired by the binocular camera to the control system and the PC for processing; the driving module is used for driving the motor to enable the robot to realize omnidirectional movement;
the method comprises the following steps:
step 1, starting a robot control system, selecting a working mode of the robot control system, and determining an initialization position of a six-degree-of-freedom mechanical arm (22) by using visual feedback through detecting an angular point of a visual marker on a tail end execution mechanism (21); the method comprises the following steps:
S11: firstly, the binocular camera (32) is calibrated to acquire its internal and external parameters, the conversion relation between the image coordinate system and the world coordinate system is established, and the initialization position O(x1, y1, z1) of the six-degree-of-freedom mechanical arm (22) is determined;
S12: the robot control system is started, corner point detection is performed on the visual marker on the tail end execution mechanism (21), the initial position M(x2, y2, z2) is calculated, and the deviation (Δx, Δy, Δz) is fed back to the robot control system, forming a closed-loop visual control circuit so that the six-degree-of-freedom mechanical arm (22) accurately reaches the initial position;
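Steps S11 and S12 can be sketched as a back-projection plus a deviation computation. The pinhole model, the function names, and all numeric values (the intrinsics fx, fy, cx, cy, the depth, and the target O) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy):
    # Back-project a pixel (u, v) at a known depth using the pinhole model;
    # fx, fy, cx, cy come from the binocular calibration of step S11.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def position_deviation(target, measured):
    # Deviation (dx, dy, dz) fed back to the control system (step S12) so
    # the six-degree-of-freedom arm converges on the initialization position.
    return np.asarray(target) - np.asarray(measured)

# Illustrative numbers only: target O(x1, y1, z1) and one detected marker
# corner converted to M(x2, y2, z2).
O = np.array([0.10, 0.05, 0.60])
M = pixel_to_world(352.0, 260.0, 0.62, fx=700.0, fy=700.0, cx=320.0, cy=240.0)
dx, dy, dz = position_deviation(O, M)
```

The depth itself would come from the stereo disparity of the calibrated binocular pair.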
step 2, firstly selecting the obstacle removal working mode, which comprises an autonomous control mode and a manual mode; the ARM processor then performs a preliminary processing analysis on the image acquired in step 1 to obtain the coordinates of a grab point, and transmits the image in real time to a PC for further processing, analysis and recognition to obtain a judgment result;
step 3, the PC communicates with the ARM processor and transmits the judgment result, and the robot control system performs the next work according to the judgment result;
step 4, in the obstacle removing operation process, the six-degree-of-freedom mechanical arm (22) is controlled to approach and grab a target object by using visual feedback through detecting the corner points of the visual markers on the tail end executing mechanism (21);
in step 2, if the obstacle removal operation mode selected is autonomous control, the process comprises:
S22: the ARM processor performs the first-step processing analysis on the image to obtain the grab point coordinates, the process being as follows:
S221: obtaining a rough outline by using the DP (Douglas-Peucker) algorithm;
S222: performing binarization segmentation on the parallax image, performing a closing operation, calculating the convex hull of each obstacle, solving the convex hull area, and denoising by discarding obstacles whose area is smaller than a threshold value;
S223: weighting the horizontal and vertical coordinates of the outline to obtain the two-dimensional coordinates of an image point;
S224: converting the two-dimensional coordinates of the image points into the three-dimensional coordinates T(x, y, z) of the mechanical arm;
S225: traversing all the y values and selecting the minimum y value as the grab point coordinate;
S226: setting a threshold Ny for the y value; if y < Ny, moving laterally to the right for a time given by a formula of the wheel speeds [formula given only as an image in the original], and if y > Ny, moving laterally to the left for a time given by the corresponding formula [formula given only as an image in the original], wherein W1, W2, W3 and W4 are the rotational speeds of the four Mecanum wheels (12) of the omnidirectional moving chassis (10);
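The grab point selection of steps S224-S226 can be illustrated with a small sketch. The names `grab_point` and `lateral_command` are hypothetical, and because the patent's move-time formulas survive only as images, only the strafe direction is computed here:

```python
import numpy as np

def grab_point(points_xyz):
    # Step S225: among contour points already converted to arm
    # coordinates T(x, y, z), pick the one with the minimum y value.
    pts = np.asarray(points_xyz, dtype=float)
    return pts[np.argmin(pts[:, 1])]

def lateral_command(y, n_y):
    # Step S226: compare the grab-point y against the threshold Ny and
    # choose the strafe direction; the move time itself depends on the
    # Mecanum wheel speeds W1..W4 via a formula not reproduced here.
    if y < n_y:
        return "right"
    if y > n_y:
        return "left"
    return "hold"
```

With Mecanum wheels, strafing left or right is achieved purely by the sign pattern of the four wheel speeds, which is why the move time is expressed in terms of W1..W4.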
S23: after the ARM processor transmits the image to the PC in real time, the PC performs further processing, analysis and recognition to obtain a recognition result, the process comprising: extracting feature points with an improved combined Canny-SIFT algorithm, and then feeding the feature points as input into a fuzzy neural network for image recognition, wherein the improved combined Canny-SIFT algorithm comprises the following steps:
(1) detection of extreme values in the scale space: the convolution of the image with the difference of Gaussian kernel functions having different scale factors is computed as:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

wherein D(x, y, σ) is the Gaussian difference scale space, G(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)) is the variable-scale Gaussian function, L(x, y, σ) is the scale space of the image, I(x, y) is the image data, * denotes convolution, k is the constant multiple between adjacent scale spaces, and σ is the scale space factor;
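The difference-of-Gaussians D(x, y, σ) defined above can be computed directly from the formula. This minimal NumPy sketch (all function names are illustrative) builds a normalized Gaussian kernel, forms L(x, y, σ) = G * I by direct convolution, and subtracts two adjacent scales:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # G(x, y, sigma) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2) / (2*sigma^2)),
    # sampled on a size x size grid and normalized to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def blur(img, sigma, size=9):
    # L(x, y, sigma) = G * I via direct 2-D convolution with edge padding.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def dog(img, sigma, k=1.6):
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    return blur(img, k * sigma) - blur(img, sigma)
```

On a constant image the response is zero everywhere, which is what makes D useful as a band-pass detector of blob-like structure at scale σ.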
(2) detecting key points in the scale space: comparing each sample point with its 26 neighbourhood points (its 8 neighbours in the same scale layer and the 9 points in each of the layers above and below); if the point is a local maximum or minimum it is taken as a key point, otherwise it is discarded;
(3) obtaining edge points with the edge detection algorithm: smoothing the image for denoising with an anisotropic Gaussian filter, calculating the gradient magnitude and direction in the same 5×5 neighbourhood used for key point detection, performing non-maximum suppression by linear interpolation, setting the high and low thresholds adaptively with the Otsu method, and obtaining the edge points by detecting and connecting the gradient image that has not undergone non-maximum suppression;
(4) refining the key points: comparing the key points and edge points from the two preceding steps and judging whether each key point should be removed;
S231: filtering out some edge-response points from the detected candidate key points by using a Gaussian function, and then calculating the position of each feature point in the original image;
S232: calculating the position points in the 3×3 neighbourhood of each detected edge point;
S233: comparing the key points and edge points from steps S231 and S232 and judging whether their position coordinates are equal: if equal, the key point is discarded; if not, the key point is further compared with the edge point neighbourhood point set and discarded if a match is found; otherwise it is compared with the positions of the other edge points detected in step S23, and discarded if equal or retained otherwise;
S234: the obtained feature point set is fed as input into a fuzzy neural network for image recognition to obtain the judgment result.
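Steps S231-S233 amount to discarding any candidate key point that coincides with an edge point or falls inside an edge point's 3×3 neighbourhood, keeping the rest as the feature set. A minimal sketch with hypothetical names:

```python
def refine_keypoints(keypoints, edge_points):
    """Drop SIFT candidate key points whose integer pixel position equals
    a Canny edge point or any point of an edge point's 3x3 neighbourhood
    (steps S232-S233); the survivors form the feature point set (S234)."""
    edge_set = set(edge_points)
    # 3x3 neighbourhood of every edge point, including the point itself
    neighbourhood = {(ex + dx, ey + dy)
                     for ex, ey in edge_points
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)}
    return [p for p in keypoints
            if p not in edge_set and p not in neighbourhood]
```

The surviving points would then be described by their SIFT descriptors and passed to the fuzzy neural network as its input vector.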
CN201710455042.8A 2017-06-16 2017-06-16 Automatic obstacle removing method for robot Active CN107315410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710455042.8A CN107315410B (en) 2017-06-16 2017-06-16 Automatic obstacle removing method for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710455042.8A CN107315410B (en) 2017-06-16 2017-06-16 Automatic obstacle removing method for robot

Publications (2)

Publication Number Publication Date
CN107315410A CN107315410A (en) 2017-11-03
CN107315410B true CN107315410B (en) 2020-05-29

Family

ID=60184175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710455042.8A Active CN107315410B (en) 2017-06-16 2017-06-16 Automatic obstacle removing method for robot

Country Status (1)

Country Link
CN (1) CN107315410B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110103196A (en) * 2019-06-19 2019-08-09 广东电网有限责任公司 The robot for overhauling of GIS a kind of and the examination and repair system of GIS
CN111121639B (en) * 2019-12-18 2021-07-27 同济大学 Rigid-flexible integrated crack detection system for narrow building space
CN111113421A (en) * 2019-12-30 2020-05-08 上海燊星机器人科技有限公司 Robot intelligence snatchs sequencing system
CN113077413A (en) * 2020-01-06 2021-07-06 苏州宝时得电动工具有限公司 Self-moving equipment and control method thereof
CN111421549A (en) * 2020-04-24 2020-07-17 深圳国信泰富科技有限公司 Obstacle clearing robot and control method
CN111708366B (en) * 2020-06-29 2023-06-06 山东浪潮科学研究院有限公司 Robot, and method, apparatus and computer-readable storage medium for controlling movement of robot
CN113029634B (en) * 2021-03-22 2022-12-20 江苏省产品质量监督检验研究院 Full-automatic mattress sampler
CN113065596B (en) * 2021-04-02 2022-01-28 鑫安利中(北京)科技有限公司 Industrial safety real-time monitoring system based on video analysis and artificial intelligence
CN113190031B (en) * 2021-04-30 2023-03-24 成都思晗科技股份有限公司 Forest fire automatic photographing and tracking method, device and system based on unmanned aerial vehicle
CN114833799B (en) * 2022-04-26 2024-01-02 浙江大学 Robot and method for unmanned collection of animal saliva samples in farm
CN116300918A (en) * 2023-03-07 2023-06-23 广东隆崎机器人有限公司 Six-axis robot path planning device, robot and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102922521B (en) * 2012-08-07 2015-09-09 中国科学技术大学 A kind of mechanical arm system based on stereoscopic vision servo and real-time calibration method thereof
CN103522291B (en) * 2013-10-29 2016-08-17 中国人民解放军总装备部军械技术研究所 The target grasping system of a kind of explosive-removal robot and method
US10152213B2 (en) * 2016-09-01 2018-12-11 Adobe Systems Incorporated Techniques for selecting objects in images
CN106737665B (en) * 2016-11-30 2019-07-19 天津大学 Based on binocular vision and the matched mechanical arm control system of SIFT feature and implementation method

Also Published As

Publication number Publication date
CN107315410A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN107315410B (en) Automatic obstacle removing method for robot
CN111055281A (en) ROS-based autonomous mobile grabbing system and method
CN106863332B (en) Robot vision positioning method and system
CN105518702A (en) Method, device and robot for detecting target object
US20090290758A1 (en) Rectangular Table Detection Using Hybrid RGB and Depth Camera Sensors
CN109623815B (en) Wave compensation double-robot system and method for unmanned salvage ship
CN111077890A (en) Implementation method of agricultural robot based on GPS positioning and automatic obstacle avoidance
Balta et al. Terrain traversability analysis for off-road robots using time-of-flight 3d sensing
CN114252071A (en) Self-propelled vehicle navigation device and method thereof
CN114683290A (en) Method and device for optimizing pose of foot robot and storage medium
Soans et al. Object tracking robot using adaptive color thresholding
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
CN109579698B (en) Intelligent cargo detection system and detection method thereof
CN109079777B (en) Manipulator hand-eye coordination operation system
CN109508017A (en) Intelligent carriage control method
Gao et al. An automatic assembling system for sealing rings based on machine vision
CN110631577B (en) Service robot navigation path tracking method and service robot
Shaw et al. Development of an AI-enabled AGV with robot manipulator
Zhou et al. Visual servo control system of 2-DOF parallel robot
CN114714358A (en) Method and system for teleoperation of mechanical arm based on gesture protocol
CN114800494A (en) Box moving manipulator based on monocular vision
CN113139987A (en) Visual tracking quadruped robot and tracking characteristic information extraction algorithm thereof
Gao et al. Shared autonomy for assisted mobile robot teleoperation by recognizing operator intention as contextual task
Kamaruzzaman et al. Design and implementation of a wireless robot for image processing
Calderon et al. Road detection algorithm for an autonomous UGV based on monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20171103

Assignee: Zhenjiang Kaituo Machinery Co.,Ltd.

Assignor: JIANGSU University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2020980007284

Denomination of invention: A method of robot automatic obstacle removal

Granted publication date: 20200529

License type: Common License

Record date: 20201029

EC01 Cancellation of recordation of patent licensing contract

Assignee: Zhenjiang Kaituo Machinery Co.,Ltd.

Assignor: JIANGSU University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2020980007284

Date of cancellation: 20201223

TR01 Transfer of patent right

Effective date of registration: 20220920

Address after: Room 1, No. 188 and 198, Hanlin Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province 215300

Patentee after: Kunshan Quantai Information Technology Service Co.,Ltd.

Address before: 212003, No. 2, Mengxi Road, Zhenjiang, Jiangsu

Patentee before: JIANGSU University OF SCIENCE AND TECHNOLOGY

TR01 Transfer of patent right

Effective date of registration: 20240729

Address after: No. 338 Jinqiao Avenue, Macheng Economic Development Zone, Huanggang City, Hubei Province 438000

Patentee after: Linus Intelligent Equipment (Hubei) Co.,Ltd.

Country or region after: China

Address before: Room 1, No. 188 and 198, Hanlin Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province 215300

Patentee before: Kunshan Quantai Information Technology Service Co.,Ltd.

Country or region before: China