CN108181897A - Method for automatic tracking of a biped robot - Google Patents
Method for automatic tracking of a biped robot
- Publication number
- CN108181897A (application CN201711306559.7A)
- Authority
- CN
- China
- Prior art keywords
- path
- image
- robot
- deviation
- line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D57/00—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track
- B62D57/02—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
- B62D57/032—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/68—Analysis of geometric attributes of symmetry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30172—Centreline of tubular or elongated structure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
The present invention provides a method for automatic tracking of a biped robot, comprising: a vision sensor mounted on the biped robot sends navigation path image information of the area under the robot's feet to a processor; the processor processes the received navigation path image information to obtain the relative position relationship between the robot's current pose and the navigation path, and sends it to a controller; the relative position relationship includes an angular deviation and a positional deviation; the controller controls the robot to adjust its walking path according to the received angular and positional deviations, thereby achieving automatic tracking. The invention acquires navigation path image information through the vision sensor; the image is converted to grayscale and processed with mean filtering, the Canny edge detection algorithm and path edge coordinate extraction to obtain path information, and the accuracy of the path information is improved by interval scanning; in path recognition, a slope matching method is proposed for cross paths to select the best forward path. The method therefore has good real-time performance and anti-interference capability.
Description
Technical Field
The invention relates to the technical field of robot visual navigation, in particular to an automatic tracking method for a biped robot.
Background
Visual navigation of mobile robots is currently an important direction in robotics research. Vision-based indoor navigation can be divided into three categories: map-based navigation, map-building-based navigation, and mapless navigation. With the popularization of service robots and their entry into the home, the question arises of whether a robot can independently complete related service tasks in an indoor environment, and the core of this is the robot's indoor navigation technology. Gartshore proposed a navigation algorithm that employs an occupancy-grid map-building framework and feature position detection, processing RGB color image sequences online from a single camera. The algorithm first detects the contour edges of objects in the current image frame with a Harris edge and corner detector and scans the edge features to determine peaks; then, considering all possible positions at any depth, it projects the detected features onto the 2D image plane, and from odometry data and the extracted image features a localization module can calculate the position of the robot. This map-building-based method requires a map of the global environment as the basis for navigation decisions, and it becomes problematic when the environment changes. Saitoh et al. proposed a corridor centerline-tracking method for a wheeled mobile robot that uses a single USB camera and a notebook computer, detecting the boundaries between corridor and walls with the Hough transform; the robot then moves along these boundaries. None of the above methods satisfies the requirement that a robot complete its tasks in a relatively complicated environment. One approach used in automated guided vehicle navigation systems is guide-line-based navigation, in which, in practical applications, a mobile robot moves along a pre-designed geometry to perform tasks such as search and rescue. Many researchers have proposed using vision systems on autonomous mobile vehicles to acquire and analyze images of guide lines laid on the ground, overcoming the limitations of other sensors.
At present, the carriers of visual navigation are mainly wheeled robots with a fixed camera. A biped robot (such as the NAO robot) moves on two feet like a human, so it is more difficult to control and its control precision hardly reaches that of a wheeled robot. However, in the pursuit of humanoid robots, the humanoid robots of the future will move on feet, so research on the visual navigation of biped robots is also significant.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an automatic tracking method for a biped robot, which comprises: acquiring navigation path image information through a vision sensor; applying grayscale conversion, mean filtering, the Canny edge detection algorithm and path edge coordinate extraction to the image to obtain path information, with the accuracy of the path information improved through interval scanning; and, in path recognition, a slope matching method proposed for cross paths to select the best forward path. The method has good real-time performance and anti-interference capability.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a method of automated tracking of a biped robot, comprising:
the vision sensor arranged on the biped robot sends navigation path image information under the feet of the robot to the processor;
the processor processes the received navigation path image information to obtain the relative position relation between the current posture of the robot and the navigation path and sends the relative position relation to the controller; the relative positional relationship includes an angular deviation and a positional deviation;
and the controller controls the robot to adjust the walking path according to the received angle deviation and position deviation so as to realize automatic tracking.
Preferably, the processor processes the received navigation path image information, and includes:
converting the RGB color image into a gray image;
carrying out image filtering on the gray level image by adopting a mean filtering method;
performing edge detection on the filtered image by adopting a Canny edge detection algorithm;
extracting the edge coordinates of the path to obtain the center line of the path;
and obtaining the relative position relation between the current posture of the robot and the navigation path according to the position of the central line.
Preferably, the extracting the coordinates of the edge of the path to obtain the centerline of the path includes:
the edge coordinates of the path in the image are acquired by a line-by-line scanning method, and the median is taken to obtain a column vector index of the centerline position, as follows:

index = (I_1, I_2, ..., I_i)^T

wherein I_i is the midpoint (median) of the left and right path edge coordinates detected in row i, and f(i, j) denotes the two-dimensional array corresponding to the binary image obtained by processing F(i, j) with the Canny edge detection algorithm.
Preferably, if the navigation path is a straight path or a curved path, the center line of the path is obtained according to the following method:
in the image processing area, processing all pixels in the first row respectively to obtain a left edge point position and a right edge point position;
from the second row onward, using the left and right edge point positions of the previous row in the same frame of the path image to limit the search ranges of the left and right edge points of the adjacent next row, thereby obtaining the left and right edge point positions of each row;
and sequentially connecting the midpoints of the left and right edge point positions to obtain the centerline of the path.
Preferably, if the navigation path is a cross path, the centerline of the path is obtained according to the following method:
splitting an image into a bottom part, a middle part and a top part;
and taking an intermediate line between the top and middle parts to obtain the center point of the forward path, then connecting the center points to obtain the centerline of the path.
Preferably, if the navigation path is a straight path, the angular deviation and the positional deviation of the straight path are obtained according to the following method:
acquiring the distance from the centerline at the bottom of the image to the center of the image as the positional deviation, and acquiring the included angle between the centerline and the Y axis as the angular deviation.
Preferably, if the navigation path is a curved path, the angle deviation and the position deviation of the curved path are obtained according to the following method:
acquiring the distance from the central line of the bottom of the image to the central position of the image as position deviation;
calculating the ratio of the arc length to the chord length of the path centerline as the curvature; obtaining the included angle β between the tangent at the front end of the curve and the Y axis and the included angle α between the line connecting the midpoints at the two ends of the curve and the Y axis; and compensating the angular deviation β − α according to the curvature.
Preferably, if the navigation path is a cross path, the angle deviation and the position deviation are obtained according to the following method:
judging the advancing direction of the cross path;
if the advancing direction is a straight path, acquiring the angle deviation and the position deviation of the straight path;
and if the advancing direction is a curve path, acquiring the angle deviation and the position deviation of the curve path.
Preferably, the method for determining the advancing direction of the cross path includes:
let O be the intersection point of the centerlines of the two crossing paths; take O as the center of a square region, so that the four centerlines intersect the edges of the square; calculate the slopes of the four centerlines through point O, denoted k_1, k_2, k_3 and k_4, and define the matching rate K_{l,m} as:

K_{l,m} = k_l / k_m

wherein l and m are integers in [1, 4] and l ≠ m;
comparing the matching rates K_{l,m} and selecting the centerline corresponding to the K_{l,m} value closest to 1 as the forward path.
Preferably, the vision sensor is a camera.
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention acquires navigation path image information through the vision sensor; grayscale conversion, mean filtering, the Canny edge detection algorithm and path edge coordinate extraction are applied to the image to obtain path information, and the accuracy of the path information is improved through interval scanning;
(2) when the edge coordinates of the path are extracted and the center line of the path is obtained, the method carries out optimization processing on the straight path, the curve path and the cross path in a distinguishing manner, greatly reduces the data processing amount, improves the real-time performance and improves the anti-interference capability to a certain extent;
(3) in the invention, during path identification, a slope matching method is provided for the crossed path, so that the biped robot can select the optimal advancing path;
(4) the invention performs visual navigation by laying a navigation route indoors and applies this navigation technology to the biped robot, which can simply and effectively cope with complex indoor environments and has research significance for service robots entering the home in the future.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flow chart of the image recognition preprocessing of the present invention;
FIG. 3 is an experimental diagram of three filters and three edge detection algorithms of the present invention;
FIG. 4 is a schematic digital image of the present invention;
FIG. 5 is a schematic representation of the real-time processing of path identification of the present invention;
FIG. 6 is a schematic representation of the cross-path real-time processing of the present invention;
FIG. 7 is a straight line model of the robot path navigation of the present invention;
FIG. 8 is a curve model of the robot path navigation of the present invention;
FIG. 9 is a cross-line model of the robot path navigation of the present invention;
FIG. 10 is an initial pose of a NAO robot standing and its field of view of an embodiment of the present invention;
fig. 11 is a linear progression view of an NAO robot of an embodiment of the invention;
fig. 12 is a circular progression diagram of an NAO robot of an embodiment of the invention;
fig. 13 is a diagram of the NAO robot of an embodiment of the present invention advancing in a quadrilateral shape;
fig. 14 is a complex-path progression diagram of an NAO robot of an embodiment of the present invention.
Detailed Description
Referring to fig. 1, a method for automatic tracking of a biped robot includes:
step 101, a vision sensor arranged on the biped robot sends navigation path image information under the feet of the robot to a processor;
102, processing the received navigation path image information by the processor to obtain the relative position relation between the current posture of the robot and the navigation path, and sending the relative position relation to a controller; the relative positional relationship includes an angular deviation and a positional deviation;
and 103, controlling the robot to adjust a walking path by the controller according to the received angle deviation and position deviation so as to realize automatic tracking.
It should be noted that the processor and the controller in the above steps may be integrated on the biped robot, or may be a separately arranged processor and controller, and the embodiment is not particularly limited.
The processor processes the received navigation path image information, and comprises the following steps: converting the RGB color image into a gray image; carrying out image filtering on the gray level image by adopting a mean filtering method; performing edge detection on the filtered image by adopting a Canny edge detection algorithm; extracting the edge coordinates of the path to obtain the center line of the path; and obtaining the relative position relation between the current posture of the robot and the navigation path according to the position of the central line.
The main vision sensor for vision-based autonomous tracking of the robot is a camera: the robot acquires path information through the camera, finds the edge information of the path through image recognition, and obtains the navigation parameters through the algorithm. The flow of the image recognition preprocessing is shown in fig. 2. The speed of image processing, the noise immunity and the accuracy of edge extraction are therefore prerequisites for obtaining the navigation parameters.
In image processing, the color image needs to be converted into a grayscale image in advance for the relevant calculation and recognition. The conversion from an RGB color image to a grayscale image uses the standard luminance weighting:

Gray = 0.299 R + 0.587 G + 0.114 B
the main purpose of path recognition is to detect the edges of the navigation path. Commonly used edge detection algorithms are the Sobel operator, Canny operator and laplacian operator. Since the edge detection algorithm is mainly based on the first and second derivatives of the image intensity, the computation of the derivatives is sensitive to noise, and therefore filters have to be used to improve the performance of the noise-dependent edge detector. Image filtering, namely, suppressing the noise of a target image under the condition of keeping the detail features of the image as much as possible, is an indispensable operation in image preprocessing, and the effectiveness and reliability of subsequent image processing and analysis are directly affected by the quality of the processing effect. Common image filters include gaussian, mean and median filtering.
The mean filtering method selects, for the current pixel to be processed, a template composed of several neighboring pixels, and replaces the original pixel value with the mean over the template:

g(x, y) = (1/M) Σ_{(i,j)∈S} f(i, j)

wherein (x, y) is the current pixel to be processed, S is the set of pixels in the template, and M represents the total number of pixels in the template, including the current pixel.
Because path extraction places low requirements on the detail in the image, mean filtering can effectively remove noise interference.
The general flow of the Canny edge detection algorithm is as follows:
(1) convolving the image with a Gaussian smoothing filter:
S[i,j]=G[i,j;σ]*I[i,j]
(2) two arrays of partial derivatives, P and Q, are calculated using first order finite differences:
P[i,j] ≈ (S[i,j+1] − S[i,j] + S[i+1,j+1] − S[i+1,j]) / 2

Q[i,j] ≈ (S[i,j] − S[i+1,j] + S[i,j+1] − S[i+1,j+1]) / 2
(3) calculating the gradient magnitude and orientation:

M[i,j] = sqrt(P[i,j]^2 + Q[i,j]^2)

θ[i,j] = arctan(Q[i,j] / P[i,j])
(4) non-maxima suppression: thinning the ridge zones in the magnitude image, i.e., keeping only the points of locally maximal magnitude change. The range of the gradient angle is reduced to one of four sectors; the sector and the suppressed magnitude are respectively:
ξ[i,j] = Sector(θ[i,j])

N[i,j] = NMS(M[i,j], ξ[i,j])
(5) thresholding: assigning zero to all pixels below the threshold to obtain the edge array of the image.
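As an illustration of steps (2) and (3) (a sketch added for clarity, not part of the patent text), the finite-difference partial derivatives and the gradient magnitude and orientation of a smoothed image S can be computed with NumPy as follows:

```python
import numpy as np

def gradient_maps(S):
    # Finite-difference partial derivatives of the smoothed image S,
    # per the formulas above (outputs are one row/column smaller than S).
    S = S.astype(np.float64)
    P = (S[:-1, 1:] - S[:-1, :-1] + S[1:, 1:] - S[1:, :-1]) / 2.0
    Q = (S[:-1, :-1] - S[1:, :-1] + S[:-1, 1:] - S[1:, 1:]) / 2.0
    M = np.hypot(P, Q)            # gradient magnitude M[i,j]
    theta = np.arctan2(Q, P)      # gradient orientation theta[i,j]
    return P, Q, M, theta
```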
Referring to fig. 3, which shows an experiment with three filters and three edge detection algorithms, it can be seen by comparing the three filtering and edge detection effects that combining mean filtering with the Canny operator gives an ideal result, filtering out edge interference caused by tile gaps and illumination well. The resulting continuous path edge information lays the foundation for accurate navigation of the robot.
The image after preprocessing (grayscale conversion, mean filtering and Canny edge detection) is a binary image containing the path edge information. A digital image is in fact a two-dimensional array of gray values, and the size of this array is the resolution of the image. If the array is denoted F(i, j), as shown in fig. 4, then the gray value at coordinate (i, j) is F(i, j). The function F(i, j) is the mathematical model of the digital image, sometimes also called the image function. The edge coordinates of the path in the image are obtained by a line-by-line scanning method, and the median is taken to obtain a column vector index of the centerline position, which serves as the main basis for calculating the navigation parameters:

index = (I_1, I_2, ..., I_i)^T

wherein I_i is the midpoint (median) of the left and right path edge coordinates detected in row i, and f(i, j) denotes the two-dimensional array corresponding to the binary image obtained by processing F(i, j) with the Canny edge detection algorithm.
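As an illustration (added for this text, not taken from the patent), a minimal NumPy sketch of this row-scan extraction, assuming `edges` is the binary image produced by the Canny step:

```python
import numpy as np

def centerline_index(edges):
    # Scan the binary edge image line by line; for each row take the
    # leftmost and rightmost edge pixels as the path edges and their
    # midpoint as the centerline position I_i (-1 marks rows without a path).
    index = np.full(edges.shape[0], -1, dtype=int)
    for i in range(edges.shape[0]):
        cols = np.flatnonzero(edges[i])
        if cols.size >= 2:
            index[i] = (cols[0] + cols[-1]) // 2
    return index
```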
When the robot walks in real time, the processing of the last sampled image must finish before the next image sampling period arrives; that is, the real-time performance of image processing must be guaranteed. The navigation path must therefore be identified accurately while the speed of path image recognition is also ensured. If every frame of the path image goes through all the steps shown in fig. 2, recognition robustness is ensured to some extent, but a large data processing load results, degrading real-time performance. The navigation path image can be considered row by row of pixels, and because the path is continuous, the positions of the corresponding left and right edge points differ little between two adjacent rows. Therefore, in addition to dividing the image into upper and lower processing areas to reduce the amount of image data processed, the edge positions of the previous row in the same frame are used to limit the edge point range of the adjacent next row, reducing the number of pixels to be processed in each row and improving real-time performance. The specific steps are as follows:
(1) in the image processing area, the processing shown in fig. 2 is performed on all pixels in the first row to obtain the left and right edge points of the path. If none are found (the row corresponds to an area without a path), the next row is processed, and so on, until the left and right edge point positions L1 and R1 are detected, as shown in FIG. 5.
(2) after the edge points of the first row are detected, the second row is not processed in full as in step (1); instead a window half-width of f pixels is chosen, and only the position range [L1−f, R1+f] of that row is processed to obtain its edge points L2 and R2. The left and right edge points of the third row are then searched between L2−f and R2+f, and so on, until the left and right edge points of every row of the navigation path in the processing area are obtained, as sketched in the code below. As long as f is chosen properly, the left and right edge points of a row can be found within [L1−f, R1+f]: if f is too small, edge points may be missed or wrong edge points detected; if f is too large, the edge points can still be found, but the computation grows and real-time performance does not improve.
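A minimal sketch of this windowed edge search (an illustration under the assumptions above, not the patent's own code; `edges` is the binary edge image and `f` the window half-width):

```python
import numpy as np

def track_edges(edges, f=8):
    # Track left/right path edges row by row. The first row containing
    # edges initializes (L, R); every following row is only searched
    # within [L - f, R + f], reducing per-row work and rejecting
    # interference outside the window.
    h, w = edges.shape
    points, L, R = [], None, None
    for i in range(h):
        lo, hi = (0, w) if L is None else (max(L - f, 0), min(R + f + 1, w))
        cols = np.flatnonzero(edges[i, lo:hi]) + lo
        if cols.size >= 2:
            L, R = int(cols[0]), int(cols[-1])
            points.append((i, L, R))
    return points  # list of (row, left_edge, right_edge)
```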
This method has the advantage of greatly reducing the number of pixels that need filtering and edge detection, which greatly reduces the data processing load and improves real-time performance; and because the candidate edge points of the next row are confined to a small candidate interval, the anti-interference capability is also improved to a certain extent.
This method greatly reduces image processing time when there are no cross paths, but at an intersection it loses path information and causes the robot to advance incorrectly. Therefore, a path judgment mechanism is introduced into the image processing: when an intersection is encountered, the real-time method above is not used; instead the image is processed in three sections, i.e., split into a bottom, a middle and a top part, as shown in fig. 6. This greatly reduces processing time, but at the cost of losing path information. To speed up image processing as much as possible while obtaining more accurate path information, a median-taking method is used: as shown in fig. 6, the line connecting the middle point and the top differs considerably from the actual path, so an intermediate line is taken between the top and middle parts to obtain the center point of the forward path, and the center points are connected in turn. In this way, the processing time is reduced and the extracted path is closer to the actual path.
On the basis of the obtained navigation path, the path tracking model of the invention covers the following three cases:
(1) linear path tracking model
A simple linear path tracking model is shown in FIG. 7. The model regards the navigation path acquired by the biped robot's camera as a straight line; the edge lines of the path are obtained by the image recognition algorithm, and the centerline of the path is then derived.
(2) Curve path tracking model
The optimal tracking state on a curved path is that the robot's advancing direction is always tangent to the curve, so unlike the straight-line model the angular deviation of the curved path tracking model cannot be calculated directly from the edge positions, as shown in fig. 8. The included angle α between the line connecting the midpoints at the two ends of the curve and the Y axis is obtained first; the tangent at the front end of the curve points in direction V0, and its included angle with the Y axis is β; the angular deviation β − α is then compensated according to the curvature. The curvature of the path is defined as the ratio of the arc length to the chord length of the path centerline; the closer this ratio is to 1, the closer the path is to a straight line.
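As an illustration of this computation (a sketch added for clarity; the centerline is assumed to be given as an (N, 2) array of (x, y) points ordered from the image bottom to the curve front):

```python
import numpy as np

def curve_deviation(center):
    # Curvature = arc length / chord length of the centerline
    # (approaches 1 for a straight path); the compensated angular
    # deviation is beta - alpha, both measured from the Y (forward) axis.
    center = np.asarray(center, dtype=float)
    seg = np.diff(center, axis=0)
    arc = np.hypot(seg[:, 0], seg[:, 1]).sum()
    chord_vec = center[-1] - center[0]
    chord = np.hypot(chord_vec[0], chord_vec[1])
    curvature = arc / chord
    alpha = np.arctan2(chord_vec[0], chord_vec[1])   # end-to-end chord vs. Y axis
    tangent = center[-1] - center[-2]                # front-end tangent direction V0
    beta = np.arctan2(tangent[0], tangent[1])
    return curvature, beta - alpha
```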
(3) Tracking model for cross-route
The situation in which two paths cross is often encountered in the autonomous tracking of the robot, and how the robot judges and selects the forward path is a problem that must be solved. A path selection algorithm based on slope matching is therefore proposed, which solves the selection of the advancing direction at a cross path. The cross-path tracking model is shown in fig. 9. In the figure, O is the intersection point of the centerlines of the two crossing paths. Taking O as the center of a square region, the four centerlines intersect the edges of the square and can be labeled 1, 2, 3 and 4; their slopes through point O are k_1, k_2, k_3 and k_4, respectively. The matching rate K_{l,m} is defined as:

K_{l,m} = k_l / k_m
wherein l and m are integers in [1, 4] and l ≠ m;
when the matching rate Kl,mThe closer to 1, the more likely the two paths are to be continuous. The path selection algorithm through slope matching can correctly select the forward path without being unknown when the intersection is encountered. Assuming that the current path is 1, K is calculated in sequence1,2、K1,3And K1,4And comparing K1,2、K1,3And K1,4Size of (D), select K1,2、K1,3And K1,4The central line corresponding to the median closest to 1 is taken as a forward path.
In this embodiment, the autonomous tracking of the biped robot is developed using the humanoid robot NAO from Aldebaran Robotics (France) as the hardware platform.
The NAO hardware is well designed and manufactured, which ensures the smoothness of NAO's motion, and it is equipped with a variety of sensors. In addition, NAO can be programmed under operating systems such as Linux, Windows or Mac OS using several languages such as C++, Python and Java; the graphical programming software Choregraphe is also provided, so that users can freely write programs for NAO and teach it many actions.
This embodiment develops the vision of the NAO robot. The head of the NAO robot has two cameras with a resolution of up to 1280 × 960 at 30 frames per second. The path information on the ground is the main information source for the NAO robot's advance, and since image information far ahead has little utilization value, the camera at the bottom of the head is selected and the head is rotated downward by a fixed angle, so that the NAO robot can acquire the path information under its feet without the feet blocking the field of view. The initial standing posture and the camera's field of view are shown in fig. 10.
Aldebaran designed graphical programming software for the NAO robot that is very suitable for programming enthusiasts without a software development background. The NAO robot also provides a multi-language development environment, such as C++, Python and Java. In this embodiment, the software for visual path tracking of the NAO robot is developed with Python 2.7 + OpenCV 2.4. Python is a very popular programming language with rich and powerful libraries, allowing application software to be written conveniently and quickly. OpenCV is a BSD-licensed (open source) cross-platform computer vision library that runs on Linux, Windows, Android and Mac OS. It is light and efficient, composed of a series of C functions and a small number of C++ classes; it provides interfaces for languages such as Python, Ruby and MATLAB and implements many general algorithms in image processing and computer vision.
The present embodiment is programmed and developed in the Python language; partial code and pseudo-code are given below.
(1) Initializing an NAO robot
The initialization process of the NAO robot keeps the NAO in a fixed pose, selects the bottom camera, and sets the color space and resolution of the camera, etc.
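The code listing itself is not reproduced here; the following is a minimal initialization sketch using the standard NAOqi Python SDK (the IP address, port, head angle and camera subscription parameters are illustrative assumptions, not values from the patent):

```python
from naoqi import ALProxy

NAO_IP, NAO_PORT = "192.168.1.10", 9559       # illustrative robot address

motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
posture = ALProxy("ALRobotPosture", NAO_IP, NAO_PORT)
video = ALProxy("ALVideoDevice", NAO_IP, NAO_PORT)

motion.wakeUp()                                # stiffness on
posture.goToPosture("StandInit", 0.5)          # fixed standing pose
motion.setAngles("HeadPitch", 0.45, 0.2)       # rotate the head down a fixed angle

# Subscribe to the bottom camera (index 1), kVGA resolution (2),
# BGR color space (13), 30 fps.
handle = video.subscribeCamera("tracker", 1, 2, 13, 30)
```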
(2) Performing image acquisition and preprocessing thereof
Image acquisition uses getImageRemote in the ALVideoDevice proxy. Image filtering and edge extraction use functions from the OpenCV library.
Image acquisition and preprocessing code:
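The original listing is an image in the published document; what follows is a minimal sketch under the stated environment (Python + OpenCV, NAOqi SDK). The `video` proxy and `handle` come from the initialization sketch above, and the filter and threshold parameters are illustrative:

```python
import numpy as np
import cv2

def grab_and_preprocess(video, handle):
    # Grab one frame from the NAO camera and return the binary Canny
    # edge image used for centerline extraction.
    result = video.getImageRemote(handle)        # ALVideoDevice image container
    width, height, raw = result[0], result[1], result[6]
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    blurred = cv2.blur(gray, (5, 5))                 # 5x5 mean filter
    return cv2.Canny(blurred, 50, 150)               # Canny edge detection
```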
Path detection algorithm pseudo-code:
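The pseudo-code listing is likewise an image in the published document. A minimal sketch of the overall detection-and-control loop, reusing the hypothetical helpers sketched earlier (`grab_and_preprocess`, `track_edges`) together with the ALMotion proxy, might read (gains and thresholds are illustrative):

```python
import numpy as np

while True:
    edges = grab_and_preprocess(video, handle)
    points = track_edges(edges)                  # (row, left, right) per row
    if not points:
        motion.stopMove()                        # no path in view: stop
        break
    # positional deviation: centerline at the image bottom vs. image center
    bottom_row, bl, br = points[-1]
    pos_dev = (bl + br) / 2.0 - edges.shape[1] / 2.0
    # angular deviation: centerline direction vs. the Y (forward) axis
    top_row, tl, tr = points[0]
    ang_dev = np.arctan2((tl + tr) / 2.0 - (bl + br) / 2.0,
                         float(bottom_row - top_row))
    # walk forward while steering against both deviations
    motion.moveToward(0.4, 0.0, -0.002 * pos_dev - 0.8 * ang_dev)
```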
According to the above processing, the experiments of this embodiment lay straight lines, circles, quadrilaterals and crossing lines on the ground to verify the accuracy and stability of the theory. The tests were carried out by applying black tape about 15 mm wide to a light-colored marble floor. The straight-line tracking experiment of the NAO robot is shown in fig. 11, the circular tracking experiment in fig. 12, the quadrilateral-path tracking experiment in fig. 13, and the complex-path tracking experiment in fig. 14. The experimental result data for the various routes are shown in Table 1 below.
TABLE 1
As can be seen from Table 1, the NAO robot tracks relatively simple paths with good stability and accuracy, while for complex paths certain defects remain to be improved. The failures mainly stem from shaking of the camera while the NAO robot walks and from interference by external light intensity; further improvements are needed to eliminate camera shake and to resist external interference.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the technical scope of the present invention, so that any minor modifications, equivalent changes and modifications made to the above embodiment according to the technical spirit of the present invention are within the technical scope of the present invention.
Claims (10)
1. A method for automatic tracking of a biped robot, comprising:
the vision sensor arranged on the biped robot sends navigation path image information under the feet of the robot to the processor;
the processor processes the received navigation path image information to obtain the relative position relation between the current posture of the robot and the navigation path and sends the relative position relation to the controller; the relative positional relationship includes an angular deviation and a positional deviation;
and the controller controls the robot to adjust the walking path according to the received angle deviation and position deviation so as to realize automatic tracking.
2. The method for automated tracking of a biped robot of claim 1 wherein the processor processes the received navigation path image information, comprising:
converting the RGB color image into a gray image;
carrying out image filtering on the gray level image by adopting a mean filtering method;
performing edge detection on the filtered image by adopting a Canny edge detection algorithm;
extracting the edge coordinates of the path to obtain the center line of the path;
and obtaining the relative position relation between the current posture of the robot and the navigation path according to the position of the central line.
3. The method for automatic tracking of a biped robot according to claim 2, wherein said extracting path edge coordinates to obtain a centerline of the path comprises:
the edge coordinates of the path in the image are acquired by a line-by-line scanning method, and the median is taken to obtain a column vector index of the centerline position, as follows:

index = (I_1, I_2, ..., I_i)^T

wherein I_i is the midpoint (median) of the left and right path edge coordinates detected in row i, and f(i, j) denotes the two-dimensional array corresponding to the binary image obtained by processing F(i, j) with the Canny edge detection algorithm.
4. The method for automated tracking of a biped robot according to claim 3, wherein if the navigation path is a straight path or a curved path, the centerline of the path is obtained according to the following method:
in the image processing area, processing all pixels in the first row respectively to obtain a left edge point position and a right edge point position;
from the second row onward, using the left and right edge point positions of the previous row in the same frame of the path image to limit the search ranges of the left and right edge points of the adjacent next row, thereby obtaining the left and right edge point positions of each row;
and sequentially connecting the midpoints of the left and right edge point positions to obtain the centerline of the path.
5. The method for automated tracking of a biped robot according to claim 3, wherein if the navigation path is a cross path, the centerline of the path is obtained according to the following method:
splitting an image into a bottom part, a middle part and a top part;
and taking an intermediate line between the top and middle parts to obtain the center point of the forward path, then connecting the center points to obtain the centerline of the path.
6. The method for automatic tracking of a biped robot according to claim 2, wherein if the navigation path is a straight path, the angular deviation and the positional deviation of the straight path are obtained according to the following method:
acquiring the distance from the centerline at the bottom of the image to the center of the image as the positional deviation, and acquiring the included angle between the centerline and the Y axis as the angular deviation.
7. The method for automatic tracking of a biped robot according to claim 6, wherein if the navigation path is a curved path, the angular deviation and the positional deviation of the curved path are obtained according to the following method:
acquiring the distance from the central line of the bottom of the image to the central position of the image as position deviation;
calculating the ratio of the arc length to the chord length of the path centerline as the curvature; obtaining the included angle β between the tangent at the front end of the curve and the Y axis and the included angle α between the line connecting the midpoints at the two ends of the curve and the Y axis; and compensating the angular deviation β − α according to the curvature.
8. The method for automated tracking of a biped robot according to claim 7, wherein if the navigation path is a cross path, the angular deviation and the positional deviation are obtained according to the following method:
judging the advancing direction of the cross path;
if the advancing direction is a straight path, acquiring the angle deviation and the position deviation of the straight path;
and if the advancing direction is a curve path, acquiring the angle deviation and the position deviation of the curve path.
9. The method for automated biped robot tracking according to claim 8, wherein the method for determining the direction of the cross path comprises:
let O be the intersection point of the centerlines of the two crossing paths; take O as the center of a square region, so that the four centerlines intersect the edges of the square; calculate the slopes of the four centerlines through point O, denoted k_1, k_2, k_3 and k_4, and define the matching rate K_{l,m} as:

K_{l,m} = k_l / k_m

wherein l and m are integers in [1, 4] and l ≠ m;
comparing the matching rates K_{l,m} and selecting the centerline corresponding to the K_{l,m} value closest to 1 as the forward path.
10. The biped robot automatic tracking method according to claim 1, wherein the vision sensor is a camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711306559.7A CN108181897A (en) | 2017-12-11 | 2017-12-11 | A kind of method of biped robot's automatic tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711306559.7A CN108181897A (en) | 2017-12-11 | 2017-12-11 | A kind of method of biped robot's automatic tracking |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108181897A true CN108181897A (en) | 2018-06-19 |
Family
ID=62545896
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711306559.7A Pending CN108181897A (en) | 2017-12-11 | 2017-12-11 | A kind of method of biped robot's automatic tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108181897A (en) |
- 2017-12-11: Application CN201711306559.7A filed; published as CN108181897A (status: Pending)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104007761A (en) * | 2014-04-30 | 2014-08-27 | 宁波韦尔德斯凯勒智能科技有限公司 | Visual servo robot tracking control method and device based on pose errors |
KR20150144731A (en) * | 2014-06-17 | 2015-12-28 | 주식회사 유진로봇 | Apparatus for recognizing location mobile robot using edge based refinement and method thereof |
CN105651286A (en) * | 2016-02-26 | 2016-06-08 | 中国科学院宁波材料技术与工程研究所 | Visual navigation method and system of mobile robot as well as warehouse system |
CN106054886A (en) * | 2016-06-27 | 2016-10-26 | 常熟理工学院 | Automatic guiding transport vehicle route identification and control method based on visible light image |
CN106985142A (en) * | 2017-04-28 | 2017-07-28 | 东南大学 | A kind of double vision for omni-directional mobile robots feels tracking device and method |
CN106990786A (en) * | 2017-05-12 | 2017-07-28 | 中南大学 | The tracking method of intelligent carriage |
CN108931240A (en) * | 2018-03-06 | 2018-12-04 | 东南大学 | A kind of path tracking sensor and tracking method based on electromagnetic induction |
CN111062968A (en) * | 2019-11-29 | 2020-04-24 | 河海大学 | Robot path skeleton extraction method based on edge scanning and centerline extraction |
Non-Patent Citations (6)
Title |
---|
HAMID REZA RIAHI BAKHTIARI, ABOLFAZL ABDOLLAHI, HANI REZAEIAN: "Semi automatic road extraction from digital images", 《THE EGYPTIAN JOURNAL OF REMOTE SENSING AND SPACE SCIENCE》 * |
LI-HONG JUANG,JIAN-SEN ZHANG: "Robust visual line-following navigation system for humanoid robots", 《ARTIFICIAL INTELLIGENCE REVIEW》 * |
LI-HONG JUANG,JIAN-SEN ZHANG: "Visual Tracking Control of Humanoid Robot", 《IEEE ACCESS》 * |
MENG Wusheng: "Research on path recognition of a line-tracking smart vehicle based on a CMOS sensor", Mechatronics (机电一体化) *
LI Lingzhi: "Research on AGV visual navigation based on image processing", China Masters' Theses Full-text Database, Information Science and Technology series (中国优秀硕士学位论文全文数据库 信息科技辑) *
HU Changhui: "Research and application of digital camera path recognition technology", Programmable Controller & Factory Automation (可编程控制器与工厂自动化) *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109093625A (en) * | 2018-09-11 | 2018-12-28 | 国网山东省电力公司莱芜供电公司 | A kind of straight line path visual identity method for robot cruise |
CN109324615A (en) * | 2018-09-20 | 2019-02-12 | 深圳蓝胖子机器人有限公司 | Office building delivery control method, device and computer readable storage medium |
CN109269544A (en) * | 2018-09-27 | 2019-01-25 | 中国人民解放军国防科技大学 | Inspection system for suspension sensor of medium-low speed magnetic suspension vehicle |
CN109269544B (en) * | 2018-09-27 | 2021-01-29 | 中国人民解放军国防科技大学 | Inspection system for suspension sensor of medium-low speed magnetic suspension vehicle |
CN111093007A (en) * | 2018-10-23 | 2020-05-01 | 辽宁石油化工大学 | Walking control method and device for biped robot, storage medium and terminal |
CN111093007B (en) * | 2018-10-23 | 2021-04-06 | 辽宁石油化工大学 | Walking control method and device for biped robot, storage medium and terminal |
CN109828568A (en) * | 2019-02-15 | 2019-05-31 | 武汉理工大学 | Ball gait optimization method is sought to the NAO robot of RoboCup match |
CN109828568B (en) * | 2019-02-15 | 2022-04-15 | 武汉理工大学 | NAO robot ball-searching gait optimization method for RoboCup game |
CN109900266A (en) * | 2019-03-27 | 2019-06-18 | 小驴机器人(武汉)有限公司 | Fast recognition and positioning mode and system based on RGB-D and inertial navigation |
CN110084825A (en) * | 2019-04-16 | 2019-08-02 | 上海岚豹智能科技有限公司 | A kind of method and system based on image edge information navigation |
CN110032191A (en) * | 2019-04-28 | 2019-07-19 | 中北大学 | A kind of human emulated robot is quickly walked tracking avoidance implementation method |
CN110989581A (en) * | 2019-11-26 | 2020-04-10 | 广东博智林机器人有限公司 | Method and device for controlling conveyance system, computer device, and storage medium |
CN110989581B (en) * | 2019-11-26 | 2023-04-07 | 广东博智林机器人有限公司 | Method and device for controlling conveyance system, computer device, and storage medium |
CN111028275A (en) * | 2019-12-03 | 2020-04-17 | 扬州后潮科技有限公司 | Tracing robot PID method based on cross-correlation image positioning matching |
CN111028275B (en) * | 2019-12-03 | 2024-01-30 | 内蒙古汇栋科技有限公司 | Image positioning matching tracking robot PID method based on cross correlation |
CN111398984A (en) * | 2020-03-22 | 2020-07-10 | 华南理工大学 | Self-adaptive laser radar point cloud correction and positioning method based on sweeping robot |
CN111398984B (en) * | 2020-03-22 | 2022-03-29 | 华南理工大学 | Self-adaptive laser radar point cloud correction and positioning method based on sweeping robot |
CN112720408A (en) * | 2020-12-22 | 2021-04-30 | 江苏理工学院 | Visual navigation control method for all-terrain robot |
CN112720408B (en) * | 2020-12-22 | 2022-07-08 | 江苏理工学院 | Visual navigation control method for all-terrain robot |
WO2022156755A1 (en) * | 2021-01-21 | 2022-07-28 | 深圳市普渡科技有限公司 | Indoor positioning method and apparatus, device, and computer-readable storage medium |
CN112631312A (en) * | 2021-03-08 | 2021-04-09 | 北京三快在线科技有限公司 | Unmanned equipment control method and device, storage medium and electronic equipment |
CN113139987A (en) * | 2021-05-06 | 2021-07-20 | 太原科技大学 | Visual tracking quadruped robot and tracking characteristic information extraction algorithm thereof |
CN113381667A (en) * | 2021-06-25 | 2021-09-10 | 哈尔滨工业大学 | Seedling searching walking system and method based on ROS and image processing |
CN113381667B (en) * | 2021-06-25 | 2022-10-04 | 哈尔滨工业大学 | Seedling searching walking system and method based on ROS and image processing |
CN113721625A (en) * | 2021-08-31 | 2021-11-30 | 平安科技(深圳)有限公司 | AGV control method, device, equipment and storage medium |
CN113721625B (en) * | 2021-08-31 | 2023-07-18 | 平安科技(深圳)有限公司 | AGV control method, device, equipment and storage medium |
CN113867345A (en) * | 2021-09-23 | 2021-12-31 | 西北工业大学 | Nao robot path planning method based on deep Double-Q network |
CN113867345B (en) * | 2021-09-23 | 2024-09-06 | 西北工业大学 | Nao robot path planning method based on depth Double-Q network |
CN114281085A (en) * | 2021-12-29 | 2022-04-05 | 福建汉特云智能科技有限公司 | Robot tracking method and storage medium |
CN114281085B (en) * | 2021-12-29 | 2023-06-06 | 福建汉特云智能科技有限公司 | Robot tracking method and storage medium |
CN114639163A (en) * | 2022-02-25 | 2022-06-17 | 纯米科技(上海)股份有限公司 | Walking program scoring method, system, electronic device and storage medium |
CN114639163B (en) * | 2022-02-25 | 2024-06-07 | 纯米科技(上海)股份有限公司 | Scoring method and scoring system for walking program, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108181897A (en) | A kind of method of biped robot's automatic tracking | |
US11361469B2 (en) | Method and system for calibrating multiple cameras | |
CN102982557B (en) | Method for processing space hand signal gesture command based on depth camera | |
Huang et al. | A fast point cloud ground segmentation approach based on coarse-to-fine Markov random field | |
US20050002558A1 (en) | Camera based position recognition for a road vehicle | |
CN108597009B (en) | Method for detecting three-dimensional target based on direction angle information | |
CN106584451A (en) | Visual navigation based transformer substation automatic composition robot and method | |
Fiala et al. | Visual odometry using 3-dimensional video input | |
CN110334625A (en) | A kind of parking stall visual identifying system and its recognition methods towards automatic parking | |
Maier et al. | Vision-based humanoid navigation using self-supervised obstacle detection | |
CN105138990A (en) | Single-camera-based gesture convex hull detection and palm positioning method | |
CN111784655A (en) | Underwater robot recovery positioning method | |
CN113848892B (en) | Robot cleaning area dividing method, path planning method and device | |
Sandy et al. | Object-based visual-inertial tracking for additive fabrication | |
CN103247032A (en) | Weak extended target positioning method based on attitude compensation | |
CN111176305A (en) | Visual navigation method | |
JP6410231B2 (en) | Alignment apparatus, alignment method, and computer program for alignment | |
Tomono et al. | Mobile robot navigation in indoor environments using object and character recognition | |
Carrera et al. | Lightweight SLAM and Navigation with a Multi-Camera Rig. | |
Li et al. | A mobile robotic arm grasping system with autonomous navigation and object detection | |
JPH07103715A (en) | Method and apparatus for recognizing three-dimensional position and attitude based on visual sense | |
Ma et al. | Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment | |
Laue et al. | Efficient and reliable sensor models for humanoid soccer robot self-localization | |
CN114782639A (en) | Rapid differential latent AGV dense three-dimensional reconstruction method based on multi-sensor fusion | |
Gu et al. | Truss member registration for implementing autonomous gripping in biped climbing robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180619 |