LU500407B1 - Real-time positioning method for inspection robot - Google Patents

Real-time positioning method for inspection robot

Info

Publication number
LU500407B1
Authority
LU
Luxembourg
Prior art keywords
dimensional code
positioning data
monocular cameras
label
uwb
Prior art date
Application number
LU500407A
Other languages
German (de)
Inventor
Jiehong Shi
Congling Shi
Honglei Che
Jian Li
Fei Ren
Chen Zhao
Xiaodong Qian
Original Assignee
China Academy Safety Science & Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy Safety Science & Technology filed Critical China Academy Safety Science & Technology
Application granted granted Critical
Publication of LU500407B1 publication Critical patent/LU500407B1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0257 Hybrid positioning
    • G01S5/0263 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0257 Hybrid positioning
    • G01S5/0263 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
    • G01S5/0264 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems at least one of the systems being a non-radio wave positioning system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Abstract

The method comprises: employing each of the monocular cameras to identify a two-dimensional code, and determining whether each of the monocular cameras can identify a two-dimensional code label; when each of the monocular cameras can identify the two-dimensional code label, acquiring an identification result of the two-dimensional code label by each of the monocular cameras, determining first positioning data of each of the monocular cameras based on each identification result, and determining a real-time position of the inspection robot based on each of the first positioning data; and when at least one monocular camera cannot identify the two-dimensional code label, acquiring UWB positioning data of a UWB label, moving the robot, acquiring second positioning data of each of the monocular cameras, compensating and correcting the UWB positioning data with each of the second positioning data, and determining a real-time position of the robot based on the compensated and corrected third positioning data.

Description

DESCRIPTION
REAL-TIME POSITIONING METHOD FOR INSPECTION ROBOT
FIELD OF TECHNOLOGY
[0001] The present invention relates to the technical field of robots, and in particular to a real-time positioning method for an inspection robot.
BACKGROUND
[0002] The real-time navigation function of an inspection robot requires the system to determine the next driving direction according to the current position of the inspection robot, which demands high positioning accuracy. The difficulty of inspection robot positioning technology mainly lies in two aspects: one is navigation technology, because positioning accuracy varies with the navigation technology used under complicated environmental conditions; the other is data fusion technology, because the computing process becomes very cumbersome when data collected by sensors with different accuracy requirements are fused.
[0003] In recent years, with the continuous development of the field of intelligent manufacturing, higher requirements have been raised on the positioning accuracy and flexible configuration of inspection robots. Laser radar, visual navigation, QR code navigation, and SLAM navigation have been applied to different fields. Currently, laser radar is employed in most inspection robot positioning methods; it can assist an inspection robot in constructing an indoor environment map for a completely unknown indoor environment via core sensors such as LiDAR, and in achieving autonomous navigation of the inspection robot. The indoor positioning accuracy of laser radar is around 20 mm, but owing to its high price, the laser radar alone can account for one-third of the total cost of the inspection robot. In addition, the indoor environment map is generally constructed with SLAM technology, which mainly includes Visual SLAM (VSLAM) and LiDAR SLAM. VSLAM refers to the use of depth cameras such as Kinect for navigation in the indoor environment, and its working principle is to optically process the environment around the inspection robot: the camera collects image information, and the processor links the collected image information to the actual position of the inspection robot so as to accomplish autonomous navigation and positioning of the inspection robot. VSLAM involves a large amount of calculation, which places high demands on the system performance of the inspection robot, and the map generated by VSLAM is usually a point cloud that cannot be directly applied to route planning of the inspection robot.
SUMMARY
[0004] To solve the above problems, the present invention aims at providing a real-time positioning method of an inspection robot, which incorporates UWB positioning and monocular vision positioning, thereby achieving high-accuracy positioning while reducing the cost thereof.
[0005] The present invention provides a real-time positioning method of an inspection robot, comprising:
[0006] employing each of the monocular cameras to identify a two-dimensional code indoors, and determining whether each of the monocular cameras can identify a two-dimensional code label;
[0007] when each of the monocular cameras can identify the two-dimensional code label, acquiring an identification result of the two-dimensional code label by each of the monocular cameras, determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras, and determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras; and
[0008] when at least one monocular camera cannot identify the two-dimensional code label, acquiring UWB positioning data of the UWB label, moving the inspection robot, acquiring second positioning data of each of the monocular cameras, compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras, and determining a real-time position of the inspection robot based on compensated and corrected third positioning data,
[0009] wherein the two-dimensional code label and the UWB label are positioned on the inspection robot.
[0010] As a further improvement of the present invention, when each of the monocular cameras can identify the two-dimensional code label, the acquiring an identification result of the two-dimensional code label by each of the monocular cameras and the determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras comprise:
[0011] acquiring a two-dimensional code image of the two-dimensional code label shot by a monocular camera;
[0012] acquiring an identification result of the two-dimensional code label by the monocular cameras based on the two-dimensional code image;
[0013] determining first positioning data of the monocular cameras based on the identification result of the two-dimensional code label; and
[0014] determining first positioning data of each of monocular cameras, respectively, by analogy.
[0015] As a further improvement of the present invention, the acquiring an identification result of the two-dimensional code label by the monocular cameras based on the two-dimensional code image comprises:
[0016] subjecting the two-dimensional code image to binarization processing to obtain a first binary image;
[0017] extracting an outline of the two-dimensional code from the first binary image;
[0018] based on the extracted outline, subjecting the first binary image to perspective transformation to obtain a second binary image;
[0019] acquiring a white bit and a black bit of the second binary image through OTSU binarization processing; and
[0020] identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label.
[0021] As a further improvement of the present invention, the identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label comprises:
[0022] determining a dictionary type of the two-dimensional code label according to the white bit and the black bit of the second binary image;
[0023] searching the dictionary type of the two-dimensional code label from a dictionary bank to acquire the identification result of the two-dimensional code label.
[0024] As a further improvement of the present invention, the determining first positioning data of the monocular cameras based on the identification result of the two-dimensional code label comprises:
[0025] taking a mark point of the two-dimensional code image as a starting point, and sequentially acquiring image coordinate values of four angular points of the two-dimensional code image in a clockwise direction, wherein the mark point of the two-dimensional code image is one of the four angular points of the two-dimensional code image;
[0026] determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
[0027] acquiring world coordinate values of the four angular points of the two-dimensional code image according to the rotation matrix R, the translation matrix T, and the image coordinate values of the four angular points of the two-dimensional code image; and
[0028] determining a world coordinate value of the two-dimensional code image as first positioning data of the monocular cameras according to the world coordinate values of the four angular points of the two-dimensional code image.
[0029] As a further improvement of the present invention, the determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras comprises:
[0030] determining a target positioning result according to each of the first positioning data;
[0031] using the target positioning result as the real-time position of the inspection robot.
[0032] As a further improvement of the present invention, when each of the monocular cameras cannot identify the two-dimensional code label, the acquiring UWB positioning data of the UWB label, the moving the inspection robot, the acquiring second positioning data of each of the monocular cameras, the compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras, and the determining a real-time position of the inspection robot based on compensated and corrected third positioning data comprise:
[0033] acquiring at least three groups of UWB positioning data of the UWB label, and determining a first average value $(\bar{x}_u, \bar{y}_u)$ of the at least three groups of UWB positioning data;
[0034] moving the inspection robot, acquiring at least three groups of second positioning data of each of the monocular cameras, and determining a second average value $(\bar{x}_c, \bar{y}_c)$ of the at least three groups of second positioning data;
[0035] determining an error value $(\delta_x, \delta_y) = ((\bar{x}_c - \bar{x}_u), (\bar{y}_c - \bar{y}_u))$ based on the first average value $(\bar{x}_u, \bar{y}_u)$ and the second average value $(\bar{x}_c, \bar{y}_c)$; and
[0036] compensating and correcting another group of UWB positioning data $(x_u, y_u)$ of the UWB label with the error value $(\delta_x, \delta_y)$ to obtain third positioning data $(x_t, y_t) = (x_u + \delta_x, y_u + \delta_y)$ as the real-time position of the inspection robot.
[0037] As a further improvement of the present invention, the method is further used for simultaneously positioning a plurality of inspection robots, wherein each inspection robot is provided with a different two-dimensional code label and a different UWB label.
[0038] The present invention further provides an electronic device comprising a memory and a processor, the memory being used for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method.
[0039] The present invention further provides a computer-readable storage medium on which computer programs are stored, the computer programs being executed by a processor to implement the method.
[0040] The present invention has the advantageous effect that:
[0041] the method of the present invention uses the monocular cameras to identify the two-dimensional code label on the inspection robot, and achieves high-accuracy positioning of the inspection robot through conversion of the world coordinate system, the image coordinate system, and the camera coordinate system, wherein positional accuracy may be accurate to a centimeter level. Meanwhile, areas not covered by the monocular cameras are subjected to data fusion in combination with UWB positioning data, thereby finally achieving indoor full-coverage high-accuracy positioning with a positioning accuracy up to 15 mm. The present invention may further achieve simultaneous positioning of a plurality of inspection robots.
BRIEF DESCRIPTION OF THE DRAWINGS
[0042] To clearly explain the technical solutions in examples of the present invention or the prior art, described below is a brief introduction to accompanying figures required for describing examples or the prior art. Apparently, the accompanying figures described hereinafter are only some examples of the invention, whereby those of ordinary skill in the art can further obtain other figures without any ingenuity.
[0043] FIG. 1 is a schematic flowchart of a real-time positioning method for an inspection robot as recited in an exemplary example of the present invention.
[0044] FIG. 2 is a schematic diagram of use of a monocular camera for positioning as recited in an exemplary example of the present invention.
[0045] FIG. 3 is a schematic diagram of pose transformation as recited in an exemplary example of the present invention.
DESCRIPTION OF THE EMBODIMENTS
[0046] The technical solutions in the examples of the present invention will be described below clearly and completely with reference to the accompanying figures in the examples of the present invention. Apparently, the described examples are only part of the examples of the present invention, rather than all of them. Based on the examples of the present invention, all other examples acquired by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
[0047] It shall be noted that if the examples of the present invention involve directional indications (such as up, down, left, right, front, and back), the directional indications are only used to explain the relative position relationship, movement conditions, etc. among components in some specific pose (as shown in the figures), and they vary correspondingly with the specific pose.
[0048] In addition, in the description of the present invention, all terms used are for illustrative purposes only, and are not intended to limit the scope of the present invention. The terms “comprising” and/or “including” are used to specify the presence of the elements, steps, operations and/or components, but do not exclude the presence or addition of one or more other elements, steps, operations and/or components. The terms, such as “first” and “second”, may be used for describing various elements without representing the sequence or limiting these elements. Furthermore, in the description of the present invention, unless otherwise stated, “a plurality of” refers to “two or more”. These terms are only used to distinguish one element from another. These and/or other aspects become apparent in combination with the following figures, and it is easier for those of ordinary skill in the art to understand descriptions of the examples of the present invention. The figures are used for illustrative purposes only to depict the examples of the present invention. Those skilled in the art will readily recognize from the following description that alternative examples of the structure and method shown in the present invention can be used without departing from the principle of the present invention.
[0049] A real-time positioning method for an inspection robot as recited in an example of the present invention, as shown in FIG. 1, comprises:
[0050] S1, employing each of the monocular cameras to identify a two-dimensional code indoors, and determining whether each of the monocular cameras can identify a two-dimensional code label;
[0051] S2, when each of the monocular cameras can identify the two-dimensional code label, acquiring an identification result of the two-dimensional code label by each of the monocular cameras indoors;
[0052] S3, determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras;
[0053] S4, determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras;
[0054] S5, when each of the monocular cameras cannot identify the two-dimensional code label, acquiring UWB positioning data of a UWB label;
[0055] S6, moving the inspection robot and acquiring second positioning data of each of the monocular cameras;
[0056] S7, compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras; and
[0057] S8, determining a real-time position of the inspection robot based on compensated and corrected third positioning data;
[0058] wherein the two-dimensional code label and the UWB label are positioned on the inspection robot, and the two-dimensional code label comprises a black frame and a binary matrix within the black frame.
[0059] The method of the present invention incorporates UWB (ultra-wideband) positioning and monocular vision positioning: it determines a real-time coordinate of the inspection robot according to the positioning data of each of the monocular cameras when every monocular camera can shoot the two-dimensional code label; when at least one monocular camera is occluded (e.g., one monocular camera is occluded, or two monocular cameras are occluded simultaneously) and therefore cannot shoot the two-dimensional code label, it employs the positioning data of the UWB label for auxiliary positioning of the inspection robot, and compensates and corrects the positioning data of the UWB label to serve as the real-time coordinate of the inspection robot.
[0060] Wherein, for example, four monocular cameras may be arranged on four indoor walls, respectively, the field of view (FOV) of which may cover most of a room. The present invention does not specifically define the number and position of the monocular cameras, and the specific installation positions of the monocular cameras can be adjusted according to the size of the room and the FOV of the camera.
[0061] In an alternative embodiment, the acquiring an identification result of the two-dimensional code label by each of the monocular cameras and the determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras comprise:
[0062] acquiring a two-dimensional code image of the two-dimensional code label shot by a monocular camera;
[0063] acquiring an identification result of the two-dimensional code label by the monocular cameras based on the two-dimensional code image;
[0064] determining first positioning data of the monocular camera based on the identification result of the two-dimensional code label; and
[0065] determining first positioning data of each monocular camera, respectively, by analogy.
[0066] The two-dimensional code label of the present invention consists of a black frame and a binary matrix within the black frame, wherein the black frame is beneficial to fast detection in the two-dimensional code image, and the binary matrix therein is beneficial to fast identification and error correction of the two-dimensional code label.
[0067] In an alternative embodiment, the acquiring an identification result of the two-dimensional code label by the monocular camera based on the two-dimensional code image comprises:
[0068] subjecting the two-dimensional code image to binarization processing to obtain a first binary image;
[0069] extracting an outline of the two-dimensional code from the first binary image;
[0070] based on the extracted outline, subjecting the first binary image to perspective transformation to obtain a second binary image;
[0071] acquiring a white bit and a black bit of the second binary image through OTSU binarization processing; and
[0072] identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label.
[0073] Wherein OTSU is an optimal algorithm for determining an image binarization partition threshold value, and this method is also referred to as the maximum between-class variance method. The OTSU method is simple to calculate and is unaffected by image brightness and contrast. The OTSU method divides an image into two parts, a background and a foreground, according to the gray-level characteristics of the image. Since variance is a measure of the uniformity of the gray-level distribution, the greater the between-class variance between the background and the foreground, the greater the difference between the two parts of the image. Misclassifying part of the foreground as background, or part of the background as foreground, causes the difference between the two parts to become smaller. Therefore, a partition that maximizes the between-class variance means that the probability of mispartition is minimal.
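As an illustration of the identification steps in paragraphs [0068] to [0072] together with the OTSU thresholding just described, the following is a minimal OpenCV sketch. The function name, the 6x6 cell grid (black frame plus binary matrix), the rectified image side length, and the corner ordering are assumptions for illustration only, not values specified by the patent.

```python
import cv2
import numpy as np

def extract_label_bits(gray, grid_size=6, side=60):
    """Sketch of [0068]-[0073]: binarize the image, extract the label outline,
    rectify it by perspective transformation, and read black/white bits with OTSU."""
    # first binary image (the label border appears dark, so invert)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    outline = max(contours, key=cv2.contourArea)
    quad = cv2.approxPolyDP(outline, 0.05 * cv2.arcLength(outline, True), True)
    if len(quad) != 4:
        return None                          # outline is not a quadrilateral
    # second binary image: fronto-parallel view of the label
    # (corner ordering is assumed consistent here; a full implementation sorts corners)
    src = quad.reshape(4, 2).astype(np.float32)
    dst = np.array([[0, 0], [side, 0], [side, side], [0, side]], np.float32)
    warped = cv2.warpPerspective(gray, cv2.getPerspectiveTransform(src, dst), (side, side))
    _, warped_bin = cv2.threshold(warped, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # read each grid cell as a white (1) or black (0) bit
    cell = side // grid_size
    bits = np.zeros((grid_size, grid_size), dtype=np.uint8)
    for r in range(grid_size):
        for c in range(grid_size):
            patch = warped_bin[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            bits[r, c] = 1 if patch.mean() > 127 else 0
    return bits
```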
[0074] In an alternative embodiment, the identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label comprises:
[0075] determining a dictionary type of the two-dimensional code label according to the white bit and the black bit of the second binary image;
[0076] searching the dictionary type of the two-dimensional code label from a dictionary bank to acquire the identification result of the two-dimensional code label.
[0077] Wherein the present invention stores a plurality of dictionary types in the dictionary bank, and when some dictionary type is searched out from the dictionary bank according to the white bit and the black bit of the second binary image, a matching result of the two-dimensional code label can be acquired as the identification result of the two-dimensional code label.
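The dictionary lookup of paragraphs [0075] to [0077] can be sketched as a simple bit-matrix comparison. The dictionary contents and the 4x4 payload size below are hypothetical placeholders; the patent does not disclose the actual dictionary types stored in the dictionary bank.

```python
import numpy as np

# Hypothetical dictionary bank: label id -> 4x4 payload bit matrix
DICTIONARY_BANK = {
    0: np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]], np.uint8),
    1: np.array([[0, 1, 0, 1], [1, 0, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0]], np.uint8),
}

def match_label(bits):
    """Strip the black frame and look the inner binary matrix up in the dictionary
    bank, trying the four possible rotations of the label."""
    payload = bits[1:-1, 1:-1]                      # inner binary matrix
    for label_id, codeword in DICTIONARY_BANK.items():
        for k in range(4):                          # 0, 90, 180, 270 degrees
            if np.array_equal(np.rot90(payload, k), codeword):
                return label_id, k
    return None                                     # identification failed

# Example: a 6x6 bit grid whose payload equals codeword 0 with no rotation
grid = np.zeros((6, 6), np.uint8)
grid[1:-1, 1:-1] = DICTIONARY_BANK[0]
print(match_label(grid))                            # -> (0, 0)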
[0078] In an alternative embodiment, the determining first positioning data of the monocular cameras based on the identification result of the two-dimensional code label comprises:
[0079] taking a mark point of the two-dimensional code image as a starting point, and sequentially acquiring image coordinate values of four angular points of the two-dimensional code image in a clockwise direction, wherein the mark point of the two-dimensional code image is one of the four angular points of the two-dimensional code image;
[0080] determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
[0081] acquiring world coordinate values of the four angular points of the two-dimensional code image according to the rotation matrix R, the translation matrix T, and the image coordinate values of the four angular points of the two-dimensional code image;
[0082] and determining a world coordinate value of the two-dimensional code image as first positioning data of the monocular cameras according to the world coordinate values of the four angular points of the two-dimensional code image.
[0083] As shown in Fig. 2, when the monocular cameras are employed for positioning, positioning data of the image coordinate system are obtained, and the image coordinate system and the world coordinate system need to be converted to obtain positioning data of the world coordinate system.
[0084] A conversion relationship of the world coordinate system, the camera coordinate system, and the image coordinate system needs to be established, wherein,
[0085] Ow represents an original point of the world coordinate system w, Oc represents an original point of the camera coordinate system c, and O represents an original point of the image coordinate system o.
[0086] The camera coordinate system c is described as ${}^{w}_{c}T$ relative to the world coordinate system w, and the image coordinate system o is described as ${}^{c}_{o}T$ relative to the camera coordinate system c, thereby obtaining that the image coordinate system o is described as ${}^{w}_{o}T$ relative to the world coordinate system w:
[0087]
$${}^{w}_{o}T = {}^{w}_{c}T\,{}^{c}_{o}T = \begin{bmatrix}{}^{w}_{c}R & {}^{w}P_{c}\\ 0 & 1\end{bmatrix}\begin{bmatrix}{}^{c}_{o}R & {}^{c}P_{o}\\ 0 & 1\end{bmatrix} = \begin{bmatrix}{}^{w}_{c}R\,{}^{c}_{o}R & {}^{w}_{c}R\,{}^{c}P_{o} + {}^{w}P_{c}\\ 0 & 1\end{bmatrix}$$
[0088] wherein ${}^{w}_{c}R$ represents a rotation matrix of the camera coordinate system c relative to the world coordinate system w, ${}^{c}_{o}R$ represents a rotation matrix of the image coordinate system o relative to the camera coordinate system c, ${}^{w}P_{c}$ represents a translation matrix of the camera coordinate system c relative to the world coordinate system w, and ${}^{c}P_{o}$ represents a translation matrix of the image coordinate system o relative to the camera coordinate system c.
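A short numerical sketch of the transform chain in paragraphs [0086] to [0088], using NumPy; the `homogeneous` helper name and the rotation and translation values are arbitrary placeholders for illustration.

```python
import numpy as np

def homogeneous(R, p):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Placeholder values: wRc/wPc describe the camera in the world frame,
# cRo/cPo describe the image (label) frame in the camera frame.
wRc, wPc = np.eye(3), np.array([1.0, 2.0, 1.5])
cRo, cPo = np.eye(3), np.array([0.0, 0.0, 0.8])

wTo = homogeneous(wRc, wPc) @ homogeneous(cRo, cPo)   # composition wTo = wTc * cTo
# the translation block equals wRc @ cPo + wPc, matching the expression in [0087]
assert np.allclose(wTo[:3, 3], wRc @ cPo + wPc)
print(wTo)
```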
[0089] The present invention performs pose transformation using the P3P algorithm to acquire the rotation matrix R and the translation matrix T, and then obtains positioning data of the monocular cameras according to the rotation matrix R and the translation matrix T. The P3P algorithm projects 3D points to the camera coordinate system and then gathers the 3D points to an optical center according to the pinhole model principle. As shown in Fig. 3, according to the cosine theorem:
[0090]
$$L_{OP_1}^2 + L_{OP_2}^2 - 2\,L_{OP_1}L_{OP_2}\cos\theta_{P_1OP_2} = L_{P_1P_2}^2$$
$$L_{OP_1}^2 + L_{OP_3}^2 - 2\,L_{OP_1}L_{OP_3}\cos\theta_{P_1OP_3} = L_{P_1P_3}^2$$
$$L_{OP_2}^2 + L_{OP_3}^2 - 2\,L_{OP_2}L_{OP_3}\cos\theta_{P_2OP_3} = L_{P_2P_3}^2$$
[0091] $P_1$, $P_2$, and $P_3$ are three imaging points in the image coordinate system, respectively.
[0092] The three expressions above are divided by $L_{OP_3}^2$ to become:
[0093]
$$x^2 + y^2 - 2xy\cos\theta_{P_1OP_2} = \frac{L_{P_1P_2}^2}{L_{OP_3}^2}$$
$$x^2 + 1 - 2x\cos\theta_{P_1OP_3} = \frac{L_{P_1P_3}^2}{L_{OP_3}^2}$$
$$y^2 + 1 - 2y\cos\theta_{P_2OP_3} = \frac{L_{P_2P_3}^2}{L_{OP_3}^2}$$
[0094] wherein $x = \frac{L_{OP_1}}{L_{OP_3}}$, $y = \frac{L_{OP_2}}{L_{OP_3}}$, and x and y represent world coordinate values of angular points of the two-dimensional code image in the world coordinate system;
[0095] it is further obtained:
[0096]
$$x^2 + y^2 - 2xy\cos\theta_{P_1OP_2} - v = 0$$
$$x^2 + 1 - 2x\cos\theta_{P_1OP_3} - uv = 0$$
$$y^2 + 1 - 2y\cos\theta_{P_2OP_3} - wv = 0$$
[0097] wherein $v = \frac{L_{P_1P_2}^2}{L_{OP_3}^2}$, $u = \frac{L_{P_1P_3}^2}{L_{P_1P_2}^2}$, $w = \frac{L_{P_2P_3}^2}{L_{P_1P_2}^2}$, and u, v, and w represent image coordinate values of angular points of the two-dimensional code image in the image coordinate system;
[0098] it is further obtained through conversion:
$$(1-u)y^2 - ux^2 - 2y\cos\theta_{P_2OP_3} + 2uxy\cos\theta_{P_1OP_2} + 1 = 0$$
$$(1-w)x^2 - wy^2 - 2x\cos\theta_{P_1OP_3} + 2wxy\cos\theta_{P_1OP_2} + 1 = 0$$
[0099] where the world coordinate values of the angular points of the two-dimensional code image may be obtained by converting the image coordinate values of the angular points of the two-dimensional code image in the image coordinate system when the three cosine values $\cos\theta_{P_1OP_2}$, $\cos\theta_{P_1OP_3}$, and $\cos\theta_{P_2OP_3}$ are known.
[00100] Wherein the mark point of the two-dimensional code image is a starting point for detection of the four angular points, as well as a coordinate point of positioning. The four angular points are extracted clockwise. After the world coordinate values of the four angular points are obtained, it is necessary to further determine a target world coordinate value as the world coordinate value of the two-dimensional code image, that is, to obtain the first positioning data of the monocular cameras. For example, the world coordinate values of the four angular points may be compared separately, and the optimal solution of the world coordinate values of the four angular points can be selected as the target world coordinate value. For example, the average value of the world coordinate values of the four angular points may be used as the target world coordinate value. For example, the world coordinate values of the four angular points may also be processed by other algorithms such as compensation and weighting to determine the target world coordinate value. The present invention does not specifically define the specific processing way of how to use the world coordinate values of the four angular points to determine the target world coordinate value.
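One way to realize the P3P step described above is OpenCV's P3P solver, which uses exactly the four corner correspondences (the mark point first, then the remaining corners clockwise). The camera intrinsics and label side length below are assumed placeholders; this is a sketch, not necessarily the patent's exact implementation.

```python
import cv2
import numpy as np

# Assumed intrinsics and label size (placeholders, not values from the patent).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_side = 0.10                                    # label side length in metres

# Corner coordinates in the label frame: mark point first, then clockwise.
object_points = np.array([[0.0, 0.0, 0.0],
                          [marker_side, 0.0, 0.0],
                          [marker_side, marker_side, 0.0],
                          [0.0, marker_side, 0.0]])

def label_pose(image_points):
    """Recover rotation R and translation T from the four corner image coordinates
    with OpenCV's P3P solver (which requires exactly four point correspondences)."""
    image_points = np.asarray(image_points, dtype=np.float64).reshape(4, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_P3P)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec                                    # pose of the label in the camera frame
```

Combining the recovered pose with the camera's known pose in the world frame, as in paragraphs [0086] to [0088], then yields the world coordinate values of the corners.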
[00101] In an alternative embodiment, the determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras comprises:
[00102] determining a target positioning result according to each of the first positioning data;
[00103] using the target positioning result as the real-time position of the inspection robot.
[00104] According to the present invention, a plurality of monocular cameras are arranged, each of which identifies the two-dimensional code label to obtain first positioning data. After the first positioning data are obtained, it is necessary to further determine the target positioning result as a real-time position of the inspection robot. For example, the first positioning data may be averaged and then subjected to calculation of residual errors with the average value, respectively, and the first positioning data corresponding to the minimum residual error are determined as the optimal positioning result serving as the target positioning result. For example, it is also possible to average the first positioning data and use the average value as the target positioning result. For example, it is also possible to calculate each of the first positioning data by other algorithms such as compensation and weighting to determine the target positioning result. The present invention does not specifically define the specific processing way of how to use each of the first positioning data to determine the target positioning result.
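A minimal sketch of the minimum-residual selection strategy mentioned above (only one of the options the paragraph lists); the sample coordinates are hypothetical.

```python
import numpy as np

def target_position(first_positioning_data):
    """Select the camera measurement with the smallest residual to the mean."""
    data = np.asarray(first_positioning_data, dtype=float)   # shape (n_cameras, 2)
    residuals = np.linalg.norm(data - data.mean(axis=0), axis=1)
    return data[np.argmin(residuals)]

# Example with four hypothetical camera estimates (metres)
print(target_position([[2.01, 3.02], [2.03, 2.99], [1.98, 3.00], [2.40, 3.35]]))
```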
[00105] In an alternative embodiment, when each of the monocular cameras cannot identify the two-dimensional code label, the acquiring UWB positioning data of the UWB label, the moving the inspection robot, the acquiring second positioning data of each of the monocular cameras, the compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras, and the determining a real-time position of the inspection robot based on compensated and corrected third positioning data comprise:
[00106] acquiring at least three groups of UWB positioning data of the UWB label and determining a first average value $(\bar{x}_u, \bar{y}_u)$ of the at least three groups of UWB positioning data;
[00107] moving the inspection robot, acquiring at least three groups of second positioning data of each of the monocular cameras, and determining a second average value $(\bar{x}_c, \bar{y}_c)$ of the at least three groups of second positioning data;
[00108] determining an error value $(\delta_x, \delta_y) = ((\bar{x}_c - \bar{x}_u), (\bar{y}_c - \bar{y}_u))$ based on the first average value $(\bar{x}_u, \bar{y}_u)$ and the second average value $(\bar{x}_c, \bar{y}_c)$; and
[00109] compensating and correcting another group of UWB positioning data $(x_u, y_u)$ of the UWB label with the error value $(\delta_x, \delta_y)$ to obtain third positioning data $(x_t, y_t) = (x_u + \delta_x, y_u + \delta_y)$ as the real-time position of the inspection robot.
[00110] Since obstacles are prone to blocking the vision of the monocular cameras, the monocular cameras may fail to identify the two-dimensional code label on the body of the inspection robot. Under such circumstances, the UWB positioning data of the UWB label need to be read as the real-time position of the inspection robot. When the monocular cameras can accurately identify the two-dimensional code label, the positioning data of the monocular cameras are read as the real-time position of the inspection robot, which can meet the positioning requirements of the inspection robot in a complex environment.
[00111] Wherein, the UWB positioning data of the UWB label are acquired through each UWB anchor point (for example, four UWB anchor points) located indoors and matched with the UWB label, and the four UWB anchor points may be set on four indoor walls, for example. The present invention employs a two-way ranging method when acquiring positioning data of the UWB label, measures the time of flight (TOF) from the UWB label to the UWB anchor points, and then multiplies TOF by the speed of light to obtain the distance between the UWB anchor points and the UWB label.
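A small sketch of the ranging computation just described, assuming single-sided two-way ranging with a known reply delay (the patent does not specify the exact TWR variant); the timing values are illustrative.

```python
SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def twr_distance(t_round, t_reply):
    """Single-sided two-way ranging: the time of flight is half of the round-trip
    time minus the responder's reply delay; distance = TOF * speed of light."""
    tof = (t_round - t_reply) / 2.0
    return tof * SPEED_OF_LIGHT

# Illustrative timing: ~100 ns round trip with a ~66.65 ns reply delay -> about 5 m
print(twr_distance(100e-9, 66.65e-9))
```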
[00112] For example, the UWB label of the present invention is provided with four UWB anchor points during positioning. Assuming that the coordinate of a UWB anchor point A is (x1, y1, z1), the coordinate of a UWB anchor point B is (x2, y2, z2), the coordinate of a UWB anchor point C is (x3, y3, z3), the coordinate of a UWB anchor point D is (x4, y4, z4), and the coordinate of the UWB label to be solved is (x, y, z), it can be obtained:
[00113]
$$(x-x_1)^2 + (y-y_1)^2 + (z-z_1)^2 = R_1^2$$
$$(x-x_2)^2 + (y-y_2)^2 + (z-z_2)^2 = R_2^2$$
$$(x-x_3)^2 + (y-y_3)^2 + (z-z_3)^2 = R_3^2$$
$$(x-x_4)^2 + (y-y_4)^2 + (z-z_4)^2 = R_4^2$$
[00114] These expressions are expanded to obtain:
[00115]
$$x^2 + x_1^2 - 2xx_1 + y^2 + y_1^2 - 2yy_1 + z^2 + z_1^2 - 2zz_1 = R_1^2$$
$$x^2 + x_2^2 - 2xx_2 + y^2 + y_2^2 - 2yy_2 + z^2 + z_2^2 - 2zz_2 = R_2^2$$
$$x^2 + x_3^2 - 2xx_3 + y^2 + y_3^2 - 2yy_3 + z^2 + z_3^2 - 2zz_3 = R_3^2$$
$$x^2 + x_4^2 - 2xx_4 + y^2 + y_4^2 - 2yy_4 + z^2 + z_4^2 - 2zz_4 = R_4^2$$
[00116] The expression in line 1 is subtracted from the expressions in lines 2, 3, and 4, respectively, to obtain:
[00117]
$$2(x_1-x_2)x + 2(y_1-y_2)y + 2(z_1-z_2)z = A_1$$
$$2(x_1-x_3)x + 2(y_1-y_3)y + 2(z_1-z_3)z = A_2$$
$$2(x_1-x_4)x + 2(y_1-y_4)y + 2(z_1-z_4)z = A_3$$
[00118] wherein:
[00119]
$$A_1 = R_2^2 - R_1^2 + x_1^2 - x_2^2 + y_1^2 - y_2^2 + z_1^2 - z_2^2$$
$$A_2 = R_3^2 - R_1^2 + x_1^2 - x_3^2 + y_1^2 - y_3^2 + z_1^2 - z_3^2$$
$$A_3 = R_4^2 - R_1^2 + x_1^2 - x_4^2 + y_1^2 - y_4^2 + z_1^2 - z_4^2$$
[00120] The above expressions are converted into a form of matrix multiplication to obtain:
[00121]
$$\begin{bmatrix} 2(x_1-x_2) & 2(y_1-y_2) & 2(z_1-z_2) \\ 2(x_1-x_3) & 2(y_1-y_3) & 2(z_1-z_3) \\ 2(x_1-x_4) & 2(y_1-y_4) & 2(z_1-z_4) \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} A_1 \\ A_2 \\ A_3 \end{bmatrix}$$
[00122] wherein $R_1$, $R_2$, $R_3$, and $R_4$ represent the distance from the UWB anchor point A, the UWB anchor point B, the UWB anchor point C, and the UWB anchor point D to the UWB label, respectively, thereby acquiring positioning data of the UWB label according to the coordinates of the four UWB anchor points.
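The linear system of paragraph [00121] can be solved with least squares. The anchor layout and ranges below are hypothetical sample values; note that anchors mounted at exactly the same height would make the z coordinate unobservable in this linearization, so the example uses slightly different heights.

```python
import numpy as np

def uwb_position(anchors, ranges):
    """Solve the linearized multilateration system of [00121] for the UWB label.
    anchors: (4, 3) anchor coordinates A..D; ranges: distances R1..R4."""
    anchors = np.asarray(anchors, dtype=float)
    R = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[0] - anchors[1:])                         # coefficient matrix
    b = R[1:] ** 2 - R[0] ** 2 + np.sum(anchors[0] ** 2 - anchors[1:] ** 2, axis=1)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: four wall-mounted anchors at slightly different heights (metres)
anchors = [[0.0, 0.0, 2.5], [6.0, 0.0, 2.0], [6.0, 8.0, 2.8], [0.0, 8.0, 2.3]]
true_point = np.array([2.0, 3.0, 0.5])
ranges = [np.linalg.norm(true_point - np.array(a)) for a in anchors]
print(uwb_position(anchors, ranges))                             # ~ [2.0, 3.0, 0.5]
```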
[00123] Since the monocular cameras and the UWB label employ different positioning methods, when at least one monocular camera cannot identify the two-dimensional code label on the body of the inspection robot, the inspection robot needs to be positioned by using the UWB positioning data of the UWB label.
[00124] Since the positioning methods used by the monocular cameras and the UWB label both operate at a millisecond level, and the inspection robot moves at a speed of 0.2 m/s, at such a speed at least three groups of continuous monocular camera positioning data can be regarded as data measured at the same location, and the UWB positioning data are compensated and corrected with these continuous monocular camera positioning data.
[00125] For example, when switching from the monocular camera positioning data to the UWB positioning data, the first three groups of UWB positioning data $(x_{u1}, y_{u1})$, $(x_{u2}, y_{u2})$, $(x_{u3}, y_{u3})$ are acquired to determine a first average value $(\bar{x}_u, \bar{y}_u)$ of the three groups of UWB positioning data; three groups of second positioning data $(x_{c1}, y_{c1})$, $(x_{c2}, y_{c2})$, $(x_{c3}, y_{c3})$ of the monocular cameras are acquired to determine a second average value $(\bar{x}_c, \bar{y}_c)$ of the three groups of second positioning data; and the first average value $(\bar{x}_u, \bar{y}_u)$ is subtracted from the second average value $(\bar{x}_c, \bar{y}_c)$ to obtain an error value $(\delta_x, \delta_y)$. The computational expressions are:
[00126]
$$(\bar{x}_u, \bar{y}_u) = \left(\frac{1}{3}\sum_{i=1}^{3} x_{ui},\ \frac{1}{3}\sum_{i=1}^{3} y_{ui}\right)$$
[00127]
$$(\bar{x}_c, \bar{y}_c) = \left(\frac{1}{3}\sum_{i=1}^{3} x_{ci},\ \frac{1}{3}\sum_{i=1}^{3} y_{ci}\right), \qquad \delta_x = \bar{x}_c - \bar{x}_u$$
[00128]
$$\delta_y = \bar{y}_c - \bar{y}_u$$
[00129] Wherein $(\delta_x, \delta_y)$ serves as a compensating factor for compensating and correcting UWB positioning data.
[00130] The fourth group of UWB positioning data $(x_{u4}, y_{u4})$ is compensated and corrected through the compensating factor to obtain the third positioning data $(x_t, y_t) = (x_{u4} + \delta_x, y_{u4} + \delta_y)$ as a real-time position of the inspection robot.
[00131] The present invention employs the average value of the second positioning data and the average value of the UWB positioning data to calculate the compensating factor and does not specifically define the number of groups of the second positioning data or the number of groups of the UWB positioning data.
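A minimal sketch of the compensation described in paragraphs [00125] to [00131]; the sample coordinates are hypothetical, and the number of groups (three here) is only one possible choice, as the preceding paragraph notes.

```python
import numpy as np

def compensate_uwb(uwb_groups, camera_groups, next_uwb_fix):
    """Average the UWB and camera fixes taken at roughly the same place, take their
    difference as the compensating factor, and apply it to the next UWB fix."""
    xu, yu = np.mean(uwb_groups, axis=0)           # first average value
    xc, yc = np.mean(camera_groups, axis=0)        # second average value
    delta_x, delta_y = xc - xu, yc - yu            # error value / compensating factor
    x4, y4 = next_uwb_fix
    return x4 + delta_x, y4 + delta_y              # third positioning data

# Hypothetical sample values (metres)
uwb_groups = [(2.10, 3.12), (2.08, 3.15), (2.12, 3.10)]
camera_groups = [(2.02, 3.01), (2.00, 3.03), (2.01, 2.99)]
print(compensate_uwb(uwb_groups, camera_groups, (2.20, 3.25)))
```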
[00132] In an alternative embodiment, the method is further used for simultaneously positioning a plurality of inspection robots, wherein each of the inspection robots is provided with a different two-dimensional code label and a different UWB label.
[00133] According to the method of the present invention, the inspection robots may be fixed at a plurality of indoor coordinate measurement points, respectively (for example, 30 coordinate measurement points arranged at intervals of 120 cm), whose coordinate values constitute a coordinate set. When the inspection robots travel along a preset route, the positioning data of the monocular cameras and the positioning data of the UWB labels are recorded respectively to obtain, for example, 30 groups of static data, wherein measurement is performed at each coordinate measurement point several times (for example, 100 times) to obtain the positioning data of each coordinate measurement point, and the Euclidean distance between the positioning data of each coordinate measurement point and the coordinate value of that coordinate measurement point in the coordinate set is calculated as the positioning error.
[00134]
$$\delta_i = \sqrt{(x_i - x)^2 + (y_i - y)^2}$$
[00135] In the above expression, $(x, y)$ represents the coordinate value of the coordinate measurement point, $(x_i, y_i)$ represents the positioning data acquired at the coordinate measurement point in the i-th measurement, and i is the index of the measurement. Through this evaluation, the method of the present invention can achieve a dynamic positioning accuracy of 15 mm.
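A short sketch of the error evaluation in paragraphs [00133] to [00135]; the reference point and the repeated fixes are hypothetical sample values.

```python
import numpy as np

def positioning_errors(reference_point, measurements):
    """Euclidean positioning error of repeated fixes at one coordinate measurement
    point, following the expression above."""
    ref = np.asarray(reference_point, dtype=float)
    meas = np.asarray(measurements, dtype=float)
    return np.linalg.norm(meas - ref, axis=1)

# Example: three (of e.g. 100) repeated fixes at the point (1.200, 2.400), in metres
errs = positioning_errors((1.200, 2.400),
                          [(1.212, 2.405), (1.195, 2.391), (1.208, 2.412)])
print(errs, errs.mean())
```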
[00136] The method of the present invention uses the monocular cameras to identify the two-dimensional code label on the inspection robot, and achieves high-accuracy positioning of the inspection robot through conversion of the world coordinate system, the image coordinate system, and the camera coordinate system, wherein positional accuracy may be accurate to a centimeter level. Meanwhile, areas not covered by the monocular cameras are subjected to data fusion in combination with UWB positioning data, thereby finally achieving indoor full-coverage high-accuracy positioning with a positioning accuracy up to 15mm. The present invention may further achieve simultaneous positioning of a plurality of inspection robots.
[00137] The present invention further involves an electronic apparatus comprising a server, a terminal, etc. The electronic apparatus comprises: at least one processor; a memory in communication connection with at least one processor; and a communication assembly in communication with a storage medium, the communication assembly receiving and sending data under control of the processor; wherein instructions are stored in the memory and may be executed by at least one processor to implement the real-time positioning method in the above examples.
[00138] In an alternative embodiment, the memory serving as a nonvolatile computer readable storage medium may be used for storing nonvolatile software programs and nonvolatile computer executable programs and modules. The processor executes various functional applications and data processing of the apparatus by operating the nonvolatile software programs, instructions and modules stored in the memory, i.e., implementing the real-time positioning method.
[00139] The memory may include a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function; the data storage area can store a list of options and the like. In addition, the memory may include a high-speed random access memory and may also include a nonvolatile memory, such as at least one disk memory device, a flash memory device, or other nonvolatile solid-state storage devices. In some examples, the memory may optionally include memories remotely provided with respect to the processor, and these remote memories may be connected to an external device via a network. Examples of the network include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
[00140] One or more modules are stored in the memory, and the real-time positioning method in any method example above is executed when the modules are executed by one or more processors.
[00141] The above product can execute the real-time positioning method provided in the examples of the present application and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in the examples in detail, please refer to the real-time positioning method provided in the examples of the present application.
[00142] The present disclosure further pertains to a computer-readable storage medium for storing a computer-readable program which is used for a computer to execute some or all of the examples of the real-time positioning method.
[00143] That is, those skilled in the art can understand that all or some of the steps in the real-time positioning method of the examples may be implemented by instructing relevant hardware through programs. The programs are stored in a storage medium and include several instructions to enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to implement all or some of the steps of the method in the examples of the present application. The storage medium includes various media that may store program codes, such as a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
[00144] In the description provided here, a lot of specific details are explained. However, it can be understood that the examples of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and technologies are not shown in detail, so as not to obscure the understanding of the present description.
[00145] In addition, those of ordinary skill in the art can understand that although some examples described herein include certain features instead of other features included in other examples, the combination of features of different examples means that they are within the scope of the present invention and form different examples. For example, in the claims, any one of the claimed examples can be used in any combination.
[00146] It should be understood by those skilled in the art that although the present invention has been described with reference to exemplary examples, various changes can be made and elements thereof can be replaced with equivalents without departing from the scope of the present invention. In addition, without departing from the essential scope of the present invention, many amendments may be made to adapt a particular situation or material to the teaching of the present invention. Therefore, the present invention is not limited to the specific examples disclosed, but will include all examples falling within the scope of the appended claims.

Claims (10)

Claims
1. A real-time positioning method of an inspection robot, comprising: employing each of the monocular cameras to identify a two-dimensional code indoors, and determining whether each of the monocular cameras can identify a two-dimensional code label; when each of the monocular cameras can identify the two-dimensional code label, acquiring an identification result of the two-dimensional code label by each of the monocular cameras, determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras, and determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras; and when at least one monocular camera cannot identify the two-dimensional code label, acquiring UWB positioning data of the UWB label, moving the inspection robot, acquiring second positioning data of each of the monocular cameras, compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras, and determining a real-time position of the inspection robot based on compensated and corrected third positioning data, wherein the two-dimensional code label and the UWB label are positioned on the inspection robot.
2. The method according to claim 1, wherein when each of the monocular cameras can identify the two-dimensional code label, the acquiring an identification result of the two-dimensional code label by each of the monocular cameras and the determining first positioning data of each of the monocular cameras based on the identification result of the two-dimensional code label by each of the monocular cameras comprise: acquiring a two-dimensional code image of the two-dimensional code label shot by a monocular camera; acquiring an identification result of the two-dimensional code label by the monocular cameras based on the two-dimensional code image; determining first positioning data of the monocular cameras based on the identification result of the two-dimensional code label; and determining first positioning data of each of the monocular cameras, respectively, by analogy.
3. The method according to claim 2, wherein the acquiring an identification result of the two-dimensional code label by the monocular cameras based on the two-dimensional code image comprises: subjecting the two-dimensional code image to binarization processing to obtain a first binary image; extracting an outline of the two-dimensional code from the first binary image; based on the extracted outline, subjecting the first binary image to perspective transformation to obtain a second binary image; acquiring a white bit and a black bit of the second binary image through OTSU binarization processing; and identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label.
4. The method according to claim 3, wherein the identifying the two-dimensional code label according to the white bit and the black bit of the second binary image to obtain an identification result of the two-dimensional code label comprises: determining a dictionary type of the two-dimensional code label according to the white bit and the black bit of the second binary image; searching the dictionary type of the two-dimensional code label from a dictionary bank to acquire the identification result of the two-dimensional code label.
5. The method according to claim 2, wherein the determining first positioning data of the monocular cameras based on the identification result of the two-dimensional code label comprises: taking a mark point of the two-dimensional code image as a starting point, and sequentially acquiring image coordinate values of four angular points of the two-dimensional code image in a clockwise direction, wherein the mark point of the two-dimensional code image is one of the four angular points of the two-dimensional code image; determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
acquiring world coordinate values of the four angular points of the two-dimensional code image according to the rotation matrix R, the translation matrix T, and the image coordinate values of the four angular points of the two-dimensional code image; and determining a world coordinate value of the two-dimensional code image as first positioning data of the monocular cameras according to the world coordinate values of the four angular points of the two-dimensional code image.
6. The method according to claim 1, wherein the determining a real-time position of the inspection robot based on the first positioning data of each of the monocular cameras comprises: determining a target positioning result according to each of the first positioning data; using the target positioning result as the real-time position of the inspection robot.
7. The method according to claim 1, wherein when each of the monocular cameras cannot identify the two-dimensional code label, the acquiring UWB positioning data of the UWB label, the moving the inspection robot, the acquiring second positioning data of each of the monocular cameras, the compensating and correcting the UWB positioning data of the UWB label with the second positioning data of each of the monocular cameras, and the determining a real-time position of the inspection robot based on compensated and corrected third positioning data comprise: acquiring at least three groups of UWB positioning data of the UWB label, and determining a first average value $(\bar{x}_u, \bar{y}_u)$ of the at least three groups of UWB positioning data; moving the inspection robot, acquiring at least three groups of second positioning data of each of the monocular cameras, and determining a second average value $(\bar{x}_c, \bar{y}_c)$ of the at least three groups of second positioning data; determining an error value $(\delta_x, \delta_y) = ((\bar{x}_c - \bar{x}_u), (\bar{y}_c - \bar{y}_u))$ based on the first average value $(\bar{x}_u, \bar{y}_u)$ and the second average value $(\bar{x}_c, \bar{y}_c)$; and compensating and correcting another group of UWB positioning data $(x_u, y_u)$ of the UWB label with the error value $(\delta_x, \delta_y)$ to obtain third positioning data $(x_t, y_t) = (x_u + \delta_x, y_u + \delta_y)$ as the real-time position of the inspection robot.
8. The method according to claim 1, wherein the method is further used for simultaneously positioning a plurality of inspection robots, wherein each inspection robot is provided with a different two-dimensional code label and a different UWB label.
9. An electronic device comprising a memory and a processor, wherein the memory is used for storing one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method according to any one of claims 1 to 8.
10. A computer readable storage medium on which computer programs are stored, wherein the computer programs are executed by a processor to implement the method according to any one of claims 1 to 8.
LU500407A 2020-07-29 2021-07-07 Real-time positioning method for inspection robot LU500407B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010746224.2A CN111964680B (en) 2020-07-29 2020-07-29 Real-time positioning method of inspection robot

Publications (1)

Publication Number Publication Date
LU500407B1 true LU500407B1 (en) 2022-01-07

Family

ID=73363123

Family Applications (1)

Application Number Title Priority Date Filing Date
LU500407A LU500407B1 (en) 2020-07-29 2021-07-07 Real-time positioning method for inspection robot

Country Status (2)

Country Link
CN (1) CN111964680B (en)
LU (1) LU500407B1 (en)

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN113240750A (en) * 2021-05-13 2021-08-10 中移智行网络科技有限公司 Three-dimensional space information measuring and calculating method and device
CN113516708B (en) * 2021-05-25 2024-03-08 中国矿业大学 Power transmission line inspection unmanned aerial vehicle accurate positioning system and method based on image recognition and UWB positioning fusion
CN113642687A (en) * 2021-07-16 2021-11-12 国网上海市电力公司 Substation inspection indoor position calculation method integrating two-dimensional code identification and inertial system
CN116559172A (en) * 2023-04-23 2023-08-08 兰州交通大学 Unmanned aerial vehicle-based steel bridge welding seam detection method and system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN109100738B (en) * 2018-08-20 2023-01-03 武汉理工大学 Reliable positioning system and method based on multi-sensor information fusion
US11126198B2 (en) * 2018-12-30 2021-09-21 Ubtech Robotics Corp Robot movement control method, apparatus and robot using the same
CN110163912B (en) * 2019-04-29 2022-01-11 广州达泊智能科技有限公司 Two-dimensional code pose calibration method, device and system
CN110262507B (en) * 2019-07-04 2022-07-29 杭州蓝芯科技有限公司 Camera array robot positioning method and device based on 5G communication
CN110345937A (en) * 2019-08-09 2019-10-18 东莞市普灵思智能电子有限公司 Appearance localization method and system are determined in a kind of navigation based on two dimensional code
CN110879071B (en) * 2019-12-06 2021-05-25 成都云科新能汽车技术有限公司 High-precision positioning system and positioning method based on vehicle-road cooperation

Also Published As

Publication number Publication date
CN111964680B (en) 2021-05-18
CN111964680A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
LU500407B1 (en) Real-time positioning method for inspection robot
US11878433B2 (en) Method for detecting grasping position of robot in grasping object
CN109345588B (en) Tag-based six-degree-of-freedom attitude estimation method
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
KR20200041355A (en) Simultaneous positioning and mapping navigation method, device and system combining markers
CN110244284B (en) Calibration plate for calibrating multi-line laser radar and GPS\INS and method thereof
US11830216B2 (en) Information processing apparatus, information processing method, and storage medium
CN112667837A (en) Automatic image data labeling method and device
CN115655262B (en) Deep learning perception-based multi-level semantic map construction method and device
Werman et al. Robot localization using uncalibrated camera invariants
CN111998862A (en) Dense binocular SLAM method based on BNN
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
CN111964681B (en) Real-time positioning system of inspection robot
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
Qiu et al. A new monocular vision simultaneous localization and mapping process for high-precision positioning in structured indoor environments
CN114782496A (en) Object tracking method and device, storage medium and electronic device
Zhao et al. Dmvo: A multi-motion visual odometry for dynamic environments
CN110763232B (en) Robot and navigation positioning method and device thereof
CN117537803B (en) Robot inspection semantic-topological map construction method, system, equipment and medium
CN116660916B (en) Positioning method, mapping method and electronic equipment for orchard mobile robot
US11262103B1 (en) Heliostat localization in camera field-of-view with induced motion
Gong et al. Scene-aware Online Calibration of LiDAR and Cameras for Driving Systems
Howarth et al. Extraction and grouping of surface features for 3d mapping

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20220107