CN116484899A - Visual identification code for indoor positioning of mobile robot and positioning method - Google Patents
Info
- Publication number: CN116484899A
- Application number: CN202310376744.2A
- Authority: CN (China)
- Prior art keywords: edge, image, visual identification, identification code, coding
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/06009—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
- G06K19/06037—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1443—Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a visual identification code for indoor positioning of a mobile robot, which comprises a light-colored substrate; a coding region arranged in the middle of the light-colored substrate, the coding region being an 11 x 11 matrix used to record the position, state and size of the visual identification code; an edge recognition area, which is a dark ring structure circumscribing the coding region; four edge marks arranged correspondingly at the four corners of the coding region, located inside the edge recognition area and light-colored; and a positioning mark arranged in the edge recognition area, located at the periphery of the coding region and formed by a plurality of straight stripes of alternating colors. The invention provides a visual identification code for indoor positioning of a mobile robot that allows the position of the robot to be determined accurately. The invention also provides a positioning method for indoor positioning of the mobile robot.
Description
Technical Field
The invention relates to the technical field of indoor positioning of robots, in particular to a visual identification code and a positioning method for indoor positioning of a mobile robot.
Background
At present, indoor robot positioning uses a variety of navigation modes, such as magnetic tape, electromagnetic, laser and visual guidance. Magnetic tape and electromagnetic guidance are both complex to install and inflexible: a metal wire or magnetic tape must be laid beneath the robot's working area, making the robot's path difficult to change. Laser and visual guidance can use indoor geometric or visual features for positioning and navigation and are highly flexible. However, these methods have the limitation that the robot cannot position itself correctly when the environment changes too much.
Image coding is a system of automatically recognizable graphic symbols based on computer image processing technology and combinatorial coding principles. The two-dimensional code is a common image code: on each element position of a matrix, the presence of a point (square, dot or other shape) represents a binary '1' and its absence represents a binary '0', and the arrangement and combination of these points determine the meaning of the matrix two-dimensional code. Two-dimensional codes have advantages such as large information capacity and strong error-correction capability, and are widely used.
Matrix two-dimensional codes include Code One, MatrixCode, QR Code, AprilTag, TRIPtag and the like. The QR Code is the most common and most typical two-dimensional code.
The two-dimensional code has some problems in robot positioning. Lack of direct position and attitude information: the encoded content is usually a number or other complex text, but for positioning it is the position and attitude information that matters most; typically, after the two-dimensional code is read, the decoded number must be used to look up the corresponding position and attitude in a database.
Invisibility to lidar: an ordinary two-dimensional code is purely an image, so a laser radar cannot distinguish it from the environment by geometry or reflectivity and cannot directly determine the position of the marker.
Disclosure of Invention
The invention aims to provide a visual identification code and a positioning method for indoor positioning of a mobile robot. A user inputs the position and size information of the visual identification to generate a printable visual code, and a reflective sticker is applied around the visual code as required to complete the visual identification code. After the code is captured by an image acquisition device and recognized by a laser point-cloud recognition algorithm, the image coding information, the corner-point positions, and the positional relationship between the image code, the laser reflection area and the robot can be obtained, and the position of the robot can then be determined accurately.
The invention discloses a visual identification code for indoor positioning of a mobile robot and a positioning method, which adopts the following technical scheme:
a visual identification code for mobile robot indoor positioning comprises a light-colored substrate;
the coding region is arranged in the middle of the light-colored substrate, is a matrix of 11 x 11, and is used for recording the position, the state and the size of the visual identification code;
the edge identification area is of a dark ring structure and is a circumcircle of the coding area;
the number of the edge marks is four, the edge marks are correspondingly arranged at four corners of the coding region, the edge marks are arranged in the edge recognition region, and the edge marks are made of light colors;
the positioning mark is arranged in the edge recognition area, is positioned at the periphery of the coding area and is formed by arranging a plurality of linear stripes with alternating colors.
As a preferred scheme, the elements of the matrix are squares of the same size, a dark square represents 1 and a light square represents 0; the coordinates of a position in Euclidean space are encoded as 58-bit binary data, and the attitude data represented by Euler angles are encoded as 36-bit binary data.
Preferably, the coding region further comprises a check code, and the check code adopts 16-bit binary data for coding check.
Preferably, the light-colored substrate edge is provided with a high-reflectivity edge, the high-reflectivity edge is a square ring, the width is required to be greater than or equal to 1 cm, and the center of the high-reflectivity edge coincides with the center of the coding region.
A positioning method for mobile robot indoor positioning, comprising the steps of:
s1, content coding of the visual identification code: the X-axis, Y-axis and Z-axis coordinates of a position in Euclidean space are encoded as 58-bit binary data, the attitude represented by Euler angles is encoded as 36-bit binary data, and the size of the visual identification code is encoded as 10-bit binary data;
s2, coding and checking the visual identification code, and taking 16-bit binary data as a check code of coded content to ensure the decoding correctness;
s3, decoding the visual identification code of S1: using the visual identification code captured by a camera, after image correction a recognition algorithm identifies the position and orientation of the coding region, reads each element as 0 or 1, groups and splits the elements in a defined order, and converts them into decimal data to obtain the position information, attitude information, size information and check code of the visual identification code;
s4, substituting the decoded data into a check result obtained by a check formula, and comparing the check result with the check code obtained in the S3.
Preferably, the Euler angles used in the attitude coding in S1 are Tait-Bryan angles; the rotations around the X, Y and Z axes of the code's own coordinate system are denoted roll, pitch and yaw respectively, and the rotation order is Z-Y-X.
Preferably, the check code is a CRC check code, i.e. a cyclic redundancy check code, of the kind commonly used for serially transmitted data between auxiliary storage and a host and in computer networks.
As a preferred scheme, the decoding process in S3 specifically includes:
s3-1, image acquisition: while the robot is running, images are acquired from the camera's field of view in real time; image sharpness is critical to decoding and positioning, the acquired images must be free of motion blur, and the camera exposure time must not be too long, i.e. less than 50 ms;
s3-2, image graying: the acquired image is converted into a single-channel grayscale image to increase processing speed;
s3-3, image calibration: the distorted image is corrected; the correction parameters are obtained by camera intrinsic calibration, during which the camera intrinsics, including the lens focal length, the principal point position and the distortion parameters, are obtained;
s3-4, noise reduction: filtering is used to remove noise introduced on the visual identification code during image capture;
s3-5, edge extraction: edge information in the image is first extracted with an edge detection algorithm;
s3-6, rapid detection of the edge recognition area: circles contained in the extracted edge information are detected with the Hough transform, and the position of the visual identification code is determined from the position of the concentric rings;
s3-7, edge mark recognition: inside the concentric rings obtained in S3-6, the positions of the four squares are found by the Hough transform, thereby determining the four corner points of the coding region;
s3-8, positioning mark recognition: in the part of the ring's interior not occupied by the coding region, straight stripes are detected by the Hough line detection method, thereby determining the positioning mark and the orientation of the coding region;
s3-9, gray-level equalization: the image captured by the camera is histogram-equalized so that its histogram is distributed more evenly, increasing contrast and making light and dark easier to distinguish;
s3-10, image binarization: the grayscale image produced in S3-9 is binarized using the maximum between-class variance (Otsu) method;
s3-11, code reading: according to the determined positioning mark and the position of the coding region, the coding region is divided into 11 x 11 cells and mapped onto the standard coding pattern; the proportion of dark pixels in each cell is computed, and a cell is read as '1' if the proportion exceeds 50% and as '0' otherwise, thereby determining the coordinates, attitude and size of the visual identification code.
The visual identification code for indoor positioning of the mobile robot has the following beneficial effects: the user inputs the position and size information of the visual identification to generate the visual code, a reflective sticker is applied around the visual code as required to complete the visual identification code, the image acquisition device captures it, and after image recognition and the laser point-cloud recognition algorithm, the image coding information, the corner-point positions, and the positional relationship between the image code, the laser reflection area and the robot can be obtained, so that the position of the robot is determined accurately.
Drawings
Fig. 1 is a schematic structural view of a visual identification code for mobile robot indoor positioning according to the present invention.
Fig. 2 is a diagram of the coordinate system of a visual identification code for mobile robot indoor positioning according to the present invention.
Fig. 3 is a visual identification code positioning scene of a positioning method for mobile robot indoor positioning according to the present invention.
Detailed Description
The invention is further illustrated and described below in conjunction with the specific embodiments and the accompanying drawings:
referring to fig. 1, a visual identification code for indoor positioning of a mobile robot includes a light-colored substrate 10, the light-colored substrate 10 is made of white high-reflectivity material, a high-reflectivity edge 11 is disposed at an edge of the light-colored substrate 10, the high-reflectivity edge 11 is a square ring, the width is greater than or equal to 1 cm, and the center of the high-reflectivity edge coincides with the center of a coding region 20.
The coding region 20 is arranged in the middle of the light-colored substrate 10; the coding region 20 is an 11 x 11 matrix and is used to record the position, state and size of the visual identification code. The elements of the matrix are squares of the same size, a dark square represents 1 and a light square represents 0; the coordinates of a position in Euclidean space are encoded as 58-bit binary data, and the attitude data represented by Euler angles are encoded as 36-bit binary data.
And the coding region 20 further comprises a check code, and the check code adopts 16-bit binary data to perform coding check, and is used for checking the data of the coding region 20, so that the decoding accuracy is ensured.
The edge recognition area 30 is a dark ring structure circumscribing the coding region 20. The dark ring on the light, low-reflectivity substrate allows the code to be quickly and coarsely located within the camera's field of view; the light and dark areas should have different optical characteristics so that they can be distinguished, allowing a user to quickly judge whether a coding region 20 is present in the view and to estimate the position of the visual identification code.
The number of the edge marks 40 is four; they are arranged correspondingly at the four corners of the coding region 20, inside the edge recognition area 30, and are light-colored. The four light squares inside the dark ring mark the outer boundary of the coding region 20 and define the four vertices of the rectangular area it forms.
The positioning mark 50 is arranged in the edge recognition area 30, at the periphery of the coding region 20, and is formed by a series of straight stripes of alternating colors whose width matches that of the coding region 20. This pattern is used to divide and align the data rows and columns of the coding region 20 and also defines the orientation of the visual identification code.
In the above scheme, the edges of the light-colored substrate 10 are provided with the high-reflectivity edges 11, the high-reflectivity edges 11 are square rings, the width is required to be greater than or equal to 1 cm, and the centers of the high-reflectivity edges coincide with the centers of the coding regions 20.
The user inputs the position and size information of the visual identification to generate the visual code; a reflective sticker is applied around it as required to complete the visual identification code. After the code is captured by the image acquisition device and recognized by the laser point-cloud recognition algorithm, the image coding information, the corner-point positions, and the positional relationship between the image code, the laser reflection area and the robot can be obtained, so that the position of the robot is determined accurately.
A positioning method for mobile robot indoor positioning, comprising the steps of:
S1, content coding of the visual identification code: the X-axis, Y-axis and Z-axis coordinates of a position in Euclidean space are encoded as 58-bit binary data, the attitude represented by Euler angles is encoded as 36-bit binary data, and the size of the visual identification code is encoded as 10-bit binary data;
referring to fig. 2, euler angles used in the gesture encoding in S1 are represented by the Tait-Bryan angle, and the rotation angles around the X, Y, Z axes of the own coordinate system are respectively indicated as Roll, pitch, yaw, and the sequence is Z-Y-X.
S2, coding check of the visual identification code: 16-bit binary data is used as the check code of the encoded content to ensure decoding correctness. The check bits prevent errors caused by staining or abnormal exposure of the visual identification code from affecting positioning accuracy: a specific algorithm is applied to the encoded data to compute a check code, and if the check code computed at encoding time matches the one computed at decoding time, the decoding is correct; otherwise, decoding fails.
The check code is a CRC, i.e. a Cyclic Redundancy Check, an error-detecting code with strong detection capability whose encoding and checking circuits are quite simple; it is commonly used for serially transmitted data (a binary bit string transmitted bit by bit along a signal line) between auxiliary storage and a host, and in computer networks. The codeword consists of 104 bits of information and a 16-bit check code; the parameter model is CRC-16/IBM, with a width of 16 and polynomial POLY = 0x8005.
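The CRC-16/IBM parameters quoted above (width 16, polynomial 0x8005, whose bit-reflected form is 0xA001) can be computed with a short routine. The sketch below assumes the usual CRC-16/IBM convention of a zero initial value and reflected input/output; padding the 104-bit payload into whole bytes is also an assumption.

```python
def crc16_ibm(data: bytes) -> int:
    """CRC-16/IBM: polynomial 0x8005, init 0x0000, reflected input/output, xorout 0x0000."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # 0xA001 is the bit-reflected form of 0x8005
            else:
                crc >>= 1
    return crc

# Example: compute the 16-bit check code over a (hypothetical) 104-bit payload,
# here padded to 13 whole bytes for simplicity.
payload = (123456789).to_bytes(13, "big")
check = crc16_ibm(payload)                  # appended to the payload: 104 + 16 = 120 bits
```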
S3, decoding the visual identification code of S1: using the visual identification code captured by the camera, after image correction the recognition algorithm identifies the position and orientation of the coding region, reads each element as 0 or 1, groups and splits the elements in a defined order, and converts them into decimal data to obtain the position information, attitude information, size information and check code of the visual identification code.
S4, substituting the decoded data into a check result obtained by a check formula, and comparing the check result with the check code obtained in the S3.
The decoding process in S3 specifically includes:
s3-1, image acquisition: while the robot is running, images are acquired from the camera's field of view in real time. Image sharpness is critical to decoding and positioning: the acquired images must be free of motion blur and the camera exposure time must not be too long. An exposure time below 50 ms avoids motion blur while the robot is moving. A global-shutter camera should be used where possible so that all image pixels are captured at the same instant;
s3-2, image graying: in the visual identification code, information is carried by differences in reflected light, i.e. the difference between 0 and 1 is only a difference in gray value; the decoding process needs only the light-dark distribution of the code and no color information. The image acquired by the camera is generally a color image with three RGB channels, so it is first converted into a single-channel grayscale image to increase processing speed;
s3-3, image calibration: lens distortion is a general term for the geometric distortion inherent to optical lenses. The imaging principle of a camera means that its images always carry some distortion; the distortion is more severe in cameras with wide-angle lenses, and the two common kinds are radial distortion and tangential distortion.
Radial distortion is caused by the curvature of the lens surface producing different refraction angles, so distortion is more severe toward the lens edge and zero at the optical center; tangential distortion arises mainly from assembly imperfections that leave the lens not parallel to the imaging plane. In practice the distorted image must first be corrected. The correction parameters are obtained by camera intrinsic calibration, during which the camera intrinsics, including the lens focal length, the principal point position and the distortion parameters, are obtained. The images before and after correction can then be compared;
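A minimal sketch of steps S3-1 to S3-3 using OpenCV; the intrinsic matrix and distortion coefficients are placeholders standing in for values obtained from a prior calibration, and the file name is illustrative.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration
# (e.g. cv2.calibrateCamera with a checkerboard pattern).
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.001, 0.0005, 0.0])    # k1, k2, p1, p2, k3

frame = cv2.imread("frame.png")                        # S3-1: image from the camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)         # S3-2: single-channel grayscale
undistorted = cv2.undistort(gray, K, dist)             # S3-3: remove radial/tangential distortion
```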
s3-4, noise reduction: when the image is acquired, poor illumination introduces noise, and staining of the visual identification code can also produce noisy points on the image; this noise can be removed by filtering;
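Continuing the sketch, a median filter is one plausible choice for S3-4 because it suppresses isolated noise points while keeping the sharp light/dark edges of the code; a Gaussian blur would be an alternative for sensor noise.

```python
# `undistorted` comes from the S3-3 sketch above.
denoised = cv2.medianBlur(undistorted, 3)               # removes salt-and-pepper style noise
# denoised = cv2.GaussianBlur(undistorted, (5, 5), 0)   # alternative for Gaussian noise
```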
s3-5, edge extraction: edge information in the image is first extracted with an edge detection algorithm, commonly the Canny algorithm;
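For S3-5, the Canny detector mentioned above could be applied as follows; the hysteresis thresholds are scene-dependent and purely illustrative.

```python
# `denoised` comes from the S3-4 sketch above.
edges = cv2.Canny(denoised, 50, 150)    # binary edge map of the scene
```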
s3-6, rapid detection of the edge recognition area: circles contained in the extracted edge information are detected with the Hough transform, and the position of the visual identification code is determined from the position of the concentric rings;
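One way to realize S3-6 is the OpenCV Hough circle transform, which runs its own Canny stage internally and therefore takes the grayscale image directly; all parameters below are illustrative.

```python
# `denoised` comes from the S3-4 sketch above.
circles = cv2.HoughCircles(denoised, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=40, minRadius=20, maxRadius=400)
if circles is not None:
    cx, cy, r = circles[0][0]    # coarse center and radius of the edge recognition ring
```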
s3-7, edge mark recognition: inside the concentric rings obtained in step S3-6, the positions of the four squares are found by the Hough transform, thereby determining the four corner points of the coding region;
s3-8, positioning mark recognition: in the part of the ring's interior not occupied by the coding region, straight stripes are detected by the Hough line detection method, thereby determining the positioning mark and the orientation of the coding region;
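S3-8 can be sketched with the probabilistic Hough line transform on the edge map, restricted in practice to the annular region between the ring and the coding area; the parameters are illustrative.

```python
# `edges` comes from the S3-5 sketch above; in practice it would first be masked
# to the ring interior outside the coding region.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    stripe_angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # orientation of the code
```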
s3-9, gray-level equalization: image exposure is usually uneven and the pixels of the coding region of the visual identification code are concentrated in a relatively narrow gray range, so light and dark are not well separated; the image captured by the camera is therefore histogram-equalized so that its histogram is distributed more evenly, increasing contrast and making light and dark easier to distinguish;
s3-10, image binarization: decoding requires each point to be read as 0 or 1, but a grayscale image is multi-valued; converting a multi-valued image into a binary image is called image binarization. The grayscale image produced in S3-9 is binarized using the maximum between-class variance (Otsu) method;
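Steps S3-9 and S3-10 map directly onto OpenCV's histogram equalization and Otsu thresholding, continuing the sketch above:

```python
# `denoised` comes from the S3-4 sketch above.
equalized = cv2.equalizeHist(denoised)            # S3-9: spread the gray-level histogram
_, binary = cv2.threshold(equalized, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # S3-10: Otsu binarization
```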
s3-11, code reading: according to the determined positioning mark and the position of the coding region, the coding region is divided into 11 x 11 cells and mapped onto the standard coding pattern; the proportion of dark pixels in each cell is computed, and a cell is read as '1' if the proportion exceeds 50% and as '0' otherwise, thereby determining the coordinates, attitude and size of the visual identification code.
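A sketch of the code-reading step S3-11: the coding region is rectified with a perspective transform from its four corner points (placeholders below) and sampled cell by cell. The corner ordering and the cell size of the rectified grid are assumptions.

```python
# `binary` comes from the S3-10 sketch above; the corner coordinates are placeholders
# standing in for the four corner points found in S3-7 (ordered TL, TR, BR, BL).
N = 11                                   # the coding region is an 11 x 11 matrix
CELL = 20                                # pixels per cell in the rectified image (arbitrary)
corners = np.float32([[100, 100], [300, 110], [290, 310], [95, 300]])
target = np.float32([[0, 0], [N * CELL, 0], [N * CELL, N * CELL], [0, N * CELL]])

H = cv2.getPerspectiveTransform(corners, target)
grid = cv2.warpPerspective(binary, H, (N * CELL, N * CELL))

bits = []
for row in range(N):
    for col in range(N):
        cell = grid[row * CELL:(row + 1) * CELL, col * CELL:(col + 1) * CELL]
        dark_ratio = np.mean(cell < 128)            # proportion of dark pixels in the cell
        bits.append(1 if dark_ratio > 0.5 else 0)   # more than 50 % dark reads as '1'
```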
A usage scenario for positioning with visual identification codes is shown in fig. 3: the codes are attached to the wall and the ground, and positioning is performed when a code is simultaneously within the field of view of both the lidar and the camera.
The positioning process is described as follows:
1. In the image acquired by the camera, the visual identification code is detected and decoded.
2. From the corner information and the code size obtained when decoding the camera image, together with the known camera intrinsics, the pose of the recognized visual identification code in the camera coordinate system can be computed using a visual geometry method.
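The visual geometry method is not spelled out in the patent; a common realization is a PnP solve from the four corner points and the decoded physical size, sketched below with placeholder values.

```python
import cv2
import numpy as np

K = np.array([[900.0, 0.0, 640.0], [0.0, 900.0, 360.0], [0.0, 0.0, 1.0]])   # placeholder intrinsics
dist = np.zeros(5)

size_m = 0.20                                        # decoded physical side length (placeholder)
obj_pts = np.float32([[-size_m / 2, -size_m / 2, 0], [ size_m / 2, -size_m / 2, 0],
                      [ size_m / 2,  size_m / 2, 0], [-size_m / 2,  size_m / 2, 0]])
img_pts = np.float32([[100, 100], [300, 110], [290, 310], [95, 300]])        # decoded corner pixels

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R_cam_code, _ = cv2.Rodrigues(rvec)                  # pose of the code in the camera frame
```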
3. The Euclidean distance between the visual identification code and the camera is computed from the code's coordinates in the camera coordinate system. If the distance is greater than a threshold, the coordinates are considered unreliable and the lidar is needed to assist; if the distance is below the threshold, the code's coordinates in the camera coordinate system are considered reliable.
4. If the coordinates of the recognized visual identification code in the camera coordinate system are unreliable, the points lying within a threshold-sized three-dimensional neighborhood of the rough coordinates obtained in the previous step are extracted from the lidar point cloud, and these points are then filtered by an intensity threshold to keep only high-reflectivity returns.
5. The filtered point cloud is clustered, outliers are removed, the equation of the plane containing the points is solved, and the centroid of the points is computed, yielding an accurate pose of the visual identification code in the vehicle-body coordinate system.
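As a sketch of step 5, the plane and center of the filtered point cluster can be obtained by a least-squares fit; the input array below is a placeholder for the intensity- and distance-filtered lidar points, with clustering and outlier removal assumed to have been done already.

```python
import numpy as np

points = np.random.rand(200, 3)              # placeholder for the filtered lidar points (N x 3)

centroid = points.mean(axis=0)               # approximate center of the code in the lidar frame
_, _, vt = np.linalg.svd(points - centroid)  # least-squares plane fit
normal = vt[-1]                              # plane normal = right singular vector with the
                                             # smallest singular value; plane: n . (p - c) = 0
```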
6. From the pose of the visual identification code in the world coordinate system, obtained by decoding the camera image, combined with the pose of the visual identification code in the vehicle-body coordinate system obtained above, the pose of the vehicle body in the world coordinate system can be deduced.
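Step 6 amounts to composing homogeneous transforms: if T_world_code is the code pose decoded from the image content and T_body_code its pose in the vehicle-body frame, the robot pose follows as below. The numeric values are placeholders, and the camera-to-body extrinsics are omitted (assumed identity) for brevity.

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholders: in practice these come from the decoded content (world pose of the code)
# and from the camera/lidar measurements above (pose of the code in the body frame).
T_world_code = to_homogeneous(np.eye(3), np.array([5.0, 2.0, 1.5]))
T_body_code  = to_homogeneous(np.eye(3), np.array([1.2, 0.0, 0.8]))

T_world_body = T_world_code @ np.linalg.inv(T_body_code)   # robot (vehicle-body) pose in the world
```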
The invention provides a visual identification code for indoor positioning of a mobile robot: a user inputs the position and size information of the visual identification to generate the visual code, a reflective sticker is applied around the visual code as required to complete the visual identification, and after the code is captured by an image acquisition device and recognized by a laser point-cloud recognition algorithm, the image coding information, the corner-point positions, and the positional relationship between the image code, the laser reflection area and the robot can be obtained, so that the position of the robot is determined accurately.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (8)
1. A visual identification code for mobile robot indoor positioning, comprising a light-colored substrate;
the coding region is arranged in the middle of the light-colored substrate, is a matrix of 11 x 11, and is used for recording the position, the state and the size of the visual identification code;
the edge identification area is of a dark ring structure and is a circumcircle of the coding area;
the number of the edge marks is four, the edge marks are correspondingly arranged at four corners of the coding region, the edge marks are arranged in the edge recognition region, and the edge marks are made of light colors;
the positioning mark is arranged in the edge recognition area, is positioned at the periphery of the coding area and is formed by arranging a plurality of linear stripes with alternating colors.
2. A visual identification code for mobile robot indoor positioning according to claim 1, wherein the elements of the matrix are squares of the same size, a dark square represents 1 and a light square represents 0, the coordinates of a position in Euclidean space are encoded as 58-bit binary data, and the attitude data represented by Euler angles are encoded as 36-bit binary data.
3. A visual identification code for mobile robot indoor positioning as recited in claim 1, wherein the code region further comprises a check code that employs 16-bit binary data for code checking.
4. A visual identification code for mobile robot indoor positioning as claimed in claim 1, wherein the light-colored base edge is provided with a high-reflectivity edge, the high-reflectivity edge is a square ring, the width is greater than or equal to 1 cm, and the center of the high-reflectivity edge coincides with the center of the coding region.
5. A positioning method for mobile robot indoor positioning, comprising the steps of:
s1, content coding of the visual identification code according to any one of claims 1-4: the X-axis, Y-axis and Z-axis coordinates of a position in Euclidean space are encoded as 58-bit binary data, the attitude represented by Euler angles is encoded as 36-bit binary data, and the size of the visual identification code is encoded as 10-bit binary data;
s2, coding and checking the visual identification code, and taking 16-bit binary data as a check code of coded content to ensure the decoding correctness;
s3, decoding the visual identification code of S1: using the visual identification code captured by a camera, after image correction a recognition algorithm identifies the position and orientation of the coding region, reads each element as 0 or 1, groups and splits the elements in a defined order, and converts them into decimal data to obtain the position information, attitude information, size information and check code of the visual identification code;
s4, substituting the decoded data into a check result obtained by a check formula, and comparing the check result with the check code obtained in the S3.
6. The method of claim 5, wherein the Euler angles used in the attitude coding in S1 are Tait-Bryan angles, the rotations around the X, Y and Z axes of the code's own coordinate system are denoted roll, pitch and yaw respectively, and the rotation order is Z-Y-X.
7. An indoor positioning method for a mobile robot according to claim 5, wherein the check code is a CRC check code, i.e. a cyclic redundancy check code, of the kind used for serially transmitted data between auxiliary storage and a host and in computer networks.
8. The method for mobile robot indoor positioning according to claim 5, wherein the decoding process in S3 is specifically:
s3-1, image acquisition: while the robot is running, images are acquired from the camera's field of view in real time; image sharpness is critical to decoding and positioning, the acquired images must be free of motion blur, and the camera exposure time must not be too long, i.e. less than 50 ms;
s3-2, image graying: the acquired image is converted into a single-channel grayscale image to increase processing speed;
s3-3, image calibration: the distorted image is corrected; the correction parameters are obtained by camera intrinsic calibration, during which the camera intrinsics, including the lens focal length, the principal point position and the distortion parameters, are obtained;
s3-4, noise reduction: filtering is used to remove noise introduced on the visual identification code during image capture;
s3-5, edge extraction: edge information in the image is first extracted with an edge detection algorithm;
s3-6, rapid detection of the edge recognition area: circles contained in the extracted edge information are detected with the Hough transform, and the position of the visual identification code is determined from the position of the concentric rings;
s3-7, edge mark recognition: inside the concentric rings obtained in step S3-6, the positions of the four squares are found by the Hough transform, thereby determining the four corner points of the coding region;
s3-8, positioning mark recognition: in the part of the ring's interior not occupied by the coding region, straight stripes are detected by the Hough line detection method, thereby determining the positioning mark and the orientation of the coding region;
s3-9, gray-level equalization: the image captured by the camera is histogram-equalized so that its histogram is distributed more evenly, increasing contrast and making light and dark easier to distinguish;
s3-10, image binarization: the grayscale image produced in S3-9 is binarized using the maximum between-class variance (Otsu) method;
s3-11, code reading: according to the determined positioning mark and the position of the coding region, the coding region is divided into 11 x 11 cells and mapped onto the standard coding pattern; the proportion of dark pixels in each cell is computed, and a cell is read as '1' if the proportion exceeds 50% and as '0' otherwise, thereby determining the coordinates, attitude and size of the visual identification code.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310376744.2A (CN116484899A) | 2023-04-11 | 2023-04-11 | Visual identification code for indoor positioning of mobile robot and positioning method |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310376744.2A (CN116484899A) | 2023-04-11 | 2023-04-11 | Visual identification code for indoor positioning of mobile robot and positioning method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116484899A (en) | 2023-07-25 |
Family
ID=87222472

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310376744.2A | CN116484899A (en) (Withdrawn) | | |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN116484899A (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118505958A (en) * | 2024-07-18 | 2024-08-16 | 温州德维诺自动化科技有限公司 | Visual image positioning method and tinplate badge prepared by using same |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 20230725 |