CN112541943A - Robot positioning method based on visual road signs - Google Patents
- Publication number
- CN112541943A (application CN201910890204.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- robot
- rectangle
- road sign
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Abstract
The invention discloses a robot positioning method based on visual road signs, which solves the current pose of the robot in world coordinates by matching the known world-coordinate positions of landmark graphics against the image captured by the robot's camera. The method first estimates the rough region of the road sign through locally adaptive binarization and edge feature extraction; it then extracts fitted edge straight lines with the Hough transform algorithm and judges whether the number of lines meets the requirement; finally, it verifies whether the spatial structure formed by the lines can constitute a landmark image and obtains the data needed to solve for the pose. The robot's own pose is then computed from the known landmark positions, the camera parameters, and the information obtained by processing the image. Because the invention processes the image locally, it is insensitive to changes in ambient brightness; because the road sign structure is simple, landmark position information can be detected even at low camera resolution, giving the method a broader range of application than traditional methods.
Description
[ Technical field ]
The invention belongs to the technical field of digital image processing, and particularly relates to a vision-based robot positioning method.
[ background of the invention ]
Robot positioning technology has always been a core topic in robotics and strongly influences functions such as navigation and obstacle avoidance. Although GPS provides positioning, technical limitations still prevent it from reliably delivering sub-meter accuracy. Lidar combined with odometer information can provide higher precision, but its high cost and the ease with which tracking is lost in feature-poor scenes make positioning unreliable. Ultrasonic, infrared, and similar positioning technologies are likewise expensive and difficult to deploy at scale.
[ summary of the invention ]
In order to solve the problems, the invention provides a robot positioning method based on a visual road sign, which accurately positions a robot through a camera and the road sign.
In order to achieve the purpose, the invention adopts the following technical scheme:
a robot positioning method based on visual road signs comprises the following steps:
step one: perform binarization processing on a frame of grayscale image captured by the robot camera to obtain a first binary image of the grayscale image;
step two: search for the landmark image and determine its position;
step three: calculate the pose of the robot camera in the landmark coordinate system;
step four: recognize the number in the landmark image;
step five: calculate the positioning of the robot.
Further, step one specifically includes the following sub-steps:
dividing the grayscale image into contiguous square regions of the same size;
performing the following operation on each square region: combine the 8 square regions surrounding the square region into a 3 × 3 sampling region, and apply the Otsu algorithm to the sampling region to obtain the first adaptive binarization threshold of each pixel in the square region; the first adaptive binarization threshold of each pixel in each square region is the first adaptive binarization threshold of that pixel in the grayscale image;
filtering the first adaptive binarization threshold of each pixel in the grayscale image to obtain a second adaptive binarization threshold of each pixel, and binarizing the grayscale image with the second adaptive binarization threshold of each pixel to obtain the first binary image of the image.
Furthermore, the side length of the square regions into which the image is divided is:
where s is the size of the image and f(s) is the side length of the square region;
the positions of the square-region division are allocated as follows:
in the coordinate system of the grayscale image, horizontal coordinates increase from left to right and vertical coordinates increase from top to bottom; l is the left boundary of the leftmost square region and t is the upper boundary of the uppermost square region, so the upper-left corner coordinates of each square region are (i·b + l, j·b + t), where b is the side length of a square region, i and j are non-negative integers, and an implicit constraint on i and j is that the square region so represented intersects the grayscale image captured by the robot camera.
Preferably, the image filtering includes: using mean filtering, centered on each pixel, sum the first adaptive binarization thresholds of all pixels within a square window of side length a and take the average as that pixel's second adaptive binarization threshold, where a is 2 times the side length of the square region plus 1.
Preferably, the second step specifically includes the following substeps:
substep 21: after applying dilation and erosion to the first binary image, extract the continuously distributed first edge points of every edge and construct their circumscribed rectangles;
substep 22: take two circumscribed rectangles such that one is directly nested inside the other, and select the outer of the two as a regular rectangle, whose sides are parallel or perpendicular to the edges of the dilated-and-eroded first binary image;
substep 23: within the region of the first binary image corresponding to the regular rectangle, obtain a number of edge line segments with the Hough transform algorithm; judge whether the number of edge line segments is greater than or equal to 8, and if not, return to substep 22;
substep 24: take the grayscale image corresponding to the regular rectangular region and obtain a second binary image by the method of step one;
substep 25: extract second edge points from the second binary image; the second edge points are the edge points that remain after removing, from all edge points in the second binary image, the edge points of the outermost rectangle and then the edge points of the concentric rectangle inside it that is similar to the sub-rectangle of the outermost rectangle;
fuse the edge line segments: merge edge line segments whose included angle is within the angle threshold (which lies between 0° and 10°) into one straight line, and screen out edge line segments that satisfy the following condition: fewer than 5 second edge points lie within 0 to 5 pixels of the segment's straight line;
calculate an angle threshold and a range threshold and fit straight lines to the second edge points using them; if exactly 8 fitted straight lines are obtained, proceed to the next substep, otherwise return to substep 23;
substep 26: enumerate the ways the 8 straight lines can be combined into two quadrilaterals and take the two that are nested in each other; the larger quadrilateral is the outer edge of the landmark image and the smaller one its inner edge, and together the two nested quadrilaterals determine the position of the landmark image in the image captured by the robot camera.
Further, the third step specifically includes:
and C, according to the position of the landmark image obtained in the step II in the image shot by the robot camera, obtaining a first position of 8 angular points of the landmark image in the image shot by the robot camera, wherein the 8 angular points of the landmark image have a second position in a landmark coordinate system of the landmark image, and calculating the relative pose between the robot camera and the landmark by utilizing a PnP algorithm by combining the first position and the second position.
Further, the fourth step specifically includes:
and determining the specific position of the image of the central number according to the outer edge of the road sign image obtained in the step two, converting the image of the central number into a front image by utilizing affine transformation, extracting edge points of the front image, wherein the continuously distributed edge points are used as external regular rectangles for dividing sub-images of each number in the image of the central number, zooming the sub-images to the width and the height of 20 x 32, and identifying the sub-images of 20 x 32 by using a handwritten number identification neural network model to obtain a corresponding digital result in the image.
Further, the step five specifically includes:
and searching the corresponding pose of the landmark in the world coordinate system from the database according to the digital result in the fourth step, calculating the pose of the robot camera in the world coordinate system when the robot camera shoots the image by combining the poses of the robot camera in the landmark coordinate system in the third step, and further calculating to obtain the pose of the robot in the world coordinate system so as to realize the positioning of the robot.
Preferably, the visual road sign comprises two mutually nested concentric rectangles: the outer rectangle is placed horizontally, the inner rectangle is rotated 45° relative to the outer one, the two rectangles differ in color, and three digits are arranged inside the inner rectangle.
Preferably, before the step one, the method further comprises the following steps:
and pasting the visual road signs on the wall surface, recording the poses of the visual road signs in a world coordinate system, inputting the poses into the database, and measuring the displacement and the rotation angle between the camera and the center of the robot as the relative poses between the camera and the robot.
The invention has the following beneficial effects:
The technical scheme provided by the invention is a positioning method based on visual road signs that processes the image locally and is therefore insensitive to changes in ambient brightness. The road sign has a simple structure, so its position information can be detected even at low camera resolution while providing high positioning accuracy; the method has a broader range of application than traditional methods, and since both the camera and the road signs are inexpensive, it can be widely applied.
These features and advantages of the present invention will be disclosed in more detail in the following detailed description and accompanying drawings. The best modes or means of the present invention will be described in detail with reference to the drawings, but the invention is not limited thereto. In addition, the features, elements, and components appearing in the following description and in the drawings may be plural and are labeled with different symbols or numerals for convenience of representation, but all denote components of the same or similar construction or function.
[ description of the drawings ]
The invention will be further described with reference to the accompanying drawings in which:
FIG. 1 is a schematic view of a visual road marking in an embodiment of the invention;
FIG. 2 is a flow chart of an embodiment of the present invention;
FIG. 3 is a schematic diagram of the square region position of step one in the embodiment of the present invention;
FIG. 4 is a schematic diagram of an edge point and a circumscribed rectangle in step two according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of fused edge segments in step three of the present invention;
FIG. 6 is a schematic diagram of fused edge segments at step four in the embodiment of the present invention;
FIG. 7 is a schematic diagram of a fitted line segment in step five of the present invention;
FIG. 8 is a schematic diagram of step six of determining the position of the landmark image using fitted line segments in an embodiment of the present invention;
fig. 9 is a schematic diagram of the pose relationship between the landmark and the camera in the seventh step in the embodiment of the present invention.
[ Detailed description of embodiments ]
The technical solutions of the embodiments of the present invention are explained and illustrated below with reference to the accompanying drawings, but the following embodiments are only preferred embodiments of the present invention, not all of them. Based on these embodiments, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
Reference in the specification to "one embodiment" or "an example" means that a particular feature, structure, or characteristic described in connection with that embodiment may be included in at least one embodiment of the present disclosure. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment.
In this specification, a "pixel" is treated as square in shape, and "area", "side length", "height", and "width" all describe the number of pixels contained.
The specific numerical thresholds set forth in this specification apply to all equipment and environmental conditions that satisfy the practice of the invention, and do not require any particular equipment or environment.
This embodiment provides a robot positioning method based on visual road signs, which realizes positioning by detecting the positions of visual road signs in images, acquiring the landmark information, and calculating the camera pose. In this embodiment, an ordinary 1280 × 720 pixel webcam collected data from different angles (including frontal and lateral views of the landmark), in different environments (direct light, backlight, landmark glare, and so on), and at different distances (within one meter, one to two meters, two to five meters, and so on). Visual road signs of different sizes, with different numbers and different ratios of inner to outer square, were used as samples for recognition and parameter tuning. Owing to the camera resolution, images in which the visual landmark occupies roughly 5% to 20% of the total image were taken as samples; the images containing digits were segmented to obtain more than ten thousand 20 × 32 pixel digit image samples, which were used to train the digit-recognition neural network.
As shown in fig. 1, the visual road sign comprises two mutually nested concentric rectangles; in this embodiment squares are preferred. The outer square is placed horizontally, the inner square is rotated 45° relative to it, and the two squares have a clear color difference: the outer square is black and the inner square white. Three digits are arranged inside the inner square.
The visual road sign is made of a material such as paper or plastic and is pasted on a vertical white wall surface about 1-2 m above the ground, with its top and bottom edges parallel to the ground.
The pose of a visual landmark in the world coordinate system is determined as follows: a right-handed spatial rectangular coordinate system is set up, with the x and z axes parallel to the ground and the y axis pointing vertically upward. The position of the visual landmark is the translation from the origin to the common center of the two squares forming the landmark, and its direction is the vector that passes perpendicularly through the landmark from behind it in this coordinate system. The position and direction of each visual landmark are thus known; they are associated with the digits inside the landmark and recorded in a database. As shown in fig. 2, the robot positioning method based on visual road signs provided by this embodiment includes the following steps:
the method comprises the steps of pasting a visual landmark on a vertical white wall surface which is about 1-2m higher than the ground in advance, recording the pose of the visual landmark in a world coordinate system to a database, measuring the relative pose, namely the displacement and the rotation angle, between a camera installed on a robot and the center of the robot, and measuring the internal parameters and the distortion parameters of the camera.
Step one: image binarization processing:
(1) take a frame of image captured by the robot camera, whose internal parameters, distortion parameters, and relative pose to the robot are known; if it is not a grayscale image, convert it into one (hereinafter the original grayscale image). The upper-left corner of the original grayscale image has coordinates (0, 0) and the lower-right corner (w, h), where w is the image width and h the image height; the rectangular area determined by these two corner points is called the original grayscale image area, as shown in fig. 3;
(2) divide the original grayscale image into contiguous square regions of the same size, with side length
where s is the image size and f(s) is the side length of the square region, and with positions allocated as
where b is the side length of a square region, w is the width of the grayscale image, h is its height, and mod denotes the remainder (modulo) operation. In the coordinate system of the grayscale image, horizontal coordinates increase from left to right and vertical coordinates from top to bottom; l is the left boundary of the leftmost square region and t the upper boundary of the uppermost square region, so the upper-left corner coordinates of each square region are (i·b + l, j·b + t), where i and j are non-negative integers and an implicit constraint on i and j is that the square region so represented intersects the grayscale image captured by the robot camera;
the division of the square area can also be determined by the side length of the square area and the coordinates of the upper left corner of the square area.
(3) select a square region and combine it with the 8 square regions that share its 4 corners and 4 sides into a 3 × 3 sampling region; apply the Otsu algorithm to the sampling region to obtain the adaptive binarization threshold of each pixel of the original grayscale image within the selected square region;
(4) perform the calculation of (3) for every square region to obtain the adaptive binarization threshold of each pixel in each square region, i.e. the first adaptive binarization threshold of each pixel in the original grayscale image;
(5) filter the first adaptive binarization threshold of each pixel of the original grayscale image as follows: centered on each pixel, within a square window of side length a, sum the first adaptive binarization thresholds of all pixels in the intersection of the window and the original grayscale image area and take the average as the pixel's second adaptive binarization threshold, where a is 2 times the side length of the square region plus 1;
(6) compute the locally adaptive binary image: each pixel of the original grayscale image becomes 0 if its gray value is smaller than its second adaptive binarization threshold, and 255 otherwise; after this is done, call the OpenCV function medianBlur with the ksize (filter-template size) parameter set to 3 to median-filter the locally adaptive binary image, yielding the first binary image;
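The per-block Otsu thresholding and threshold-map mean filtering of step one can be sketched in pure NumPy. This is a minimal sketch under stated simplifications: the mean filter here uses edge padding, which only approximates the patent's averaging over in-image pixels, the final medianBlur step is omitted, and Otsu ties are broken toward the largest maximizing threshold (a choice this sketch makes, not one the patent specifies):

```python
import numpy as np

def otsu_threshold(region):
    """Otsu's method over an 8-bit region; ties resolved toward the largest
    maximizing threshold. Returns 0 for a uniform region."""
    hist = np.bincount(region.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between >= best_var:
            best_var, best_t = var_between, t
    return best_t

def local_adaptive_binarize(gray, block):
    """Per-block Otsu over a 3x3 block neighbourhood, then a mean filter of
    side a = 2*block + 1 over the threshold map, then thresholding."""
    h, w = gray.shape
    thresh = np.zeros((h, w), dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            y0, y1 = max(0, y - block), min(h, y + 2 * block)
            x0, x1 = max(0, x - block), min(w, x + 2 * block)
            thresh[y:y + block, x:x + block] = otsu_threshold(gray[y0:y1, x0:x1])
    a = 2 * block + 1
    pad = a // 2
    padded = np.pad(thresh, pad, mode='edge')   # edge padding approximates
    smooth = np.empty_like(thresh)              # the patent's in-image mean
    for y in range(h):
        for x in range(w):
            smooth[y, x] = padded[y:y + a, x:x + a].mean()
    return np.where(gray < smooth, 0, 255).astype(np.uint8)
```

Because each threshold is computed from a local neighbourhood, dark marks are separated from their background on both the dim and the bright side of an unevenly lit image.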
Step two: as shown in fig. 4, extraction of landmark image edge line segments and edge points:
(1) call the OpenCV functions getStructuringElement and morphologyEx to perform dilation and erosion on the first binary image, then call the OpenCV function findContours to extract, from the resulting image, contours consisting of first edge points. A contour may have a parent contour and child contours: the parent contour lies outside the contour with no other contour between them, and a child contour lies inside the contour with no other contour, including further children, between them;
(2) enumerate each contour and its child contours and judge whether the circumscribed rectangle of the contour nests the circumscribed rectangle of its child; if so, record the two circumscribed rectangles as a parent rectangle region and a child rectangle region respectively. If the child contour itself has a child contour, take that contour's circumscribed rectangle as a mask rectangle region and record it together with the two rectangle regions; each such group is recorded as a contour rectangle record;
(3) take a contour rectangle record; if the area of the parent rectangle region is greater than or equal to four fifths of the area of the original grayscale image area, or less than 400 pixels, or if either the length or the width of the parent rectangle region is less than 20 pixels, or its area is more than 8 times the area of the child rectangle region, judge that no landmark image exists in the parent rectangle region and repeat operation (3) with the next record;
(4) if the area of the parent rectangle region in (3) is less than or equal to 10000 pixels and both its length and width are less than or equal to 120 pixels, regard the 8 sides of the parent and child rectangle regions as edge line segments and perform operation (6); if these conditions are not met, perform operation (5);
(5) compute the circumscribed axis-aligned rectangle of the parent rectangle, whose sides are parallel or perpendicular to the width and height of the original grayscale image area. Call the OpenCV function Canny with threshold1 = 20 and threshold2 = 100 (other parameters default) to compute the Canny operator of the first binary image within the axis-aligned rectangle region; then call the OpenCV function HoughLinesP with rho = 1, theta = pi/45, threshold = 10, minLineLength = 10, and maxLineGap = 10 to compute the edge line segments of the first binary image in that region from the Canny result. If the number of edge line segments is less than 8, return to (3);
(6) compute the circumscribed axis-aligned rectangle of the parent rectangle, whose sides are parallel or perpendicular to the width and height of the original grayscale image area, and apply the method described in step one to the portion of the original grayscale image inside this rectangle; here the grayscale image to be processed is not the camera image but the image of the original grayscale image within the circumscribed axis-aligned rectangle region. A second binary image is obtained after processing;
(7) call the OpenCV function findContours to extract edge points from the second binary image. According to the contour relations of the edge points, a contour with no parent contour is an outermost contour; removing the edge points of the outermost contour from all edge points of the second binary image yields the image-edge edge points. The central rectangle area is the rectangle concentric with the child rectangle of (3) with 0.7 times its side length; removing the edge points inside the central rectangle area from the image-edge edge points leaves the remaining points as the second edge points.
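The screening rules of operations (3) and (4) above amount to a handful of geometric checks on each parent/child rectangle pair. A minimal sketch (the function name is this sketch's, not the patent's):

```python
def plausible_landmark_region(parent_wh, child_wh, image_wh):
    """Screening rules from operations (3)-(4): reject parent rectangles that
    are too large, too small, too thin, or much bigger than their child."""
    pw, ph = parent_wh
    cw, ch = child_wh
    iw, ih = image_wh
    parent_area, child_area, image_area = pw * ph, cw * ch, iw * ih
    if parent_area >= 0.8 * image_area:   # four fifths of the image
        return False
    if parent_area < 400:                 # fewer than 400 pixels
        return False
    if min(pw, ph) < 20:                  # length or width below 20 pixels
        return False
    if parent_area > 8 * child_area:      # parent over 8x the child area
        return False
    return True
```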
Step three: fuse the edge line segments and calculate the optimal included-angle threshold, as shown in fig. 5:
(1) set the included-angle threshold to 5 degrees;
(2) calculate the median length of the edge line segments and remove all segments shorter than 0.15 times the median, obtaining the segments to be fused;
(3) from all segments to be fused, take one not yet recorded as used as the first segment to be fused, record it as used, and create a set containing only this segment as its fusion set;
(4) from all segments to be fused, take another segment that is not yet recorded as used and has not yet been matched against the first segment, as the second segment to be fused;
(5) match the first segment to be fused with the second: if the angle between their straight lines is less than or equal to the angle threshold, and the angle between the line connecting the midpoints of the two segments and each segment's straight line is also less than or equal to the angle threshold, record the second segment as used and add it to the fusion set of the first segment;
(6) repeating the operations of steps (4) and (5) until no second line segment to be fused satisfying the conditions in step (5) can be found;
(7) taking a set of line segments to be fused, taking the coordinates of its two farthest endpoints, and connecting these two endpoints to form a fused line segment;
(8) repeating the operations of steps (3) to (7) to complete the fusion of the line segments in all the sets of line segments to be fused;
(9) counting the fused line segments: if the count is greater than or equal to 8, reducing the included-angle threshold by 1 to 3 degrees (the step size is chosen according to the current threshold; any value in this interval does not affect the final result), the minimum being not less than 0 degrees; if the count is less than 8, enlarging the included-angle threshold by 1 to 3 degrees, the maximum being not more than 10 degrees; then returning to operation (3); this loop is executed 8 times;
(10) taking as the final included-angle threshold the value for which the number of fused line segments was last exactly 8; if no iteration yields exactly 8 fused line segments, taking the threshold from the last iteration as the final included-angle threshold.
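The pairwise match test of (5) requires both that the two segments' supporting lines be nearly parallel and that the line joining their midpoints be nearly parallel to both, so that the segments are roughly collinear. A minimal geometric sketch (segment endpoints and the 5-degree starting threshold are illustrative):

```python
import math

def line_angle(p, q):
    """Undirected angle of the line through p and q, in [0, 180)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0

def angle_diff(a, b):
    """Smallest difference between two undirected line angles."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def mergeable(s1, s2, thresh_deg=5.0):
    """True if two segments may be fused: their supporting lines are
    nearly parallel AND the line joining their midpoints is nearly
    parallel to both (the collinearity test of step three (5))."""
    a1, a2 = line_angle(*s1), line_angle(*s2)
    m1 = ((s1[0][0] + s1[1][0]) / 2.0, (s1[0][1] + s1[1][1]) / 2.0)
    m2 = ((s2[0][0] + s2[1][0]) / 2.0, (s2[0][1] + s2[1][1]) / 2.0)
    am = line_angle(m1, m2)
    return (angle_diff(a1, a2) <= thresh_deg
            and angle_diff(am, a1) <= thresh_deg
            and angle_diff(am, a2) <= thresh_deg)

# Two collinear horizontal segments fuse; a perpendicular one does not.
print(mergeable(((0, 0), (10, 0)), ((12, 0), (20, 0))))  # True
print(mergeable(((0, 0), (10, 0)), ((5, 1), (5, 9))))    # False
```

The midpoint condition is what rejects parallel but laterally offset segments, which the angle test alone would wrongly merge.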
Step four: fusing the edge line segments and calculating an optimal range threshold between the second edge points and the edge line segments in the second binary image, as shown in fig. 6:
(1) set the range threshold to 3 pixels;
(2) enumerating the edge line segments from step two and, for each, counting the second edge points whose straight-line distance to the segment is less than or equal to the range threshold; if fewer than 5 such second edge points exist, removing the edge line segment; the edge line segments not removed are taken as the reliable edge line segments;
(3) calculating the median of the lengths of the reliable edge line segments and removing all line segments shorter than 0.15 times the median to obtain the line segments to be fused;
(4) using the data obtained in this step, setting the included-angle threshold to the final included-angle threshold from step three, and repeating operations (3) to (8) of step three to obtain the fused line segments;
(5) counting the fused line segments: if the count is greater than or equal to 8, reducing the range threshold by 1 to 2 pixels (the step size is chosen according to the current threshold; any value in this interval does not affect the final result), the minimum being not less than 0 pixels; if the count is less than 8, enlarging the range threshold by 1 to 2 pixels, the maximum being not more than 5 pixels; then returning to operation (2); this loop is executed 8 times;
(6) taking as the final range threshold the value for which the number of fused line segments was last exactly 8; if no iteration yields exactly 8 fused line segments, taking the range threshold from the last iteration as the final range threshold.
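The reliability test of step four (2) counts the second edge points lying within the range threshold of each segment's supporting line. A sketch with a standard point-to-infinite-line distance (the 5-point minimum and 3-pixel threshold come from the text; the sample data is illustrative):

```python
import math

def point_line_dist(pt, a, b):
    """Distance from pt to the infinite line through endpoints a and b."""
    (x0, y0), (x1, y1), (px, py) = a, b, pt
    num = abs((y1 - y0) * px - (x1 - x0) * py + x1 * y0 - y1 * x0)
    return num / math.hypot(x1 - x0, y1 - y0)

def reliable(segment, edge_points, range_thresh=3.0, min_inliers=5):
    """Keep a segment only if at least `min_inliers` edge points lie
    within `range_thresh` pixels of its supporting line."""
    a, b = segment
    inliers = sum(1 for p in edge_points
                  if point_line_dist(p, a, b) <= range_thresh)
    return inliers >= min_inliers

pts = [(x, 0.5) for x in range(6)] + [(0, 40), (10, 40)]
print(reliable(((0, 0), (20, 0)), pts))    # True: 6 points within 3 px of y = 0
print(reliable(((0, 30), (20, 30)), pts))  # False: no points within 3 px of y = 30
```

The same distance routine serves the inlier counts reused in steps five (1) and five (8).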
Step five: fitting with the second edge points from step two to obtain the fitted line segments, as shown in fig. 7:
(1) enumerating the edge line segments from step two and calculating the distance from each second edge point to the straight line supporting the edge line segment; if fewer than 5 second edge points lie within the final range threshold of step four, removing the edge line segment; otherwise, recording the second edge points that satisfy the final-range-threshold condition as the edge-point set of the edge line to be fitted; the edge line segments not removed are taken as the reliable edge line segments;
(2) calculating the median of the lengths of the reliable edge line segments and removing all line segments shorter than 0.15 times the median to obtain the line segments to be fused;
(3) using the data obtained in this step, setting the included-angle threshold to the final included-angle threshold from step three, and repeating operations (3) to (6) of step three to obtain the sets of line segments to be fused;
(4) taking a set of line segments to be fused and merging the edge-point sets of the edge lines to be fitted that correspond, in step (1), to its edge line segments, to obtain the edge-point set of the fitted edge line; calling the OpenCV function fitLine with the distType parameter set to the OpenCV constant CV_DIST_HUBER, the param parameter to 0, the reps parameter to 0.02 and the aeps parameter to 0.02, and fitting the edge-point set to obtain a fitted straight line; constructing the circumscribed regular rectangle of all segment endpoints in the set of line segments to be fused, and intersecting the fitted straight line with this circumscribed regular rectangle to obtain a fitted line segment;
(5) repeating the operation in the step (4), and processing all the line segment sets to be fused to obtain all the fitted line segments;
(6) calculating the median of the lengths of the fitted line segments and removing all line segments shorter than 0.01 times the median to obtain the line segments to be fitted;
(7) using the data obtained in this step, setting the included-angle threshold to the final included-angle threshold from step three, substituting the line segments to be fitted for the line segments to be fused of step three (3), and repeating operations (3) to (8) of step three; the fused line segments obtained in step three (8) are regarded as the fitted line segments of this step;
(8) enumerating the fitted line segments and calculating the distance from the second edge points to the straight line supporting each fitted line segment; recording, for each fitted line segment, the number of second edge points whose distance is less than or equal to the final range threshold of step four, together with the correspondence between the fitted straight line and this count; sorting the counts in ascending order and enumerating from the second count onward; the first time the previous count is less than or equal to 0.4 times the current count, deleting the fitted line segments corresponding to the counts from the first up to that previous count; the remaining fitted line segments are the final fitted line segments;
(9) if the number of the final fitted line segments is not 8, returning to step (3) of step two and taking another recorded circumscribed rectangle for processing.
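Step five (4) fits each merged edge-point set with cv2.fitLine under the Huber distance. As a dependency-free stand-in, a total-least-squares fit via SVD yields the same (point, direction) parameterization that fitLine returns (this sketch omits the Huber-robust loss, so it is a plain, non-robust approximation):

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit.

    Returns (point_on_line, unit_direction), the parameterization
    cv2.fitLine uses; unlike CV_DIST_HUBER, this is not outlier-robust.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Dominant right singular vector of the centered points = line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

pts = [(0.0, 0.0), (1.0, 1.02), (2.0, 1.98), (3.0, 3.01)]
point, direction = fit_line_tls(pts)
slope = direction[1] / direction[0]
print(abs(slope - 1.0) < 0.05)  # True: the points lie near y = x
```

The fitted line would then be clipped against the circumscribed regular rectangle of the segment endpoints, as step (4) describes, to recover a finite segment.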
Step six: determining the position of the landmark image using the fitted line segments, as shown in fig. 8:
(1) recording the straight lines supporting the final fitted line segments of step five as the landmark edge straight lines; computing the intersection point of every pair of landmark edge straight lines; if the coordinates of an intersection point lie within the circumscribed regular rectangle of the parent rectangle of step two, recording it as a potential landmark edge intersection point;
(2) taking one landmark edge straight line as the first landmark edge straight line;
(3) taking another landmark edge straight line that has a potential landmark edge intersection point with the first as the second landmark edge straight line; taking another that has a potential landmark edge intersection point with the second as the third landmark edge straight line; taking another that has potential landmark edge intersection points with both the first and the third as the fourth landmark edge straight line; the closed figure formed by these four landmark edge straight lines is a quadrangle, recorded as the first landmark quadrangle;
(4) forming, by the methods of (2) and (3), a closed quadrangle from the remaining 4 landmark edge straight lines, recorded as the second landmark quadrangle; if exactly one second landmark quadrangle can be obtained and it lies inside the first landmark quadrangle, the first and second landmark quadrangles jointly determine the edges of the landmark image; otherwise, returning to operation (2) and taking another landmark edge straight line;
(5) if all landmark edge straight lines have been traversed in step (2) without obtaining a determined first landmark quadrangle and second landmark quadrangle in step (4), returning to step (3) of step two;
(6) making vectors from the intersection point of the diagonals of the first landmark quadrangle to its four corner points; for each vector, taking the clockwise included angle between the vector and the positive direction of the abscissa: if the angle is smaller than 90 degrees, the corner point is marked as the outer lower-right corner point; otherwise, if smaller than 180 degrees, as the outer lower-left corner point; otherwise, if smaller than 270 degrees, as the outer upper-left corner point; otherwise, as the outer upper-right corner point;
(7) if the number of each of the outer lower-right, outer lower-left, outer upper-left and outer upper-right corner points is not 1, returning to operation (2) and taking another landmark edge straight line;
(8) for each corner point of the second landmark quadrangle: if it lies in the triangle determined by the diagonal intersection point of the first landmark quadrangle, the outer upper-left corner point and the outer lower-left corner point, it is marked as the inner left corner point; if in the triangle determined by the diagonal intersection point, the outer lower-left corner point and the outer lower-right corner point, as the inner lower corner point; if in the triangle determined by the diagonal intersection point, the outer lower-right corner point and the outer upper-right corner point, as the inner right corner point; if in the triangle determined by the diagonal intersection point, the outer upper-right corner point and the outer upper-left corner point, as the inner upper corner point;
(9) if the number of each of the inner left, inner lower, inner right and inner upper corner points is not 1, returning to operation (2) and taking another landmark edge straight line;
(10) through the above steps, the outer lower-right, outer lower-left, outer upper-left and outer upper-right corner points and the inner left, inner lower, inner right and inner upper corner points of the landmark image in the original gray-scale image are finally obtained.
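The angular classification of (6) can be sketched as follows. Note that in image coordinates the y axis points downward, so the mathematical angle from atan2(dy, dx) already advances clockwise on screen; the corner names follow the text and the sample geometry is illustrative:

```python
import math

def classify_outer_corner(center, corner):
    """Classify a corner of the outer quadrangle by the clockwise angle
    between the vector center->corner and the positive x axis.
    With image coordinates (y down), atan2(dy, dx) measures this
    clockwise angle directly."""
    dx = corner[0] - center[0]
    dy = corner[1] - center[1]
    ang = math.degrees(math.atan2(dy, dx)) % 360.0
    if ang < 90:
        return "outer lower-right"
    elif ang < 180:
        return "outer lower-left"
    elif ang < 270:
        return "outer upper-left"
    return "outer upper-right"

c = (50, 50)  # diagonal intersection of the quadrangle
print(classify_outer_corner(c, (90, 90)))  # angle 45  -> outer lower-right
print(classify_outer_corner(c, (10, 90)))  # angle 135 -> outer lower-left
print(classify_outer_corner(c, (10, 10)))  # angle 225 -> outer upper-left
print(classify_outer_corner(c, (90, 10)))  # angle 315 -> outer upper-right
```

The uniqueness check of (7) then simply requires that each of the four labels occurs exactly once among the four corners.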
Step seven: establishing a landmark coordinate system, and calculating the pose of the camera in the first step in the landmark coordinate system, as shown in fig. 9:
(1) taking the center of the landmark as the origin of the landmark coordinate system, with up as the positive y axis, right as the positive x axis, and the direction passing through the landmark from its back side as the positive z axis; the coordinates of each corner of the landmark in this coordinate system are set as follows: for the outer square, the upper-left corner is (-a/2, a/2), the lower-left corner (-a/2, -a/2), the lower-right corner (a/2, -a/2) and the upper-right corner (a/2, a/2), where a is the side length of the outer square; for the inner square, rotated 45 degrees, the left corner is (-√2·b/2, 0), the lower corner (0, -√2·b/2), the right corner (√2·b/2, 0) and the upper corner (0, √2·b/2), where b is the side length of the inner square;
(2) calling the OpenCV function solvePnP, with the objectPoints parameter set to the world coordinates of the outer upper-left, lower-left, lower-right and upper-right corners and the inner left, lower, right and upper corners from step (1), the imagePoints parameter set to the image coordinates of the outer lower-right, outer lower-left, outer upper-left and outer upper-right corner points and the inner left, inner lower, inner right and inner upper corner points from step six, the cameraMatrix parameter set to the intrinsic parameters of the camera from step one, the distCoeffs parameter set to the distortion parameters of the camera from step one, the useExtrinsicGuess parameter set to false, and the flags parameter set to the OpenCV constant SOLVEPNP_ITERATIVE; this yields the translation and rotation from the camera to the landmark, from which the pose of the camera in the landmark coordinate system is calculated;
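The objectPoints of (2) follow directly from the corner coordinates defined in (1), with z = 0 for every corner since the landmark is planar. A sketch that builds this array for given side lengths a and b (the resulting array is what would be passed to cv2.solvePnP; the numeric side lengths are illustrative):

```python
import math
import numpy as np

def landmark_object_points(a, b):
    """3D corner coordinates of the landmark in its own frame (z = 0).

    Outer square of side a, axis-aligned; inner square of side b,
    rotated 45 degrees, so its corners sit on the axes at distance
    b * sqrt(2) / 2 from the center.
    """
    r = b * math.sqrt(2) / 2
    return np.array([
        [-a / 2,  a / 2, 0],  # outer upper-left
        [-a / 2, -a / 2, 0],  # outer lower-left
        [ a / 2, -a / 2, 0],  # outer lower-right
        [ a / 2,  a / 2, 0],  # outer upper-right
        [-r, 0, 0],           # inner left
        [0, -r, 0],           # inner lower
        [ r, 0, 0],           # inner right
        [0,  r, 0],           # inner upper
    ], dtype=np.float64)

pts = landmark_object_points(a=0.20, b=0.10)
print(pts.shape)  # (8, 3)
```

The imagePoints array must list the 8 detected corners in the same order, or solvePnP will converge to a wrong pose.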
Step eight: recognizing the numbers in the landmark:
(1) according to step six, connecting the outer lower-right, outer lower-left, outer upper-left and outer upper-right corner points end to end in sequence to form a closed quadrangle; constructing its concentric quadrangle with the sides reduced to one quarter of the original, whose corner points are correspondingly recorded as the reduced lower-right, reduced lower-left, reduced upper-left and reduced upper-right corner points;
(2) calling the OpenCV function findHomography and calculating the homography matrix between the reduced lower-right, reduced lower-left, reduced upper-left and reduced upper-right corner points and the fixed points (0, 0), (0, 64), (64, 64), (64, 0);
(3) calling the OpenCV function warpPerspective and, using the homography matrix, warping the original gray-scale image within the quadrilateral region defined in sequence by the reduced lower-right, reduced lower-left, reduced upper-left and reduced upper-right corner points to an image with a size of 64 x 64 pixels, obtaining a front-view image;
(4) calling the OpenCV function threshold, with the thresh parameter set to 0, the maxval parameter to 255 and the type parameter to the OpenCV constant THRESH_OTSU, and calculating the first binary image of the 64 x 64 pixel image;
(5) calling an OpenCV function findContours, and calculating a first edge point of the first binary image and an outline consisting of the first edge point;
(6) calculating circumscribed regular rectangles of the outlines, with the sides of each regular rectangle parallel or perpendicular to the width and height of the first binary image, and searching for the minimum circumscribed regular rectangle that contains more than 95% of the first edge points; this rectangle is recorded as the central digital rectangular area;
(7) calling the OpenCV function threshold, with the thresh parameter set to 0, the maxval parameter to 255 and the type parameter to the OpenCV constant THRESH_OTSU, and calculating the second binary image of the original gray-scale image within the central digital rectangular area;
(8) calling the OpenCV function findContours and calculating the second edge points of the second binary image;
(9) recording pixels by their gray value and position in the image, all recorded pixels being called the pixel set; taking from the second binary image a pixel with gray value 0 that has not been added to the pixel set as the first pixel, and recording that it belongs to the first pixel set;
(10) if a pixel with gray value 0 exists among the 8 pixels surrounding a pixel of the first pixel set, adding it to the first pixel set;
(11) repeating the operation of step (10) until no new pixel is added to the first pixel set; calculating the minimum circumscribed rectangle of the pixels in the first pixel set, with its sides parallel or perpendicular to the height and width of the second binary image, and recording this rectangle as a single-character rectangular area;
(12) repeating the operation from step (9) until no pixel that has not been added to the pixel set can be found;
(13) based on the positions of the single-character rectangular areas, taking from left to right the images of the original gray-scale image within each single-character rectangular area; calling the OpenCV function resize to convert each into an image to be detected 20 pixels wide and 32 pixels high; the digits in the image are then obtained in order from left to right through a trained handwritten-digit recognition neural network model;
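Steps (9) to (12) amount to a region-growing pass over the black pixels (gray value 0) that yields one bounding box per character. An equivalent sketch using a BFS flood fill over 8-connected neighbors on a small binary array (the array itself is illustrative):

```python
from collections import deque

def char_boxes(binary):
    """Group 8-connected pixels of value 0 and return the bounding box
    (left, top, right, bottom) of each group, sorted left to right."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 0 or seen[sy][sx]:
                continue
            # Grow one component from this seed pixel (steps (9)-(11)).
            queue = deque([(sx, sy)])
            seen[sy][sx] = True
            xs, ys = [sx], [sy]
            while queue:
                x, y = queue.popleft()
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] == 0 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            xs.append(nx)
                            ys.append(ny)
                            queue.append((nx, ny))
            boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return sorted(boxes)  # left-to-right by the left edge

# Two separate "characters" (0 = ink, 1 = background).
img = [[1, 0, 1, 1, 0, 0],
       [1, 0, 1, 1, 0, 0],
       [1, 0, 1, 1, 1, 1]]
print(char_boxes(img))  # [(1, 0, 1, 2), (4, 0, 5, 1)]
```

Each box would then be cropped from the original gray-scale image and resized to 20 x 32 for the recognition network, as step (13) describes.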
step nine: calculating robot position and orientation:
(1) searching the database for the pose Twz of the landmark in the world coordinate system according to the number obtained in step eight (13), and taking the pose Tzc of the camera in the landmark coordinate system obtained in step seven; both Twz and Tzc are 4 x 4 matrices of the form [R, t; 0, 1], where R is a rotation matrix and t is a translation vector; the pose Twc of the camera in the world coordinate system is obtained by the pose transformation Twc = Twz * Tzc;
(2) with the pose Trc of the camera in the robot coordinate system known, calculating its inverse Tcr = Trc^-1, and solving the pose of the robot in the world coordinate system as Twr = Twc * Tcr; the orientation and position of the robot in the world coordinate system are then obtained from the rotation matrix and translation vector in this pose;
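The two transformations of step nine are plain 4 x 4 homogeneous-matrix products, Twc = Twz * Tzc followed by Twr = Twc * Trc^-1. A numpy sketch with illustrative placeholder values standing in for the database pose and the camera-to-robot calibration:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous pose [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def robot_world_pose(Twz, Tzc, Trc):
    """Twz: landmark pose in the world; Tzc: camera pose in the landmark
    frame (from PnP); Trc: camera pose in the robot frame (calibration).
    Returns Twr, the robot pose in the world frame."""
    Twc = Twz @ Tzc           # camera pose in the world frame
    Tcr = np.linalg.inv(Trc)  # robot pose in the camera frame
    return Twc @ Tcr          # Twr = Twc * Tcr

# Illustrative values: landmark 2 m along world x, camera 1 m in front
# of it, camera mounted at the robot center (Trc = identity).
Twz = make_pose(np.eye(3), [2.0, 0.0, 0.0])
Tzc = make_pose(np.eye(3), [0.0, 0.0, 1.0])
Trc = np.eye(4)
Twr = robot_world_pose(Twz, Tzc, Trc)
print(Twr[:3, 3])  # [2. 0. 1.]
```

The robot's world position is the translation column Twr[:3, 3] and its orientation is the rotation block Twr[:3, :3], matching step nine (2).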
the method is integrated into a robot program, when the robot needs to perform visual landmark positioning, landmark images shot by a camera installed on the robot are input into the program, the program calculates the pose of the camera on the robot according to the poses of landmarks which are input into a database in world coordinates, and then the pose of the robot in the world coordinate system is obtained through the known pose transformation relation between the camera and the robot, so that the robot is positioned.
While the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Any modification which does not depart from the functional and structural principles of the present invention is intended to be included within the scope of the claims.
Claims (10)
1. A robot positioning method based on visual road signs is characterized in that: the method comprises the following steps:
the method comprises the following steps: carrying out binarization processing on a frame of gray level image shot by a robot camera to obtain a first binary image of the gray level image;
step two: searching a road sign image and determining the position;
step three: calculating the pose of the robot camera in a landmark coordinate system;
step four: identifying a number in the landmark image;
step five: and calculating to obtain the positioning of the robot.
2. The visual landmark based robot positioning method of claim 1, wherein: the first step specifically comprises the following substeps:
dividing the gray level image into continuous square regions with the same size;
performing the following operation on each square region: combining each square region with the 8 square regions around it to form a sampling region of size 3 x 3, and applying the Otsu algorithm to the sampling region to obtain the first adaptive binarization threshold of each pixel in the square region; the first adaptive binarization thresholds of all pixels of all square regions together form the first adaptive binarization threshold of each pixel in the gray-scale image;
and carrying out image filtering on the first self-adaptive binarization threshold value of each pixel in the gray-scale image to obtain a second self-adaptive binarization threshold value of each pixel, and carrying out binarization operation on the gray-scale image by using the second self-adaptive binarization threshold value of each pixel to obtain a first binary image of the image.
3. The visual landmark based robot positioning method of claim 2, wherein: the square area is divided, and the side length is as follows:
wherein s is the size of the image, and f(s) is the side length of the square area;
the positions of the square region division are allocated as follows:
in a coordinate system of the gray scale image, horizontal coordinates become larger from left to right, vertical coordinates become larger from top to bottom, l is a left side boundary of a leftmost square region, and t is an upper boundary of an uppermost square region, so that coordinates of the upper left corner of each square region are (i + b + l, j + b + t), wherein i and j are non-negative integers, and an implicit condition of constraint i and j is that an intersection exists between the represented square region and the gray scale image shot by the robot camera.
4. The visual landmark based robot positioning method of claim 2, wherein: the image filtering includes: and adopting an average filtering mode, taking a pixel as a center, summing the first adaptive binarization threshold values of all pixels in the square range in a square range with the side length of a, and taking an average value to obtain a second adaptive binarization threshold value of the pixel, wherein a is 2 times the side length of the square region plus 1.
5. The visual landmark based robot positioning method of claim 2, wherein: the second step specifically comprises the following substeps:
substep 21: after dilation and erosion processing of the first binary image, extracting the continuously distributed first edge points of all edges and constructing their circumscribed rectangles;
substep 22: taking two circumscribed rectangles, one directly nested inside the other; selecting the outer of the two circumscribed rectangles as the regular rectangle, whose sides are parallel or perpendicular to the edges of the first binary image after the dilation and erosion processing;
substep 23: in the first binary image corresponding to the regular rectangle, obtaining a plurality of edge line segments with the Hough transform algorithm; judging whether the number of edge line segments is greater than or equal to 8, and if not, returning to substep 22;
substep 24: taking a gray image corresponding to the regular rectangular area, and obtaining a second binary image according to the method in the first step;
substep 25: extracting second edge points from the second binary image, wherein the second edge points are the edge points left after removing the edge points of the outermost rectangle from all the edge points in the second binary image and then removing the edge points of concentric rectangles similar to the sub-rectangles of the outermost rectangle in the outermost rectangle;
fusing the edge line segments, merging the edge line segments with included angles less than or equal to 0-10 degrees into a straight line, and screening and removing the edge line segments meeting the following conditions: the number of second edge points with the distance to the straight line of the edge line segment within the range of 0 pixel to 5 pixels is less than 5;
calculating an included angle threshold value and a range threshold value, fitting straight lines to the second edge points by using the included angle threshold value and the range threshold value, if the number of the straight lines obtained by final fitting is 8, performing the next step, and if not, returning to the step 23;
substep 26: enumerating the condition that the 8 straight lines can be combined into two quadrangles, taking the two quadrangles which are mutually nested, wherein the larger quadrangle in the two quadrangles is the outer edge of the road sign image, the smaller quadrangle is the inner edge of the road sign image, and the positions of the road sign image in the image shot by the robot camera are determined by the two quadrangles which are mutually nested.
6. The visual landmark based robot positioning method of claim 5, wherein: the third step specifically comprises:
according to the position in the image shot by the robot camera of the landmark image obtained in step two, obtaining the first positions of the 8 corner points of the landmark image in that image; the 8 corner points of the landmark image have second positions in the landmark coordinate system of the landmark; combining the first positions and the second positions, the relative pose between the robot camera and the landmark is calculated using a PnP algorithm.
7. The visual landmark based robot positioning method of claim 6, wherein: the fourth step specifically comprises:
determining the specific position of the image of the central numbers according to the outer edge of the landmark image obtained in step two; converting the image of the central numbers into a front-view image by affine transformation; extracting the edge points of the front-view image, the circumscribed regular rectangles of the continuously distributed edge points dividing the sub-image of each digit in the image of the central numbers; scaling each sub-image to a width and height of 20 x 32, and recognizing the 20 x 32 sub-images with a handwritten-digit recognition neural network model to obtain the corresponding digit results in the image.
8. The visual landmark based robot positioning method of claim 7, wherein: the fifth step specifically comprises:
and searching the corresponding pose of the landmark in the world coordinate system from the database according to the digital result in the fourth step, calculating the pose of the robot camera in the world coordinate system when the robot camera shoots the image by combining the poses of the robot camera in the landmark coordinate system in the third step, and further calculating to obtain the pose of the robot in the world coordinate system so as to realize the positioning of the robot.
9. The visual landmark based robot positioning method of claim 1, wherein: the visual road sign comprises two mutually nested concentric rectangles; the rectangle positioned on the outer side is horizontally placed, the included angle between the rectangle positioned on the inner side and the rectangle positioned on the outer side is 45 degrees, the two rectangles have a color difference, and three numbers are arranged in the rectangle positioned on the inner side.
10. The visual landmark based robot positioning method of claim 1, wherein: before the step one, the method further comprises the following steps:
and pasting the visual road signs on the wall surface, recording the poses of the visual road signs in a world coordinate system, inputting the poses into the database, and measuring the displacement and the rotation angle between the camera and the center of the robot as the relative poses between the camera and the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910890204.XA CN112541943A (en) | 2019-09-20 | 2019-09-20 | Robot positioning method based on visual road signs |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112541943A true CN112541943A (en) | 2021-03-23 |
Family
ID=75012493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910890204.XA Withdrawn CN112541943A (en) | 2019-09-20 | 2019-09-20 | Robot positioning method based on visual road signs |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112541943A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113450335A (en) * | 2021-06-30 | 2021-09-28 | 湖南三一华源机械有限公司 | Road edge detection method, road edge detection device and road surface construction vehicle |
CN116494262A (en) * | 2023-05-06 | 2023-07-28 | 安徽同湃特机器人科技有限公司 | Detection method for traveling path point of spraying operation of ceiling of spraying robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20210913 Address after: 310000 room a626, building 1, No. 1, Lvting Road, Cangqian street, Yuhang District, Hangzhou City, Zhejiang Province Applicant after: HANGZHOU TUGE TECHNOLOGY Co.,Ltd. Address before: 100088 room 630, block C, 28 xinjiekouwai street, Xicheng District, Beijing Applicant before: Beijing Zhengan Investment Center (L.P.) |
|
TA01 | Transfer of patent application right | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20210323 |
|
WW01 | Invention patent application withdrawn after publication |