CN114012736A - Positioning object for assisting environment positioning and robot system - Google Patents
- Publication number
- CN114012736A (application CN202111489946.5A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- setting
- subunits
- determined
- stator unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a positioning object for assisting environment positioning, and a robot system. The positioning object comprises a carrier object and a positioning unit arranged on the surface of the carrier object. The positioning unit comprises M setting subunits arranged into a polygon; each edge of the polygon comprises no fewer than 4 setting subunits; and each setting subunit is elliptical. Through the combined design of the setting subunits in shape and arrangement, the positioning object is identified accurately while consuming fewer identification resources, improving both identification speed and positioning speed.
Description
Technical Field
The application relates to the technical field of environment perception and positioning, and in particular to a positioning object for assisting environment positioning and a robot system.
Background
Robot positioning commonly relies on positioning markers, also called visual markers. A positioning object is a custom-made article, consistent with a known model, that carries identifiable image features and specific encoding/decoding rules, so that the pose transformation between the positioning object and a camera can be solved from the camera's mapping relationship. Positioning objects and the corresponding positioning methods are mostly applied wherever the relative pose between a camera and a target object must be known, directly or indirectly, and wherever high-precision, repeatable image-based measurement is needed; they are likewise required in vision tasks such as robot navigation and SLAM, motion capture, pose estimation, camera calibration, and augmented reality. Placed in a set scene, the positioning objects provide a frame of reference, and such a positioning object is selected whenever image-based pose measurement is required. For example, when a robotic arm must grasp objects at random positions, a fixed motion path cannot be pre-programmed; instead, a camera mounted on the arm identifies a positioning object attached to the target object, and the relative pose between the positioning object and the camera is solved. Because the pose between the camera and the arm is known, the pose between the positioning object and the arm can be computed indirectly, enabling the grasping task.
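As a minimal illustration of this indirect pose computation, the sketch below chains two homogeneous transforms with NumPy; the transform names and values are assumptions for illustration, not quantities taken from this application.

```python
import numpy as np

# A minimal sketch of the pose-chaining idea described above, using 4x4
# homogeneous transforms. T_cam_in_arm (a hand-eye calibration result) and
# T_marker_in_cam (solved from the detected positioning object) are
# hypothetical inputs; their product gives the marker pose in the arm frame.

def chain_poses(T_cam_in_arm: np.ndarray, T_marker_in_cam: np.ndarray) -> np.ndarray:
    """Compose transforms: marker -> camera -> arm base."""
    return T_cam_in_arm @ T_marker_in_cam

# Example with an identity hand-eye transform and a marker 0.5 m ahead:
T_cam_in_arm = np.eye(4)
T_marker_in_cam = np.eye(4)
T_marker_in_cam[:3, 3] = [0.0, 0.0, 0.5]
print(chain_poses(T_cam_in_arm, T_marker_in_cam)[:3, 3])  # -> [0. 0. 0.5]
```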
Among existing identification-based methods, ARTag and AprilTag are the most widely used positioning marks; they are matched by detecting the edges and contours of the positioning object during identification. These methods suffer from several defects: the positioning object's shape is complicated, image identification consumes excessive CPU resources, and identification is slow; moreover, the shape of the positioning mark cannot be changed flexibly, and noise resistance is poor.
Disclosure of Invention
The invention provides a positioning object for assisting environment positioning, and a robot system, to solve, or partially solve, the technical problems that existing visually marked positioning objects require a large amount of identification computation, occupy many resources, and cannot be flexibly adjusted in shape.
To solve the above technical problem, according to an alternative embodiment of the present invention, there is provided a positioning object for assisting environment positioning. The positioning object comprises a carrier object and a positioning unit arranged on the surface of the carrier object; the positioning unit comprises M setting subunits arranged into a polygon; each edge of the polygon comprises no fewer than 4 setting subunits; and each setting subunit is elliptical.
Optionally, each setting subunit is a circular ring or a solid circle.
Optionally, each setting subunit is an elliptical ring or a solid ellipse.
Optionally, each setting subunit is displayed as one of the figures of an orange, apple, pear, peach, pomegranate, grape, lychee, longan, plum, pomelo, and watermelon.
Optionally, each setting subunit is displayed as one of the figures of a tomato, potato, sweet potato, and white potato.
Optionally, each setting subunit is displayed as one of the figures of an egg, a cake, and a pancake.
Optionally, each setting subunit is displayed as one of the figures of a soccer ball, a basketball, a volleyball, a table-tennis ball, and a rugby ball.
Optionally, each setting subunit is displayed as one of the figures of a bowl, a disc, a clock dial, and a button.
Optionally, each setting subunit is displayed as a polygon with no fewer than 6 sides.
According to another alternative embodiment of the present invention, there is provided a robot system comprising at least one robot and at least one set of the positioning objects of the above technical solutions, the positioning objects being placed in advance at set positions.
Through one or more of its technical schemes, the invention provides the following beneficial effects or advantages:
The invention provides a positioning object for assisting environment positioning. Because projecting the positioning object onto a camera's image coordinate system changes neither the collinearity of the setting subunits nor the cross ratio of each edge determined from the subunit positions, a plurality of setting subunits are arranged into a polygon on the carrier surface, with each edge of the polygon comprising no fewer than 4 setting subunits. Because an ellipse remains an ellipse under this mapping, each setting subunit is made elliptical in the spatial coordinate system, so that its projection on the camera's image coordinate system is also an ellipse and the subunits can be found through ellipse identification in the image. The arrangement of the setting subunits can therefore be adjusted flexibly, relying on the invariance of collinearity and cross ratio under the mapping; and because elliptical projection is preserved, the subunits need only be elliptical figures, which reduces pattern complexity compared with commonly used visual positioning objects such as ARTag and AprilTag. Through this combined design of subunit shape and arrangement, the positioning object is identified accurately, identification resources are reduced, identification speed is improved, and the positioning object gains a degree of resistance to noise and partial occlusion.
The foregoing is only an overview of the technical solutions of the present invention. The embodiments of the invention are described below so that its technical means can be understood more clearly and its above and other objects, features, and advantages become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows an example of a positioning object whose setting subunits are circular rings, according to one embodiment of the invention;
FIG. 2 shows an example of a positioning object whose setting subunits are solid circles, according to one embodiment of the invention;
FIG. 3 is a schematic diagram of the collinearity invariance of a camera projection according to another embodiment of the present invention.
Detailed Description
In order to make the present application more clearly understood by those skilled in the art to which the present application pertains, the following detailed description of the present application is made with reference to the accompanying drawings by way of specific embodiments. Throughout the specification, unless otherwise specifically noted, terms used herein should be understood as having meanings as commonly used in the art. Accordingly, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. If there is a conflict, the present specification will control. Unless otherwise specifically stated, various apparatuses and the like used in the present invention are either commercially available or can be prepared by existing methods.
To solve the technical problems that existing visual markers or positioning objects require a large amount of identification computation, occupy many resources, and cannot be flexibly reshaped, a first aspect of the invention provides a graphic design of a positioning object for assisting environment positioning. The overall idea of the design is as follows:
the positioning object comprises a carrier object and a positioning unit arranged on the surface of the carrier object, wherein the positioning unit comprises M setting subunits, and the M setting subunits are arranged into a polygon; each edge in the polygon comprises the setting subunits with the number not less than 4; the setting subunit is oval.
The principle by which the positioning object reduces the occupation of identification resources is as follows. Because projecting the positioning object onto a camera's image coordinate system changes neither the collinearity of the setting subunits nor the cross ratio of each edge determined from the subunit positions, a plurality of setting subunits are arranged into a polygon on the carrier surface, with each edge comprising no fewer than 4 setting subunits. Because an ellipse remains an ellipse under this mapping, each setting subunit is made elliptical in the spatial coordinate system, so that its projection on the camera's image coordinate system is also an ellipse and the subunits can be found through ellipse identification in the image. The arrangement of the setting subunits can therefore be adjusted flexibly, relying on the invariance of collinearity and cross ratio under the mapping; and because elliptical projection is preserved, the subunits need only be elliptical figures, which reduces pattern complexity compared with commonly used visual positioning objects such as ARTag and AprilTag. Through this combined design of subunit shape and arrangement, the positioning object is identified accurately, identification resources are reduced, identification speed is improved, and the positioning object gains a degree of resistance to noise and partial occlusion.
In some alternative embodiments, the carrier bearing the positioning unit may be a wooden board, a metal board, or another object; it may also be any furnishing displayed in the field environment, such as a vertical cabinet, a table, a chair, or a wall surface, without limitation here. The carrier may be square, round, or any shape determined by actual requirements.
In some alternative embodiments, the polygon formed by the setting subunits has at least 4 sides. No fewer than 4 setting subunits are required per edge because calculating the cross ratio of an edge requires the position coordinates of at least 4 collinear setting subunits.
Since each setting subunit must be an ellipse-like figure, it may take the shape of a circular ring or a solid circle. FIG. 1 shows a positioning object whose carrier is a wooden board and whose positioning unit is formed of 12 ring-shaped setting subunits, each edge comprising 4 collinear subunits; FIG. 2 shows a positioning object whose carrier is a wooden board and whose positioning unit is a quadrilateral formed of 12 solid-circle setting subunits, each edge comprising 4 collinear subunits. The projection of circular and elliptical setting subunits onto the camera's image coordinate system is elliptical; note that a circle is a special case of an ellipse.
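The following sketch renders a positioning object like the one of FIG. 2 with OpenCV; the board size, dot radius, and corner placement are assumed illustrative values.

```python
import cv2
import numpy as np

# A minimal sketch that draws a quadrilateral positioning object like FIG. 2:
# 12 solid circular setting subunits, 4 collinear per edge (corners shared
# between adjacent edges). All dimensions are assumed illustrative values.

board = np.full((400, 400), 255, dtype=np.uint8)          # white board
corners = np.array([[60, 60], [340, 60], [340, 340], [60, 340]], float)
for i in range(4):
    p, q = corners[i], corners[(i + 1) % 4]
    # t = 0, 1/3, 2/3: each edge draws its start corner plus 2 interior dots;
    # the end corner is drawn by the next edge, giving 12 dots in total.
    for t in np.linspace(0.0, 1.0, 3, endpoint=False):
        c = p + t * (q - p)
        cv2.circle(board, (int(c[0]), int(c[1])), 12, 0, -1)  # solid black dot
cv2.imwrite("locator.png", board)
```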
In some alternative embodiments, each setting subunit is an elliptical ring or a solid ellipse; the projection of either onto the image coordinate system is likewise an ellipse.
Note that the setting subunit need not be an exact ellipse: it may be any shape whose edge or contour, once fitted, satisfies an ellipse equation.
In some alternative embodiments, each setting subunit is one of the round or ellipse-like fruit figures of an orange, apple, pear, peach, pomegranate, tomato, grape, lychee, longan, plum, grapefruit, watermelon, and the like. The projection of these fruit figures onto the camera's image coordinate system is an ellipse, or their contour satisfies an ellipse equation after fitting. Similarly, oval vegetables such as tomatoes, potatoes, sweet potatoes, and white potatoes can serve as setting subunits, and the sizes of the fruit or vegetable figures can be adjusted to actual requirements.
In some alternative embodiments, each setting subunit is presented as one of the figures of an egg, a cake, and a pancake. The projections of these figures onto the image coordinate system are likewise elliptical.
In some alternative embodiments, each setting subunit is one of the round or elliptical ball figures of a soccer ball, basketball, volleyball, table-tennis ball, rugby ball, and the like, with the size of the ball figure adjusted to actual needs.
In some alternative embodiments, each setting subunit takes the shape of a round or oval everyday object, such as a bowl, a disc, a clock dial, or a button; the projections of these figures onto the camera's image coordinate system are likewise circular or elliptical.
In some alternative embodiments, each setting subunit takes the shape of a round or oval leaf or a flower; these plant figures also project onto the camera's image coordinate system as circles or ellipses.
In some alternative embodiments, each setting subunit is a polygon with no fewer than 6 sides; with enough sides, the edge of such a polygon can be fitted to satisfy an ellipse equation.
At present, indoor robots mainly rely on laser SLAM or visual SLAM for environment perception and positioning. Some special environments, however, lack clear environmental features (parking lots, long corridors, restaurants, and the like); there, an additional environment-perception positioning method is needed, and positioning objects are deployed for that purpose. When the robot's environment map is first constructed, positioning objects are deployed where the robot is hard to localize; during SLAM, the robot perceives, identifies, and localizes each positioning object and stores its position and ID information in the map. During normal task operation, the robot identifies and localizes positioning objects and matches them against the information stored earlier, thereby assisting its environment perception and positioning.
In a second aspect, in another optional embodiment, a method for identifying the positioning object is described using indoor robot positioning as the application scenario. It comprises the following steps:
s1: obtaining an image;
specifically, the robot can continuously capture images of the surrounding environment through a camera carried on the robot to calculate the position and posture information of the robot in the indoor walking process, and then the robot is positioned by combining a built-in off-line map.
S2: determining a plurality of candidate patterns with set shapes in the image and position information corresponding to each candidate pattern; the set shape is circular or elliptical;
the method comprises the following steps of carrying out ellipse detection based on a shot image, and saving an ellipse pattern or an ellipse-like pattern in the image as a candidate pattern by searching the ellipse pattern or the ellipse-like pattern; meanwhile, most of image noise points in a non-elliptical shape are eliminated. When the candidate pattern is determined, the pixel coordinates of the pixel points forming the candidate pattern are calculated, and the position information of each candidate pattern can be reflected according to the pixel coordinates of the pixel points. The pixel coordinates represent the position of the pixel in the image, and in this embodiment, the pixel coordinates are the coordinates of the pixel point in the image coordinate system of the camera.
An alternative method of determining candidate patterns is:
s21: carrying out binarization processing on the image to obtain a binarized image;
the binarization of the image can adopt two schemes of global threshold binarization and local threshold binarization, and the method can not accurately binarize the pattern and the background in consideration of the fact that the method for automatically segmenting the global threshold in some scenes.
The local-threshold binarization scheme specifically comprises: segmenting the image into a plurality of local regions; determining the local threshold of each local region, which includes determining the gray mean and gray standard deviation of each local region from the gray value of the target pixel and the gray values of the remaining pixels in that region, determining a deviation value of the gray standard deviation from the standard deviations of all local regions, and determining each region's local threshold in turn from the gray mean, the gray standard deviation, and the deviation value; and binarizing the image against the local thresholds to obtain the binarized image.
The basic principle of local-threshold binarization is as follows: for a target pixel P(x, y) with gray value g(x, y), consider its r × r neighborhood, in which each pixel has gray value g(i, j); r is the neighborhood size in pixels, determined by actual requirements, and (x, y) are pixel coordinates. A local threshold T is calculated within the neighborhood, and the pixels of the neighborhood are binarized against T.
The gray mean m(x, y) of the neighborhood is calculated using the following formula:

$$m(x,y)=\frac{1}{r^{2}}\sum_{(i,j)\in N_{r}(x,y)}g(i,j)\qquad(1)$$

where $N_r(x,y)$ denotes the r × r neighborhood of P(x, y).
the gray scale standard deviation s (x, y) of this neighborhood is calculated using the following formula:
after calculating the standard deviation s of each neighborhood, the variation range or deviation value R of the standard deviation can be calculated according to the standard deviations of all neighborhoods, wherein R is s (x, y)max-s(x,y)min。
Next, the local threshold T is determined from the mean m(x, y), the standard deviation s(x, y), and the deviation value R; one standard form consistent with this description (a Sauvola-type rule, given here as an assumed reconstruction) is:

$$T=m(x,y)\left[1+k\left(\frac{s(x,y)}{R}-1\right)\right]\qquad(3)$$
in the formula (3), k is a correction coefficient and is determined by an experiment.
On the other hand, the local threshold T may also be calculated with a variant, formula (4), in which k is a first correction coefficient and p is a second correction coefficient, both determined through experiments.
S22: performing edge detection on the binary image to obtain a plurality of regions to be determined in the binary image and edge pixel points of each region to be determined;
specifically, the edge detection method may use edge detection operators such as Sobel, Prewitt, Roberts, Canny, and the like to perform detection, so as to obtain edge pixel points corresponding to each region to be determined, and also obtain pixel coordinates of each edge pixel point.
S23: fitting according to the edge pixel points of each region to be determined to obtain an edge fitting equation; and determining the area to be determined with the edge fitting equation as a circular equation or an elliptical equation as the candidate pattern.
Specifically, fitting the edge pixels of each region to be determined, that is, fitting their pixel coordinates, yields an edge-fitting equation for the region; the form of that equation determines the region's shape.
For example, if the edge-fitting equation of a region to be determined has the form $(x-a)^2+(y-b)^2=c$, the region may be determined to be circular; if it has the form $x^2/a+y^2/b=1$, the region may be determined to be elliptical. A circle is generally considered a special case of an ellipse.
Through this scheme, elliptical or ellipse-like figures in the image are screened out as candidate patterns, while most non-elliptical noise in the image is rejected.
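Continuing the sketch above, candidate patterns can be screened by fitting an ellipse to each contour of the binarized image and keeping only contours that agree with their fitted ellipse; the agreement threshold is an assumed tuning value.

```python
import cv2
import numpy as np

# Screening candidate patterns (S22-S23) on the binarized image `binary`
# from the previous sketch: fit an ellipse to each contour and keep contours
# whose area agrees with the fitted ellipse. The 15% tolerance is assumed.

contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
candidates = []   # centroids of accepted elliptical candidate patterns
for cnt in contours:
    if len(cnt) < 5:                      # cv2.fitEllipse needs >= 5 points
        continue
    (cx, cy), (w, h), _angle = cv2.fitEllipse(cnt)
    fit_area = np.pi * (w / 2.0) * (h / 2.0)
    if fit_area > 0 and abs(cv2.contourArea(cnt) - fit_area) / fit_area < 0.15:
        candidates.append((cx, cy))
```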
S3: performing collinearity matching on the candidate patterns according to the position information corresponding to each candidate pattern, to obtain a figure to be determined; the figure to be determined comprises M candidate patterns arranged into a polygon, and each edge of the polygon comprises no fewer than 4 mutually collinear candidate patterns;
the principle on which co-linear matching is based is the principle of collinearity invariance of the camera projective transformation, i.e. candidate patterns that are collinear in the spatial coordinate system do not change their collinear features on the image coordinate system projected to the camera. As shown in fig. 3, the collinear ABCD four points in the spatial coordinate system, after projection onto the image coordinate system, correspond to points a ', B', C ', D', which are also collinear.
The following scheme can be adopted for the co-linear matching:
the first scheme is as follows:
determining a plurality of co-linear groups in the plurality of candidate patterns according to the position information corresponding to each candidate pattern; each collinear group comprises not less than four candidate patterns which are collinear with each other; determining a first corner subunit from the plurality of co-linear groupings; the first corner subunit is a candidate pattern belonging to two collinear groups simultaneously; and obtaining the graph to be determined according to the collinear grouping of the first corner subunit and the first corner subunit.
In this scheme, collinearity matching is carried out pattern by pattern based on the position of each candidate pattern, until all collinear groups are determined. Since a candidate pattern is a region, a representative pixel, such as the center pixel or the centroid pixel, is selected for matching. Taking the centroid as an example: the centroid coordinates of a first and a second candidate pattern are selected and the corresponding straight-line equation is determined; the centroid coordinates of the remaining candidate patterns are then traversed and their distances to that line computed; if the distance of a third candidate pattern is 0 or below a distance threshold, the third candidate pattern is collinear with the first two. Repeating this procedure yields all collinear groups.
Among all collinear groups, only those whose number of collinear candidate patterns matches the number of collinear setting subunits in the positioning-object design are retained by screening. For example, if an edge of the positioning object was designed with 4 collinear setting subunits, collinear groups of exactly 4 candidate patterns are retained; if with 5 per edge, groups of 5 are retained.
Next, corner subunits are determined from the collinear groups; a corner subunit is a pattern at the intersection of two collinear groups. For example, if candidate patterns A, B, C, D are collinear and candidate pattern A is also collinear with candidate patterns E, F, G, then A is a corner subunit. Once the corner subunits and their collinear groups are found, the figure to be determined follows from the corner subunits and the candidate patterns collinear with them.
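A brute-force sketch of scheme one over the candidate centroids, with an assumed pixel tolerance, could look as follows:

```python
import numpy as np
from itertools import combinations

# Brute-force collinearity matching over candidate centroids (scheme one).
# The pixel tolerance and the exact grouping policy are assumed tuning
# choices, not values from the patent.

def collinear_groups(points: np.ndarray, n_per_edge: int = 4, tol: float = 1.5):
    """Return index groups of exactly n_per_edge mutually collinear points."""
    groups = set()
    for i, j in combinations(range(len(points)), 2):
        p, q = points[i], points[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm < 1e-9:
            continue
        # perpendicular distance of every centroid to the line through p, q
        dist = np.abs((points[:, 0] - p[0]) * dy
                      - (points[:, 1] - p[1]) * dx) / norm
        members = tuple(sorted(np.flatnonzero(dist < tol).tolist()))
        if len(members) == n_per_edge:
            groups.add(members)
    return list(groups)

def corner_subunits(groups):
    """A corner subunit is a candidate belonging to two collinear groups."""
    counts = {}
    for g in groups:
        for idx in g:
            counts[idx] = counts.get(idx, 0) + 1
    return {idx for idx, n in counts.items() if n >= 2}
```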
The above scheme must check every candidate pattern against all others for collinearity, so the computation load is heavy. To reduce it, another alternative is available.
scheme II:
inputting the pixel coordinate corresponding to each candidate pattern into a pre-trained spatial index model to obtain a second corner subunit and a co-linear group to which the second corner subunit belongs; the colinearity group comprises at least four candidate patterns which are colinear with each other, and the second corner point subunit is a candidate pattern which simultaneously belongs to two colinearity groups; and obtaining the graph to be determined according to the second corner subunit and the collinear group to which the second corner subunit belongs.
Specifically, the spatial index may be a grid index based on hashing, or a quadtree or R-tree based on a tree structure. The spatial index model is constructed and trained in advance; the centroid pixel coordinate of each candidate pattern is then input into it, and the corner subunits and their corresponding collinear groups are output directly.
Taking the position information as the centroid pixel coordinates of the candidate patterns and the spatial index model as a quadtree model, for example: the centroid pixel coordinates of all candidate patterns are input into the quadtree model; two corner subunits usable as corners are first found, and it is then checked whether other candidate patterns can form collinear candidate edges with those two corner subunits, completing the determination of the third and fourth corners. Once the corner subunits are determined, the candidate patterns collinear with and located between them are included in a collinear group.
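As one possible realization of the second scheme, the sketch below builds a simple hash-based grid index over the centroids; the cell size is an assumed parameter, and a quadtree or R-tree could be substituted as the description suggests.

```python
from collections import defaultdict

# A hash-based grid index over candidate centroids. Neighbor queries only
# touch nearby cells, avoiding the all-pairs checks of scheme one.

def build_grid_index(points, cell: float = 40.0):
    index = defaultdict(list)
    for i, (x, y) in enumerate(points):
        index[(int(x // cell), int(y // cell))].append(i)
    return index

def neighbor_indices(index, x: float, y: float, cell: float = 40.0):
    """Candidate indices in the 3x3 block of cells around (x, y)."""
    cx, cy = int(x // cell), int(y // cell)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            yield from index.get((cx + dx, cy + dy), [])
```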
S4: determining the cross ratio of each edge of the figure to be determined according to the position information of the M candidate patterns;
Referring to FIG. 3, by the cross-ratio invariance of camera projection, the cross ratio of the collinear points A, B, C, D in the spatial coordinate system equals that of their projections A', B', C', D' in the image coordinate system, i.e. cross-ratio(A, B, C, D) = cross-ratio(A', B', C', D'), where

$$\text{cross-ratio}(A,B,C,D)=\frac{|AB|/|BD|}{|AC|/|CD|},\qquad \text{cross-ratio}(A',B',C',D')=\frac{|A'B'|/|B'D'|}{|A'C'|/|C'D'|}$$
When calculating a cross ratio, the distance between two candidate patterns is computed from their centroid pixel coordinates. For a figure to be determined with 4 collinear patterns per edge, the cross ratio is given by the formula above. For more than 4 collinear patterns, the cross ratio can be defined more flexibly; for example, cross-ratio(A, B, C, D, E) may be taken as $\frac{|AC|/|CE|}{|AB|/|BE|}$ or $\frac{|AD|/|DE|}{|AC|/|CE|}$.
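A small sketch of the cross-ratio computation, using the four-point form reconstructed above (the points must first be ordered along their edge):

```python
import numpy as np

# Cross ratio of four collinear centroids in the form
# (|AB| / |BD|) / (|AC| / |CD|), as reconstructed above.

def cross_ratio(a, b, c, d) -> float:
    dist = lambda p, q: float(np.hypot(q[0] - p[0], q[1] - p[1]))
    return (dist(a, b) / dist(b, d)) / (dist(a, c) / dist(c, d))

# Evenly spaced collinear points always give the same value, regardless of
# viewpoint, which is what makes the cross ratio usable for matching:
print(cross_ratio((0, 0), (1, 0), (2, 0), (3, 0)))  # (1/2) / (2/1) = 0.25
```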
S5: determining, from a positioning-object model library, a target positioning object corresponding to the figure to be determined, according to the cross ratio of each edge; the model library comprises the positioning-object information of a plurality of preset positioning objects, the information including the cross ratio of each edge of each preset positioning object.
Specifically, after the cross ratio of each edge of the figure to be determined has been calculated, it is matched against the cross ratios of the known positioning objects in the model library, thereby determining the positioning-object information corresponding to the figure to be determined.
An alternative cross-ratio matching scheme is:
determining the distance between the graph to be determined and each preset positioning object according to the intersection ratio of each edge in the graph to be determined and the intersection ratio of each edge in the preset positioning objects; and determining the preset positioning object with the distance smaller than a set threshold value as a target positioning object corresponding to the graph to be determined.
Taking the positioning object of FIG. 2 as an example, all cross ratios of the figure to be determined can be combined into a one-dimensional cross-ratio vector $C_0 = (cr_1, cr_2, cr_3, cr_4)$, while the model library stores the cross-ratio vector $C_i = (cr_{i1}, cr_{i2}, cr_{i3}, cr_{i4})$ of each preset positioning object. The distance between $C_0$ and each $C_i$ is then calculated; the Euclidean, Manhattan, Chebyshev, or Mahalanobis distance may be used. A distance of 0, or below the set threshold, indicates that the corresponding preset positioning object matches the figure to be determined in the current positioning scene.
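A sketch of this vector-distance matching, with an assumed model library and threshold:

```python
import numpy as np

# Matching the cross-ratio vector C0 of the figure to be determined against
# a model library by Euclidean distance. The library contents, identifiers,
# and the threshold are assumed illustrative values.

LIBRARY = {
    "locator_01": np.array([0.25, 0.25, 0.25, 0.25]),
    "locator_02": np.array([0.30, 0.22, 0.30, 0.22]),
}

def match_locator(c0: np.ndarray, threshold: float = 0.05):
    best_id, best_d = None, float("inf")
    for loc_id, ci in LIBRARY.items():
        d = float(np.linalg.norm(c0 - ci))   # Euclidean distance
        if d < best_d:
            best_id, best_d = loc_id, d
    return best_id if best_d < threshold else None

print(match_locator(np.array([0.24, 0.26, 0.25, 0.25])))  # -> locator_01
```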
Another optional cross-ratio matching scheme is a hash scheme, as follows:
the method comprises the steps that the information of the positioning objects in a positioning object model base comprises cross ratio feature codes of each preset positioning object, the cross ratio feature codes are obtained by carrying out hash operation on a first sequence by using a hash function, and the first sequence is a cross ratio sequence obtained by sequencing the cross ratios of all edges in the preset positioning objects by a set sequencing method;
when the positioning objects are matched, sequencing all the cross ratios of the graph to be determined according to the set sequencing method to obtain a second sequence; performing hash operation on the second sequence according to the hash function to obtain a feature code to be determined; and according to the feature code to be determined, determining a target positioning object corresponding to the graph to be determined from the positioning object model library, wherein the cross ratio feature code of the target positioning object is the same as the feature code to be determined.
The principle of the positioning-object identification method provided by this embodiment is as follows. Based on the invariance of ellipses under camera projection, candidate patterns shaped as ellipses or ellipse-like figures are found quickly, and most image noise is rejected during the search. Based on the collinearity invariance of camera projection, collinearity matching of the candidate patterns yields a figure to be determined composed of several groups of collinear candidate patterns, while ellipse-like noise that does not belong to any marker is removed. The cross ratio of each edge of the figure to be determined is then computed from the candidate patterns' position information, and, based on the cross-ratio invariance of camera projection, the matching target positioning object is determined from the model library. Once the positioning-object information corresponding to the figure to be determined is obtained, the geometry of the positioning object is known; with its position data, the pose of the camera can be computed, and environment perception and positioning performed accordingly. Because this identification method, based on ellipse identification, collinearity matching, and cross-ratio matching, only requires that the projection of each setting subunit onto the camera's image coordinate system be an ellipse, the shape design of the subunits is simplified, the identification computation is small, identification is fast, and identification resources are saved. The collinearity and cross-ratio matching also allow the subunit arrangement to be adjusted flexibly as required; when the positioning object is partially occluded, partially stained, or partially over-exposed, the unaffected remaining edges can still be determined through collinearity matching, and their cross ratios computed for positioning-object matching.
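To make the final pose step concrete, the sketch below feeds the matched locator's known corner coordinates and their image centroids to OpenCV's PnP solver; the square side length, image points, and camera intrinsics are all assumed illustrative values.

```python
import cv2
import numpy as np

# Once the target locator is matched, its known model coordinates (here the
# corners of an assumed 0.3 m square) and the corresponding image centroids
# go to a PnP solver to recover the camera-locator pose.

object_pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                       [0.3, 0.3, 0.0], [0.0, 0.3, 0.0]], np.float32)
image_pts = np.array([[310.0, 228.0], [420.0, 232.0],
                      [424.0, 340.0], [306.0, 336.0]], np.float32)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]], np.float32)   # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# rvec/tvec give the locator pose in the camera frame; chaining with the
# camera's mounting transform (as in the grasping example) localizes the robot.
```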
Based on the same inventive concept as the foregoing embodiments, yet another alternative embodiment provides a robot system comprising at least one robot and at least one set of positioning objects of the foregoing embodiments, placed in advance at set positions. When the robot reaches a set position, its on-board camera captures an image of the environment, the positioning object in the image is identified, the robot's pose is computed from the positioning object, and the robot is localized based on that pose.
Through one or more of its embodiments, the invention provides the following beneficial effects or advantages:
The invention provides a positioning object for assisting environment positioning. Because projecting the positioning object onto a camera's image coordinate system changes neither the collinearity of the setting subunits nor the cross ratio of each edge determined from the subunit positions, a plurality of setting subunits are arranged into a polygon on the carrier surface, each edge comprising no fewer than 4 setting subunits. Because an ellipse remains an ellipse under this mapping, each setting subunit is made elliptical in the spatial coordinate system, so its projection on the image coordinate system is also elliptical and the subunits can be found by ellipse identification in the image. The arrangement of the subunits can therefore be adjusted flexibly, relying on the invariance of collinearity and cross ratio under the mapping, and the subunits need only be elliptical figures, reducing pattern complexity compared with commonly used visual positioning objects such as ARTag and AprilTag. Through this combined design of subunit shape and arrangement, the positioning object is identified accurately, identification resources are reduced, identification speed is improved, and the positioning object gains a degree of resistance to noise and partial occlusion.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A positioning object for assisting environment positioning, characterized in that the positioning object comprises a carrier object and a positioning unit arranged on a surface of the carrier object, wherein the positioning unit comprises M setting subunits arranged into a polygon; each edge of the polygon comprises no fewer than 4 setting subunits; and each setting subunit is elliptical.
2. The positioning object of claim 1, wherein each setting subunit is configured as a circular ring or a solid circle.
3. The positioning object of claim 1, wherein each setting subunit is configured as an elliptical ring or a solid ellipse.
4. The positioning object of claim 1, wherein each setting subunit is configured to display a figure selected from the group consisting of an orange, an apple, a pear, a peach, a pomegranate, a grape, a litchi, a longan, a plum, a grapefruit, and a watermelon.
5. The positioning object of claim 1, wherein each setting subunit is configured to display a figure selected from the group consisting of a tomato, a potato, a sweet potato, and a white potato.
6. The positioning object of claim 1, wherein each setting subunit is configured to display one of the figures of an egg, a cake, and a pancake.
7. The positioning object of claim 1, wherein each setting subunit is configured to display a figure selected from the group consisting of a soccer ball, a basketball, a volleyball, a table-tennis ball, and a rugby ball.
8. The positioning object of claim 1, wherein each setting subunit is configured to display one of the figures of a bowl, a disc, a clock dial, and a button.
9. The positioning object of claim 1, wherein each setting subunit is displayed as a polygon having no fewer than 6 sides.
10. A robot system, comprising at least one robot and at least one set of positioning objects according to any one of claims 1 to 9, the positioning objects being placed in advance at set positions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111489946.5A | 2021-12-08 | 2021-12-08 | Positioning object for assisting environment positioning and robot system
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111489946.5A | 2021-12-08 | 2021-12-08 | Positioning object for assisting environment positioning and robot system
Publications (1)
Publication Number | Publication Date |
---|---|
CN114012736A (en) | 2022-02-08
Family
ID=80068205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111489946.5A | Positioning object for assisting environment positioning and robot system | 2021-12-08 | 2021-12-08
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114012736A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103697813A (en) * | 2013-12-31 | 2014-04-02 | 中建铁路建设有限公司 | Ballastless track slab dimension detection method and device |
CN107085853A (en) * | 2017-05-04 | 2017-08-22 | 中国矿业大学 | Guide rail single eye stereo vision mining area derrick deformation monitoring method |
US20190038365A1 (en) * | 2016-02-12 | 2019-02-07 | Intuitive Surgical Operations, Inc | Systems and methods of pose estimation and calibration of perspective imaging system in image guided surgery |
CN110443853A (en) * | 2019-07-19 | 2019-11-12 | 广东虚拟现实科技有限公司 | Scaling method, device, terminal device and storage medium based on binocular camera |
CN110490913A (en) * | 2019-07-22 | 2019-11-22 | 华中师范大学 | Feature based on angle point and the marshalling of single line section describes operator and carries out image matching method |
CN110959099A (en) * | 2017-06-20 | 2020-04-03 | 卡尔蔡司Smt有限责任公司 | System, method and marker for determining the position of a movable object in space |
CN112184826A (en) * | 2019-07-05 | 2021-01-05 | 杭州海康机器人技术有限公司 | Calibration plate and calibration method |
- 2021-12-08: CN CN202111489946.5A patent CN114012736A (en), active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113894799B (en) | Robot and marker identification method and device for assisting environment positioning | |
US8824781B2 (en) | Learning-based pose estimation from depth maps | |
CN104732514B (en) | For handling the equipment, system and method for height map | |
Ge et al. | Fruit localization and environment perception for strawberry harvesting robots | |
US9836645B2 (en) | Depth mapping with enhanced resolution | |
US9330307B2 (en) | Learning based estimation of hand and finger pose | |
CN107392086B (en) | Human body posture assessment device, system and storage device | |
Mondéjar-Guerra et al. | Robust identification of fiducial markers in challenging conditions | |
JP2019518297A (en) | Robot Assisted Object Learning Vision System | |
US20220058414A1 (en) | Arbitrary visual features as fiducial elements | |
CN103824275B (en) | Saddle dots structure and the system and method for determining its information are searched in the picture | |
CN105847987A (en) | Method and system for correcting human body actions through television and body feeling accessory component | |
US20150243029A1 (en) | Method and image processing system for determining parameters of a camera | |
CN106650628B (en) | Fingertip detection method based on three-dimensional K curvature | |
CN115609591A (en) | 2D Marker-based visual positioning method and system and composite robot | |
CN110443242A (en) | Read frame detection method, Model of Target Recognition training method and relevant apparatus | |
CN115937203A (en) | Visual detection method, device, equipment and medium based on template matching | |
JP2017003525A (en) | Three-dimensional measuring device | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
CN111179347B (en) | Positioning method, positioning equipment and storage medium based on regional characteristics | |
CN106097362B (en) | The automatic of artificial circular mark detects and localization method in a kind of x-ray image | |
CN114012736A (en) | Positioning object for assisting environment positioning and robot system | |
| Grayscale image enhancement for enhancing features detection in marker-less augmented reality technology | |
Pratomo et al. | Algorithm border tracing vs scanline in blob detection for robot soccer vision system | |
CN113378886B (en) | Method for automatically training shape matching model |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 702, 7th floor, No. 67 Beisihuan West Road, Haidian District, Beijing 100089. Applicant after: Beijing Yunji Technology Co., Ltd. Address before: Room 702, 7/F, 67 North Fourth Ring Road West, Haidian District, Beijing. Applicant before: BEIJING YUNJI TECHNOLOGY Co., Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2022-02-08 |