CN112033408B - Paper-pasted object space positioning system and positioning method - Google Patents
- Publication number
- CN112033408B (application CN202010880379.5A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- target
- points
- image
- color block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/04—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means
Abstract
The invention discloses a sticker-type object space positioning system and positioning method, belonging to the technical field of physical space positioning. The system comprises a digital image processing module, a target pose change identification module and a coordinate system dimension conversion module. The position and orientation of a target are determined from the color blocks extracted by the digital image processing module, and the instantaneous speed, acceleration and angular velocity are calculated from the time difference between consecutive frames and the displacement of the target over that time. Camera calibration yields the intrinsic parameters, distortion parameters and extrinsic parameters, so that the two-dimensional coordinate system can be converted into a three-dimensional coordinate system by matrix transformation, finally giving the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of the target in the three-dimensional coordinate system. The sticker need only be pasted on the target to be recognized; the method is simple and easy to use, highly portable, applicable to many fields of production and daily life, and greatly reduces cost. It features high recognition speed and high positioning accuracy, and its practical effect far exceeds that of the traditional two-dimensional-code positioning scheme.
Description
Technical Field
The invention belongs to the technical field of physical space positioning, and particularly relates to a sticker-type (paper-pasted) object space positioning system and positioning method.
Background
Object positioning methods are widely used in many fields, in particular in multi-target systems that require global visual positioning, such as part identification and grasp-position planning for manufacturing robots, global visual positioning of robotic fish schools, and global visual positioning of warehouse robots. However, existing object positioning methods rely mainly on sensor positioning or on deep-learning models under multi-view cameras, and suffer from poor universality, complex operation, and a lack of simplicity and ease of use; these problems urgently need to be solved.
Disclosure of Invention
The invention aims to solve the above technical problems of the prior art by providing a sticker-type object space positioning method that uses a square-matrix sticker to identify and position an object through changes of its pose, and converts the dimension of the coordinate system so as to obtain the orientation, trajectory, position, instantaneous speed, acceleration, angular velocity and other information of each object in a three-dimensional coordinate system.
The invention adopts the following technical scheme for solving the technical problems:
a paper-pasted object space positioning system comprises a digital image processing module, a target pose change identification module and a coordinate system dimension conversion module;
the digital image processing module is used for acquiring the set of contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with the Canny edge detection algorithm, and removing noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers so that each color block in the image can be extracted more accurately;
the target pose change identification module is used for determining the position and orientation of a target from the color blocks extracted by the digital image processing module, and calculating the instantaneous speed, acceleration, angular velocity and other data from the time difference between consecutive frames and the displacement of the target over that time;
and the coordinate system dimension conversion module is used for obtaining the intrinsic parameters, distortion parameters and extrinsic parameters by checkerboard calibration, so that the two-dimensional coordinate system is converted into a three-dimensional coordinate system by matrix transformation, finally giving the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of the target in the three-dimensional coordinate system.
A positioning method based on the sticker-type object space positioning system specifically comprises the following steps:
step 1, digital image processing: acquire the set of contour points of each connected domain with OpenCV's findContours function, extract each color block from the image with the Canny edge detection algorithm, and remove noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers, thereby accurately extracting each color block in the image;
step 2, target pose change identification: in the color-block cluster corresponding to the target, find the two points farthest apart, namely the two black color blocks on the bottom edge of the sticker, and take the perpendicular bisector of the segment between them to obtain the orientation of the target; with the midpoint of the two black points as the target anchor point, the trajectory, position, instantaneous speed, acceleration, angular velocity and other data of the target are calculated from consecutive frames;
step 3, converting a coordinate system: the method comprises the following specific steps:
step 3.1, converting the two-dimensional coordinate system of the image into a three-dimensional coordinate system;
step 3.2, detect the feature points in the image, such as the checkerboard corner points, to obtain their pixel coordinates, and compute the physical coordinates of the calibration-board corner points from the known checkerboard size and the origin of the world coordinate system; solve the camera intrinsic matrix from the relation between the physical and pixel coordinates, then solve the camera extrinsic matrix for each picture, compute the distortion parameters, and optimize the parameters with the L-M (Levenberg-Marquardt) algorithm;
multiplying the coordinates of a point in the image coordinate system by the inverse intrinsic matrix (with distortion) and the inverse extrinsic matrix yields the position of the point on the working plane in the world coordinate system; the specific calculation formula is:

$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = z_c \, T^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where the left-hand term is the object coordinate in the world coordinate system, $z_c$ is the distance from the camera to the plane, $T^{-1}$ is the inverse of the extrinsic matrix, $K^{-1}$ is the inverse of the product of the intrinsic matrix and the distortion matrix, and the rightmost term $[u,\,v,\,1]^T$ is the image coordinate; the target coordinate system can thus be converted into a three-dimensional coordinate system;
in the three-dimensional coordinate system, from the time difference $\Delta t$ between consecutive frames and the displacement of the target over that time, the formulas $v = \Delta s/\Delta t$, $a = \Delta v/\Delta t$ and $\omega = \Delta\theta/\Delta t$ are used to obtain the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of each target in the three-dimensional coordinate system.
Compared with the prior art, the technical scheme adopted by the invention has the following technical effects:
1. the object positioning method provided by the invention only requires pasting a sticker on the target to be recognized; it is simple and easy to use, highly portable, applicable to many fields of production and daily life, and greatly reduces cost;
2. the method features high recognition speed and high positioning accuracy, and its practical effect far exceeds that of the traditional two-dimensional-code positioning scheme.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic view of the color-block sticker of the present invention;
FIG. 3 is a schematic representation of the marker location of the object of the present invention;
FIG. 4 is a schematic diagram of pose change identification for the targets of the present invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings:
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A paper-pasted object space positioning system comprises a digital image processing module, a target pose change identification module and a coordinate system dimension conversion module;
the digital image processing module is used for acquiring the set of contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with the Canny edge detection algorithm, and removing noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers so that each color block in the image can be extracted more accurately;
the target pose change identification module is used for determining the position and orientation of a target from the color blocks extracted by the digital image processing module, and calculating the instantaneous speed, acceleration, angular velocity and other data from the time difference between consecutive frames and the displacement of the target over that time;
and the coordinate system dimension conversion module is used for obtaining the intrinsic parameters, distortion parameters and extrinsic parameters by checkerboard calibration, so that the two-dimensional coordinate system is converted into a three-dimensional coordinate system by matrix transformation, finally giving the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of the target in the three-dimensional coordinate system.
A sticker-type object space positioning method, as shown in FIG. 1, specifically comprises the following steps:
step 1, digital image processing: acquire the set of contour points of each connected domain with OpenCV's findContours function, extract each color block from the image with the Canny edge detection algorithm, and remove noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers, thereby accurately extracting each color block in the image;
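The range-exclusion step above can be sketched as follows. This is an illustration, not the patented implementation: the function name and the 50% tolerance are assumptions chosen for the example, and the areas would in practice come from cv2.contourArea applied to the cv2.findContours output.

```python
from statistics import median

def range_exclusion(areas, tolerance=0.5):
    """Keep only contour areas close to the median area.

    Every color block on the sticker has (nearly) the same printed
    size, so any connected domain whose area deviates from the median
    by more than the fractional `tolerance` is treated as noise.
    """
    m = median(areas)
    lo, hi = m * (1 - tolerance), m * (1 + tolerance)
    return [a for a in areas if lo <= a <= hi]

# Hypothetical areas from cv2.contourArea: 3 and 5000 are noise blobs.
print(range_exclusion([100, 104, 98, 3, 101, 5000, 99]))
# → [100, 104, 98, 101, 99]
```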
step 2, target pose change identification: in the color-block cluster corresponding to the target, find the two points farthest apart, namely the two black color blocks on the bottom edge of the sticker, and take the perpendicular bisector of the segment between them to obtain the orientation of the target; with the midpoint of the two black points as the target anchor point, the trajectory, position, instantaneous speed, acceleration, angular velocity and other data of the target are calculated from consecutive frames;
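The farthest-pair search and perpendicular-bisector construction of step 2 can be sketched as follows; this is an illustration, not the patented implementation. The function name is hypothetical, and the sign of the heading, which a real sticker would disambiguate with its colored blocks, is fixed arbitrarily here.

```python
import math
from itertools import combinations

def pose_from_blocks(centers):
    """Anchor point and heading of one sticker's color-block cluster.

    The two centers farthest apart are taken to be the two black blocks
    on the sticker's bottom edge; their midpoint is the anchor point,
    and the direction of their perpendicular bisector is the heading.
    """
    p, q = max(combinations(centers, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    anchor = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    # Rotate the base segment p->q by 90 degrees to get the facing
    # direction; the choice of +90 over -90 is arbitrary in this sketch.
    heading = math.atan2(q[1] - p[1], q[0] - p[0]) + math.pi / 2
    return anchor, heading

anchor, heading = pose_from_blocks([(0.0, 0.0), (4.0, 0.0), (2.0, 1.0)])
print(anchor)  # → (2.0, 0.0)
```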
step 3, converting a coordinate system: the method comprises the following specific steps:
step 3.1, converting the two-dimensional coordinate system of the image into a three-dimensional coordinate system;
step 3.2, detect the feature points in the image, such as the checkerboard corner points, to obtain their pixel coordinates, and compute the physical coordinates of the calibration-board corner points from the known checkerboard size and the origin of the world coordinate system; solve the camera intrinsic matrix from the relation between the physical and pixel coordinates, then solve the camera extrinsic matrix for each picture, compute the distortion parameters, and optimize the parameters with the L-M (Levenberg-Marquardt) algorithm;
multiplying the coordinates of a point in the image coordinate system by the inverse intrinsic matrix (with distortion) and the inverse extrinsic matrix yields the position of the point on the working plane in the world coordinate system; the specific calculation formula is:

$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = z_c \, T^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where the left-hand term is the object coordinate in the world coordinate system, $z_c$ is the distance from the camera to the plane, $T^{-1}$ is the inverse of the extrinsic matrix, $K^{-1}$ is the inverse of the product of the intrinsic matrix and the distortion matrix, and the rightmost term $[u,\,v,\,1]^T$ is the image coordinate; the target coordinate system can thus be converted into a three-dimensional coordinate system;
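The back-projection step can be sketched numerically as follows. This is a minimal illustration under stated assumptions: the scene is planar so both $K$ and $T$ are modeled as invertible 3x3 matrices, lens distortion is ignored, and the intrinsic values and identity extrinsics are made up for the example.

```python
def mat_inv3(M):
    """Inverse of a 3x3 matrix via the adjugate (assumes det != 0)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def matvec(M, v):
    return [sum(M[r][k] * v[k] for k in range(3)) for r in range(3)]

def pixel_to_world(u, v, K, T, z_c):
    """X_world = z_c * T^-1 * K^-1 * [u, v, 1]^T (planar model)."""
    ray = [z_c * x for x in matvec(mat_inv3(K), [u, v, 1.0])]
    return matvec(mat_inv3(T), ray)

# Made-up intrinsics (fx = fy = 800, principal point at 320, 240) and
# identity extrinsics; z_c = 2 m from camera to working plane.
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(pixel_to_world(400, 240, K, T, 2.0))  # ≈ [0.2, 0.0, 2.0]
```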
in the three-dimensional coordinate system, from the time difference $\Delta t$ between consecutive frames and the displacement of the target over that time, the formulas $v = \Delta s/\Delta t$, $a = \Delta v/\Delta t$ and $\omega = \Delta\theta/\Delta t$ are used to obtain the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of each target in the three-dimensional coordinate system.
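The frame-difference kinematics can be written out as follows; a sketch assuming a fixed frame interval dt, with illustrative function names.

```python
import math

def speed(p0, p1, dt):
    """Instantaneous speed from the displacement between two frames."""
    return math.dist(p0, p1) / dt

def acceleration(p0, p1, p2, dt):
    """Finite difference of two successive speeds over three frames."""
    return (speed(p1, p2, dt) - speed(p0, p1, dt)) / dt

def angular_velocity(theta0, theta1, dt):
    """Heading change per second, wrapped into [-pi, pi)."""
    d = (theta1 - theta0 + math.pi) % (2 * math.pi) - math.pi
    return d / dt

dt = 0.04  # 25 frames per second
print(speed((0.0, 0.0), (0.1, 0.0), dt))    # ≈ 2.5 m/s
print(angular_velocity(0.0, 0.02, dt))      # ≈ 0.5 rad/s
```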
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention. While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (1)
1. A sticker-type object space positioning system, characterized in that: the system comprises a digital image processing module, a target pose change identification module and a coordinate system dimension conversion module;
the digital image processing module is used for acquiring the set of contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with the Canny edge detection algorithm, and removing noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers so that each color block in the image can be extracted more accurately;
the target pose change identification module is used for determining the position and orientation of a target from the color blocks extracted by the digital image processing module, and calculating the instantaneous speed, acceleration and angular velocity from the time difference between consecutive frames and the displacement of the target over that time;
the coordinate system dimension conversion module is used for obtaining the intrinsic parameters, distortion parameters and extrinsic parameters by checkerboard calibration, so that the two-dimensional coordinate system is converted into a three-dimensional coordinate system by matrix transformation, finally giving the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of the target in the three-dimensional coordinate system;
the positioning method specifically comprises the following steps:
step 1, digital image processing: acquire the set of contour points of each connected domain with OpenCV's findContours function, extract each color block from the image with the Canny edge detection algorithm, and remove noise from the image color blocks by a range-exclusion method based on the identical size of the color blocks, cleaning away outliers, thereby accurately extracting each color block in the image;
step 2, target pose change identification: in the color-block cluster corresponding to the target, find the two points farthest apart and take the perpendicular bisector of the segment between them; these two farthest points are the two black color blocks on the bottom edge of the sticker, and the bisector gives the orientation of the target; with the midpoint of the two black points as the target anchor point, calculate the trajectory, position, instantaneous speed, acceleration and angular velocity of the target from consecutive frames;
step 3, converting a coordinate system: the method comprises the following specific steps:
step 3.1, converting the two-dimensional coordinate system of the image into a three-dimensional coordinate system;
step 3.2, detect the feature points, including the checkerboard corner points, in the image to obtain their pixel coordinates, and compute the physical coordinates of the calibration-board corner points from the known checkerboard size and the origin of the world coordinate system; solve the camera intrinsic matrix from the relation between the physical and pixel coordinates, then solve the camera extrinsic matrix for each picture, compute the distortion parameters, and optimize the intrinsic, extrinsic and distortion parameters with the L-M (Levenberg-Marquardt) algorithm;
multiplying the coordinates of a point in the image coordinate system by the inverse intrinsic matrix (with distortion) and the inverse extrinsic matrix yields the position of the point on the working plane in the world coordinate system; the specific calculation formula is:

$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = z_c \, T^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where the left-hand term is the object coordinate in the world coordinate system, $z_c$ is the distance from the camera to the plane, $T^{-1}$ is the inverse of the extrinsic matrix, $K^{-1}$ is the inverse of the product of the intrinsic matrix and the distortion matrix, and the rightmost term $[u,\,v,\,1]^T$ is the image coordinate; the target coordinate system is thereby converted into a three-dimensional coordinate system; in the three-dimensional coordinate system, from the time difference between consecutive frames and the displacement of the target over that time, the formulas $v = \Delta s/\Delta t$, $a = \Delta v/\Delta t$ and $\omega = \Delta\theta/\Delta t$ give the orientation, trajectory, position, instantaneous speed, acceleration and angular velocity of each target in the three-dimensional coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010880379.5A CN112033408B (en) | 2020-08-27 | 2020-08-27 | Paper-pasted object space positioning system and positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112033408A (en) | 2020-12-04 |
CN112033408B (en) | 2022-09-30 |
Family
ID=73586060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010880379.5A Active CN112033408B (en) | 2020-08-27 | 2020-08-27 | Paper-pasted object space positioning system and positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112033408B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784942B (en) * | 2020-12-29 | 2022-08-23 | 浙江大学 | Special color block coding method for positioning navigation in large-scale scene |
CN112833883B (en) * | 2020-12-31 | 2023-03-10 | 杭州普锐视科技有限公司 | Indoor mobile robot positioning method based on multiple cameras |
CN113674362B (en) * | 2021-08-24 | 2023-06-27 | 北京理工大学 | Indoor imaging positioning method and system based on spatial modulation |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08315140A (en) * | 1995-05-15 | 1996-11-29 | Canon Inc | Image processor and its method |
CN104866859A (en) * | 2015-05-29 | 2015-08-26 | 南京信息工程大学 | High-robustness visual graphical sign and identification method thereof |
CN106127203A (en) * | 2016-06-29 | 2016-11-16 | 孟祥雨 | A kind of device to object location and followed the trail of and the method for image recognition |
CN107123146A (en) * | 2017-03-20 | 2017-09-01 | 深圳市华汉伟业科技有限公司 | The mark localization method and system of a kind of scaling board image |
CN107239748A (en) * | 2017-05-16 | 2017-10-10 | 南京邮电大学 | Robot target identification and localization method based on gridiron pattern calibration technique |
CN107481287A (en) * | 2017-07-13 | 2017-12-15 | 中国科学院空间应用工程与技术中心 | It is a kind of based on the object positioning and orientation method and system identified more |
CN107622499A (en) * | 2017-08-24 | 2018-01-23 | 中国东方电气集团有限公司 | A kind of identification and space-location method based on target two-dimensional silhouette model |
CN108198199A (en) * | 2017-12-29 | 2018-06-22 | 北京地平线信息技术有限公司 | Moving body track method, moving body track device and electronic equipment |
WO2020031950A1 (en) * | 2018-08-07 | 2020-02-13 | 日本電信電話株式会社 | Measurement calibration device, measurement calibration method, and program |
CN110954063A (en) * | 2018-09-27 | 2020-04-03 | 北京自动化控制设备研究所 | Optical relative measurement method for unmanned aerial vehicle landing recovery |
CN111191759A (en) * | 2019-08-26 | 2020-05-22 | 上海懒书智能科技有限公司 | Two-dimensional code generation method and positioning and decoding method based on GPU |
Non-Patent Citations (1)
Title |
---|
Design of a two-dimensional positioning system based on OpenCV; Yu Shaopeng et al.; Journal of Guizhou University (Natural Science Edition); 2010-10-15; No. 05; pp. 63-66 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112033408B (en) | Paper-pasted object space positioning system and positioning method | |
CN107610176B (en) | Pallet dynamic identification and positioning method, system and medium based on Kinect | |
CN110068270B (en) | Monocular vision box volume measuring method based on multi-line structured light image recognition | |
CN103411553B (en) | The quick calibrating method of multi-linear structured light vision sensors | |
CN111340797A (en) | Laser radar and binocular camera data fusion detection method and system | |
CN110261870A (en) | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method | |
CN104598883B (en) | Target knows method for distinguishing again in a kind of multiple-camera monitoring network | |
CN105243664B (en) | A kind of wheeled mobile robot fast-moving target tracking method of view-based access control model | |
CN112132857B (en) | Dynamic object detection and static map reconstruction method of dynamic environment hybrid vision system | |
CN104200495A (en) | Multi-target tracking method in video surveillance | |
CN110223355B (en) | Feature mark point matching method based on dual epipolar constraint | |
CN110021029B (en) | Real-time dynamic registration method and storage medium suitable for RGBD-SLAM | |
CN104794737A (en) | Depth-information-aided particle filter tracking method | |
CN107097256B (en) | Model-free method for tracking target of the view-based access control model nonholonomic mobile robot under polar coordinates | |
CN103778436A (en) | Pedestrian gesture inspecting method based on image processing | |
Yuan et al. | Combining maps and street level images for building height and facade estimation | |
CN112197705A (en) | Fruit positioning method based on vision and laser ranging | |
CN112652020A (en) | Visual SLAM method based on AdaLAM algorithm | |
CN110910389B (en) | Laser SLAM loop detection system and method based on graph descriptor | |
Liao et al. | Se-calib: Semantic edges based lidar-camera boresight online calibration in urban scenes | |
CN109064536B (en) | Page three-dimensional reconstruction method based on binocular structured light | |
CN112734844B (en) | Monocular 6D pose estimation method based on octahedron | |
CN111340884B (en) | Dual-target positioning and identity identification method for binocular heterogeneous camera and RFID | |
CN111932617B (en) | Method and system for realizing real-time detection and positioning of regular objects | |
CN115144828B (en) | Automatic online calibration method for intelligent automobile multi-sensor space-time fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||