CN115496898B - Mobile robot target positioning method and system - Google Patents
- Publication number: CN115496898B (application CN202211432356.3A)
- Authority
- CN
- China
- Legal status (assumption, not a legal conclusion): Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The application relates to the technical field of target identification, and in particular to a target positioning method and system for a mobile robot. The method comprises the following steps: a target identification step, in which a target color image and a depth image are collected and the target color image is identified to obtain the color area frame of the target in the color image; a depth information acquisition step, in which a camera model is established, the cameras are calibrated, a registration relationship is established, and the registration relationship is iteratively optimized based on an optimization error function to obtain the depth region frame of the target in the depth image; a point cloud data acquisition step, in which the effective depth data in the depth region frame are acquired, the obstacle avoidance target depth data are selected from the effective depth data, and the point cloud data of the target in the depth camera coordinate system are obtained; and a target positioning step, in which the point cloud data are converted into the color camera coordinate system to determine the target pose, and the obstacle avoidance target area of the mobile robot is obtained by fitting. The method and system solve the problem of registering the color image to the depth image, reduce the operation cost, and are simple and convenient.
Description
Technical Field
The application relates to the technical field of target identification, in particular to a target positioning method and system for a mobile robot.
Background
With the development of artificial intelligence, research on intelligent mobile robots has become increasingly extensive. At present, traditional methods are mostly adopted for target identification, and their identification accuracy suffers from the diversity of object forms, illumination and backgrounds. Target detection methods based on deep convolutional neural networks are more robust and more accurate. After the target is correctly identified, estimating the pose of the target is the next critical task.
The Kinect is a depth camera. Kinect v2 comprises a color camera, a depth (infrared) sensor, infrared (IR) emitters and a microphone array. The color camera shoots color video and images within its viewing angle; the IR emitters project a near-infrared pattern; the depth sensor analyses the reflected infrared light and obtains depth information from the return time of the projected rays, creating depth images of the human bodies and objects in the visual range. Existing mobile robot target positioning methods generally obtain a color image and a depth image of the scene from a Kinect v2 camera to complete the positioning of a target.
As disclosed in patent CN112136505A, for the registration of the color image and the depth image, existing target identification and positioning methods mostly use the mapping function provided by the SDK (Software Development Kit) to complete the registration between the two and thereby fuse their information. This method, however, lacks flexibility: because the depth camera and the color camera do not coincide, part of the depth image cannot be captured by the color camera in an actual scene, so the registration effect is poor; moreover, for display purposes the depth image data are often scaled by a certain multiple, which makes the registration result inaccurate. The registration principle is mainly to convert depth camera coordinates into RGB camera coordinates and then into RGB plane coordinates, so the method is limited to registration from the depth image to the color image.
If instead the registration is carried out from the color image to the depth image, the numbers of image pixels corresponding to a unit length of the Kinect v2 CCD image plane, N_x and N_y, cannot be obtained from published data. The prior art therefore cannot directly derive the registration relationship when no depth information is known: lacking the registration relationship, even if the pixel value of the color image is obtained, the corresponding coordinate value on the depth image cannot be determined, and accurate target positioning of the target scene cannot be realized.
Disclosure of Invention
The embodiments of the application provide a mobile robot target positioning method and system which at least solve the problem of color-image-to-depth-image registration in the related art, realize target identification and positioning only by means of a Kinect v2 camera, reduce the operation cost, and are simple and convenient.
In a first aspect, an embodiment of the present application provides a mobile robot target positioning method, including:
a target identification step, namely acquiring a target color image and a depth image, and identifying the target color image through a pre-trained MobileNetV3-SSD model to obtain the color area frame of the target in the color image, the target category and the pixel coordinates (u_rgb, v_rgb);
a depth information obtaining step, namely establishing a Kinect v2 camera model, calibrating the cameras to obtain the internal parameters, distortion parameters and external parameters of the color camera and the depth camera, establishing a registration relationship for converting the color image into the depth image, performing iterative optimization on the registration relationship based on an optimization error function, and obtaining the depth region frame of the target in the depth image based on the optimized registration relationship, wherein the optimization error function is calculated through the following calculation model:

f_i = ‖p_rgb_i − p_rgb0‖ = √((u_rgb_i − u_rgb0)² + (v_rgb_i − v_rgb0)²),

wherein p_rgb_i = (u_rgb_i, v_rgb_i) is the color projection coordinate of the target point in the color image plane at iteration i, p_rgb0 = (u_rgb0, v_rgb0) is the real pixel coordinate of the target point on the color image, i = 1, 2, …, n, and n is the number of iterations;
a point cloud data acquisition step, namely acquiring all effective depth data in the depth area frame by traversing the depth area frame, selecting obstacle avoidance target depth data in the effective depth data based on a preset obstacle avoidance range, and acquiring point cloud data of a target under a depth camera coordinate system;
and a target positioning step, namely converting the point cloud data into a color camera coordinate system to determine the pose of the target under the color camera coordinate system, and fitting to obtain an obstacle avoidance target area of the mobile robot.
In some embodiments, the depth information obtaining step includes:
a registration relationship establishing step, namely establishing a Kinect v2 camera model and calibrating the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera and the external parameters R_depth and T_depth of the depth camera, then calculating the rotation matrix and translation vector between the color camera and depth camera coordinate systems, and establishing the registration relationship of the color image to the depth image, wherein the registration relationship is calculated based on the following calculation model:

ρ_1·p_depth = ρ_2·A_depth·R_depth·R_rgb^-1·A_rgb^-1·p_rgb + A_depth·(T_depth − R_depth·R_rgb^-1·T_rgb),

wherein ρ_1 is the depth information in the depth image, ρ_2 is the depth information corresponding to the color image, p_rgb is the color projection coordinate of the target point in the color image plane, and p_depth is the depth projection coordinate of the target point in the depth image plane;
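The registration relation above can be sketched numerically. The following is a minimal illustration, not the patent's implementation: every intrinsic and extrinsic value below is an assumed placeholder (not a calibrated Kinect v2 parameter), and the net rotation R_depth·R_rgb^-1 is approximated by the identity for brevity.

```python
def mat_vec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# assumed depth-camera intrinsic matrix A_depth (placeholder values)
A_depth = [[365.0, 0.0, 256.0],
           [0.0, 365.0, 212.0],
           [0.0,   0.0,   1.0]]
# assumed inverse of the color-camera intrinsic matrix A_rgb (f = 1050 px)
A_rgb_inv = [[1.0 / 1050.0, 0.0, -960.0 / 1050.0],
             [0.0, 1.0 / 1050.0, -540.0 / 1050.0],
             [0.0, 0.0, 1.0]]
# assumed net translation term A_depth acts on: T = T_depth - R*T_rgb (metres), R = I
T = [0.052, 0.0, 0.0]

def color_to_depth_pixel(u_rgb, v_rgb, rho2):
    """Evaluate rho_1*p_depth = rho_2*A_depth*R*A_rgb^-1*p_rgb + A_depth*T with R = I."""
    ray = mat_vec(A_rgb_inv, [u_rgb, v_rgb, 1.0])   # back-projected color ray
    P = [rho2 * ray[i] + T[i] for i in range(3)]    # 3-D point in the depth frame
    q = mat_vec(A_depth, P)                         # homogeneous depth projection
    return q[0] / q[2], q[1] / q[2]                 # divide by rho_1 = q[2]
```

Note how ρ_2 (the depth of the point seen from the color camera) is required before the mapping can be evaluated; this is exactly the unknown that the iterative optimization below resolves.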
and a registration iterative optimization step, namely performing iterative optimization on registration based on the real rotation translation relation of the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
In some of these embodiments, the registration iterative optimization step further comprises:
a depth value obtaining step, namely rotating the depth camera after the color camera and the depth camera are set in an approximately parallel relationship, so that the color camera and the depth camera are in the real rotation-translation relationship, and collecting the depth values of the target point in the depth image under the corresponding relationships to obtain an initial depth value ρ_10 = z_depth0 and an updated depth value ρ_11 = z_depth1;

an optimization error function establishing step, namely converting the depth camera coordinate P_depth1 into the color projection coordinate p_rgb1 according to the real rotation-translation relationship and the respective external parameters of the depth camera and color camera coordinate systems, and establishing the optimization error function;

an optimization error function iteration step, namely configuring an iteration range, iterating over the depth value ρ_1i = z_depthi, taking the depth value with the minimum error function, and converting the corresponding depth projection coordinate through the coordinate-system transformation to obtain the color camera coordinate of the target point, thereby realizing the registration of the target point from the color image to the depth image.
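The iteration step above can be sketched as a simple depth sweep: each candidate depth ρ_1i is projected into the color image plane and the candidate minimizing the reprojection error against the real color pixel p_rgb0 is kept. All camera parameters below are assumed illustrative values, and the rotation between the cameras is approximated as identity; the actual method uses the calibrated rotation-translation relation.

```python
import math

f_d, c_d = 365.0, (256.0, 212.0)    # assumed depth intrinsics
f_c, c_c = 1050.0, (960.0, 540.0)   # assumed color intrinsics
T = [0.052, 0.0, 0.0]               # assumed depth->color translation (m)

def project_to_color(p_depth_px, rho1):
    """Back-project a depth pixel at depth rho1, move into the color frame (R = I), reproject."""
    X = (p_depth_px[0] - c_d[0]) * rho1 / f_d
    Y = (p_depth_px[1] - c_d[1]) * rho1 / f_d
    P = [X - T[0], Y - T[1], rho1 - T[2]]           # point in the color frame
    return (f_c * P[0] / P[2] + c_c[0], f_c * P[1] / P[2] + c_c[1])

def best_depth(p_depth_px, p_rgb0, candidates):
    """Pick the candidate depth rho_1i with the smallest error f_i = ||p_rgb_i - p_rgb0||."""
    def err(rho):
        u, v = project_to_color(p_depth_px, rho)
        return math.hypot(u - p_rgb0[0], v - p_rgb0[1])
    return min(candidates, key=err)
```

With a synthetic ground-truth depth, the sweep recovers it because the reprojection error vanishes only at the true depth.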
In some embodiments, the effective depth data is calculated by converting the depth image into a grayscale image based on the following calculation model:
depth_value=k×gray_value
wherein k is a scale factor and gray_value is the gray value of the pixel.
In some embodiments, the point cloud data are calculated by the following calculation model:

x = (u − c_x)·d / f_x,  y = (v − c_y)·d / f_y,  z = d,

wherein (u, v) is a coordinate point on the depth image, d is the depth value of the coordinate point, f_x and f_y denote the focal length f of the depth camera measured in pixels in the x and y directions, and c_x and c_y denote the offset of the optical axis from the center of the projection plane coordinates.
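A minimal sketch of this back-projection, together with the traversal of a depth region frame that keeps only valid (non-zero) depth data; the intrinsic values in the test are assumed placeholders, not calibrated parameters:

```python
def pixel_to_point(u, v, d, fx, fy, cx, cy):
    """Invert the pinhole model: x = (u-cx)*d/fx, y = (v-cy)*d/fy, z = d."""
    return ((u - cx) * d / fx, (v - cy) * d / fy, d)

def region_to_cloud(depth_box, fx, fy, cx, cy):
    """Traverse a depth region frame {(u, v): d} and keep valid (d > 0) points."""
    return [pixel_to_point(u, v, d, fx, fy, cx, cy)
            for (u, v), d in depth_box.items() if d > 0]
```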
In some embodiments, the obstacle avoidance target area of the mobile robot is a spherical surface, and the sphere is solved by constructing an objective function and setting the partial derivatives of the objective function with respect to its parameter vector to 0;

wherein the spherical surface is represented as:

(x − a)² + (y − b)² + (z − c)² = r²,

and the objective function is represented as:

F(a, b, c, r) = Σ_i [(x_i − a)² + (y_i − b)² + (z_i − c)² − r²]²,

wherein (a, b, c) is the sphere center, r is the radius, and (x_i, y_i, z_i) are the points of the target point cloud.
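The least-squares sphere fit implied by the objective function can be sketched as follows. After linearizing the sphere equation, setting the partial derivatives to zero reduces to a small linear system; the concrete solver below (normal equations plus Gaussian elimination, in pure Python) is an illustrative assumption, not the patent's stated procedure.

```python
import math

def fit_sphere(points):
    # Linearise (x-a)^2 + (y-b)^2 + (z-c)^2 = r^2 as
    # 2a*x + 2b*y + 2c*z + k = x^2+y^2+z^2, with k = r^2 - a^2 - b^2 - c^2,
    # then solve the normal equations A^T A s = A^T b by Gaussian elimination.
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in points]
    b = [x * x + y * y + z * z for x, y, z in points]
    n = 4
    M = [[sum(row[p] * row[q] for row in A) for q in range(n)] for p in range(n)]
    rhs = [sum(A[i][p] * b[i] for i in range(len(A))) for p in range(n)]
    for col in range(n):                       # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n):
                M[r][c2] -= f * M[col][c2]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        sol[r] = (rhs[r] - sum(M[r][c2] * sol[c2] for c2 in range(r + 1, n))) / M[r][r]
    a, bb, c, k = sol
    return (a, bb, c), math.sqrt(k + a * a + bb * bb + c * c)
```

For points lying exactly on a sphere the linearized solution coincides with the minimizer of F.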
In some embodiments, the obstacle avoidance target region further comprises a plane formed by the maximum distance of the detected obstacle avoidance target, and the obstacle avoidance target region is calculated by the following calculation model:

z = z_max + δ,

wherein z_max is the maximum distance to the target and δ is a preset obstacle avoidance safety margin.
In a second aspect, an embodiment of the present application provides a mobile robot target positioning system, configured to implement the mobile robot target positioning method according to the first aspect, including:
a target identification module for collecting a target color image and a depth image, and identifying the target color image through a pre-trained MobileNetV3-SSD model to obtain the color area frame of the target in the color image, the target category and the pixel coordinates (u_rgb, v_rgb);
The depth information acquisition module is used for establishing a Kinect v2 camera model and calibrating a camera to obtain internal parameters, distortion parameters and external parameters of a color camera and the depth camera so as to establish a registration relation for converting a color image into a depth image, carrying out iterative optimization on the registration relation based on an optimized error function, and obtaining a depth region frame of a target in the depth image based on the optimized registration relation;
the point cloud data acquisition module is used for acquiring all effective depth data in the depth area frame by traversing the depth area frame, selecting obstacle avoidance target depth data in the effective depth data based on a preset obstacle avoidance range, and acquiring point cloud data of a target under a depth camera coordinate system;
and the target positioning module is used for converting the point cloud data into a color camera coordinate system so as to determine the pose of the target under the color camera coordinate system, and fitting to obtain an obstacle avoidance target area of the mobile robot.
In some embodiments, the depth information obtaining module comprises:
a registration relationship establishing module for establishing a Kinect v2 camera model and calibrating the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera and the external parameters R_depth and T_depth of the depth camera, then calculating the rotation matrix and translation vector between the color camera and depth camera coordinate systems, and establishing the registration relationship of the color image to the depth image, wherein the registration relationship is calculated based on the following calculation model:

ρ_1·p_depth = ρ_2·A_depth·R_depth·R_rgb^-1·A_rgb^-1·p_rgb + A_depth·(T_depth − R_depth·R_rgb^-1·T_rgb),

wherein ρ_1 is the depth information in the depth image, ρ_2 is the depth information corresponding to the color image, p_rgb is the color projection coordinate of the target point in the color image plane, and p_depth is the depth projection coordinate of the target point in the depth image plane;
and the registration iterative optimization module is used for performing iterative optimization on registration based on the real rotational-translational relation between the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
In some of these embodiments, the registration iterative optimization module further comprises:
a depth value obtaining module for rotating the depth camera after the color camera and the depth camera are set in an approximately parallel relationship, so that the color camera and the depth camera are in the real rotation-translation relationship, and collecting the depth values of the target point in the depth image to obtain an initial depth value ρ_10 = z_depth0 and an updated depth value ρ_11 = z_depth1;

an optimization error function establishing module for converting the depth camera coordinate P_depth1 into the color projection coordinate p_rgb1 according to the real rotation-translation relationship and the respective external parameters of the depth camera and color camera coordinate systems, and establishing the optimization error function;

an optimization error function iteration module for configuring an iteration range, iterating over the depth value ρ_1i = z_depthi, taking the depth value with the minimum error function, and converting the corresponding depth projection coordinate through the coordinate-system transformation to obtain the color camera coordinate of the target point, realizing the registration of the target point from the color image to the depth image.
Compared with the related art, the mobile robot target positioning method and system provided by the embodiments of the application realize registration optimization from the color image to the depth image without depending on an SDK toolkit, realize information fusion of the color image and the depth image, reduce the operation cost, and are simple and convenient; target recognition is carried out based on a convolutional neural network algorithm, which improves the accuracy and robustness of target recognition and thereby improves the recognition accuracy of the mobile robot's visual system.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a mobile robot target location method according to an embodiment of the application;
FIG. 2 is a flow chart of a substep of a mobile robot target location method according to an embodiment of the present application;
FIG. 3 is a flow chart of a mobile robot target location method in accordance with a preferred embodiment of the present application;
FIG. 4 is a schematic diagram of an identified region box according to an embodiment of the present application;
FIG. 5 is a schematic view of a camera model according to an embodiment of the present application;
fig. 6 is a schematic diagram of a registration principle according to an embodiment of the present application;
fig. 7 is a schematic diagram of a registration optimization principle according to an embodiment of the present application;
fig. 8 is a schematic diagram of an obstacle avoidance target area according to an embodiment of the present application;
fig. 9 is a block diagram of a mobile robotic target positioning system according to an embodiment of the present application.
In the figure:
1. a target identification module; 2. a depth information acquisition module; 3. a point cloud data acquisition module;
4. a target positioning module;
21. a registration relationship establishing module; 22. a registration iterative optimization module;
221. a depth value acquisition module; 222. an optimization error function establishing module;
223. and optimizing an error function iteration module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the application, and a person skilled in the art can apply the application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that although such a development effort might be complex and tedious, it would nevertheless be a routine undertaking of design, fabrication and manufacture for those of ordinary skill having the benefit of this disclosure, without departing from the scope of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by one of ordinary skill in the art that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The use of the terms "including," "comprising," "having," and any variations thereof herein, is meant to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
MobileNet v3 network: a training network architecture published by Google in 2019, comprising a first convolution layer conv2d, a plurality of bneck structures, a second convolution layer conv2d, a pooling layer pool, a third convolution layer conv2d and a fourth convolution layer conv2d.
The application mainly addresses the target positioning problem of a mobile robot in an indoor environment, with a Kinect v2 camera serving as the perception sensor of the mobile robot. In the target positioning process, the target is identified in the color image, the registration and optimization of the color image and depth image are achieved by establishing a camera model and calibrating the cameras, and the color image is then registered to the depth image.
An embodiment of the present application provides a mobile robot target positioning method, and fig. 1 to 3 are flowcharts of a mobile robot target positioning method according to an embodiment of the present application, and as shown in fig. 1 to 3, the flowchart includes the following steps:
a target identification step S1, collecting a target color image and a depth image, and identifying the target color image through a pre-trained MobileNetV3-SSD model to obtain the color area frame of the target in the color image, as shown in FIG. 4, thereby obtaining the target category and the pixel coordinates (u_rgb, v_rgb); the MobileNetV3-SSD model uses the SSD network as its meta-structure, removes the pooling layer of the MobileNetV3 network and the convolution layers behind it, and adds four feature extraction layers, so that six feature extraction layers generate a series of region frames of fixed sizes; the categories within the frames are scored, and the problem of region overlap in the detection result is handled by non-maximum suppression.
It should be noted that the MobileNetV3-SSD model of the present application is built using the PyTorch framework and is trained and tested on the Pascal VOC2007+2012 dataset, which has 20 categories. The training data comprise 16551 images from the VOC2007 and VOC2012 training and validation sets; the test data are the 4952 images of the VOC2007 test set. Training is performed on a GPU, with the input image size set to 512 × 512 and 12K training steps.
And a depth information obtaining step S2, establishing a Kinect v2 camera model and calibrating a camera to obtain internal parameters, distortion parameters and external parameters of a color camera and the depth camera so as to establish a registration relation for converting a color image into a depth image, performing iterative optimization on the registration relation based on an optimized error function, and obtaining a depth region frame of a target in the depth image based on the optimized registration relation.
In some embodiments, the depth information obtaining step S2 includes:
a registration relationship establishing step S21, establishing a Kinect v2 camera model and calibrating the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera and the external parameters R_depth and T_depth of the depth camera, calculating the rotation matrix and translation vector between the color camera and depth camera coordinate systems, and establishing the registration relationship of color image to depth image conversion;
and a registration iterative optimization step S22, which is used for performing iterative optimization on registration based on the real rotational-translational relation between the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
In the registration relationship establishing step S21 of the embodiment of the present application, a Kinect v2 camera model is established according to the pinhole imaging and projection principle; the camera model is shown in fig. 5. The conversion relationship from the image coordinate system OXY to the pixel coordinate system ouv is as follows:

u = N_x·X + u_0,  v = N_y·Y + v_0,  (1)

wherein N_x and N_y are the numbers of image pixels corresponding to a unit length of the image plane of the camera CCD (Charge-Coupled Device), and (u_0, v_0) is the pixel coordinate of the principal point.
The conversion from the camera coordinate system o_c x_c y_c z_c into the image coordinate system OXY is expressed as follows:

ρ·X = f·x_c,  ρ·Y = f·y_c,  ρ = z_c,  (2)

wherein ρ is a proportionality coefficient and f is the camera focal length.

Based on expressions (1) and (2), the conversion relationship from the camera coordinate system o_c x_c y_c z_c to the pixel coordinate system ouv can be calculated as follows:

ρ·[u, v, 1]^T = A·[x_c, y_c, z_c]^T,  A = [[N_x·f, 0, u_0], [0, N_y·f, v_0], [0, 0, 1]],  (3)

wherein A is the camera internal parameter matrix.
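The chained conversion of formula (3) can be sketched as code: the camera-frame point is first perspectively projected as in formula (2), then converted to pixel coordinates as in formula (1). The numeric parameters in the test are arbitrary illustrative values, not real calibration data.

```python
def camera_to_pixel(xc, yc, zc, f, Nx, Ny, u0, v0):
    """Project a camera-frame point to pixel coordinates via the pinhole model."""
    X, Y = f * xc / zc, f * yc / zc   # perspective projection, formula (2)
    return Nx * X + u0, Ny * Y + v0   # image-to-pixel conversion, formula (1)
```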
Continuing with reference to FIG. 5, a world coordinate systemo w x w y w z w To camera coordinate systemo c x c y c z c The conversion relationship of (1) is as follows:
wherein,in order to rotate the matrix of the matrix,for the translation vector, the rotation matrix and the translation vector can be calculated.
In a camera imaging system, due to the fact that lens distortion exists, certain errors exist in obtained image coordinates, and correction can be conducted according to a distortion compensation formula (5).
u_d = u · (1 + k_1·r² + k_2·r⁴), v_d = v · (1 + k_1·r² + k_2·r⁴), r² = u² + v² (5)

wherein (u_d, v_d) are the actual image coordinates, (u, v) are the compensated coordinates, and k_1 and k_2 are the first-order and second-order distortion compensation coefficients of the image, respectively.
After the camera model shown in formulas (1) to (5) is established, the embodiment of the application calibrates the color camera and the depth camera. Optionally, the calibration is performed by Zhang Zhengyou's calibration method, using a 5 × 7 checkerboard as the calibration board with a spacing of 30 mm between adjacent squares. Kinect v2 is used to photograph the calibration board in 25 poses at different distances and angles, obtaining 25 color images and 25 depth images. Because the depth image and the infrared image in the Kinect v2 are obtained by the same camera, the depth camera can be calibrated using the sharper infrared images. The corner points of the calibration images are extracted, and the calibration of the two cameras is completed through the related calculation, obtaining the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera, and the external parameters R_depth and T_depth of the depth camera.
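The checkerboard setup described above can be sketched as follows; the object-point grid is what a Zhang-style calibration consumes, and the commented lines indicate where OpenCV's corner detection and calibration would run (the inner-corner count passed to OpenCV is an assumption, since it depends on how the 5 × 7 board is counted).

```python
import numpy as np

# One view of the 5 x 7 checkerboard with 30 mm spacing, as described above;
# the board plane is taken as z = 0 in the world frame.
ROWS, COLS, SQUARE = 5, 7, 0.030  # metres

def board_object_points(rows=ROWS, cols=COLS, square=SQUARE):
    """World coordinates of the board corners for one calibration view."""
    pts = np.zeros((rows * cols, 3), dtype=np.float64)
    grid = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)  # (col, row) indices
    pts[:, :2] = grid * square
    return pts

obj = board_object_points()
# With OpenCV available, the 25 views would then be processed roughly as:
#   ok, corners = cv2.findChessboardCorners(gray, (COLS, ROWS))
#   rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
#       [obj.astype(np.float32)] * 25, corner_list, image_size, None, None)
```

The same routine is run on the color images and on the (infrared-derived) depth images to obtain A_rgb, A_depth and the per-camera extrinsics.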
Further, a rotation matrix and a translation vector between the color camera and the depth camera coordinate systems are calculated based on the parameters, and a registration relation of color image to depth image conversion is established. Specifically, the method comprises the following steps:
The camera coordinates of a target point are set in the depth camera coordinate system and the color camera coordinate system, and its projection coordinates in the depth image plane and the color image plane, i.e., its coordinates in the depth pixel coordinate system and the color pixel coordinate system, are found. The depth camera coordinates of the target point are denoted P_depth = (x_depth, y_depth, z_depth)^T, the color camera coordinates are denoted P_rgb = (x_rgb, y_rgb, z_rgb)^T, the depth projection coordinates are denoted p_depth = (u_depth, v_depth)^T, and the color projection coordinates are denoted p_rgb = (u_rgb, v_rgb)^T. The depth camera coordinate system and the color camera coordinate system satisfy the transformation relationship:

P_depth = R_depth_rgb · P_rgb + T_depth_rgb (6)

wherein R_depth_rgb is the rotation matrix of formula (6) and T_depth_rgb is its translation vector.
Based on the binocular stereo vision model, within the common field of view of the color camera and the depth camera, the two cameras can capture a target object at the same time, as shown in fig. 6, and the coordinates of the object in the world coordinate system are unique. Therefore, the world coordinates of the target object are set to P_w = (x_w, y_w, z_w)^T; based on the conversion relationship (4) from the world coordinate system to the camera coordinate system, the depth camera coordinates and the color camera coordinates corresponding to the world coordinates are expressed by the following formula (7):

P_depth = R_depth · P_w + T_depth, P_rgb = R_rgb · P_w + T_rgb (7)
Eliminating P_w from formula (7) yields the rotation matrix R_depth_rgb and the translation vector T_depth_rgb of equation (6) above, specifically the following formula (8):

R_depth_rgb = R_depth · R_rgb⁻¹, T_depth_rgb = T_depth − R_depth · R_rgb⁻¹ · T_rgb (8)
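A minimal NumPy sketch of this elimination, verified on made-up extrinsics (the rotation and translation values are random, purely for the sanity check):

```python
import numpy as np

def inter_camera_extrinsics(R_rgb, T_rgb, R_depth, T_depth):
    """Rotation/translation taking color-camera coordinates to depth-camera
    coordinates, obtained by eliminating the world point P_w from equation (7).
    This matches the R_depth R_rgb^-1 terms appearing in equation (9)."""
    R_dr = R_depth @ np.linalg.inv(R_rgb)
    T_dr = T_depth - R_dr @ T_rgb
    return R_dr, T_dr

# Sanity check: map a world point into both cameras, then verify the
# inter-camera transform reproduces the depth-camera coordinates.
rng = np.random.default_rng(0)
R_rgb, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthonormal rotation
R_depth, _ = np.linalg.qr(rng.normal(size=(3, 3)))
T_rgb, T_depth = rng.normal(size=3), rng.normal(size=3)
P_w = rng.normal(size=3)
P_rgb = R_rgb @ P_w + T_rgb
P_depth = R_depth @ P_w + T_depth
R_dr, T_dr = inter_camera_extrinsics(R_rgb, T_rgb, R_depth, T_depth)
```

By construction R_dr @ P_rgb + T_dr reproduces P_depth for any world point, which is exactly the elimination of P_w described above.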
based on the conversion relationship from the camera coordinate system to the pixel coordinate system, as shown in equation (3), the following calculation relationship can be established:
wherein,ρ 1 is a scale factor in the depth image,ρ 2 scale factors in a color image.
Based on the calculation relation and combined with formulas (6) - (8), the registration relation converted from the color image to the depth image can be established as follows:
ρ_1 · p_depth = ρ_2 · A_depth · R_depth · R_rgb⁻¹ · A_rgb⁻¹ · p_rgb + A_depth · (T_depth − R_depth · R_rgb⁻¹ · T_rgb) (9)
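Equation (9) can be applied directly once the scale factor ρ_2 on the color side is known. A sketch with illustrative intrinsics and a translation-only extrinsic relation (all parameter values here are hypothetical):

```python
import numpy as np

def color_to_depth_pixel(p_rgb, rho2, A_rgb, A_depth,
                         R_rgb, T_rgb, R_depth, T_depth):
    """Map a homogeneous color pixel p_rgb = (u, v, 1) to a depth pixel
    via the registration relation (9); rho2 is the color-side scale factor."""
    R_dr = R_depth @ np.linalg.inv(R_rgb)
    rhs = (rho2 * A_depth @ R_dr @ np.linalg.inv(A_rgb) @ p_rgb
           + A_depth @ (T_depth - R_dr @ T_rgb))
    rho1 = rhs[2]               # scale factor in the depth image
    return rhs[:2] / rho1, rho1

# Hypothetical setup: identical intrinsics, identity rotations,
# a 25 mm baseline along x, and a known color-side depth of 2.0 m.
A = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)
uv, rho1 = color_to_depth_pixel(np.array([320.0, 240.0, 1.0]), 2.0,
                                A, A, I, np.zeros(3), I,
                                np.array([0.025, 0.0, 0.0]))
```

With this setup the principal-point pixel shifts by f·t_x/z = 500·0.025/2 = 6.25 pixels in u, which is the expected stereo disparity behaviour.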
Furthermore, on the premise that the depth information is unknown, iterative optimization is performed on the registration based on the calibrated real rotation-translation relation between the color camera and the depth camera in the Kinect v2 and on the camera model. As shown in fig. 6, the registration iterative optimization step S22 specifically includes:
a depth value obtaining step S221: with the color camera and the depth camera first in the set approximately parallel relationship, the depth camera is rotated so that the color camera and the depth camera are in the real rotation-translation relationship, and the depth values of the target point in the depth image are collected under each corresponding relationship. When the color camera and the infrared camera are set to be structurally approximately parallel, the rotation between them is approximated by the identity matrix and the translation by T = (t_x, t_y, t_z)^T; at this time ρ_1 = ρ_2 + t_z. The color projection coordinates of the corresponding target point are recorded as p_rgb0 = (u_rgb0, v_rgb0)^T and the depth projection coordinates as p_depth0 = (u_depth0, v_depth0)^T. The initial depth value is then ρ_10 = z_depth0, and the depth camera coordinates are calculated based on formula (2);
Then, based on the rotation matrix R_depth_rgb, the depth camera is rotated so that the color camera and the depth camera are in the real rotation-translation relationship, and the depth camera coordinates after rotation are:

P_depth1 = R_depth_rgb · P_depth0 (10)

At this time, the updated depth value ρ_11 = z_depth1 is obtained.
An optimization error function establishing step S222: according to the rotation matrix R_depth_rgb and the translation vector T_depth_rgb obtained by calibrating the color camera and the depth camera, and the transformation relationship (6) between the depth camera coordinate system and the color camera coordinate system, the depth camera coordinates P_depth1 are converted to obtain the color projection coordinates p_rgb1 = (u_rgb1, v_rgb1)^T. An optimization error function is then established and calculated through the following calculation model:

e(i) = ‖p_rgbi − p_rgb0‖ (11)

wherein p_rgbi is the color projection coordinate after iteration i; p_rgb0, the color projection coordinate of the target point in the above approximately parallel relationship, is the real pixel coordinate of the target object on the color image; i = 1, 2, …, n, and n is the number of iterations.
An optimization error function iteration step S223: an iteration range is configured, and the depth value ρ_1i = z_depthi is iterated to obtain the depth value with the minimum error function; the corresponding depth projection coordinates are then converted through the coordinate systems to obtain the color camera coordinates of the target point, realizing the registration of the target point from the color image to the depth image. Considering that the actual depth information lies near ρ_11, the application iterates over ρ_1 based on the Kinect v2 and the rotation matrix R_depth_rgb; the iteration range is configured according to the detectable range of the Kinect v2, as shown in FIG. 7, and the number of iterations n is limited by the iteration range.
In the optimization error function iteration step S223, the iterated depth value and the updated depth value are in a proportional relationship during the iteration, so a scale factor j = z_depthi / z_depth1 is set, and the iterative coordinates (x_depth, y_depth, z_depth)_i are calculated based on this scale factor. The optimization error function iteration step is repeated and the corresponding (u_rgb, v_rgb)_i is solved until e_min = min{e(i) | i = 1, 2, …, n}.
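The iteration of steps S221–S223 can be sketched as a brute-force search over candidate depths: each candidate ρ_1i is back-projected, transformed to the color camera, re-projected, and scored with the error function e(i). The intrinsics, baseline, and 0.5–4.5 m search range below are illustrative stand-ins (the range roughly matches the Kinect v2's commonly cited detectable range, but is not taken from the patent).

```python
import numpy as np

def back_project(p_pix, rho, A):
    """Pixel (u, v) at scale rho -> camera coordinates, inverting equation (3)."""
    return rho * np.linalg.inv(A) @ np.array([p_pix[0], p_pix[1], 1.0])

def project(P, A):
    uvw = A @ P
    return uvw[:2] / uvw[2]

def register_by_depth_iteration(p_depth_pix, p_rgb0, A_depth, A_rgb,
                                R_dr, T_dr, depth_range, n=401):
    """Iterate candidate depth values rho_1i over the detectable range and keep
    the one minimising e(i) = ||p_rgb_i - p_rgb0|| (error function of S222)."""
    best = (np.inf, None)
    for z in np.linspace(*depth_range, n):
        P_depth = back_project(p_depth_pix, z, A_depth)
        # invert relation (6): depth-camera coords -> color-camera coords
        P_rgb = np.linalg.inv(R_dr) @ (P_depth - T_dr)
        e = np.linalg.norm(project(P_rgb, A_rgb) - p_rgb0)
        if e < best[0]:
            best = (e, z)
    return best[1]

# Synthetic check: generate p_rgb0 from a known depth and recover that depth.
A = np.array([[365.0, 0.0, 256.0],
              [0.0, 365.0, 212.0],
              [0.0, 0.0, 1.0]])
R_dr, T_dr = np.eye(3), np.array([0.025, 0.0, 0.0])
P_depth_true = back_project((400.0, 240.0), 1.5, A)
p_rgb0 = project(np.linalg.inv(R_dr) @ (P_depth_true - T_dr), A)
z_hat = register_by_depth_iteration((400.0, 240.0), p_rgb0, A, A,
                                    R_dr, T_dr, (0.5, 4.5), n=401)
```

With a 0.01 m grid the true depth of 1.5 m is a candidate and yields e = 0, so the search recovers it exactly.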
Based on the above steps, the color image of the indoor scene is acquired with Kinect v2. After an indoor object in the color image is detected by the MobileNetV3-SSD model, a registration optimization model of the color image and the depth image is constructed, registration optimization from the color image to the depth image is realized, and the image information of the two is fused, so that the initial depth information of the object is estimated. The MobileNetV3-SSD model is generated by combining the lightweight neural network MobileNetV3 with the SSD (Single Shot MultiBox Detector) algorithm.
According to the present application, after the pose of the infrared camera is corrected using the depth information of the previous embodiment, the optimization of the depth information is completed through iterative optimization based on a binocular stereo vision model, and more accurate object depth information is obtained. Using this object depth information, the pose point cloud of the object in the color camera coordinate system is further obtained through the conversion relation between the color camera and the infrared camera, so that the obstacle avoidance area of the mobile robot is estimated. The method specifically comprises the following steps:
a point cloud data acquisition step S3: all effective depth data in the depth area frame are acquired by traversing the depth area frame, obstacle avoidance target depth data are selected from the effective depth data based on a preset obstacle avoidance range [0, ∂], and the point cloud data of the target in the depth camera coordinate system are acquired; the effective depth data are obtained by converting the depth image into a grayscale image and then calculating based on the following calculation model:
depth_value=k×gray_value(12)
wherein k is a scale factor; specifically, k = 4095/255, where 4095 is the maximum depth value of the Kinect v2 detection, and gray_value is the grayscale value.
The point cloud data are calculated by the following calculation model:

x = (u − c_x) · d / f_x, y = (v − c_y) · d / f_y, z = d (13)

wherein (u, v) is a coordinate point on the depth image, d is the depth value of that coordinate point, f_x and f_y denote the focal length f of the depth camera measured in pixels in the x and y directions, and c_x and c_y denote the offset of the optical axis from the center of the projection plane coordinates.
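Equation (12) and the back-projection model above together turn a grayscale-encoded depth map into a point cloud; a sketch with a hypothetical focal length (365 px is a typical Kinect v2 value, not the calibrated one):

```python
import numpy as np

K_SCALE = 4095.0 / 255.0   # equation (12): gray value -> depth value

def depth_map_to_point_cloud(gray, fx, fy, cx, cy):
    """Convert a grayscale-encoded depth map to an N x 3 point cloud by
    applying equation (12) and then back-projecting each pixel."""
    v, u = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    d = K_SCALE * gray.astype(np.float64)
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    pts = np.stack([x, y, d], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]    # keep only valid (non-zero) depths

# Tiny example: a 4 x 4 depth map with one valid pixel at (row 2, col 3)
gray = np.zeros((4, 4), dtype=np.uint8)
gray[2, 3] = 255
pts = depth_map_to_point_cloud(gray, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
```

Only the single valid pixel survives, mapping to depth 4095 (in the sensor's raw units) with x offset (u − c_x)·d/f_x.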
A target positioning step S4: the point cloud data are converted into the color camera coordinate system to determine the pose of the target in the color camera coordinate system, and the obstacle avoidance target area of the mobile robot is obtained by fitting. The obstacle avoidance target area of the mobile robot is a spherical surface, which is solved by constructing an objective function and setting the partial derivatives of the objective function with respect to its variables to 0;
The objective function is expressed as:

E(x_0, y_0, z_0, r) = Σ_i [(x_i − x_0)² + (y_i − y_0)² + (z_i − z_0)² − r²]² (14)

The partial derivatives of the objective function with respect to x_0, y_0, z_0 and r are calculated and set equal to 0:

∂E/∂x_0 = 0, ∂E/∂y_0 = 0, ∂E/∂z_0 = 0, ∂E/∂r = 0
The point cloud data in the color camera coordinate system are substituted and, in combination with formula (14), x_0, y_0, z_0 and r are solved.
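One common way to solve the sphere fit without explicit partial derivatives is to linearize the constraint and use least squares; this is a sketch of that standard approach, not necessarily the exact solver of the patent:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: rewrite (x-x0)^2 + (y-y0)^2 + (z-z0)^2 = r^2
    as a system linear in (x0, y0, z0, r^2 - |center|^2) and solve it."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    r = np.sqrt(sol[3] + center @ center)
    return center, r

# Synthetic check: noiseless points sampled on a known sphere
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.3, -0.2, 1.5]) + 0.25 * dirs
center, r = fit_sphere(pts)
```

On noiseless samples this recovers the center (0.3, −0.2, 1.5) and radius 0.25 to numerical precision; with real point-cloud noise it gives the least-squares estimate of the obstacle sphere.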
In some embodiments, in order to ensure the obstacle avoidance accuracy, the obstacle avoidance target region further includes a plane formed at the farthest detected distance of the obstacle avoidance target, as shown in fig. 8; this plane is calculated by the following calculation model:

z = z_max + δ

wherein z_max is the farthest distance of the target and δ is a preset obstacle avoidance safety margin.
Based on the steps, in the target pose estimation process, the pose point cloud of the object under the color camera coordinate system is obtained through the conversion relation between the color camera and the infrared camera based on the object depth information, so that the obstacle avoidance area of the mobile robot is estimated.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The present embodiment further provides a mobile robot target positioning system, which is used to implement the foregoing embodiments and preferred embodiments, and the description of the system is omitted for brevity. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a mobile robot target locating system according to an embodiment of the present application, as shown in fig. 9, the system including:
a target identification module 1, configured to collect a target color image and a depth image, and to identify the target color image through a pre-trained MobileNetV3-SSD model to obtain the color region frame of the target in the color image, the target category, and the pixel coordinates (u_rgb, v_rgb);
The depth information acquisition module 2 is used for establishing a Kinect v2 camera model and calibrating a camera to obtain internal parameters, distortion parameters and external parameters of a color camera and the depth camera so as to establish a registration relation for converting a color image into a depth image, and performing iterative optimization on the registration relation based on an optimized error function to obtain a depth region frame of a target in the depth image based on the optimized registration relation;
the point cloud data acquisition module 3 is used for acquiring all effective depth data in the depth area frame by traversing the depth area frame, selecting obstacle avoidance target depth data from the effective depth data based on a preset obstacle avoidance range [0, ∂], and acquiring the point cloud data of the target in the depth camera coordinate system;
and the target positioning module is used for converting the point cloud data into a color camera coordinate system to determine the pose of the target under the color camera coordinate system, and fitting to obtain an obstacle avoidance target area of the mobile robot.
In some embodiments, the depth information acquiring module 2 includes:
a registration relation establishing module 21, configured to establish a Kinect v2 camera model and calibrate the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera, and the external parameters R_depth and T_depth of the depth camera, so as to calculate the rotation matrix and translation vector between the coordinate systems of the color camera and the depth camera and establish the registration relation for color-image-to-depth-image conversion, the registration relation being calculated based on formula (9);
and the registration iterative optimization module 22 is used for performing iterative optimization on registration based on the real rotational-translational relation between the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
In some of these embodiments, the registration iterative optimization module 22 further comprises:
a depth value acquisition module 221, configured to rotate the depth camera after the color camera and the depth camera are in the set approximately parallel relationship so that the color camera and the depth camera are in the real rotation-translation relationship, and to collect the depth values of the target point in the depth image respectively, obtaining the initial depth value ρ_10 = z_depth0 and the updated depth value ρ_11 = z_depth1;
an optimization error function establishing module 222, configured to convert the depth camera coordinates P_depth1 into the color projection coordinates p_rgb1 according to the real rotation-translation relation, i.e. the rotation matrix R_depth_rgb and the translation vector T_depth_rgb obtained by calibrating the color camera and the depth camera, and the transformation relationship (6) between the depth camera coordinate system and the color camera coordinate system, and to establish the optimization error function through the above equation (11).
an optimization error function iteration module 223, configured to configure an iteration range and iterate the depth value ρ_1i = z_depthi to obtain the depth value with the minimum error function, and to convert the corresponding depth projection coordinates through the coordinate systems to obtain the color camera coordinates of the target point, realizing the registration of the target point from the color image to the depth image.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In summary, the embodiment of the application addresses rapid and accurate obstacle avoidance of a mobile robot in an indoor environment. Based on a lightweight convolutional neural network, the MobileNetV3-SSD model, which can be deployed on an embedded platform, completes rapid identification of objects in the indoor environment while reducing cost and improving working efficiency. Meanwhile, the position of an object in the color image is mapped to its position in the depth image through the registration optimization algorithm to obtain the depth information of the object, and the pose estimation of the target is then completed through the camera model, so that the requirements of the robot on the accuracy and speed of object perception are met and the effectiveness of real-time obstacle avoidance is ensured.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A mobile robot target positioning method, comprising:
a target identification step, wherein a target color image and a depth image are collected, and the target color image is identified through a pre-trained MobileNet V3-SSD model to obtain a color area frame of a target in the color image;
a depth information obtaining step, establishing a Kinect v2 camera model and calibrating the cameras to obtain internal parameters, distortion parameters and external parameters of the color camera and the depth camera, establishing a registration relation for converting the color image into the depth image, performing iterative optimization on the registration relation based on an optimization error function, and obtaining a depth region frame of the target in the depth image based on the optimized registration relation, wherein the optimization error function is calculated through the following calculation model:
wherein p_rgbi is the color projection coordinate of the target point in the color image plane after iteration i, p_rgb0 is the true pixel coordinate of the target point on the color image, i = 1, 2, …, n, and n is the number of iterations; the registration relation is calculated based on the following calculation model:
ρ_1 · p_depth = ρ_2 · A_depth · R_depth · R_rgb⁻¹ · A_rgb⁻¹ · p_rgb + A_depth · (T_depth − R_depth · R_rgb⁻¹ · T_rgb),
wherein ρ_1 is the depth information in the depth image, ρ_2 is the depth information in the color image, p_rgb is the color projection coordinate of the target point in the color image plane, p_depth is the depth projection coordinate of the target point in the depth image plane, A_rgb and A_depth are the internal parameters of the color camera and the depth camera, R_rgb and T_rgb are the external parameters of the color camera, and R_depth and T_depth are the external parameters of the depth camera;
a point cloud data acquisition step, namely acquiring all effective depth data in the depth area frame by traversing the depth area frame, selecting obstacle avoidance target depth data in the effective depth data based on a preset obstacle avoidance range, and acquiring point cloud data of a target under a depth camera coordinate system;
and a target positioning step, namely converting the point cloud data into a color camera coordinate system to determine the pose of the target under the color camera coordinate system, and fitting to obtain an obstacle avoidance target area of the mobile robot.
2. The mobile robot target positioning method according to claim 1, wherein the depth information acquiring step includes:
a registration relation establishing step, establishing a Kinect v2 camera model and calibrating the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera, and the external parameters R_depth and T_depth of the depth camera; calculating a rotation matrix and a translation vector between the coordinate systems of the color camera and the depth camera, and establishing a registration relation for color-image-to-depth-image conversion;
and a registration iterative optimization step, namely performing iterative optimization on registration based on the real rotational-translational relation between the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
3. The mobile robotic target location method of claim 2, wherein the registration iterative optimization step further comprises:
a depth value obtaining step, rotating the depth camera after the color camera and the depth camera are in a set approximately parallel relationship so that the color camera and the depth camera are in a real rotation-translation relationship, and respectively collecting the depth values of the target point in the depth image under the corresponding relationships to obtain an initial depth value ρ_10 = z_depth0 and an updated depth value ρ_11 = z_depth1;
an optimization error function establishing step, converting the depth camera coordinates P_depth1 to obtain the color projection coordinates p_rgb1 according to the real rotation-translation relation and the respective external parameters of the depth camera coordinate system and the color camera coordinate system, and establishing the optimization error function;
an optimization error function iteration step, configuring an iteration range and iterating the depth value ρ_1i = z_depthi to obtain the depth value with the minimum error function, and converting the corresponding depth projection coordinates through the coordinate systems to obtain the color camera coordinates of the target point, realizing the registration of the target point from the color image to the depth image.
4. The method of claim 1, wherein the effective depth data is calculated by converting a depth image into a grayscale image based on the following calculation model:
depth_value=k×gray_value,
wherein k is a scale factor and gray_value is the grayscale value.
5. The mobile robot target positioning method according to claim 4, wherein the point cloud data is calculated by a calculation model including:
wherein (u, v) is a coordinate point on the depth image, d is the depth value of that coordinate point, f_x and f_y denote the focal length f of the depth camera measured in pixels in the x and y directions, and c_x and c_y denote the offset of the optical axis from the center of the projection plane coordinates.
6. The mobile robot target positioning method according to claim 1, wherein the obstacle avoidance target area of the mobile robot is a spherical surface, and the spherical surface is solved by constructing an objective function and setting the partial derivatives of the objective function with respect to its variables to 0;
the objective function is expressed as:
7. the mobile robot target positioning method according to claim 6, wherein the obstacle avoidance target area further includes a plane formed by a maximum distance of an obstacle avoidance target obtained by detection, and the obstacle avoidance target area is obtained by calculation through a calculation model as follows:
wherein z_max is the farthest distance of the target, and δ is a preset obstacle avoidance safety margin.
8. A mobile robotic object positioning system for implementing a mobile robotic object positioning method as claimed in any one of claims 1-7, comprising:
the target identification module is used for acquiring a target color image and a depth image, and identifying the target color image through a pre-trained MobileNet V3-SSD model to obtain a color area frame of a target in the color image;
the depth information acquisition module is used for establishing a Kinect v2 camera model and calibrating a camera to obtain internal parameters, distortion parameters and external parameters of a color camera and the depth camera so as to establish a registration relation for converting the color image into the depth image, carrying out iterative optimization on the registration relation based on an optimized error function, and obtaining a depth region frame of a target in the depth image based on the optimized registration relation, wherein the registration relation is obtained by calculation based on the following calculation model:
ρ_1 · p_depth = ρ_2 · A_depth · R_depth · R_rgb⁻¹ · A_rgb⁻¹ · p_rgb + A_depth · (T_depth − R_depth · R_rgb⁻¹ · T_rgb),
wherein ρ_1 is the depth information in the depth image, ρ_2 is the depth information in the color image, p_rgb is the color projection coordinate of the target point in the color image plane, p_depth is the depth projection coordinate of the target point in the depth image plane, A_rgb and A_depth are the internal parameters of the color camera and the depth camera, R_rgb and T_rgb are the external parameters of the color camera, and R_depth and T_depth are the external parameters of the depth camera;
the point cloud data acquisition module is used for acquiring all effective depth data in the depth area frame by traversing the depth area frame, selecting obstacle avoidance target depth data in the effective depth data based on a preset obstacle avoidance range, and acquiring point cloud data of a target under a depth camera coordinate system;
and the target positioning module is used for converting the point cloud data into a color camera coordinate system so as to determine the pose of the target under the color camera coordinate system, and fitting to obtain an obstacle avoidance target area of the mobile robot.
9. The mobile robotic target positioning system of claim 8, wherein the depth information acquisition module comprises:
a registration relation establishing module, configured to establish a Kinect v2 camera model and calibrate the cameras to obtain the internal parameters A_rgb and A_depth of the color camera and the depth camera, the distortion parameters, the external parameters R_rgb and T_rgb of the color camera, and the external parameters R_depth and T_depth of the depth camera, to calculate a rotation matrix and a translation vector between the coordinate systems of the color camera and the depth camera, and to establish a registration relation for color-image-to-depth-image conversion;
and the registration iterative optimization module is used for performing iterative optimization on registration based on the real rotational-translational relation between the color camera and the depth camera in the Kinect v2 after calibration and the Kinect v2 camera model.
10. The mobile robotic object localization system of claim 9, wherein the registration iterative optimization module further comprises:
a depth value obtaining module, configured to rotate the depth camera after the color camera and the depth camera are in the set approximately parallel relationship so that the color camera and the depth camera are in the real rotation-translation relationship, and to respectively collect the depth values of the target point in the depth image, obtaining the initial depth value ρ_10 = z_depth0 and the updated depth value ρ_11 = z_depth1;
an optimization error function establishing module, configured to convert the depth camera coordinates P_depth1 to obtain the color projection coordinates p_rgb1 according to the real rotation-translation relation and the respective external parameters of the depth camera coordinate system and the color camera coordinate system, and to establish the optimization error function;
an optimization error function iteration module, configured to configure an iteration range and iterate the depth value ρ_1i = z_depthi to obtain the depth value with the minimum error function, and to convert the corresponding depth projection coordinates through the coordinate systems to obtain the color camera coordinates of the target point, realizing the registration of the target point from the color image to the depth image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211432356.3A CN115496898B (en) | 2022-11-16 | 2022-11-16 | Mobile robot target positioning method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115496898A CN115496898A (en) | 2022-12-20 |
CN115496898B true CN115496898B (en) | 2023-02-17 |
Family
ID=85115890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211432356.3A Active CN115496898B (en) | 2022-11-16 | 2022-11-16 | Mobile robot target positioning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115496898B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105045263A (en) * | 2015-07-06 | 2015-11-11 | 杭州南江机器人股份有限公司 | Kinect-based robot self-positioning method |
CN106826815A (en) * | 2016-12-21 | 2017-06-13 | 江苏物联网研究发展中心 | Target object method of the identification with positioning based on coloured image and depth image |
CN107680140A (en) * | 2017-10-18 | 2018-02-09 | 江南大学 | A kind of depth image high-resolution reconstruction method based on Kinect cameras |
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | 哈工大机器人(合肥)国际创新研究院 | A kind of visual identity of object manipulator intelligent grabbing application and localization method |
CN111429574A (en) * | 2020-03-06 | 2020-07-17 | 上海交通大学 | Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion |
CN111476841A (en) * | 2020-03-04 | 2020-07-31 | 哈尔滨工业大学 | Point cloud and image-based identification and positioning method and system |
CN111612841A (en) * | 2020-06-22 | 2020-09-01 | 上海木木聚枞机器人科技有限公司 | Target positioning method and device, mobile robot and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9321173B2 (en) * | 2012-06-22 | 2016-04-26 | Microsoft Technology Licensing, Llc | Tracking and following people with a mobile robotic device |
- 2022-11-16: application CN202211432356.3A granted as patent CN115496898B (status: Active)
Non-Patent Citations (1)
Title |
---|
基于Kinect的特定人员鲁棒识别与定位 [Kinect-based robust recognition and localization of specific persons]; 邢关生 (Xing Guansheng) et al.; 《河北工业大学学报》 (Journal of Hebei University of Technology); 2014-10-15 (No. 05); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN115496898A (en) | 2022-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136208B (en) | Joint automatic calibration method and device for robot vision servo system | |
CN109509226B (en) | Three-dimensional point cloud data registration method, device and equipment and readable storage medium | |
CN106875339B (en) | Fisheye image stitching method based on a strip-shaped calibration plate | |
TWI555379B (en) | An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
CN103106688B (en) | Indoor three-dimensional scene reconstruction method based on a two-layer registration method | |
CN106228538B (en) | Binocular vision indoor positioning method based on markers | |
CN112985293B (en) | Binocular vision measurement system and measurement method for single-camera double-spherical mirror image | |
CN103971378A (en) | Three-dimensional reconstruction method of panoramic image in mixed vision system | |
Yan et al. | Joint camera intrinsic and lidar-camera extrinsic calibration | |
CN110288656A (en) | Object positioning method based on a monocular camera | |
WO2020038386A1 (en) | Determination of scale factor in monocular vision-based reconstruction | |
CN111461963B (en) | Fisheye image stitching method and device | |
CN115830103A (en) | Monocular color-based transparent object positioning method and device and storage medium | |
CN110782498B (en) | Rapid universal calibration method for visual sensing network | |
CN110060304B (en) | Method for acquiring three-dimensional information of organism | |
CN108154536A (en) | Camera calibration method based on two-dimensional plane iteration | |
CN110490943B (en) | Rapid and accurate calibration method and system of 4D holographic capture system and storage medium | |
CN112767546B (en) | Binocular image-based visual map generation method for mobile robot | |
CN112348775A (en) | Vehicle-mounted all-round-looking-based pavement pool detection system and method | |
CN109859137A (en) | Global correction method for irregular distortion in wide-angle cameras | |
CN110648362A (en) | Binocular stereo vision badminton positioning identification and posture calculation method | |
Ohashi et al. | Fisheye stereo camera using equirectangular images | |
Chan et al. | An improved method for fisheye camera calibration and distortion correction | |
CN109087360A (en) | Extrinsic calibration method for a robot-mounted camera | |
CN112907680A (en) | Automatic calibration method for rotation matrix of visible light and infrared double-light camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 2022-12-20 | Assignee: Qingdao Dadi Digital Technology Co., Ltd. | Assignor: Shandong University of Science and Technology | Contract record no.: X2024980010335 | Denomination of invention: Mobile Robot Target Localization Method and System | Granted publication date: 2023-02-17 | License type: Common License | Record date: 2024-07-22 |