CN112884832B - Intelligent trolley track prediction method based on multi-view vision - Google Patents

Intelligent trolley track prediction method based on multi-view vision Download PDF

Info

Publication number
CN112884832B
Authority
CN
China
Prior art keywords
dimensional
intelligent trolley
pose
visual
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110270322.8A
Other languages
Chinese (zh)
Other versions
CN112884832A (en)
Inventor
耿雪纯
蔡骋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN202110270322.8A priority Critical patent/CN112884832B/en
Publication of CN112884832A publication Critical patent/CN112884832A/en
Application granted granted Critical
Publication of CN112884832B publication Critical patent/CN112884832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an intelligent trolley track prediction method based on multi-view vision, which overcomes the drawbacks of existing indoor intelligent trolley positioning: unstable real-time positioning, frequent signal loss, limited applicability, low working efficiency and high cost.

Description

Intelligent trolley track prediction method based on multi-view vision
Technical Field
The invention relates to an indoor positioning technology, in particular to an intelligent trolley track prediction method based on multi-view vision.
Background
In the prior art, real-time positioning and track prediction of outdoor intelligent trolleys is realized with the Global Positioning System (GPS), but a trolley cannot be positioned accurately in areas where the GPS signal is weak or absent, for example indoors. Alternatives such as Wi-Fi positioning, Bluetooth positioning and radio-frequency identification (RFID) positioning can be used in places the GPS cannot cover; however, because these sensors are strongly affected by the environment, positioning cannot be realized accurately when the signal is interfered with. These methods are also unstable: positioning can fail when the indoor intelligent trolley operates far from the receiving source. The existing technology therefore suffers from high cost, limited applicability and low working efficiency.
Disclosure of Invention
The invention aims to provide an intelligent trolley track prediction method based on multi-view vision, which realizes real-time positioning with machine vision at low cost and with high stability.
The technical purpose of the invention is realized by the following technical scheme:
an intelligent trolley track prediction method based on multi-view vision comprises the following steps:
s1, fixedly installing a plurality of cameras, shooting pictures with different poses of a set number of checkerboards, and calibrating the cameras by a Zhang Zhengyou calibration method to obtain internal parameters and distortion parameters of the cameras;
s2, the visual label is pasted on the intelligent trolley, the camera shoots in real time, and the real-time two-dimensional coordinate of the intelligent trolley is obtained by positioning the pose of the visual label;
s3, establishing a world coordinate origin according to a PnP algorithm, defining a corresponding relation between world space coordinates and two-dimensional coordinates of a plurality of points, and obtaining external parameters of the camera at the moment;
s4, transforming a coordinate system of the two-dimensional coordinate pose output by the visual tag, and converting the two-dimensional coordinate into a three-dimensional space coordinate;
s5, constructing a pose measurement model through a multi-view stereoscopic vision model, solving a three-dimensional space pose by introducing a least square method, and optimally solving the space pose of the mechanical arm of the intelligent trolley by combining a triangular gravity center method;
and S6, obtaining pose information of the intelligent trolley through the three-dimensional space coordinate, drawing the movement track of the trolley and carrying out error analysis.
Preferably, the information of the visual label comprises four corner pixels, a central pixel, a homography matrix, and an ID corresponding to each label.
Preferably, the visual label employs the AprilTag visual system.
Preferably, the conversion to three-dimensional space coordinates is specifically:
acquiring pictures of the visual label on the intelligent trolley at the same moment in the same scene from the three cameras placed at different positions;
solving the camera extrinsic parameters with the PnP algorithm, given the world coordinates of N 3D points and their two-dimensional coordinates on the image;
and converting the series of two-dimensional coordinates into three-dimensional coordinates according to the intrinsic and extrinsic parameters of the cameras.
Preferably, the three cameras are installed in different positions and at different angles in space.
In conclusion, the invention has the following beneficial effects:
the pose of the intelligent trolley is positioned by a multi-vision machine vision and vision labeling technology, so that the indoor real-time positioning of the sensorless intelligent trolley is realized, and the existing intelligent trolley positioning technology is improved; the three-dimensional pose is calculated through multi-view vision, the problem of monocular camera depth calculation can be solved, the binocular camera has higher precision, more accurate indoor intelligent trolley positioning can be realized, and the requirements of intelligent trolley multi-angle and large-range real-time positioning are met.
Drawings
FIG. 1 is a schematic block diagram of the process flow of the present method;
FIG. 2 is a schematic diagram of multi-view position and pose measurement of the intelligent vehicle;
FIG. 3 is a multi-view vision pose measurement model diagram.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
According to one or more embodiments, an intelligent trolley trajectory prediction method based on multi-view vision is disclosed, as shown in fig. 1 and 2, comprising the following steps:
s1, fixedly installing a plurality of cameras, shooting pictures with different poses of a set number of checkerboards, and calibrating the cameras by a Zhang Zhengyou calibration method to obtain internal parameters and distortion parameters of the cameras.
The cameras are installed as follows: the number of cameras is preferably three, mounted at different positions in the indoor space, each at a different angle. Multi-view vision is realized with the three cameras, so the measuring system covers a larger area and, compared with a single binocular system, offers better robustness and wider applicability in complex real-world scenes.
The calibration of the camera intrinsics is specifically: a chessboard calibration board is manufactured; the three fixedly installed cameras synchronously shoot 20 pictures of the calibration board at different positions and rotation angles, so that data are acquired synchronously from the multiple views; and each of the three cameras is calibrated with the Zhang Zhengyou calibration method to obtain its intrinsic parameters and distortion coefficients.
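As a concrete illustration of this step, the sketch below shows how one camera's intrinsics and distortion coefficients could be recovered with OpenCV's implementation of Zhang's calibration method; the 9x6 board size, 25 mm square size and image folder are assumptions, not values taken from the patent.

```python
# Minimal sketch of Zhang's calibration for one camera with OpenCV.
# The 9x6 board, 25 mm squares and the image folder are assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)                  # inner corners per row / column (assumed)
SQUARE = 0.025                  # square edge length in metres (assumed)

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib_cam1/*.png"):        # ~20 board poses per camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# K is the intrinsic matrix, dist the distortion coefficients; repeat per camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```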
From the camera calibration, the translation matrix and rotation matrix in the camera coordinate system can be calculated, and the three-dimensional space coordinates of the object centre can then be obtained from the translation matrix, realizing six-degree-of-freedom pose estimation.
S2, attaching the visual label to the intelligent trolley, shooting with the cameras in real time, and obtaining the real-time two-dimensional coordinates of the intelligent trolley by locating the pose of the visual label.
The information of the visual label comprises four corner pixels, a central pixel, a homography matrix and an ID corresponding to each label.
The AprilTag visual system is adopted as the visual label. This visual fiducial system is widely used in robotics, AR and camera calibration; it is similar to two-dimensional (QR) code technology but with lower complexity, so the markers can be detected quickly and their relative positions calculated. Through the label, the two-dimensional coordinate pose of the intelligent trolley can be estimated accurately, realizing remote real-time positioning.
Several visual labels are attached to the body of the indoor intelligent trolley, and their positions on the trolley are photographed in real time. While the trolley is working, the three cameras capture video in real time and transmit it to a computer, which recognizes the two-dimensional coordinates of the centre point of each AprilTag visual label on the trolley body.
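For illustration only, the sketch below reads one camera and extracts the tag centre, corners and ID; it assumes the pupil_apriltags Python package, the tag36h11 family and camera index 0, none of which are specified in the patent, and any AprilTag detector exposing the same fields would serve.

```python
# Minimal sketch: reading one camera and extracting the AprilTag centre pixel.
# The pupil_apriltags package, the tag36h11 family and camera index 0 are
# assumptions; any AprilTag detector exposing corners/center/tag_id would do.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
cap = cv2.VideoCapture(0)

for _ in range(300):                      # process a bounded number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for det in detector.detect(gray):
        # Each detection carries the label information listed above:
        # four corner pixels, centre pixel, homography and tag ID.
        u, v = det.center
        print(f"tag {det.tag_id}: centre=({u:.1f}, {v:.1f}), "
              f"corners={det.corners.tolist()}")

cap.release()
```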
And S3, establishing the world coordinate origin with the PnP algorithm, defining the correspondence between the world space coordinates and the two-dimensional coordinates of a plurality of points, and obtaining the extrinsic parameters of each camera at that moment. Using the PnP algorithm with the two-dimensional image coordinates and space coordinates of several fixed points, the camera extrinsics are solved from the four known corner points and the calibrated camera intrinsics and distortion, and the origin of the world coordinate system is determined at the same time.
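A minimal sketch of this extrinsic solve with OpenCV's solvePnP is given below; the four world/image point pairs, the intrinsic matrix K and the distortion vector dist are invented placeholders standing in for the calibration results and reference points of the method.

```python
# Minimal sketch: recovering one camera's extrinsics with a PnP solve.
# All numeric values below are placeholders; in the method the 3D points are
# known reference points (e.g. tag corners) and K, dist come from calibration.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])           # assumed intrinsic matrix
dist = np.zeros(5)                        # assumed distortion coefficients

world_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                      [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]])   # 4 known 3D points
image_pts = np.array([[412.0, 305.0], [598.0, 301.0],
                      [601.0, 488.0], [409.0, 492.0]])     # their observed pixels

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix
B = np.hstack([R, tvec])                  # 3x4 extrinsic matrix [R | t]
print("extrinsics [R|t]:\n", B)
```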
And S4, performing a coordinate-system transformation on the two-dimensional pose output by the visual label and converting the two-dimensional coordinates into three-dimensional space coordinates, as shown in FIG. 3.
The two-dimensional coordinates can be converted into camera coordinates from the camera information entered into the computer in advance and the spatial position relation of each device, and the camera coordinates are then converted into spatial position coordinates in the world coordinate system. The conversion to three-dimensional space coordinates is specifically:
pictures of the visual label on the intelligent trolley are acquired at the same moment in the same scene from the three cameras placed at different positions; the camera extrinsics are solved with the PnP algorithm, given the world coordinates of N 3D points and their two-dimensional coordinates on the image; and the series of two-dimensional coordinates is converted into three-dimensional coordinates according to the camera intrinsics and extrinsics. In Equation 1, A is the camera intrinsic matrix, B is the camera extrinsic matrix, (u, v) are the two-dimensional coordinates, (X_W, Y_W, Z_W) are the three-dimensional coordinates, and Z_C is a scale factor:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,B \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{1}$$
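To make Equation 1 concrete, the sketch below projects a world point through assumed A and B matrices; all numeric values are placeholders, not data from the patent.

```python
# Minimal sketch of Equation 1: projecting a world point with A (intrinsics)
# and B = [R | t] (extrinsics). The numeric matrices are placeholders.
import numpy as np

def project(A, B, Xw):
    """Apply Z_C [u, v, 1]^T = A B [X_W, Y_W, Z_W, 1]^T and return (u, v)."""
    Xw_h = np.append(np.asarray(Xw, dtype=float), 1.0)   # homogeneous world point
    uvw = A @ B @ Xw_h                                    # equals Z_C * [u, v, 1]
    return uvw[:2] / uvw[2]                               # divide out the scale Z_C

A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                           # assumed intrinsics
B = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])  # assumed [R | t]

print(project(A, B, [0.1, 0.05, 0.0]))                    # pixel coordinates (u, v)
```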
And S5, constructing a pose measurement model from the multi-view stereo vision model shown in FIG. 3, solving the three-dimensional space pose by introducing the least square method, and optimizing the space pose of the intelligent trolley by combining the triangular centre-of-gravity method.
In practical application, the data are always noisy, so the trinocular visual fusion by the least square method yields three distinct, non-coincident estimates of the measured object's three-dimensional coordinates; the optimal three-dimensional coordinates are then obtained by the centre-of-gravity method.
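Under one reading of this step, each pair of cameras triangulates the tag centre by linear least squares and the centroid of the three resulting points is taken as the optimized estimate; the sketch below illustrates that interpretation and is not taken verbatim from the patent.

```python
# Sketch of step S5 under the interpretation described above: each of the three
# camera pairs triangulates the tag centre by linear least squares (DLT), and
# the centroid of the three estimates is taken as the optimized position.
import itertools
import numpy as np

def triangulate_pair(P1, P2, uv1, uv2):
    """Least-squares triangulation of one point from two 3x4 projection matrices."""
    M = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(M)           # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]

def fuse_trinocular(proj_mats, pixels):
    """proj_mats: three A @ B matrices; pixels: the tag centre seen by each camera."""
    pts = [triangulate_pair(proj_mats[i], proj_mats[j], pixels[i], pixels[j])
           for i, j in itertools.combinations(range(3), 2)]
    return np.mean(pts, axis=0)           # centre of gravity of the three points

# Example with one assumed camera set-up: project a known point and recover it.
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Bs = [np.hstack([np.eye(3), t.reshape(3, 1)])
      for t in (np.array([0.0, 0.0, 2.0]),
                np.array([0.3, 0.0, 2.0]),
                np.array([0.0, 0.3, 2.0]))]
Ps = [A @ B for B in Bs]
Xw = np.array([0.1, 0.05, 0.4])
obs = [(P @ np.append(Xw, 1.0))[:2] / (P @ np.append(Xw, 1.0))[2] for P in Ps]
print(fuse_trinocular(Ps, obs))           # should recover Xw
```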
And S6, obtaining pose information of the intelligent trolley from the three-dimensional space coordinates, drawing the movement track of the trolley and carrying out error analysis.
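As an illustration of the trajectory drawing and error analysis, the sketch below plots an estimated track against a reference path and reports point-wise errors; both arrays are synthetic placeholders, since the patent does not specify where a reference trajectory would come from.

```python
# Sketch of step S6: plot the fused trajectory and report errors against a
# reference path. Both arrays are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(scale=0.01, size=(200, 3)), axis=0)  # fused poses from S5
ref = track + rng.normal(scale=0.005, size=(200, 3))              # assumed reference path

err = np.linalg.norm(track - ref, axis=1)
print(f"mean error: {err.mean():.4f} m, max error: {err.max():.4f} m")

plt.plot(track[:, 0], track[:, 1], label="estimated trajectory")
plt.plot(ref[:, 0], ref[:, 1], "--", label="reference")
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.legend()
plt.title("Indoor trolley trajectory")
plt.show()
```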
The intelligent vehicle track prediction method can realize real-time track prediction of the intelligent vehicle, meets the work requirement of accurate positioning of the indoor intelligent vehicle, and improves flexibility and controllability.
Pose positioning is performed with multi-view machine vision and a visual label system, realizing sensorless indoor real-time positioning of the intelligent trolley and improving the existing intelligent trolley positioning technology. AprilTag is a visual fiducial system used in vision tasks such as robot localization and camera calibration; it can compute the accurate position and orientation of the camera with respect to the tag. The cost of purchasing an expensive sensor-equipped intelligent trolley is therefore saved, and the system has good robustness and economy. The user is freed from the inconvenience and low working efficiency of positioning an indoor intelligent trolley in areas the Global Positioning System cannot reach. The visual label system used is economical and reliable, and multi-view vision is more precise than binocular vision, which greatly improves the efficiency of estimation and prediction for the indoor intelligent trolley. Computing the three-dimensional pose through multi-view vision overcomes the monocular depth-estimation problem, offers higher precision than a binocular camera, enables more accurate indoor positioning of the intelligent trolley, and meets the requirements of multi-angle, large-range real-time positioning.
The present embodiment is only intended to explain the invention and does not limit it. Those skilled in the art may, after reading this specification, modify the embodiment as needed without inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the invention.

Claims (2)

1. An intelligent trolley track prediction method based on multi-view vision is characterized by comprising the following steps:
s1, fixedly installing a plurality of cameras, shooting pictures with different poses of a set number of checkerboards, and calibrating the cameras by a Zhang Zhengyou calibration method to obtain internal parameters and distortion parameters of the cameras;
s2, the visual label is pasted on the intelligent trolley, the camera shoots in real time, and the real-time two-dimensional coordinate of the intelligent trolley is obtained by positioning the pose of the visual label; the information of the visual label comprises four corner pixels, a central pixel, a homography matrix and an ID (identity) corresponding to each label; the visual label adopts an Apriltag visual system;
s3, establishing a world coordinate origin according to a PnP algorithm, defining a corresponding relation between world space coordinates and two-dimensional coordinates of a plurality of points, and obtaining an external parameter of the camera at the moment;
s4, transforming a coordinate system of the two-dimensional coordinate pose output by the visual tag, and converting the two-dimensional coordinate into a three-dimensional space coordinate; the method comprises the steps that the visual labels on the intelligent car are subjected to picture acquisition under the same scene at the same time through different placement positions of three cameras; solving camera external parameters under the condition of giving N3D point coordinates in the world and two-dimensional coordinates on an image through a PnP algorithm; converting a series of two-dimensional coordinates into three-dimensional coordinates according to internal parameters and external parameters of the camera;
s5, constructing a pose measurement model through a multi-view stereo vision model, solving a three-dimensional space pose by introducing a least square method, and optimally solving the space pose of the intelligent trolley by combining a triangular gravity center method;
and S6, obtaining pose information of the intelligent trolley through the three-dimensional space coordinate, drawing the movement track of the trolley and carrying out error analysis.
2. The intelligent trolley track prediction method based on multi-view vision as claimed in claim 1, wherein: the three cameras are arranged in different positions and different angles in space.
CN202110270322.8A 2021-03-12 2021-03-12 Intelligent trolley track prediction method based on multi-view vision Active CN112884832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270322.8A CN112884832B (en) 2021-03-12 2021-03-12 Intelligent trolley track prediction method based on multi-view vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110270322.8A CN112884832B (en) 2021-03-12 2021-03-12 Intelligent trolley track prediction method based on multi-view vision

Publications (2)

Publication Number Publication Date
CN112884832A (en) 2021-06-01
CN112884832B (en) 2022-10-21

Family

ID=76042455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270322.8A Active CN112884832B (en) 2021-03-12 2021-03-12 Intelligent trolley track prediction method based on multi-view vision

Country Status (1)

Country Link
CN (1) CN112884832B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781576B (en) * 2021-09-03 2024-05-07 北京理工大学 Binocular vision detection system, method and device for adjusting pose with multiple degrees of freedom in real time
CN118470099B (en) * 2024-07-15 2024-09-24 济南大学 Object space pose measurement method and device based on monocular camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106527426A (en) * 2016-10-17 2017-03-22 江苏大学 Indoor multi-target track planning system and method
CN108571971A (en) * 2018-05-17 2018-09-25 北京航空航天大学 A kind of AGV vision positioning systems and method
CN109658461A (en) * 2018-12-24 2019-04-19 中国电子科技集团公司第二十研究所 A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109018591A (en) * 2018-08-09 2018-12-18 沈阳建筑大学 A kind of automatic labeling localization method based on computer vision
CN108827316B (en) * 2018-08-20 2021-12-28 南京理工大学 Mobile robot visual positioning method based on improved Apriltag
US10997448B2 (en) * 2019-05-15 2021-05-04 Matterport, Inc. Arbitrary visual features as fiducial elements
CN112364677A (en) * 2020-11-23 2021-02-12 盛视科技股份有限公司 Robot vision positioning method based on two-dimensional code

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106527426A (en) * 2016-10-17 2017-03-22 江苏大学 Indoor multi-target track planning system and method
CN108571971A (en) * 2018-05-17 2018-09-25 北京航空航天大学 A kind of AGV vision positioning systems and method
CN109658461A (en) * 2018-12-24 2019-04-19 中国电子科技集团公司第二十研究所 A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pose Estimation for Multicopters Based on Monocular Vision and AprilTag; Guo Zhenglong et al.; Proceedings of the 37th Chinese Control Conference; 2018-07-27; pp. 4717-4722 *
Extended positioning and tracking application of an intelligent car based on AprilTag; He Haonan et al.; Modern Information Technology; 2020-08-25; Vol. 4, No. 16; pp. 24-30 *

Also Published As

Publication number Publication date
CN112884832A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN106197422B (en) A kind of unmanned plane positioning and method for tracking target based on two-dimensional tag
CN104933718B (en) A kind of physical coordinates localization method based on binocular vision
CN107990940B (en) Moving object tracking method based on stereoscopic vision measurement technology
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN112884832B (en) Intelligent trolley track prediction method based on multi-view vision
CN111220126A (en) Space object pose measurement method based on point features and monocular camera
Xia et al. Global calibration of non-overlapping cameras: State of the art
CN110108269A (en) AGV localization method based on Fusion
CN108007456A (en) A kind of indoor navigation method, apparatus and system
Aliakbarpour et al. An efficient algorithm for extrinsic calibration between a 3d laser range finder and a stereo camera for surveillance
CN102914295A (en) Computer vision cube calibration based three-dimensional measurement method
CN106370160A (en) Robot indoor positioning system and method
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN115830142A (en) Camera calibration method, camera target detection and positioning method, camera calibration device, camera target detection and positioning device and electronic equipment
Jung et al. A novel 2.5 D pattern for extrinsic calibration of tof and camera fusion system
CN113112543A (en) Large-view-field two-dimensional real-time positioning system and method based on visual moving target
CN111199576A (en) Outdoor large-range human body posture reconstruction method based on mobile platform
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
CN116824067B (en) Indoor three-dimensional reconstruction method and device thereof
Cheng et al. 3D radar and camera co-calibration: A flexible and accurate method for target-based extrinsic calibration
KR102065337B1 (en) Apparatus and method for measuring movement information of an object using a cross-ratio
Chen et al. Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
CN110415292A (en) Movement attitude vision measurement method of ring identification and application thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant