CN112884832B - Intelligent trolley track prediction method based on multi-view vision - Google Patents
Intelligent trolley track prediction method based on multi-view vision
- Publication number
- CN112884832B (application CN202110270322.8A)
- Authority
- CN
- China
- Prior art keywords
- coordinates
- dimensional
- pose
- camera
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Technical Field
The invention relates to indoor positioning technology, and in particular to an intelligent trolley trajectory prediction method based on multi-view vision.
Background Art
In the prior art, outdoor intelligent trolleys rely on the Global Positioning System (GPS) for real-time positioning and trajectory prediction, but in areas where the GPS signal is weak or entirely absent the trolley cannot be positioned accurately, for example indoors. Current indoor trolley positioning relies on sensors that process external signals to obtain real-time pose information; because such sensors are strongly affected by the environment, positioning fails whenever the signal is disturbed. In places GPS cannot reach, WiFi positioning, Bluetooth positioning, or radio-frequency identification (RFID) positioning can be used instead, but these methods are unstable: when an indoor trolley operates far from the receiving source, the connection may drop. The existing technology is therefore too costly, has certain limitations, and works inefficiently.
Summary of the Invention
The object of the present invention is to provide an intelligent trolley trajectory prediction method based on multi-view vision that performs real-time localization with machine vision at low cost and with high stability.
The above technical object of the present invention is achieved through the following technical solution:
An intelligent trolley trajectory prediction method based on multi-view vision, comprising the following steps:
S1. Fixedly install several cameras, capture a set number of images of a checkerboard in different poses, and calibrate each camera by Zhang Zhengyou's calibration method to obtain its intrinsic and distortion parameters;
S2. Attach visual tags to the intelligent trolley; the cameras capture images in real time, and the trolley's real-time two-dimensional coordinates are obtained by locating the pose of the visual tags;
S3. Establish the world coordinate origin with the PnP algorithm, define the correspondence between the world-space coordinates and two-dimensional coordinates of several points, and solve for the cameras' extrinsic parameters;
S4. Transform the two-dimensional coordinate pose output by the visual tag through a change of coordinate system, converting the two-dimensional coordinates into three-dimensional space coordinates;
S5. Build a pose-measurement model from the multi-view stereo vision model, solve for the three-dimensional pose with the least-squares method, and refine it with the triangle-centroid method to obtain the spatial pose of the intelligent trolley's robotic arm;
S6. Obtain the pose information of the trolley from the three-dimensional space coordinates, plot its motion trajectory, and perform error analysis.
Preferably, the visual tag information includes the four corner pixels, the center pixel, a homography matrix, and the ID of each tag.
Preferably, the visual tags use the AprilTag vision system.
Preferably, the conversion to three-dimensional space coordinates is specifically:
capturing images of the visual tags on the trolley at the same time in the same scene from the three differently placed cameras;
solving the camera extrinsic parameters with the PnP algorithm, given the coordinates of N 3D points in the world and their two-dimensional coordinates in the image;
converting a series of two-dimensional coordinates into three-dimensional coordinates using the camera intrinsic and extrinsic parameters.
Preferably, the three cameras are installed at different positions and different angles in the space.
In summary, the present invention has the following beneficial effects:
By locating the pose of the intelligent trolley with multi-view machine vision and visual-tag technology, sensorless indoor real-time positioning is realized, improving on existing trolley positioning technology. Computing the three-dimensional pose from multiple views not only solves the depth-computation problem of a monocular camera but is also more accurate than a binocular camera, enabling more precise indoor positioning and satisfying the trolley's multi-angle, wide-area real-time positioning requirements.
Brief Description of the Drawings
Figure 1 is a schematic flow diagram of the method;
Figure 2 is a schematic diagram of multi-view pose measurement of the intelligent trolley;
Figure 3 is a diagram of the multi-view vision pose-measurement model.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
According to one or more embodiments, an intelligent trolley trajectory prediction method based on multi-view vision is disclosed, as shown in Figures 1 and 2, comprising the following steps:
S1. Fixedly install several cameras, capture a set number of images of a checkerboard in different poses, and calibrate each camera by Zhang Zhengyou's calibration method to obtain its intrinsic and distortion parameters.
The cameras are installed as follows: the number of cameras is preferably three, installed at different positions in the indoor space and each at a different angle, so that the three cameras together provide multi-view vision. A multi-view measurement system covers a larger measurement area, and compared with a single binocular pair, a measurement system combining three binocular pairings is more robust and more broadly useful in complex practical scenarios.
The intrinsic calibration is specifically: a checkerboard calibration board is made, the three fixed cameras synchronously capture 20 images of the board at different positions and rotation angles, and with this synchronized multi-view data each of the three cameras is calibrated separately by Zhang Zhengyou's method to obtain its intrinsic parameters and distortion coefficients.
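By way of a non-limiting sketch, the per-camera calibration of step S1 could be scripted with OpenCV roughly as follows; the 9x6 inner-corner board, 25 mm square size, and image paths are illustrative assumptions, not part of the method:

```python
# Intrinsic calibration sketch (Zhang's method via OpenCV), run once per camera.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row/column (assumed board)
SQUARE_MM = 25.0      # checkerboard square edge length (assumed)

# 3D corner coordinates on the board plane (Z = 0), in millimetres
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("cam1/*.png"):            # the ~20 calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix, dist the distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```

The same routine is repeated on each camera's own image set, yielding one (K, dist) pair per camera.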
From the camera calibration, the translation and rotation matrices in the camera coordinate system can be computed; the three-dimensional space coordinates of the object center then follow from the translation matrix, enabling six-degree-of-freedom pose estimation.
S2. Attach visual tags to the intelligent trolley; the cameras capture images in real time, and the trolley's real-time two-dimensional coordinates are obtained by locating the pose of the visual tags.
The visual tag information includes the four corner pixels, the center pixel, a homography matrix, and the ID of each tag.
The visual tags use the AprilTag vision system, which is widely used in robotics, AR, and camera calibration. It is similar to QR-code technology but of lower complexity, so markers can be detected quickly and their relative positions computed; with these tags, the two-dimensional coordinate pose of the trolley can be estimated accurately, enabling remote real-time positioning.
Several visual tags are attached to the indoor trolley and their positions are photographed in real time. While the trolley operates, the three cameras stream video to a computer, which identifies the two-dimensional coordinates of the center point of each AprilTag on the trolley body.
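A minimal detection sketch, assuming the third-party pupil-apriltags package (one of several AprilTag bindings); each detection exposes exactly the tag information listed above (corners, center, homography, ID):

```python
# AprilTag detection sketch; pip install pupil-apriltags (assumed binding).
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")   # assumed tag family

def detect_tags(frame_bgr):
    """Return (tag_id, center, corners, homography) for each visible tag."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    out = []
    for det in detector.detect(gray):
        # det.center: (u, v) pixel of the tag centre; det.corners: 4x2 array
        out.append((det.tag_id, det.center, det.corners, det.homography))
    return out
```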
S3. Establish the world coordinate origin with the PnP algorithm, define the correspondence between the world-space coordinates and two-dimensional coordinates of several points, and solve for the cameras' extrinsic parameters. Using the image two-dimensional coordinates and spatial coordinates of several fixed points, the PnP algorithm computes the camera extrinsic parameters from the four known corner points together with the calibrated intrinsic and distortion parameters, which simultaneously fixes the origin of the world coordinate system.
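For illustration, the extrinsic solve of step S3 might look as follows with OpenCV's solvePnP; the four reference-point coordinates below are made-up example values, and K and dist are the intrinsics from step S1:

```python
# Extrinsic-calibration sketch: four reference points with hand-measured world
# coordinates fix the world frame and yield [R|t] for one camera.
import cv2
import numpy as np

def solve_extrinsics(K, dist, world_pts, image_pts):
    """Return rotation R (3x3), translation t (3x1), camera centre (3,)."""
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    if not ok:
        raise RuntimeError("solvePnP failed")
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    cam_center = (-R.T @ tvec).ravel()  # camera position in world coordinates
    return R, tvec, cam_center

# Example layout (assumed): a 500 mm square on the floor, first corner = origin
world_pts = np.float32([[0, 0, 0], [500, 0, 0], [500, 500, 0], [0, 500, 0]])
image_pts = np.float32([[312, 410], [620, 405], [615, 715], [318, 722]])
```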
S4. Transform the two-dimensional coordinate pose output by the visual tag through a change of coordinate system, converting the two-dimensional coordinates into three-dimensional space coordinates, as shown in Figure 3.
From the camera information entered into the computer in advance and the spatial relationship between the devices, the two-dimensional coordinates can be converted into camera coordinates, and the camera coordinates in turn into spatial position coordinates in the world coordinate system. The conversion to three-dimensional space coordinates is specifically:
capturing images of the visual tags on the trolley at the same time in the same scene from the three differently placed cameras; solving the camera extrinsic parameters with the PnP algorithm, given the coordinates of N 3D points in the world and their two-dimensional coordinates in the image; and converting a series of two-dimensional coordinates into three-dimensional coordinates using the camera intrinsic and extrinsic parameters, per formula (1):

Z_C · [u, v, 1]^T = A · B · [X_W, Y_W, Z_W, 1]^T    (1)

where A is the camera intrinsic matrix, B is the camera extrinsic matrix, (u, v) are the two-dimensional coordinates, (X_W, Y_W, Z_W) are the three-dimensional coordinates, and Z_C is the scale factor.
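A sketch of inverting formula (1) for a single camera: with A (here K) and B (here R, t) known, a pixel fixes the world point only up to the scale Z_C, so one coordinate must be constrained; assuming the tag lies on a plane of known height Z_W = h, the remaining unknowns follow from a 3x3 linear solve. The names K, R, t come from the snippets above; h is an assumed constant:

```python
# Back-projection of a pixel onto the plane Z_W = h (distortion already removed).
import numpy as np

def pixel_to_world(u, v, K, R, t, h=0.0):
    # Ray in normalised camera coordinates: Z_C * ray = R @ X_w + t
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # With X_w = (X, Y, h), rearrange to M @ (X, Y, Z_C) = -t - h * R[:, 2]
    M = np.column_stack([R[:, 0], R[:, 1], -ray])
    X, Y, Zc = np.linalg.solve(M, -t.ravel() - h * R[:, 2])
    return np.array([X, Y, h]), Zc
```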
S5. Build a pose-measurement model from the multi-view stereo vision model, as shown in Figure 3; solve for the three-dimensional pose with the least-squares method, and refine it with the triangle-centroid method to obtain the spatial pose of the trolley.
In practical applications the data are always noisy, so fusing the three views by least squares yields three mutually non-intersecting estimates of the measured object's three-dimensional coordinates; applying the centroid method to these three points then gives the optimal three-dimensional coordinates.
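One plausible reading of this fusion step, sketched with OpenCV's linear least-squares triangulation: each of the three camera pairs yields its own 3D estimate, and the centroid of the three points is returned. Ps and uvs are assumed to come from the earlier calibration and detection steps:

```python
# Three-view fusion sketch: pairwise triangulation, then centroid of the
# three (generally non-coincident) estimates, as in step S5.
import itertools
import cv2
import numpy as np

def fuse_three_views(Ps, uvs):
    """Ps: three 3x4 projection matrices P_i = K_i @ np.hstack([R_i, t_i]);
    uvs: three undistorted (u, v) pixels of the same tag centre."""
    estimates = []
    for i, j in itertools.combinations(range(3), 2):   # pairs (0,1),(0,2),(1,2)
        X_h = cv2.triangulatePoints(Ps[i], Ps[j],
                                    np.float32(uvs[i]).reshape(2, 1),
                                    np.float32(uvs[j]).reshape(2, 1))
        estimates.append((X_h[:3] / X_h[3]).ravel())   # dehomogenise
    return np.mean(estimates, axis=0)                  # centroid of 3 points
```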
S6. Obtain the pose information of the trolley from the three-dimensional space coordinates, plot its motion trajectory, and perform error analysis.
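For the trajectory drawing and error analysis, a simple matplotlib sketch; `track` is assumed to be the N x 3 array of fused positions over time, and `truth` an optional reference path used only for the error figure:

```python
# Trajectory plot and RMSE error analysis sketch.
import numpy as np
import matplotlib.pyplot as plt

def plot_trajectory(track, truth=None):
    fig, ax = plt.subplots()
    ax.plot(track[:, 0], track[:, 1], "b.-", label="measured")
    if truth is not None:
        ax.plot(truth[:, 0], truth[:, 1], "g--", label="reference")
        rmse = np.sqrt(np.mean(np.sum((track - truth) ** 2, axis=1)))
        ax.set_title(f"RMSE: {rmse:.1f} mm")
    ax.set_xlabel("X (mm)")
    ax.set_ylabel("Y (mm)")
    ax.set_aspect("equal")
    ax.legend()
    plt.show()
```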
This enables real-time trajectory prediction for the intelligent trolley, meets the requirement for precise indoor positioning, and improves flexibility and controllability.
Pose localization through multi-view machine vision and the visual-tag system gives the intelligent trolley sensorless indoor real-time positioning, improving on existing trolley positioning technology. AprilTag is a visual fiducial system for vision tasks such as robot localization and camera calibration, able to compute the precise position and orientation of the camera. The expense of fitting the trolley with costly sensors is thereby avoided, with good robustness and economy, and users are freed from the inconvenience and low efficiency of areas the Global Positioning System cannot reach, such as indoor trolley positioning. The visual-tag system used is likewise economical and reliable; multi-view vision is more precise than binocular vision and greatly improves the efficiency of indoor trolley estimation and prediction. Computing the three-dimensional pose from multiple views not only solves the depth-computation problem of a monocular camera but also achieves higher accuracy than a binocular camera, enabling more precise indoor positioning and satisfying the trolley's multi-angle, wide-area real-time positioning requirements.
This specific embodiment is merely an illustration of the present invention and does not limit it. After reading this specification, those skilled in the art may, as needed, make modifications to this embodiment that involve no creative contribution, and all such modifications falling within the scope of the claims of the present invention are protected by patent law.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110270322.8A | 2021-03-12 | 2021-03-12 | Intelligent trolley track prediction method based on multi-view vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112884832A CN112884832A (en) | 2021-06-01 |
CN112884832B (en) | 2022-10-21 |
Family
ID=76042455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110270322.8A (granted as CN112884832B, Active) | Intelligent trolley track prediction method based on multi-view vision | 2021-03-12 | 2021-03-12 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884832B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113781576B * | 2021-09-03 | 2024-05-07 | Beijing Institute of Technology | Binocular vision detection system, method and device for adjusting pose with multiple degrees of freedom in real time |
CN118470099B * | 2024-07-15 | 2024-09-24 | University of Jinan | Method and device for measuring object spatial posture based on monocular camera |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106527426A (en) * | 2016-10-17 | 2017-03-22 | Jiangsu University | Indoor multi-target track planning system and method |
CN108571971A (en) * | 2018-05-17 | 2018-09-25 | Beihang University | An AGV visual positioning system and method |
CN109658461A (en) * | 2018-12-24 | 2019-04-19 | The 20th Research Institute of China Electronics Technology Group Corporation | A UAV localization method using cooperative two-dimensional codes in a virtual simulation environment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109018591A (en) * | 2018-08-09 | 2018-12-18 | Shenyang Jianzhu University | An automatic labeling localization method based on computer vision |
CN108827316B * | 2018-08-20 | 2021-12-28 | Nanjing University of Science and Technology | Mobile robot visual positioning method based on improved Apriltag |
US10997448B2 (en) * | 2019-05-15 | 2021-05-04 | Matterport, Inc. | Arbitrary visual features as fiducial elements |
CN112364677A (en) * | 2020-11-23 | 2021-02-12 | Maxvision Technology Corp. | Robot vision positioning method based on two-dimensional code |
- 2021-03-12: Application CN202110270322.8A filed; granted as patent CN112884832B (Active)
Non-Patent Citations (2)
Title |
---|
Guo Zhenglong et al., "Pose Estimation for Multicopters Based on Monocular Vision and AprilTag", Proceedings of the 37th Chinese Control Conference, 2018-07-27, pp. 4717-4722 *
He Haonan et al., "Extended positioning and tracking applications of intelligent trolleys based on AprilTag", Modern Information Technology, 2020-08-25, vol. 4, no. 16, pp. 24-30 *
Also Published As
Publication number | Publication date |
---|---|
CN112884832A (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104933718B (en) | A physical coordinate positioning method based on binocular vision | |
CN104200086B (en) | Wide-baseline visible light camera pose estimation method | |
CN108734744B (en) | Long-distance large-view-field binocular calibration method based on total station | |
CN106651990B (en) | Indoor map construction method and indoor positioning method based on indoor map | |
CN110728715A (en) | Camera angle self-adaptive adjusting method of intelligent inspection robot | |
CN111210478B (en) | Method, medium and system for calibrating external parameters of multi-camera system without common field of view | |
KR102016636B1 (en) | Calibration apparatus and method of camera and rader | |
CN111220126A (en) | Space object pose measurement method based on point features and monocular camera | |
JP2017224280A (en) | Visual positioning-based navigation apparatus and method | |
CN108226938A (en) | A kind of alignment system and method for AGV trolleies | |
CN102779347A (en) | Method and device for tracking and locating target for aircraft | |
CN101852609A (en) | A ground obstacle detection method based on robot binocular stereo vision | |
CN106774296A (en) | A kind of disorder detection method based on laser radar and ccd video camera information fusion | |
CN111681283B (en) | Monocular stereoscopic vision-based relative pose calculation method applied to wireless charging alignment | |
CN112884832B (en) | Intelligent trolley track prediction method based on multi-view vision | |
CN102072706A (en) | Multi-camera positioning and tracking method and system | |
Aliakbarpour et al. | An efficient algorithm for extrinsic calibration between a 3d laser range finder and a stereo camera for surveillance | |
CN113920191B (en) | 6D data set construction method based on depth camera | |
CN112045655A (en) | Mobile robot pose measurement method and system for large-scale multi-site scene | |
Jung et al. | A novel 2.5 D pattern for extrinsic calibration of tof and camera fusion system | |
CN114413958A (en) | Monocular visual ranging and speed measurement method for unmanned logistics vehicles | |
CN112017248A (en) | 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics | |
Cheng et al. | 3d radar and camera co-calibration: A flexible and accurate method for target-based extrinsic calibration | |
CN113487726B (en) | Motion capture system and method | |
CN111243021A (en) | Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |