WO2021115236A1 - Method and device for marking a scene for positioning - Google Patents
Method and device for marking a scene for positioning
- Publication number
- WO2021115236A1 (PCT/CN2020/134392)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- collision information
- target area
- collision
- feature
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
Definitions
- the present invention relates to the technical field of positioning, in particular to a method and device for marking scenes for positioning.
- the present invention provides a method and device for positioning based on feature picture marks, which can at least reduce the dependence on GPS signal positioning or assist positioning, and serve as a data basis for navigation.
- One feature of the present invention is that it can collect image features or arrange feature pictures on the ceiling to reduce the possible occlusion of parts of the scene due to surrounding moving objects.
- Another feature of the present invention is that it can graphically verify the results of collision information collection for each scene; a feature of this visualization tool is that it can extrude 2D collision information by height to form 3D collision information.
- the present invention can verify the passability of various vehicles by adding a bounding box of a specified size.
- a feature of the present invention is that laser pictures can be arranged to avoid interference from ambient or dynamic light sources with plain feature pictures.
- a device for marking a scene for positioning is characterized by comprising: feature locations, feature pictures, and a software system.
- the feature location is the selected location in the scene where feature pictures should be arranged or collected.
- the feature picture is a picture obtained by photographing, or a printed picture or laser picture for posting, used to mark a coordinate in the target area and related information for identification.
- the software system is a system for storing the collision information table of the target area, loading the collision information table of the target area, and verifying the rationality of the collision information collection of the target area to reduce human error.
- a method for marking a scene for positioning is characterized in that it comprises:
- Step 1: Make the collision information of the target area; the collision information (collision contour) of the target area is produced by measurement.
- For uncomplicated target areas, the collision information is stored in 2D form; it can be stored, by line segments or collision surfaces, in the collision information table corresponding to the target area.
- Step 2: Select feature locations to collect or arrange feature pictures, and collect the coordinates, orientation, passing direction, and channel width and height at each feature picture; store this information in the collision information table as well.
- Step 3 Use a verification tool to verify the collision information of the target area, which can be converted into 3D collision information for inspection or display.
- the measurement method is used to produce the collision information of the target area.
- for simpler scenes, usually only 2D collision information is produced (it can be understood as the projection of the collision surface onto the horizontal plane).
- a collision information record is made for one collision surface, wherein the projection of the collision surface on the horizontal plane is recorded as 2D collision information, and the coordinates of the center of the collision information line segment are used as the coordinates of the collision information.
- the information is stored in the collision information table of the target area.
- there are two ways to mark the coordinates of the collision information: one is to use GPS values; the other is to select a coordinate origin in the target area and establish coordinate axes (for example, with north as the y-axis direction of the 2D axes or the z-axis depth direction of the 3D axes), marking positions by the relative coordinates of the collision information with respect to this origin.
- a coordinate origin is selected in the target area to establish coordinate axes, so as to obtain the relative coordinates of each piece of collision information (for example, its center) with respect to those axes.
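The local-coordinate convention above (north as +y, positions as offsets from a chosen origin) can be sketched as follows. The conversion helper is an illustrative assumption: the patent does not prescribe how GPS values map to local metres, so a small-area equirectangular approximation is used here.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Convert a GPS fix to metres relative to a chosen origin,
    with north as the +y axis and east as the +x axis.

    Equirectangular approximation: adequate for small target areas,
    an assumption not specified by the patent."""
    dlat = math.radians(lat - origin_lat)
    dlon = math.radians(lon - origin_lon)
    x = dlon * math.cos(math.radians(origin_lat)) * EARTH_RADIUS_M  # east offset
    y = dlat * EARTH_RADIUS_M                                       # north offset
    return x, y
```

Moving north from the origin then increases y only, matching the "north as the y-axis direction" convention.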
- GPS can be used to mark the edge of the target area, so that the target area can be selected by GPS (similar to describing the outline of the target area).
- the GPS range value (similar to a bounding box) of each target area can be stored to roughly identify the range of the target area.
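The stored GPS range value can then serve as a rough containment test for selecting the target area. A minimal sketch (the tuple layout of the bounding box is an illustrative assumption):

```python
def in_target_area(lat, lon, bbox):
    """Rough check whether a GPS fix falls inside a target area's
    stored GPS range value (similar to a bounding box).

    bbox = (min_lat, min_lon, max_lat, max_lon) -- layout assumed
    for illustration, not specified by the patent."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
```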
- the arrangement or collection position of the feature pictures may be the ceiling (that is, above).
- the arranged feature picture may be a picture with easily recognized image features, or a picture with laser features.
- select features in the target area to collect or arrange feature pictures, and collect relevant information.
- feature locations can be selected at fixed intervals (between 0.1 and 100 meters), or feature pictures can be collected or arranged at distinctive places.
- each piece of data in the collision information table of the target area also contains: a line segment describing the extent of the collision information (in 3D, the segment can be expanded vertically into a surface); coordinates (of the segment's center point in the target area's coordinate system); the direction of the segment; the passing direction at the collision information (a vector); the channel width and height at the collision information (the segment and direction indicate the direction of the passage); and the maximum passing speed through this location.
- if a piece of data describes the collision information (or marker information) corresponding to a feature picture, it should also contain the ID of the feature picture and the image content of the feature picture.
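One way to sketch such a record of the collision information table (field names and types are illustrative assumptions, not specified by the patent):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CollisionRecord:
    """One row of a target area's collision information table.
    Field names are illustrative, not taken from the patent text."""
    segment: Tuple[Tuple[float, float], Tuple[float, float]]  # 2D segment endpoints
    center: Tuple[float, float]          # segment midpoint in area coordinates
    direction: Tuple[float, float]       # vector along the segment
    pass_direction: Tuple[float, float]  # allowed passing direction (a vector)
    channel_width: float
    channel_height: float
    max_speed: float
    feature_id: Optional[str] = None     # set only when the record marks a feature picture
    feature_image: Optional[bytes] = None
```

The optional `feature_id` / `feature_image` fields mirror the rule that only records describing a feature picture carry its ID and image content.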
- the software system reads out all the data in the entire collision information table of a target area at one time for collision detection and positioning.
- the verification tool is a tool that reads out all the data in the entire collision information table at one time, and then displays it according to the stored collision information, so as to facilitate the verification by the human eye.
- the verification tool will display or graphically display each record in the collision information table of the target area.
- the collision information can be displayed as a line segment
- the passing direction (vector information) is displayed as a line segment with arrows
- the channel width and height are displayed as a line segment (usually perpendicular to the line segment of the collision information).
- a button can be added to the verification tool to change the current 2D collision information into 3D collision information.
- the specific method is to create, for a line segment, a parallel segment of the same length and direction at the height given in the configuration information, and then connect the four vertices of the two segments into two triangles; this is the collision information in 3D.
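The 2D-to-3D step above can be sketched as follows; the patent only specifies creating the parallel segment and joining the four vertices into two triangles, so the coordinate conventions and names here are illustrative.

```python
def extrude_segment(p1, p2, height):
    """Turn a 2D collision segment into two 3D triangles by creating a
    parallel segment at the given height and connecting the four vertices."""
    x1, y1 = p1
    x2, y2 = p2
    a = (x1, y1, 0.0)       # bottom vertices (original 2D segment)
    b = (x2, y2, 0.0)
    c = (x2, y2, height)    # top vertices (parallel segment at the configured height)
    d = (x1, y1, height)
    # Two triangles sharing the diagonal a-c form the vertical collision quad.
    return [(a, b, c), (a, c, d)]
```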
- a method of dragging the containment box (2D or 3D containment box) in the scene or automatic pathfinding can be used to verify the passage of the scene.
- the vector direction of the line connecting two consecutive mouse positions during dragging is the moving direction of the containment box; it stops or slides when it encounters collision information or a channel-width limit.
- the containment box automatically finds the path, it moves from the current position to the position clicked by the mouse.
- waypoint information is found from the 2D collision information (for example, by the A* pathfinding method), and movement along the waypoints is then tried (hypothetical movement through a series of configured angles or angle ranges, or simulated movement) until a feasible path is found.
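The patent names A* ("A-star") as one way to find waypoint information. A minimal sketch over an occupancy grid derived from the 2D collision information (the grid representation and function names are illustrative assumptions, not from the patent):

```python
import heapq

def a_star(grid, start, goal):
    """A* search over a 2D occupancy grid (True = blocked by collision info).
    Returns a list of (row, col) waypoints from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = {start}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]] and nxt not in seen):
                seen.add(nxt)
                heapq.heappush(open_set,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```

The returned waypoints would then feed the trial-movement step described above.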
- the up and down keys represent forward and backward
- the left and right keys represent turn left and right.
- Figure 1 schematically shows the content of a record in the collision information table.
- Figure 2 schematically shows a graphical representation of the passing direction, channel width and collision information, top view.
- Fig. 3 schematically shows part of the display of a complete collision information table, top view.
- Step 1: Select a point as the origin of the coordinate system of the target area, then measure the collision information of the entire target area; store each wall as a record in line-segment form, and measure the passage in front of the wall, storing its channel width, height, and passing direction in the same record.
- Each record in the collision information table contains collision information (collision surface information, or the projection of the collision surface onto the horizontal plane) obtained by measurement or marking, storing the segment length, the segment coordinates (relative coordinates in the target area's coordinate system), and the orientation (a vector, angle, or radian value), as well as passage-related information: the passing direction (usually represented by a vector), the channel width and height (a line segment and value representing the width and passing height of the channel; refer to Figure 2), the feature picture ID, the feature picture information, and the maximum passing speed.
- Step 3 Use the verification tool to load the data of the entire collision information table of this target area, and display it.
- the line segment is displayed according to the collision information
- the passing direction is displayed as an arrow
- the channel width and height are displayed as a line segment and a height value, in order to check the rationality and completeness of the data in the collision information table.
- Step 4 The verification tool can be used to verify the passability of a bounding box of a certain size.
- the bounding box is used to simulate a car.
- the size of the bounding box can be specified by input.
- while the mouse drags the containment box, the coordinates of two consecutive mouse positions are read and connected in order to obtain a vector.
- the direction of the vector is the direction in which the containment box will move; if the containment box intersects collision information during the movement, it stops, and it can be turned toward the intended direction of movement while moving.
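The drag-to-verify behaviour can be sketched with a standard orientation-based segment-intersection test. The axis-aligned box representation and function names are illustrative assumptions; the patent does not specify the geometry routines.

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper 2D segment intersection via orientation tests
    (collinear overlaps ignored for brevity)."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def try_move(box, delta, collision_segments):
    """Move an axis-aligned containment box (min_x, min_y, max_x, max_y) by
    delta = mouse_now - mouse_before; stay put if any edge of the moved box
    would cross a stored collision segment."""
    nx0, ny0 = box[0] + delta[0], box[1] + delta[1]
    nx1, ny1 = box[2] + delta[0], box[3] + delta[1]
    corners = [(nx0, ny0), (nx1, ny0), (nx1, ny1), (nx0, ny1)]
    edges = list(zip(corners, corners[1:] + corners[:1]))
    for q1, q2 in collision_segments:
        if any(segments_intersect(a, b, q1, q2) for a, b in edges):
            return box  # blocked by collision information: stop
    return (nx0, ny0, nx1, ny1)
```

Note this sketch only tests the final position; a production tool would sweep the box along the drag vector so fast drags cannot jump through thin walls.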
- the verification tool can simulate vehicle movement: the up and down keys move the containment box forward and backward (the front and back of the containment box are parts of its bounding box), and the left and right keys turn it left and right to adjust the angle of the containment box.
Abstract
Description
Claims (7)
- A device for marking a scene for positioning, characterized in that it comprises: feature locations, feature pictures, and a software system; the feature location is a place selected in the scene where feature pictures should be arranged or collected; the feature picture is a picture obtained by photographing, or a printed picture or laser picture for posting, serving as image information used for identification that marks a coordinate in the target area and related information; the software system is a system for storing the collision information table of the target area, loading the collision information table of the target area, and verifying the rationality of the collision information collection of the target area in order to reduce human error.
- A method for marking a scene for positioning, characterized in that it comprises: Step 1: make the collision information of the target area, producing the collision information (collision contour) of the target area by measurement; for uncomplicated target areas, the collision information is stored in 2D form and may be stored, by line segments or collision surfaces, in the collision information table corresponding to the target area; Step 2: select feature locations to collect or arrange feature pictures, collect the coordinates, orientation, passing direction, and channel width and height of each feature picture, and store this information in the collision information table as well; Step 3: use a verification tool to verify the collision information of the target area, which can be converted into 3D collision information for inspection or display.
- According to "Step 1" of claim 2, characterized in that making the collision information of the target area, producing the collision information (collision contour) of the target area by measurement, storing the collision information in 2D form for uncomplicated target areas, and storing it by line segments or collision surfaces in the collision information table corresponding to the target area, comprises: using measurement to produce the collision information of the target area, where for simpler scenes usually only 2D collision information is produced (which can be understood as the projection of the collision surface onto the horizontal plane); arranging feature pictures and/or collecting pictures to serve as recognizable collision information carrying coordinates, orientation, passing direction, and channel width and height, and storing this information in the collision information table of the target area; marking the coordinates of the collision information in one of two ways: using GPS values, or selecting a coordinate origin in the target area to establish coordinate axes (for example, with north as the y-axis direction of the 2D axes or the z-axis depth direction of the 3D axes) and marking positions by the relative coordinates of the collision information with respect to this origin; as an optional embodiment, GPS can be used to mark the edge of the target area, so that the target area can be selected by GPS (similar to describing the outline of the target area); as an optional embodiment, the GPS range value of each target area (similar to a bounding box) can be stored to roughly identify the extent of the target area; the arrangement or collection position of the feature pictures may be the ceiling (that is, above); the arranged feature picture may be a picture with easily recognized image features, or a picture with laser features.
- According to "Step 2" of claim 2, characterized in that selecting feature locations to collect or arrange feature pictures, collecting the coordinates, orientation, passing direction, and channel width and height of each feature picture, and storing this information in the collision information table, comprises: selecting feature locations in the target area to collect or arrange feature pictures and collecting the related information, where feature locations can be selected at fixed intervals (between 0.1 and 100 meters) or feature pictures can be collected or arranged at distinctive places; each piece of data in the collision information table of the target area further contains a line segment describing the extent of the collision information (in 3D, the segment can be expanded vertically into a surface), coordinates (of the segment's center point in the target area's coordinate system), the direction of the segment, the passing direction at the collision information (a vector), the channel width and height at the collision information (the segment and direction indicate the direction of the passage), and the maximum passing speed through this location; further, if a piece of data describes the collision information (or marker information) corresponding to a feature picture, it should also contain the ID of the feature picture and the image content of the feature picture; the software system reads all the data in the entire collision information table of a target area at once, for use in collision detection and positioning.
- According to "Step 3" of claim 2, characterized in that using a verification tool to verify the collision information of the target area, which can be converted into 3D collision information for inspection or display, comprises: the verification tool reads all the data in the entire collision information table at once and displays it according to the stored collision information, to facilitate verification by the human eye; the verification tool displays, or graphically displays, each record in the collision information table of the target area; as an optional embodiment, the collision information can be displayed as a line segment, the passing direction (vector information) as a line segment with an arrow, and the channel width and height as a line segment (usually perpendicular to the segment of the collision information); as an optional embodiment, a button can be added to the verification tool to turn the current 2D information into 3D collision information, specifically by creating, for a line segment, a parallel segment of the same length and direction at the height given in the configuration information, and then connecting the four vertices of the two segments into two triangles, which is the collision information in 3D; as an optional embodiment, the passability of the scene can be verified by dragging a containment box (2D or 3D) in the scene or by automatic pathfinding; as an optional embodiment, the passability of the scene can be verified by operating the containment box (2D or 3D) with the keyboard's arrow keys in a vehicle-like manner, similar to the way most vehicles move: the up and down keys represent forward and backward, and the left and right keys represent turning left and right.
- A computer-readable and writable medium on which a computer program and related data are stored, characterized in that, when the program is executed by a processor, the relevant computing functions and content of the present invention are implemented.
- An electronic device, characterized in that it comprises: one or more processors; and a storage device for storing one or more programs.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911285578.5A (CN112964255A) | 2019-12-13 | 2019-12-13 | Method and device for marking a scene for positioning
CN201911285578.5 | 2019-12-13 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2021115236A1 | 2021-06-17
Family
ID=76270778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2020/134392 (WO2021115236A1) | Method and device for marking a scene for positioning | 2019-12-13 | 2020-12-08
Country Status (2)
Country | Link
---|---
CN | CN112964255A
WO | WO2021115236A1
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN115272379B * | 2022-08-03 | 2023-11-28 | 上海新迪数字技术有限公司 | Projection-based method and system for extracting the outer contour of a three-dimensional mesh model
CN116229560B * | 2022-09-08 | 2024-03-19 | 广东省泰维思信息科技有限公司 | Method and system for abnormal behavior recognition based on human pose
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20050171644A1 * | 2004-01-30 | 2005-08-04 | Funai Electric Co., Ltd. | Autonomous mobile robot cleaner
CN102945557A * | 2012-10-12 | 2013-02-27 | 北京海鑫科金高科技股份有限公司 | Vector site-map drawing method based on a mobile terminal
CN106239517A * | 2016-08-23 | 2016-12-21 | 北京小米移动软件有限公司 | Robot and method and device for realizing autonomous control thereof
CN106530946A * | 2016-11-30 | 2017-03-22 | 北京贝虎机器人技术有限公司 | Method and device for editing an indoor map
CN106643727A * | 2016-12-02 | 2017-05-10 | 江苏物联网研究发展中心 | Method for constructing a map for robot navigation
CN108885453A * | 2015-11-11 | 2018-11-23 | 罗伯特有限责任公司 | Division of a map for robot navigation
CN109855628A * | 2019-03-05 | 2019-06-07 | 异起(上海)智能科技有限公司 | Indoor or inter-building positioning and navigation method and device, computer-readable and writable medium, and electronic device
- 2019-12-13: application filed in CN as CN201911285578.5A (published as CN112964255A, status: pending)
- 2020-12-08: international application filed as PCT/CN2020/134392 (published as WO2021115236A1, status: application filing)
Also Published As
Publication number | Publication date
---|---
CN112964255A | 2021-06-15
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20899586; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: PCT application non-entry in European phase | Ref document number: 20899586; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/10/2022)