CN112964255A - Method and device for positioning marked scene - Google Patents


Info

Publication number
CN112964255A
Authority
CN
China
Prior art keywords
collision information
target area
collision
information
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911285578.5A
Other languages
Chinese (zh)
Inventor
于毅欣 (Yu Yixin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yiqi Shanghai Intelligent Technology Co Ltd
Original Assignee
Yiqi Shanghai Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yiqi Shanghai Intelligent Technology Co Ltd filed Critical Yiqi Shanghai Intelligent Technology Co Ltd
Priority to CN201911285578.5A
Priority to PCT/CN2020/134392 (WO2021115236A1)
Publication of CN112964255A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42: Determining position
    • G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An apparatus for locating a marked scene comprises feature positions, feature pictures, and a software system. A feature position is a position selected in the scene where a feature picture should be arranged or collected. A feature picture is a picture obtained by shooting, or a printed picture or laser picture posted in the scene, and marks image information that identifies a coordinate and related information in the target area. The software system stores the collision information table of a target area, loads that table, and verifies the reasonableness of the collected collision information to reduce human error. The invention can collect image features or arrange feature pictures on the ceiling, reducing the occlusion of parts of the scene by surrounding moving objects, and can verify the passability of various vehicles by adding a bounding box of a specified size.

Description

Method and device for positioning marked scene
Technical Field
The invention relates to the technical field of positioning, and in particular to a method and a device for positioning in a marked scene.
Background
The invention provides a method and a device for positioning from feature picture marks, which reduce the dependence on GPS positioning (or assist it) at least to some extent and serve as a data basis for navigation. One feature of the invention is that image features may be collected, or feature pictures arranged, on the ceiling to reduce the occlusion of parts of the scene by surrounding moving objects. Another feature is that the collision information collected for each scene can be verified graphically; this visualization tool can stretch 2D collision information by height to form 3D collision information. The invention can also verify the passability of various vehicles by adding bounding boxes of a specified size. Finally, arranging laser pictures avoids the interference that ambient light or dynamic light sources cause to plain feature pictures.
Disclosure of Invention
According to an aspect of the present disclosure, an apparatus for positioning in a marked scene comprises: feature positions, feature pictures, and a software system.
A feature position is a position selected in the scene where a feature picture should be arranged or collected.
A feature picture is a picture obtained by shooting, or a printed picture or laser picture posted in the scene; it marks image information that identifies a coordinate and related information in the target area.
The software system stores the collision information table of a target area, loads that table, and verifies the reasonableness of the collected collision information of the target area to reduce human error.
According to another aspect of the present disclosure, a method for positioning in a marked scene comprises the following steps:
Step one: create the collision information (collision profile) of the target area by measurement; for uncomplicated target areas the collision information is stored in 2D, in units of line segments or collision surfaces, in the collision information table corresponding to the target area.
Step two: select feature positions to collect or arrange feature pictures; collect the coordinates, orientation, passing direction, channel width, and similar information of each feature picture, and store it in the collision information table.
Step three: verify the collision information of the target area using a verification tool, which can also convert it into 3D collision information for review or presentation.
Preferably, the collision information of the target area is produced by measurement; for simpler scenes, typically only 2D collision information is made (which can be understood as the projection of the collision surfaces onto a horizontal plane).
Preferably, each collision surface is, in principle, recorded as one collision information record: its projection onto a horizontal plane is stored as the 2D collision information, and the coordinates of the center of the collision line segment serve as the coordinates of that record.
Preferably, arranged feature pictures or/and collected pictures are used as collision information carrying identifiable coordinates, orientation, passing direction, channel width and height, and the like, and this information is stored in the collision information table of the target area.
Preferably, the coordinates of collision information can be marked in two ways: using GPS values, or selecting a coordinate origin in the target area to establish coordinate axes (e.g., with north as the y-axis of a 2D coordinate system or the z, i.e. depth, axis of a 3D coordinate system) and marking positions by the relative coordinates of the collision information with respect to that origin.
Further, in most cases a coordinate origin is selected in the target area and coordinate axes are established, and the relative coordinates of each piece of collision information (e.g., its center) in those axes are found.
As an alternative, the edges of the target area may be marked with GPS so that GPS frames the target area (similar to outlining its contour).
As an alternative embodiment, GPS range values for each target area (similar to a bounding box) may be stored to roughly identify the extent of the target area; see the sketch below.
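For illustration, the two marking schemes can be bridged by projecting GPS fixes into the local coordinate system of the target area. The sketch below assumes an equirectangular approximation (adequate over the span of a single target area); the function and constant names are hypothetical and are not taken from the patent.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def gps_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Project a GPS fix to local (x, y) meters relative to a chosen origin.

    Equirectangular approximation; north is the +y axis, as suggested
    above, and east is the +x axis.
    """
    d_lat = math.radians(lat_deg - origin_lat_deg)
    d_lon = math.radians(lon_deg - origin_lon_deg)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat_deg))
    y = EARTH_RADIUS_M * d_lat
    return x, y
```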
Preferably, the arrangement or collection position of the feature pictures may be the ceiling (i.e., overhead).
Preferably, an arranged feature picture can be a picture with easily recognizable image features, or a picture with laser features.
Optionally, to maximize the effect of pictures with laser features, a dedicated camera may be used.
Preferably, feature positions are selected in the target area to collect or arrange feature pictures and gather the related information; they can be chosen at a fixed interval of meters (between 0.1 and 100 meters) or at distinctive places.
Preferably, each piece of data in the collision information table of the target area further contains: a line segment describing the length of the collision information (in the 3D case the segment may be extruded vertically into a plane), coordinates (of the segment's center point in the target area's coordinate system), the direction of the segment, the passing direction at the collision information (a vector), the width and height of the channel at the collision information (a line segment plus a direction representing the channel direction), and the maximum speed for passing through.
Further, if a piece of data describes collision information (or mark information) corresponding to a feature picture, it should also carry the ID of the feature picture and the picture's image content.
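As an illustration, one record of the collision information table might be modeled as below. This is a minimal sketch with hypothetical field names, not the patent's storage format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CollisionRecord:
    """One row of a target area's collision information table (hypothetical layout)."""
    length: float                           # length of the collision line segment (m)
    center: Tuple[float, float]             # segment midpoint in the target area's coordinates
    direction: Tuple[float, float]          # unit vector along the segment
    pass_direction: Optional[Tuple[float, float]] = None  # passing direction (vector)
    channel_width: Optional[float] = None   # width of the channel at this record (m)
    channel_height: Optional[float] = None  # height of the channel (m)
    max_speed: Optional[float] = None       # maximum passing speed
    feature_picture_id: Optional[str] = None  # null/invalid for plain collision records
    feature_picture: Optional[bytes] = None   # image content, if this record marks a feature
```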
Preferably, the software system reads all the data of a target area's entire collision information table at once for collision detection and positioning.
Preferably, the verification tool reads all data of the entire collision information table at once and then displays it according to the stored collision information, to facilitate verification by eye.
Preferably, the verification tool displays, textually and/or graphically, each record in the collision information table of the target area.
As an alternative, collision information may be displayed as a line segment, the passing direction (vector information) as a line segment with an arrow, and the channel width and height as a line segment (generally perpendicular to the collision segment).
As an alternative, a button may be added to the verification tool that turns the current 2D collision information into 3D collision information: for each segment, a parallel segment of the same length and direction is created at the height given in the configuration information, and the four vertices of the two segments are connected into two triangles, which form the collision information in 3D.
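A minimal sketch of this 2D-to-3D stretch; the function name is hypothetical and the vertex order is one possible choice.

```python
def segment_to_triangles(p1, p2, height):
    """Extrude a 2D collision segment vertically into two 3D triangles.

    Duplicates the segment at z = height and connects the four vertices,
    as described above.
    """
    a = (p1[0], p1[1], 0.0)
    b = (p2[0], p2[1], 0.0)
    c = (p2[0], p2[1], height)
    d = (p1[0], p1[1], height)
    return [(a, b, c), (a, c, d)]  # two triangles forming the collision quad
```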
As an alternative embodiment, the passability of the scene may be verified by dragging a bounding box (2D or 3D) through the scene or letting it find a path automatically.
Further, the vector connecting the previous and current mouse positions during a drag gives the movement direction of the bounding box, which stops or slides sideways when it meets collision information or a channel-width limit.
Further, when the bounding box finds a path automatically, it moves from its current position to the position clicked with the mouse: waypoint information is first obtained from the 2D collision information by pathfinding (e.g., A* search), and trial movement (hypothetical or simulated movement over a series of configured angles or angle ranges) is then carried out along the directions connecting the waypoints until a feasible path is found, as in the sketch below.
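A sketch of one trial-movement step under the assumptions above. `collides` is a hypothetical predicate standing in for the bounding-box test against the 2D collision information; the step size and angle series are illustrative configuration values.

```python
import math
from typing import Callable, Optional, Sequence, Tuple

Point = Tuple[float, float]

def trial_move(pos: Point, waypoint: Point,
               collides: Callable[[Point, float], bool],
               step: float = 0.2,
               angle_offsets_deg: Sequence[float] = (0, 15, -15, 30, -30, 45, -45)) -> Optional[Point]:
    """Try to advance toward the next waypoint, sweeping configured angles.

    Tries the direct heading first, then each configured offset, and
    returns the first collision-free position, or None if all candidates
    are blocked (in which case the path can be replanned).
    """
    base = math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])
    for off in angle_offsets_deg:
        heading = base + math.radians(off)
        cand = (pos[0] + step * math.cos(heading),
                pos[1] + step * math.sin(heading))
        if not collides(cand, heading):
            return cand
    return None
```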
As an alternative implementation, the passability of the scene can be verified by moving the bounding box (2D or 3D) through the scene with the keyboard in a way that mimics vehicle motion, as in the sketch below: the up and down keys represent forward and backward, and the left and right keys represent left and right turns, similar to the motion of most vehicles.
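A minimal sketch of this keyboard scheme; the speed and turn increments are hypothetical.

```python
import math

def step_vehicle(x, y, heading, key, speed=0.1, turn=math.radians(5)):
    """Move a bounding box like a vehicle from arrow-key input.

    Up/down translate along the box's current heading; left/right only
    change the heading, as described above.
    """
    if key == "up":        # forward
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    elif key == "down":    # backward
        x -= speed * math.cos(heading)
        y -= speed * math.sin(heading)
    elif key == "left":    # turn left
        heading += turn
    elif key == "right":   # turn right
        heading -= turn
    return x, y, heading
```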
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 schematically shows the contents of one record in the collision information table.
Fig. 2 schematically shows a graphical representation of the passing direction, the channel width, and the collision information, in top view.
Fig. 3 schematically shows part of the graphical display of a complete collision information table, in top view.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
Step 1: select a point as the origin of the coordinate system of the target area, then measure the collision information of the whole target area; store each wall as a record of line segments, and store the passability in front of the measured wall (channel width, height, and passing direction) in the same record.
Step 2: each record in the collision information table, obtained by measurement and/or labeling, contains the collision information (collision surface information, or the projection of the collision surface onto the horizontal plane): the length of the line segment, its coordinates (relative coordinates in the coordinate system of the target area), and its orientation (a vector, angle, or radian value). As related information it also stores the passing direction (usually expressed as a vector), the channel width and height (the channel width expressed by a line segment and a vector, the passing height by a numeric value; see Fig. 2), the feature picture ID, the feature picture information, and the maximum passing speed.
It should be noted that not every collision information record needs a feature picture; some records are plain collision information, in which case the value of the feature picture ID entry is null or invalid and the corresponding feature picture is empty.
Step 3: use the verification tool to load and display the data of the whole collision information table of the target area. As an alternative embodiment, a line segment is displayed for each piece of collision information, the direction as an arrow, and the channel width and height as a line segment and a height value, so that the reasonableness and completeness of the data in the table can be checked.
Step 4: the verification tool can verify the passability of a bounding box of a given size. The bounding box simulates a vehicle, and its size can be specified by input. When verifying passability, the position the box should move to can be specified by dragging or clicking the mouse, and the up, down, left and right keys of the keyboard can control the box's movement to simulate a vehicle's forward, backward, left and right motion.
Further, while the mouse is dragged, the coordinates of two consecutive mouse positions are read and connected in order to obtain a vector; the direction of this vector is the direction in which the bounding box should move. The box stops if it intersects collision information during the move, and it may turn toward its intended direction of travel while moving.
Further, when the up, down, left and right keys operate the bounding box to simulate vehicle passability, the verification tool mimics the motion of a vehicle: the up and down keys move the box forward and backward (the front and back of the box being the north and south of its local coordinate system, i.e., the 12 o'clock and 6 o'clock directions), and the left and right keys steer left and right to adjust the box's angle.
When the bounding box meets a collision body it stops or bounces, so that its motion is never permanently stuck; a sketch of the underlying intersection test follows.
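For illustration, the stop-on-collision check can be reduced to 2D segment intersection between the bounding box's edges and the stored collision segments. This is a sketch, not the patent's implementation; collinear touching is ignored for brevity.

```python
from typing import Iterable, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]

def _orient(a: Point, b: Point, c: Point) -> float:
    """Twice the signed area of triangle abc; the sign gives the turn direction."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p: Segment, q: Segment) -> bool:
    """True if the two segments properly cross each other."""
    d1 = _orient(q[0], q[1], p[0])
    d2 = _orient(q[0], q[1], p[1])
    d3 = _orient(p[0], p[1], q[0])
    d4 = _orient(p[0], p[1], q[1])
    return d1 * d2 < 0 and d3 * d4 < 0

def move_blocked(box_edges: Iterable[Segment], collision_segments: Iterable[Segment]) -> bool:
    """Stop (or bounce) if any box edge crosses any collision segment."""
    collision_segments = list(collision_segments)
    return any(segments_intersect(e, c) for e in box_edges for c in collision_segments)
```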
The above are specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that those skilled in the art can readily conceive within the technical scope of the invention fall within its protection scope; the protection scope of the present invention is therefore defined by the appended claims.

Claims (7)

1. An apparatus for locating a marked scene, comprising: feature positions, feature pictures, and a software system;
the feature position is a position selected in the scene where a feature picture should be arranged or collected;
the feature picture is a picture obtained by shooting, or a printed picture or laser picture posted in the scene, and marks a coordinate in the target area together with image information used to identify related information;
the software system stores the collision information table of a target area, loads that table, and verifies the reasonableness of the collected collision information of the target area to reduce human error.
2. A method for positioning in a marked scene, comprising:
step one: creating the collision information (collision profile) of the target area by measurement; for uncomplicated target areas the collision information is stored in 2D, in units of line segments or collision surfaces, in the collision information table corresponding to the target area;
step two: selecting feature positions to collect or arrange feature pictures, collecting the coordinates, orientation, passing direction, channel width, and similar information of the feature pictures, and storing it in the collision information table;
step three: verifying the collision information of the target area using a verification tool, which can also convert it into 3D collision information for review or presentation.
3. "step one" according to claim 2, wherein collision information of a target area is created, collision information (collision profile) of the target area is created by a measurement method, and for uncomplicated target areas, the collision information is stored in a 2D manner, and the collision information can be stored in units of line segments or collision surfaces in a collision information table corresponding to the target area, and the method includes:
the method of measurement is used for making collision information of a target area, and for simpler scenes, only 2D collision information is generally made (the projection of a collision surface on a horizontal plane can be understood);
the characteristic pictures are arranged or/and the collected pictures are used as collision information with coordinates, orientation, passing direction, channel width and height and the like which can be identified, and the information is stored in a collision information table of the target area;
two ways of marking the coordinates of the collision information are available, one is to use a GPS value, the other is to select a coordinate origin in the target area to establish a coordinate axis (for example, the y-axis direction of a 2D coordinate axis or the z-axis depth direction of a 3D coordinate axis according to the north direction), and the position is marked according to the relative coordinates of the collision information and the coordinate axis origin;
as an alternative, the edge of the target area may be marked by using GPS to frame the target area by GPS (similar to describing the outline of the target area);
as an alternative embodiment, the GPS range value (similar to a bounding box) for each target area may be stored to generally identify the range of the target area;
the arrangement or collection location of the feature pictures may be the ceiling (i.e., above);
the arranged characteristic pictures can be pictures with image characteristics which are easy to identify, and can also be pictures with laser characteristics.
4. The "step two" of claim 2, wherein selecting feature positions to collect or arrange feature pictures, collecting the coordinates, orientation, passing direction, channel width, and similar information of the feature pictures, and storing it in the collision information table, comprises:
selecting feature positions in the target area to collect or arrange feature pictures and gather the related information, the feature positions being chosen at a fixed interval of meters (between 0.1 and 100 meters) or at distinctive places;
each piece of data in the collision information table of the target area further containing a line segment (which may be extruded vertically into a plane in the 3D case), coordinates (of the segment's center point in the coordinate system of the target area), the direction of the segment, the passing direction at the collision information (a vector), the width and height of the channel at the collision information (a line segment plus a direction representing the channel direction), and the maximum speed for passing through;
further, if a piece of data describes collision information (or mark information) corresponding to a feature picture, the ID of the feature picture and the picture's image content should also be present;
the software system reading all the data of a target area's entire collision information table at once for collision detection and positioning.
5. "step three" according to claim 2, characterized in that the collision information of the target area is verified using a verification tool and can be converted into 3D collision information for inspection or presentation, comprising:
the verification tool is used for reading all data in the whole collision information table at one time and then displaying the data according to the stored collision information so as to facilitate the verification of human eyes;
the verification tool displays or graphically displays according to each record in the collision information table of the target area;
as an alternative embodiment, the collision information may be displayed as a line segment, and the direction (vector information) is displayed as a line segment with an arrow, and the channel width and height are displayed as a line segment (generally perpendicular to the line segment of the collision information);
as an alternative, a button may be added to the verification tool to change the current 2D information into 3D collision information, specifically, a parallel line segment with the same length and direction is created for a line segment according to the height in the configuration information, and then four vertices of two line segments are connected into two triangles, which is the collision information in 3D;
as an alternative embodiment, the passing of the scene may be verified by using a method of dragging or automatically seeking a way in the scene by a container box (2D or 3D container box);
as an alternative implementation, the pass-through of the scene can be verified by operating the containers (2D or 3D containers) to move in the scene in a manner simulating the vehicle form using the up, down, left and right keys of the keyboard, which represent forward and backward, and the left and right keys represent left and right turns, similar to the motion of most vehicles.
6. A computer readable-writable medium on which a computer program and related data are stored, wherein the program, when executed by a processor, implements the relevant computing functions and content of the invention.
7. An electronic device, comprising:
one or more processors;
a storage device to store one or more programs.
Application CN201911285578.5A, filed 2019-12-13 (priority date 2019-12-13): Method and device for positioning marked scene. Status: Pending. Published as CN112964255A (en).

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911285578.5A CN112964255A (en) 2019-12-13 2019-12-13 Method and device for positioning marked scene
PCT/CN2020/134392 WO2021115236A1 (en) 2019-12-13 2020-12-08 Method and device for positioning by means of scene marking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911285578.5A CN112964255A (en) 2019-12-13 2019-12-13 Method and device for positioning marked scene

Publications (1)

Publication Number Publication Date
CN112964255A (en) 2021-06-15

Family

ID=76270778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285578.5A Pending CN112964255A (en) 2019-12-13 2019-12-13 Method and device for positioning marked scene

Country Status (2)

Country Link
CN (1) CN112964255A (en)
WO (1) WO2021115236A1 (en)


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3841220B2 (en) * 2004-01-30 2006-11-01 船井電機株式会社 Autonomous traveling robot cleaner
CN102945557B (en) * 2012-10-12 2016-03-16 北京海鑫科金高科技股份有限公司 Based on the vector on-site drawing drawing method of mobile terminal
DE102015119501A1 (en) * 2015-11-11 2017-05-11 RobArt GmbH Subdivision of maps for robot navigation
CN106239517B (en) * 2016-08-23 2019-02-19 北京小米移动软件有限公司 The method, apparatus that robot and its realization independently manipulate
CN106530946A (en) * 2016-11-30 2017-03-22 北京贝虎机器人技术有限公司 Indoor map editing method and device
CN106643727A (en) * 2016-12-02 2017-05-10 江苏物联网研究发展中心 Method for constructing robot navigation map
CN109855628A (en) * 2019-03-05 2019-06-07 异起(上海)智能科技有限公司 Positioning, air navigation aid and device between a kind of indoor or building and computer-readable write medium and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272379A (en) * 2022-08-03 2022-11-01 杭州新迪数字工程系统有限公司 Projection-based three-dimensional grid model outline extraction method and system
CN115272379B (en) * 2022-08-03 2023-11-28 上海新迪数字技术有限公司 Projection-based three-dimensional grid model outline extraction method and system
CN116229560A (en) * 2022-09-08 2023-06-06 广东省泰维思信息科技有限公司 Abnormal behavior recognition method and system based on human body posture
CN116229560B (en) * 2022-09-08 2024-03-19 广东省泰维思信息科技有限公司 Abnormal behavior recognition method and system based on human body posture

Also Published As

Publication number Publication date
WO2021115236A1 (en) 2021-06-17


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210615