CN113779162A - Method and system for marking scene - Google Patents
- Publication number
- CN113779162A (application CN202010125387.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- scene
- layer
- creating
- storing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
Abstract
A method of scene marking, comprising: collecting and creating basic information of a scene to describe its basic situation; collecting and creating path information of the scene or/and a road surface for storing the passing rules of an area; collecting and creating layer information for storing information about vertically stacked layers or layer planes; and collecting and creating connection information for storing the connection relations between objects. The invention supports marking a three-dimensional scene to guide navigation information or a navigation process, facilitating path finding or path planning for navigation. Another feature of the invention is that feature pictures are arranged in the scene and the coordinates of each feature picture are recorded, providing a basis for re-rectifying coordinates inside the scene.
Description
Technical Field
The invention relates to the technical field of computers and artificial intelligence, and in particular to a method and system for marking scenes.
Background
GPS (Global Positioning System) can often return only 2D coordinate information, and many current scene description methods are too general. The invention aims to provide a more detailed marking method for three-dimensional scenes, in order to describe and collect fixed or relatively fixed three-dimensional objective scene information, and to support mapping, description, and storage over a larger range. Another feature of the invention is support for marking a three-dimensional scene to guide navigation information or a navigation process, facilitating path finding or path planning for navigation. A further feature is that the method of adding feature pictures can add target objects to the scene that can be located more accurately.
Disclosure of Invention
According to one aspect of the disclosure, a method for scene marking comprises:
collecting and creating basic information of a scene to describe the basic situation of the scene;
collecting and creating path information of the scene or/and a road surface for storing the passing rules of an area;
collecting and creating layer information for storing information about vertically stacked layers or layer planes;
collecting and creating connection information for storing the connection relations between objects;
collecting and creating feature picture information, wherein a feature picture is a photographed picture, a printed picture for posting, or a laser-projected picture whose image information can be recognized, used to mark a coordinate and related information in a target area.
Preferably, edge vertex information is created for the scene, expressed either as coordinates relative to some coordinate system origin or as coordinates acquired by a global positioning technique.
Preferably, a bounding box for this scene or sub-scene is generated from the edge vertices.
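As a non-authoritative sketch of this step (the vertex tuple format and axis conventions are assumptions, not specified by the disclosure), the bounding box of a scene can be derived from its edge vertices as follows:

```python
def bounding_box(vertices):
    """Compute the axis-aligned bounding box of a list of 3D edge vertices.

    vertices: list of (x, y, z) tuples.
    Returns ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    """
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Example: a rectangular room measured relative to a local origin.
room = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (5.0, 4.0, 0.0), (0.0, 4.0, 0.0)]
print(bounding_box(room))  # ((0.0, 0.0, 0.0), (5.0, 4.0, 0.0))
```

The same box also serves later steps (sub-scene tests, storage retrieval by bounding box).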
Preferably, as an alternative implementation, one way to decide whether an area should be divided into a sub-scene is by its size: when the size exceeds a certain value (some value between 1 and 100,000,000 square meters, for example 2 square meters), the continuous area can be regarded as a sub-scene.
Preferably, as an alternative implementation, another way to decide whether an area should be divided into a sub-scene uses both the size of the area and the shape of its bounding box: when the size exceeds a certain value and the aspect ratio is smaller than one threshold (e.g. 10) and larger than another (e.g. 0.1), the continuous area can be regarded as a sub-scene.
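The two sub-scene rules above can be combined into one illustrative predicate (the default thresholds are the example values from the text; the function name and signature are hypothetical):

```python
def is_sub_scene(area_m2, width, depth,
                 min_area=2.0, max_ratio=10.0, min_ratio=0.1):
    """Decide whether a continuous region qualifies as a sub-scene.

    The area must exceed a threshold, and the bounding-box aspect ratio
    (width/depth) must fall strictly between min_ratio and max_ratio.
    Defaults use the example values given in the text.
    """
    if area_m2 <= min_area:
        return False
    ratio = width / depth
    return min_ratio < ratio < max_ratio

print(is_sub_scene(50.0, 10.0, 5.0))    # True  (large enough, compact shape)
print(is_sub_scene(50.0, 100.0, 0.5))   # False (aspect ratio 200 is too elongated)
print(is_sub_scene(1.0, 1.0, 1.0))      # False (area below threshold)
```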
Preferably, as an alternative implementation, a point (e.g. the center point of the bounding box) may be selected inside the bounding box of the scene and connected to the surrounding edge vertices to create a triangle set or a sector set.
Preferably, as an alternative implementation, a vertex may be added on the connecting line between an edge vertex of the scene and the center point of the scene's bounding box, offset toward or away from the center point (by a fixed distance or a fixed ratio of the segment length, refer to fig. 1), and connected with the edge vertices to form an edge buffer area and the resulting triangle list or sector list of the edge buffer area.
Preferably, as an alternative implementation, a vertex may be added in the direction toward the interior of the figure (by a fixed distance or a fixed ratio of the segment length, refer to fig. 4), and connected with the edge vertices to form an edge buffer area and the resulting triangle list or sector list of the edge buffer area.
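A minimal sketch of the edge buffer construction (assuming 2D vertices, a polygon walked in order, and a fixed length-ratio offset toward the center; the names and the ratio value are illustrative, not from the disclosure):

```python
def inset_vertex(edge_vertex, center, ratio=0.1):
    """Move an edge vertex toward the bounding-box center by a fixed
    fraction of the segment length, producing one inner vertex of the
    edge buffer area."""
    return tuple(e + (c - e) * ratio for e, c in zip(edge_vertex, center))

def edge_buffer(vertices, center, ratio=0.1):
    """Pair each edge vertex with its inset counterpart; consecutive
    pairs form quads (two triangles each) making up the buffer ring."""
    inner = [inset_vertex(v, center, ratio) for v in vertices]
    triangles = []
    n = len(vertices)
    for i in range(n):
        j = (i + 1) % n
        # split the quad (outer_i, outer_j, inner_j, inner_i) into two triangles
        triangles.append((vertices[i], vertices[j], inner[j]))
        triangles.append((vertices[i], inner[j], inner[i]))
    return triangles

square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
ring = edge_buffer(square, (5.0, 5.0))
print(len(ring))  # 8 triangles: two per edge of the square
```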
Preferably, the basic information of the scene includes unique ID information of the scene.
Preferably, the path information includes edge vertex information of the path.
Preferably, the path information includes passability information, which may be unidirectional or bidirectional passing information, and may be 3D link information (a connected set of 3D points, which can be understood as a singly linked list or a doubly linked list of 3D points).
Preferably, as an optional implementation, the passability information may consist of a 3D passing direction vector, a passing width vector perpendicular to it, and a passing height vector (3 vectors in total).
Preferably, as an optional implementation, the passability information may consist of one 3D coordinate plus the passing width vector and passing height vector perpendicular to the passing direction at that coordinate (2 vectors in total).
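The two alternative encodings of passability information might be represented as simple records (a sketch; the field and class names are assumptions):

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Passability3V:
    """First encoding: direction, width, and height vectors (3 vectors)."""
    direction: Vec3
    width: Vec3   # perpendicular to direction
    height: Vec3  # perpendicular to direction

@dataclass
class PassabilityPoint:
    """Second encoding: one 3D coordinate plus width/height vectors (2 vectors)."""
    point: Vec3
    width: Vec3
    height: Vec3

# A 2 m wide, 2.5 m high corridor segment running along +x.
corridor = Passability3V(direction=(1.0, 0.0, 0.0),
                         width=(0.0, 2.0, 0.0),
                         height=(0.0, 0.0, 2.5))
# Perpendicularity check: direction . width should be zero.
print(sum(d * w for d, w in zip(corridor.direction, corridor.width)))  # 0.0
```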
Preferably, the path information includes the maximum and minimum passing speeds of the road surface.
Preferably, the path information contains the unique ID of the road surface.
Preferably, the path information includes bounding box information.
Preferably, each piece of layer information contains a bounding box, the unique ID of the layer information, and edge vertex information.
Preferably, each piece of layer information records the total number of layers and the index of the layer it describes (the value may be negative to indicate an underground layer).
Preferably, as an alternative implementation, each piece of layer information may carry a bearing strength value or a strength type that the layer can bear.
Preferably, the link information mainly stores the connection relations between logical objects in the scene (the basic information of the scene, the passing information of the road surface, and the layer information).
Preferably, the link information may contain its own unique ID and edge vertex information.
Preferably, the link information may include a list of the objects connected to it, and a type is attached to each connected object to indicate whether it is a scene, path information, layer information, or link information.
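A possible shape for link information with typed connected objects (the type codes follow the example mapping in Example 1, step 4; all names are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative type codes from Example 1:
# scene=1, path=2, layer=3, link=4, feature picture=5.
SCENE, PATH, LAYER, LINK, FEATURE = 1, 2, 3, 4, 5

@dataclass
class LinkInfo:
    """Connectivity object, e.g. an elevator joining several floors."""
    uid: str
    connected: List[Tuple[int, str]] = field(default_factory=list)  # (type, uid)

    def connect(self, obj_type: int, obj_uid: str) -> None:
        self.connected.append((obj_type, obj_uid))

elevator = LinkInfo(uid="elevator-A")
for floor in range(1, 4):            # link three floors as layer objects
    elevator.connect(LAYER, f"floor-{floor}")
print(elevator.connected)  # [(3, 'floor-1'), (3, 'floor-2'), (3, 'floor-3')]
```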
Preferably, a computer device and a storage medium are provided, characterized in that they are capable of storing the above information.
Preferably, as an alternative implementation, all the tagged information of the scene may be stored in a file, or under a folder.
Preferably, as an alternative implementation, the information may be divided into different files according to the various information types of the scene and stored separately.
Preferably, as an alternative implementation, a virtual bounding box may be created around a scene, and the label information of all scenes or sub-scenes inside the box may be stored in one folder or in the same file.
Preferably, the scene information may act as a container for other information, and the other information contained in the scene may also be retrieved from a storage list or storage container of the various information types according to the bounding box of the scene information.
Preferably, sub-scene information may be divided from the scene information.
Preferably, as an optional implementation manner, a data type or identifier may be added to the edge vertex information, the path information, the layer information, and the link information to distinguish them.
Preferably, as an alternative embodiment, the edge vertex information, the path information, the layer information, and the link information may be stored in different files to be distinguished.
The edge vertices are coordinates relative to the origin of a coordinate system, or coordinates obtained by a global positioning technique, and are mainly used to describe the edge of the scene.
Optionally, the difference between road surface information and layer information is: road surface information describes a horizontal plane (or a surface regarded as horizontal), while layer information describes planes superimposed in the vertical direction.
Preferably, each object has its own unique ID.
Preferably, as an alternative embodiment, when measuring or/and producing coordinate information, the various coordinates may be collected relative to the origin of a local coordinate system, and the offset between the local and global coordinate systems is then used to correct the collected coordinate data.
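Under the assumption (stated in Example 1) that the local axes are aligned with the global axes, the correction is a pure translation, as this sketch shows:

```python
def local_to_global(local_pt, origin_global):
    """Convert a point measured in a local coordinate system to global
    coordinates. Assumes the local axes are aligned with the global
    axes, so the conversion reduces to adding the global coordinates
    of the local origin."""
    return tuple(p + o for p, o in zip(local_pt, origin_global))

# A corner measured 5 m east and 4 m north of the local origin; the
# local origin itself sits at (1200.0, 300.0, 0.0) in the global frame.
print(local_to_global((5.0, 4.0, 0.0), (1200.0, 300.0, 0.0)))
# (1205.0, 304.0, 0.0)
```

If the axes were not aligned, a rotation would have to be applied before the translation; the disclosure only describes the aligned case.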
Preferably, the coordinates of the center point of each feature picture are collected as the coordinates of the feature picture information.
Preferably, a unique ID is created for each feature picture.
Preferably, each feature picture and its corresponding unique ID are stored in a database.
Preferably, as an optional implementation, the ID corresponding to each feature picture may be stored in the scene, path information, layer information, or link information in which the feature picture is located.
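One way the feature-picture store could look, sketched with an in-memory SQLite table (the schema, function, and ID scheme are assumptions, not part of the disclosure):

```python
import sqlite3
import uuid

# In-memory database standing in for the feature-picture store.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE feature_picture (
                  uid   TEXT PRIMARY KEY,
                  x REAL, y REAL, z REAL,   -- center-point coordinates
                  image BLOB)""")

def add_feature_picture(center, image_bytes):
    """Create a unique ID for a feature picture and store it together
    with the coordinates of its center point."""
    uid = str(uuid.uuid4())
    db.execute("INSERT INTO feature_picture VALUES (?, ?, ?, ?, ?)",
               (uid, *center, image_bytes))
    return uid

uid = add_feature_picture((3.0, 1.5, 2.0), b"\x89PNG...")
row = db.execute("SELECT x, y, z FROM feature_picture WHERE uid = ?",
                 (uid,)).fetchone()
print(row)  # (3.0, 1.5, 2.0)
```

The returned `uid` is what would then be stored inside the scene, path, layer, or link information that contains the picture.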
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram, top view, illustrating a method or rule for determining a selected point position of an edge buffer area.
Fig. 2 schematically shows a layout diagram, and a top view, of the path information and the link information included in a scene.
Fig. 3 is a schematic diagram, top view, schematically illustrating coordinate points and pass width vectors in the path information.
Fig. 4 is a schematic diagram, top view, schematically illustrating another method for determining a location of a selected point in an edge buffer region or a rule for determining the location.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
Step 1: select a place (for example, a company) to establish a scene. First select a point in the scene as the origin of a local coordinate system (for example, the wall corner at the upper left is the coordinate origin), with the axes of the local coordinate system aligned with the axes of the global coordinate system. Then obtain the edge vertex information of the scene (as list information) by measurement, derive the bounding box information from the edge vertices, and generate a unique ID for the scene object.
Step 2: find every place in this scene that can be passed (by vehicle or by people) and create passage information: create a unique ID for each piece of passage information, collect its edge vertex information (which may be list information), obtain its bounding box information from the edge vertices, and obtain the passability information (select a point in a passage and measure the relative coordinates of this point, the passing height vector, and the passing width vector, yielding one record or a list of records).
Step 3: where a floor is present, create layer information. For example, the layer information of the second floor contains the edge vertex information of this floor, its bounding box, a unique ID, the index of this floor (2), the total number of floors (for example 33), and a bearing strength value (for example, a floor able to withstand a magnitude-7 earthquake is marked 7, and one able to support a helicopter landing is marked 100).
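Step 3's layer record might be sketched as a simple data structure (field names and values are illustrative):

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class LayerInfo:
    """One floor of a building, as described in Step 3."""
    uid: str
    edge_vertices: List[Vec3]
    floor_index: int        # may be negative for underground floors
    total_floors: int
    bearing_strength: int   # illustrative scale from the text

second_floor = LayerInfo(
    uid="floor-2",
    edge_vertices=[(0.0, 0.0, 3.0), (5.0, 0.0, 3.0),
                   (5.0, 4.0, 3.0), (0.0, 4.0, 3.0)],
    floor_index=2,
    total_floors=33,
    bearing_strength=7,     # withstands a magnitude-7 earthquake
)
print(second_floor.floor_index, "of", second_floor.total_floors)  # 2 of 33
```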
Step 4: for places that connect to other objects (e.g. elevators or doors, mainly elevators), establish link information. For example, for an elevator serving 33 floors, collect the edge vertex information of this link information (which may be list information), calculate the bounding box information, and record the list of connected objects of various types (the information of the 33 floors, mainly their unique IDs and type values, where each type is assigned one type value, e.g. scene information is 1, passage information is 2, layer information is 3, link information is 4, feature picture information is 5, and so on).
Step 5: post or collect feature pictures in the scene, collect the coordinates of each feature picture, create a unique ID for each, and store each feature picture together with its unique ID in a database.
Further, as an optional implementation, the unique ID of each piece of feature picture information may be stored in the scene, path information, layer information, or link information in which the feature picture is located.
Step 6: since all coordinates were calculated relative to the origin of the local coordinate system, and the axes of the local coordinate system are aligned with the axes of the global coordinate system, calculate or measure the global coordinates of the local origin and add them to each local coordinate to convert from local to global coordinates; the conversion can be adjusted according to the actual situation.
Step 7: after the relevant information of the scene has been collected, store it according to the storage scheme described above. If a certain area (for example, 100 km by 100 km, i.e. ten thousand square kilometers) is stored separately on one machine, find the corresponding storage medium according to the area in which the current scene information is located. A separate folder can be established for the scene during storage, with each type of information stored in its own file, or several types of information can be stored together in one unified file.
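The per-type file layout of Step 7 can be sketched as follows (the folder layout, file names, and JSON serialization are illustrative assumptions):

```python
import json
import tempfile
from pathlib import Path

def store_scene(root, scene_id, info_by_type):
    """Store one scene's marked information as a folder containing one
    JSON file per information type (e.g. paths, layers, links,
    feature-picture IDs), following the per-type file layout of Step 7."""
    folder = Path(root) / scene_id
    folder.mkdir(parents=True, exist_ok=True)
    for info_type, records in info_by_type.items():
        (folder / f"{info_type}.json").write_text(json.dumps(records))
    return folder

folder = store_scene(tempfile.mkdtemp(), "company-hq", {
    "paths": [{"uid": "corridor-1"}],
    "layers": [{"uid": "floor-2", "floor_index": 2}],
})
print(sorted(p.name for p in folder.iterdir()))
# ['layers.json', 'paths.json']
```

Storing everything in one unified file would simply mean serializing `info_by_type` as a single document instead.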
The above is a specific embodiment of the present invention, but the scope of the present invention should not be limited thereto. Any changes or substitutions that can be easily made by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention, and therefore, the protection scope of the present invention is subject to the protection scope defined by the appended claims.
Claims (9)
1. A method of tagging a scene, comprising:
collecting and creating basic information of a scene, wherein the basic information is used to describe the basic situation of the scene;
collecting and creating path information of the scene or/and a road surface for storing the passing rules of an area;
collecting and creating layer information for storing information about vertically stacked layers or layer planes;
collecting and creating connection information for storing the connection relations between objects;
collecting and creating feature picture information, wherein a feature picture is a photographed picture, a printed picture for posting, or a laser-projected picture whose image information can be recognized, used to mark a coordinate and related information in a target area.
2. The method of claim 1, wherein collecting and creating base information for a scene further comprises:
creating edge vertex information for a scene, wherein the edge vertex information is coordinates relative to an origin of a coordinate system or coordinates obtained by a global positioning technology;
generating a bounding box for this scene or sub-scene from the edge vertices;
as an alternative embodiment, one way to decide whether an area should be divided into a sub-scene is by its size: when the size exceeds a certain value (some value between 1 and 100,000,000 square meters, for example 2 square meters), the continuous area can be treated as a sub-scene;
as an alternative embodiment, another way to decide whether an area should be divided into a sub-scene uses both the size of the area and the shape of its bounding box: when the size exceeds a certain value and the aspect ratio is smaller than one threshold (e.g. 10) and larger than another (e.g. 0.1), the continuous area can be regarded as a sub-scene;
as an alternative embodiment, a point (e.g. the center point of the bounding box) may be selected inside the bounding box of the scene and connected to the surrounding edge vertices to create a triangle set or a sector set;
as an alternative embodiment, a vertex may be added on the connecting line between an edge vertex of the scene and the center point of the scene's bounding box, offset toward or away from the center point (by a fixed distance or a fixed ratio of the segment length), and connected with the edge vertices to form an edge buffer area and the resulting triangle list or sector list of the edge buffer area;
as an alternative embodiment, a vertex may be added at an edge vertex of the scene in the direction toward the interior of the figure (by a fixed distance or a fixed ratio of the segment length), and connected with the edge vertices to form an edge buffer area and the resulting triangle list or sector list of the edge buffer area;
the basic information of the scene comprises the unique ID information of the scene.
3. The method of claim 1, wherein collecting and creating path information of a scene or/and a road surface for storing a passing rule of an area further comprises:
the path information comprises edge vertex information of a path;
the path information includes passability information, which may be unidirectional or bidirectional passing information, and may be 3D link information (a connected set of 3D points, which can be understood as a singly linked list or a doubly linked list of 3D points);
as an alternative embodiment, the passability information may consist of a 3D passing direction vector, a passing width vector perpendicular to it, and a passing height vector (3 vectors in total);
as an alternative embodiment, the passability information may consist of one 3D coordinate plus the passing width vector and passing height vector perpendicular to the passing direction at that coordinate (2 vectors in total);
the path information comprises the maximum and minimum passing speeds of the road surface;
the path information includes the unique ID of the road surface;
the path information includes bounding box information.
4. The method of claim 1, wherein collecting and creating layer information for storing information of a vertically stacked layer or a layer plane further comprises:
each piece of layer information may contain a bounding box, the unique ID of the layer information, and edge vertex information;
each piece of layer information records the total number of layers and the index of the layer it describes (the value may be negative to indicate an underground layer);
as an alternative embodiment, each piece of layer information may carry a bearing strength value or a strength type that the layer can bear.
5. The method of claim 1, wherein collecting and creating connectivity information for storing information about connections between said objects further comprises:
the link information mainly stores the connection relations between logical objects in the scene (the basic information of the scene, the passing information of the road surface, and the layer information);
the link information may contain its own unique ID and edge vertex information;
the link information may include a list of the objects connected to it, and a type is attached to each connected object to indicate whether it is a scene, path information, layer information, or link information.
6. The method of claim 1, wherein the feature picture is a photographed picture, a printed picture for posting, or a laser-projected picture whose image information can be recognized, used to mark a coordinate and related information in the target area, further comprising:
collecting the coordinates of the center point of each feature picture as the coordinates of the feature picture information;
creating a unique ID for each feature picture;
storing each feature picture and its corresponding unique ID in a database;
as an optional implementation, the ID corresponding to each feature picture may be stored in the scene, path information, layer information, or link information in which the feature picture is located.
7. A system for marking a scene, comprising:
computer equipment and storage media, which are characterized by being capable of storing information;
as an alternative embodiment, all the tagged information of the scene may be stored in a file, or under a folder;
as an alternative implementation, the information may be divided into different files according to the information types of the scene and stored separately;
as an alternative embodiment, a virtual bounding box may be created around a scene, and the label information of all scenes or sub-scenes inside the box may be stored in one folder or in the same file.
8. A computer-readable storage medium on which a computer program and related data are stored, characterized in that the program, when executed by a processor, implements the relevant computing functions and content of the invention.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010125387.9A CN113779162A (en) | 2020-02-27 | 2020-02-27 | Method and system for marking scene |
PCT/CN2021/075491 WO2021169772A1 (en) | 2020-02-27 | 2021-02-05 | Method and system for marking scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010125387.9A CN113779162A (en) | 2020-02-27 | 2020-02-27 | Method and system for marking scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113779162A true CN113779162A (en) | 2021-12-10 |
Family
ID=77490663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010125387.9A Pending CN113779162A (en) | 2020-02-27 | 2020-02-27 | Method and system for marking scene |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113779162A (en) |
WO (1) | WO2021169772A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114111814A (en) * | 2021-10-21 | 2022-03-01 | 北京百度网讯科技有限公司 | High-precision map data processing method and device, electronic equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4947376B2 (en) * | 2007-12-26 | 2012-06-06 | アイシン・エィ・ダブリュ株式会社 | Three-dimensional data processing device, three-dimensional image generation device, navigation device, and three-dimensional data processing program |
CN102306106A (en) * | 2011-08-30 | 2012-01-04 | 盛趣信息技术(上海)有限公司 | Method and system for automatically generating navigation chart in virtual space, and pathfinding method and system |
CN109034003A (en) * | 2018-07-05 | 2018-12-18 | 华平智慧信息技术(深圳)有限公司 | Emergency command method and Related product based on scene |
CN109308838A (en) * | 2018-09-11 | 2019-02-05 | 中国人民解放军战略支援部队信息工程大学 | A kind of interior space topology road network generation method and device based on indoor map |
CN110017841A (en) * | 2019-05-13 | 2019-07-16 | 大有智能科技(嘉兴)有限公司 | Vision positioning method and its air navigation aid |
- 2020-02-27: CN application CN202010125387.9A filed (CN113779162A, status: pending)
- 2021-02-05: PCT application PCT/CN2021/075491 filed (WO2021169772A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2021169772A1 (en) | 2021-09-02 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20211210