CN114067071B - High-precision map making system based on big data - Google Patents
High-precision map making system based on big data
- Publication number: CN114067071B
- Application number: CN202111417146.2A
- Authority
- CN
- China
- Prior art keywords
- video
- data
- objects
- coordinate
- shooting
- Prior art date: 2021-11-26
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects › G06T17/05—Geographic models
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F16/00—Information retrieval; Database structures therefor; File system structures therefor › G06F16/20—Information retrieval of structured data, e.g. relational data › G06F16/29—Geographical information databases
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a high-precision map making system based on big data, comprising a data acquisition module, a data construction module and a data conversion module. The data acquisition module shoots videos of a target area, the data construction module processes the shot data to construct a three-dimensional model, and the data conversion module converts the three-dimensional model into map data. The data construction module first creates a three-dimensional scene that approximately matches reality through object identification, object modeling, object positioning and scene construction; it then shoots the three-dimensional scene with a virtual camera, compares the resulting video with the video of the actual scene, corrects the coordinates of each object according to the observed differences, and repeats the comparison and correction process until no deviation remains. A three-dimensional scene with extremely small error is thus built, and the scene is then converted into map data, meeting the requirement of high precision.
Description
Technical Field
The present disclosure relates generally to the field of map processing, and more particularly to a big-data-based high-precision map making system.
Background
The high-precision map is a special map for automatic driving. It plays a core role in accurate positioning and is composed of vector information, such as a lane model containing semantic information, road components and road attributes, together with a feature map layer for multi-sensor positioning.
A number of mapping systems have been developed. Extensive search and reference show that existing systems, such as those disclosed in publication numbers KR101223245B1, KR100722365B1, CN106097444B and KR101312652B1, operate as follows: obtain three-dimensional laser point cloud data for generating high-precision maps, together with the related grid map information; determine the position of each three-dimensional laser point in the grid map; for each pixel in the grid map, render the pixel using the reflection value of the corresponding laser point, thereby generating each grid image; identify traffic information in each grid image using a machine learning algorithm; cluster the traffic information of the grid images to obtain the traffic information of the grid map; and load that traffic information into the grid map to generate a high-precision map. However, the maps generated by these systems are not accurate enough, and there is still room for improvement.
Disclosure of Invention
The invention aims to address the above defects by providing a high-precision map making system based on big data.
The invention adopts the following technical scheme:
a high-precision map making system based on big data comprises a data acquisition module, a data construction module and a data conversion module, wherein the data acquisition module is used for shooting videos of a target area, the data construction module is used for processing shot data to construct a three-dimensional model, and the data conversion module is used for converting the three-dimensional model to form map data;
the data construction module comprises an object identification unit, an object positioning unit, an object management unit, an object modeling unit and a scene display unit, wherein the object identification unit is used for identifying independent individuals in a video, the independent individuals are called objects, the object positioning unit is used for determining the position information of the objects, the object management unit is used for storing all information of the objects, the object modeling unit is used for establishing a three-dimensional model of each object, and the scene display unit is used for establishing three-dimensional models of all the objects into a regional three-dimensional scene;
the object positioning unit takes one object as a coordinate origin and sequentially calculates coordinate information of other objects according to the video data, and the coordinate calculation formula is as follows:
wherein, (x, y) is the object coordinate to be calculated, (x) 0 ,y 0 ) The coordinate of the known object is represented by r, the straight-line distance between the two coordinates is represented by r, and the included angle between the coordinate connecting line of the two objects and the positive direction of the x axis is represented by theta;
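For concreteness, here is a minimal sketch of this coordinate propagation (the function name and sample values are illustrative, not taken from the patent):

```python
import math

def propagate_coordinate(x0: float, y0: float, r: float, theta: float) -> tuple[float, float]:
    """Coordinates of a new object, given a known object at (x0, y0), the
    straight-line distance r between the two objects, and the angle theta
    (radians) between their connecting line and the positive x-axis."""
    return x0 + r * math.cos(theta), y0 + r * math.sin(theta)

# Example: origin object at (0, 0); the next object is 25 m away at 30 degrees.
x, y = propagate_coordinate(0.0, 0.0, 25.0, math.radians(30.0))
print(f"({x:.2f}, {y:.2f})")  # -> (21.65, 12.50)
```

Starting from the origin object, the positioning unit can chain this calculation pairwise to place every object in the shared coordinate system.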
after the regional three-dimensional scene is built, a virtual camera is set up; the virtual camera shoots the regional three-dimensional scene with the same shooting parameters as the data acquisition module to obtain a comparison video, and the object positioning unit compares the comparison video with the main video shot by the data acquisition module and corrects the coordinates of each object, obtaining the abscissa correction amount Δx and the ordinate correction amount Δy from the correction formula:

Δx = k·∫[t1,t2](x2(t) − x1(t))dt / (t2 − t1)
Δy = k·∫[t1,t2](y2(t) − y1(t))dt / (t2 − t1)

wherein x1(t) and y1(t) are the abscissa and ordinate at which the object appears in the comparison video as functions of time, x2(t) and y2(t) are the abscissa and ordinate at which the object appears in the main video as functions of time, t1 is the earliest time at which the object appears in both the main video and the comparison video, t2 is the latest such time, and k is a scale parameter;
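A minimal numerical sketch of this correction, assuming the time-averaged-deviation form implied by the variable definitions (the published formula image is not reproduced in this text, so the integral form above is a reconstruction):

```python
import numpy as np

def coordinate_correction(t, x1, y1, x2, y2, k: float = 1.0):
    """Correction amounts for one object from its on-screen track in the
    comparison video (x1, y1) and in the main video (x2, y2), sampled at
    times t spanning [t1, t2], the interval where the object is visible in
    both videos. Assumes the correction is the k-scaled time average of the
    deviation between the two tracks."""
    t = np.asarray(t, dtype=float)
    span = t[-1] - t[0]  # t2 - t1
    dx = k * np.trapz(np.asarray(x2) - np.asarray(x1), t) / span
    dy = k * np.trapz(np.asarray(y2) - np.asarray(y1), t) / span
    return dx, dy
```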
further, the object identification unit identifies suspected individuals using an edge algorithm, and when two suspected individuals are in a connected state in every frame, they are merged into one individual and treated as a single independent object;
furthermore, the videos shot by the data acquisition module comprise aerial videos and ground videos; one of the aerial videos serves as the main video, and the remaining videos serve as auxiliary videos;
further, r is determined from the ground video: during ground shooting, the on-screen distance between the two objects changes, and when the on-screen distance L is at its maximum, the photographer lies on the perpendicular bisector of the line connecting the two objects; at that moment, the measured distance between an object and the photographer is S and the angle subtended at the photographer by the two objects is α, so the distance r between the two objects is:

r = 2S·sin(α/2);
furthermore, θ is determined from the aerial video: during aerial shooting, the angle between the photographer's direction of travel and the positive direction of the x-axis is β, and the two objects each trace a straight-line track in the shot picture; if the angle between the track of the object with known coordinates and the line connecting the two objects is φ, then θ for the two objects is:
θ = β + φ − π.
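A short sketch of these two geometric measurements (function names and sample values are illustrative assumptions):

```python
import math

def distance_from_ground_video(S: float, alpha: float) -> float:
    """Distance r between two objects, measured at the moment the photographer
    lies on the perpendicular bisector of their connecting line: S is the
    distance from the photographer to an object, alpha (radians) the angle
    subtended at the photographer by the two objects."""
    return 2.0 * S * math.sin(alpha / 2.0)

def angle_from_aerial_video(beta: float, phi: float) -> float:
    """Angle theta between the objects' connecting line and the positive
    x-axis, from the photographer's heading beta and the angle phi between
    the known object's track and the connecting line."""
    return beta + phi - math.pi

r = distance_from_ground_video(S=40.0, alpha=math.radians(36.0))          # ~24.72 m
theta = angle_from_aerial_video(math.radians(120.0), math.radians(75.0))  # ~0.26 rad
```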
the beneficial effects obtained by the invention are as follows:
the system creates a three-dimensional scene approximately conforming to the reality through object identification, object modeling, object positioning and scene building, shoots the three-dimensional scene through a virtual camera, compares the shot video with the shot video of the actual scene, corrects the coordinate of each object according to the difference of the comparison, and repeats the comparison and correction processes continuously until no deviation occurs.
For a better understanding of the features and technical content of the present invention, reference should be made to the following detailed description and accompanying drawings, which are provided for illustration and description only and are not intended to limit the invention.
Drawings
FIG. 1 is a schematic view of the overall structural framework of the present invention;
FIG. 2 is a schematic diagram of the data construction module according to the present invention;
FIG. 3 is a schematic view of the connection state of the present invention;
FIG. 4 is a schematic diagram of two object linear distance calculations according to the present invention;
FIG. 5 is a high-precision map of a partial road section according to the invention.
Detailed Description
The following describes embodiments of the present invention with reference to specific examples, and those skilled in the art will understand the advantages and effects of the invention from the disclosure of this specification. The invention may be practiced in other, different embodiments, and its details may be modified in various respects without departing from the spirit and scope of the invention. The drawings are for illustrative purposes only and are not drawn to scale. The following embodiments explain the related art of the invention in further detail, but the disclosure is not intended to limit the scope of the invention.
Embodiment one.
This embodiment provides a high-precision map making system based on big data; with reference to FIG. 1, the system comprises a data acquisition module, a data construction module and a data conversion module, wherein the data acquisition module is used for shooting a video of a target area, the data construction module is used for processing the shot data to construct a three-dimensional model, and the data conversion module is used for converting the three-dimensional model into map data;
the data construction module comprises an object identification unit, an object positioning unit, an object management unit, an object modeling unit and a scene display unit, wherein the object identification unit is used for identifying independent individuals in a video, the independent individuals are called objects, the object positioning unit is used for determining the position information of the objects, the object management unit is used for storing all information of the objects, the object modeling unit is used for establishing a three-dimensional model of each object, and the scene display unit is used for establishing three-dimensional models of all the objects into a regional three-dimensional scene;
the object positioning unit takes one object as a coordinate origin and sequentially calculates coordinate information of other objects according to the video data, and the coordinate calculation formula is as follows:
wherein, (x, y) is the object coordinate to be calculated, (x) 0 ,y 0 ) The coordinate of the known object is represented by r, the straight-line distance between the two coordinates is represented by r, and the included angle between the coordinate connecting line of the two objects and the positive direction of the x axis is represented by theta;
after the regional three-dimensional scene is built, a virtual camera is set up; the virtual camera shoots the regional three-dimensional scene with the same shooting parameters as the data acquisition module to obtain a comparison video, and the object positioning unit compares the comparison video with the main video shot by the data acquisition module and corrects the coordinates of each object, obtaining the abscissa correction amount Δx and the ordinate correction amount Δy from the correction formula:

Δx = k·∫[t1,t2](x2(t) − x1(t))dt / (t2 − t1)
Δy = k·∫[t1,t2](y2(t) − y1(t))dt / (t2 − t1)

wherein x1(t) and y1(t) are the abscissa and ordinate at which the object appears in the comparison video as functions of time, x2(t) and y2(t) are the abscissa and ordinate at which the object appears in the main video as functions of time, t1 is the earliest time at which the object appears in both the main video and the comparison video, t2 is the latest such time, and k is a scale parameter;
the object identification unit identifies the suspected individuals by adopting an edge algorithm, and when the two suspected individuals are in a connected state in all pictures, the two suspected individuals are combined into one individual and serve as an independent object;
the video shot by the data acquisition module comprises an aerial shooting video and a ground shooting video, one video in the aerial shooting video is used as a main video, and the other videos are used as auxiliary videos;
r is determined from the ground video: during ground shooting, the on-screen distance between the two objects changes, and when the on-screen distance L is at its maximum, the photographer lies on the perpendicular bisector of the line connecting the two objects; the measured distance between an object and the photographer is S, the angle subtended at the photographer by the two objects is α, and the distance r between the two objects is:

r = 2S·sin(α/2);
θ is determined from the aerial video: during aerial shooting, the angle between the photographer's direction of travel and the positive direction of the x-axis is β, and the two objects each trace a straight-line track in the shot picture; if the angle between the track of the object with known coordinates and the line connecting the two objects is φ, then θ for the two objects is:
θ = β + φ − π.
example two.
This embodiment incorporates the entire content of embodiment one and provides a high-precision map making system based on big data, comprising a data acquisition module, a data construction module and a data conversion module, wherein the data acquisition module is used for shooting a video of a target area, the data construction module is used for processing the shot data to construct a three-dimensional model, and the data conversion module is used for converting the three-dimensional model into map data;
with reference to FIG. 2, the data construction module includes an object identification unit, an object positioning unit, an object management unit, an object modeling unit and a scene display unit. The object identification unit identifies each independent individual in the images; each independent individual is called an object. The object management unit builds a file for each object to store the object's related information. The object positioning unit measures the relative distances between objects and determines each object's position information, which, once determined, is stored in the corresponding object's file in the object management unit. The object modeling unit constructs a complete three-dimensional model from multi-angle image information of an object; once constructed, the model is stored in the corresponding object's file in the object management unit. When the position information and three-dimensional models of all objects in the object management unit have been determined, the scene display unit builds a regional three-dimensional scene from them and sends it to the data conversion module;
the shot data comprises video data and photographer data, where the video data is the shot video picture information and the photographer data is the shooting position and angle information; the video data and the photographer data share a common timeline, and the photographer may be an unmanned aerial vehicle, a ground collection vehicle or a backpack RTK camera;
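A minimal sketch of how these records might be laid out (type and field names are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PhotographerSample:
    """One time-stamped pose sample; video frames and poses share a common timeline."""
    t: float                              # seconds on the shared timeline
    position: tuple[float, float, float]  # shooting position
    angle: tuple[float, float, float]     # shooting angle (camera orientation)

@dataclass
class ObjectFile:
    """Per-object file kept by the object management unit."""
    object_id: int
    approximate_position: Optional[tuple[float, float]] = None
    color_info: list = field(default_factory=list)
    coordinates: Optional[tuple[float, float]] = None  # set by the positioning unit
    model: Optional[object] = None                     # set by the modeling unit
```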
the object recognition unit takes one video as a main video and the other videos as auxiliary videos, intercepts a section of sub main video from the main video by using a time window, acquires photographer data in the section of sub main video, determines the approximate position in the sub main video according to the photographer data, and intercepts the corresponding sub auxiliary video from the auxiliary videos according to the approximate position and the photographer data of the auxiliary videos, wherein the sub auxiliary videos have the same approximate position as the sub main videos;
the object identification unit obtains analysis frames from the sub-main video, the analysis frames have the same frame interval, the object identification unit calculates edge lines in each analysis frame by adopting an edge algorithm, a plurality of suspected individuals are determined according to color information of an area surrounded by the edge lines and the edge lines, when two suspected individuals are in a connected state in different analysis frames, the two suspected individuals are combined into one suspected individual, and the connected state comprises adjacency, all inclusion and partial inclusion in combination with the graph 3;
the object identification unit analyzes the suspected individuals combined in the sub-main video according to the sub-auxiliary video, and when the combined suspected individuals can be divided into two unconnected suspected individuals, the combined suspected individuals are split again;
each suspected individual is taken as an object and a file is established for it in the object management unit, with the object's approximate position and color information added to the file;
the object identification unit shifts the time window and repeats this process until files have been established in the object management unit for all objects appearing in the videos;
it should be noted that suspected individuals that do not appear in the main video but appear in an auxiliary video also have files established for them in the object management unit as objects;
in particular, persons identified in the video by the object identification unit do not have files created for them in the object management unit;
the object modeling unit acquires a video segment containing the object from a main video and an auxiliary video as a modeling reference video according to approximate position and color information in the object management unit file, the object modeling unit creates a three-dimensional model according to the modeling reference video, and the object modeling unit determines the surface texture of the three-dimensional model under a certain view angle according to photographer data corresponding to the modeling reference video;
the object positioning unit takes an object as a coordinate origin, the object can be any object, but for convenience, the object with the largest volume in the picture appearing first in the main video is taken as an origin object, and the coordinates of the other objects are calculated in sequence according to the shooting sequence of the main video;
the coordinate calculation formula is:
wherein, (x, y) is the object coordinate to be calculated, (x) 0 ,y 0 ) The coordinate of the known object is represented by r, the straight-line distance between the two coordinates is represented by r, and the included angle between the coordinate connecting line of the two objects and the positive direction of the x axis is represented by theta;
after the coordinates of all the objects are calculated, the scene display unit places the three-dimensional model of each object according to the coordinates to build a regional three-dimensional scene;
r and θ in the calculation formula are obtained by treating the two objects as particles, with r calculated from the ground shooting data and θ from the aerial shooting data;
during ground shooting, with reference to FIG. 4, the on-screen distance between the two objects changes; when the on-screen distance L is at its maximum, the photographer lies on the perpendicular bisector of the line connecting the two objects; at that moment, the measured distance between an object and the photographer is S and the angle subtended at the photographer by the two objects is α, so the distance r between the two objects is:

r = 2S·sin(α/2);
during aerial shooting, the angle between the photographer's direction of travel and the positive direction of the x-axis is β, and the two objects each trace a straight-line track in the shot picture; if the angle between the track of the object with known coordinates and the line connecting the two objects is φ, then θ for the two objects is:
θ = β + φ − π;
because r and θ in the coordinate calculation carry errors, the positions of the objects in the regional three-dimensional scene need to be adjusted;
a virtual camera is set in the regional three-dimensional scene and shoots the scene according to the photographer data of the main video to obtain a comparison video; the object positioning unit compares the main video with the comparison video, and when the on-screen position of an object's three-dimensional model in the comparison video deviates from that of the real object in the main video, the coordinates of the three-dimensional model are adjusted according to the deviation; after the coordinates of all objects have been adjusted, the comparison video is shot again and the coordinates adjusted again, and this process is repeated until the on-screen positions of all three-dimensional models in the comparison video coincide with those of the real objects;
the adjustment formula when deviation occurs is as follows:
wherein x1(t) is a time function of an abscissa where the object appears in the control video, y1(t) is a time function of an ordinate where the object appears in the control video, x2(t) is a time function of an abscissa where the object appears in the main video, y2(t) is a time function of an ordinate where the object appears in the main video, t1 is an earliest time when the object appears in both the main video and the control video, t2 is a latest time when the object appears in both the main video and the control video, and k is a scale parameter;
for an object that appears only in an auxiliary video, the virtual camera shoots the regional three-dimensional scene according to the photographer data of that auxiliary video to obtain a comparison video, which is compared with the auxiliary video to adjust the object's coordinate position;
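A sketch of this repeat-until-consistent loop (render_comparison and correction are assumed helper functions standing in for the virtual camera and the adjustment formula above; the scene is assumed to expose its objects with mutable coordinates):

```python
def refine_scene(scene, main_video, render_comparison, correction,
                 tol: float = 1e-3, max_rounds: int = 100):
    """Repeatedly re-shoot the regional 3D scene with the virtual camera and
    nudge every object's coordinates by the computed correction amounts,
    stopping once no object's on-screen position deviates any more."""
    for _ in range(max_rounds):
        comparison_video = render_comparison(scene)  # same photographer data as the main video
        worst = 0.0
        for obj in scene.objects:
            dx, dy = correction(obj, main_video, comparison_video)
            obj.x += dx
            obj.y += dy
            worst = max(worst, abs(dx), abs(dy))
        if worst < tol:  # positions coincide; no deviation remains
            break
    return scene
```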
the data conversion module identifies objects in the regional three-dimensional scene, and in combination with fig. 5, basic attributes are given to each object, the basic attributes are divided into roads, fixed objects and moving objects, the data conversion module establishes a traffic area according to information such as width and length of the roads by taking the road objects as reference, the data conversion module establishes a collision area in the traffic area according to occupation of the fixed objects in the regional three-dimensional scene in the traffic area, the data conversion module establishes a suspicious area in the traffic area according to occupation of the moving objects in the three-dimensional scene in the traffic area, and the suspicious area can change along with movement of the moving objects;
the data conversion module adds traffic attributes to objects of the road attributes, wherein the traffic attributes comprise driving directions, zebra crossings, speed limit values, no-parking areas and road gradients, and the traffic attributes can be edited manually in a finally generated map.
The above is only a preferred embodiment of the invention and does not limit its scope; all equivalent technical changes made using the contents of the specification and drawings are included within the scope of the invention, and its elements may be updated as the technology develops.
Claims (2)
1. A high-precision map making system based on big data is characterized by comprising a data acquisition module, a data construction module and a data conversion module, wherein the data acquisition module is used for shooting a video of a target area, the data construction module is used for processing the shot data to construct a three-dimensional model, and the data conversion module is used for converting the three-dimensional model to form map data;
the data construction module comprises an object identification unit, an object positioning unit, an object management unit, an object modeling unit and a scene display unit, wherein the object identification unit is used for identifying independent individuals in a video, the independent individuals are called objects, the object positioning unit is used for determining the position information of the objects, the object management unit is used for storing all information of the objects, the object modeling unit is used for establishing a three-dimensional model of each object, and the scene display unit is used for establishing three-dimensional models of all the objects into a regional three-dimensional scene;
the object positioning unit takes one object as a coordinate origin and sequentially calculates coordinate information of other objects according to the video data, and the coordinate calculation formula is as follows:
wherein (x, y) is the object coordinate to be calculated, (x) 0 ,y 0 ) The coordinate of the known object is represented by r, the straight-line distance between the two coordinates is represented by r, and the included angle between the coordinate connecting line of the two objects and the positive direction of the x axis is represented by theta;
after the regional three-dimensional scene is built, a virtual camera is set up; the virtual camera shoots the regional three-dimensional scene with the same shooting parameters as the data acquisition module to obtain a comparison video, and the object positioning unit compares the comparison video with the main video shot by the data acquisition module and corrects the coordinates of each object, obtaining the abscissa correction amount Δx and the ordinate correction amount Δy from the correction formula:

Δx = k·∫[t1,t2](x2(t) − x1(t))dt / (t2 − t1)
Δy = k·∫[t1,t2](y2(t) − y1(t))dt / (t2 − t1)

wherein x1(t) and y1(t) are the abscissa and ordinate at which the object appears in the comparison video as functions of time, x2(t) and y2(t) are the abscissa and ordinate at which the object appears in the main video as functions of time, t1 is the earliest time at which the object appears in both the main video and the comparison video, t2 is the latest such time, and k is a scale parameter;
the object identification unit identifies the suspected individuals by adopting an edge algorithm, and when the two suspected individuals are in a connected state in all pictures, the two suspected individuals are combined into one individual and serve as an independent object;
the video shot by the data acquisition module comprises an aerial shooting video and a ground shooting video, one video in the aerial shooting video is used as a main video, and the other videos are used as auxiliary videos;
r is determined from the ground video: during ground shooting, the on-screen distance between the two objects changes; when the on-screen distance L is at its maximum, the photographer lies on the perpendicular bisector of the line connecting the two objects; the measured distance between an object and the photographer is S, the angle subtended at the photographer by the two objects is α, and the distance r between the two objects is:

r = 2S·sin(α/2).
2. The big-data-based high-precision map making system according to claim 1, wherein θ is determined from the aerial video: during aerial shooting, the angle between the photographer's direction of travel and the positive direction of the x-axis is β, and the two objects form straight-line tracks in the shot picture; if the angle between the track of the object with known coordinates and the line connecting the two objects is φ, then θ for the two objects is:

θ = β + φ − π.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111417146.2A (CN114067071B) | 2021-11-26 | 2021-11-26 | High-precision map making system based on big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114067071A CN114067071A (en) | 2022-02-18 |
CN114067071B (en) | 2022-08-30 |
Family
ID=80276583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111417146.2A (granted as CN114067071B, Active) | High-precision map making system based on big data | 2021-11-26 | 2021-11-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114067071B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226838A (en) * | 2013-04-10 | 2013-07-31 | 福州林景行信息技术有限公司 | Real-time spatial positioning method for mobile monitoring target in geographical scene |
US10372970B2 (en) * | 2016-09-15 | 2019-08-06 | Qualcomm Incorporated | Automatic scene calibration method for video analytics |
WO2019041351A1 (en) * | 2017-09-04 | 2019-03-07 | 艾迪普(北京)文化科技股份有限公司 | Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1656518A (en) * | 2002-05-21 | 2005-08-17 | 科乐美股份有限公司 | Three dimensional image processing program, three dimensional image processing method, and video game device |
WO2004061387A1 (en) * | 2002-12-27 | 2004-07-22 | Hiroshi Arisawa | Multi-view-point video capturing system |
CN1906943A (en) * | 2004-01-30 | 2007-01-31 | 株式会社丰田自动织机 | Video image positional relationship correction apparatus, steering assist apparatus having the video image positional relationship correction apparatus and video image positional relationship correcti |
CN103021261A (en) * | 2011-09-23 | 2013-04-03 | 联想(北京)有限公司 | Automatic digital map correction method and device |
KR101223245B1 (en) * | 2012-09-12 | 2013-01-17 | 삼부기술 주식회사 | Map image making system |
KR101312652B1 (en) * | 2013-04-23 | 2013-10-14 | 서울공간정보 주식회사 | Digital map making system to update the digital map |
WO2017007254A1 (en) * | 2015-07-08 | 2017-01-12 | 고려대학교 산학협력단 | Device and method for generating and displaying 3d map |
WO2018112695A1 (en) * | 2016-12-19 | 2018-06-28 | 深圳市阳日电子有限公司 | Image display method and mobile terminal |
CN112489121A (en) * | 2019-09-11 | 2021-03-12 | 丰图科技(深圳)有限公司 | Video fusion method, device, equipment and storage medium |
CN111556283A (en) * | 2020-03-18 | 2020-08-18 | 深圳市华橙数字科技有限公司 | Monitoring camera management method and device, terminal and storage medium |
CN111476894A (en) * | 2020-05-14 | 2020-07-31 | 小狗电器互联网科技(北京)股份有限公司 | Three-dimensional semantic map construction method and device, storage medium and electronic equipment |
CN113192125A (en) * | 2021-03-26 | 2021-07-30 | 南京财经大学 | Multi-camera video concentration method and system in geographic scene with optimal virtual viewpoint |
Non-Patent Citations (5)

- A method for constructing an actual virtual map of the road scene for recognition of dangerous situations in real time; Nina Krapukhina et al.; 2017 4th International Conference on Transportation Information and Safety (ICTIS); 2017; pp. 545-550 *
- Interactive panoramic video stream map design (交互式全景视频流地图设计); Li Haiting et al.; Science of Surveying and Mapping (测绘科学); 2018; Vol. 43, No. 01; pp. 107-111 *
- Operational modal analysis based on video measurement (基于视频测量的运行状态模态分析); Wang Tong et al.; Journal of Vibration and Shock (振动与冲击); 2017; Vol. 36, No. 05; pp. 157-163, 220 *
- Computer-vision-based sphere positioning for industrial robots (基于计算机视觉的工业机器人圆球定位设计); Wang Linglin et al.; Control Engineering of China (控制工程); 2016; Vol. 23, No. 10; pp. 1634-1638 *
- Three-dimensional visualization of high-precision map lanes (高精地图车道的三维可视化); Zhang Cheng et al.; Computer and Modernization (计算机与现代化); 2020; No. 08; pp. 25-29, 59 *
Also Published As
Publication number | Publication date |
---|---|
CN114067071A (en) | 2022-02-18 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2022-12-14 | TR01 | Transfer of patent right | Patentee after: Hunan Chuangxin Weili Technology Co., Ltd., Room 423, Complex Building, No. 1318 Kaiyuan East Road, Xingsha Industrial Base, Changsha Economic and Technological Development Zone, Hunan Province, 410137. Patentee before: Hunan Automotive Engineering Vocational College, No. 79 Wisdom Road, Yunlong Demonstration Zone, Zhuzhou City, Hunan Province, 412000 (new campus of Hunan Automotive Engineering Vocational College). |