CN111966772A - Live-action map generation method and system - Google Patents

Live-action map generation method and system

Info

Publication number
CN111966772A
CN111966772A (application CN202010743950.9A)
Authority
CN
China
Prior art keywords
vehicle
live
video
action map
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010743950.9A
Other languages
Chinese (zh)
Inventor
周志文
郭旭
李朝武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mapgoo Technology Co ltd
Original Assignee
Shenzhen Mapgoo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mapgoo Technology Co ltd filed Critical Shenzhen Mapgoo Technology Co ltd
Priority to CN202010743950.9A priority Critical patent/CN111966772A/en
Publication of CN111966772A publication Critical patent/CN111966772A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/787 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location

Abstract

The embodiment of the invention discloses a method and a system for generating a live-action map. The method comprises: obtaining in advance a target vehicle connected to an Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder; acquiring shot video through the vehicle event data recorder of the target vehicle; acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point; and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map. By leveraging the large number of devices linked to the Internet of Vehicles platform, the invention greatly enlarges the collection range of the live-action map, reduces the collection cost and shortens the collection period.

Description

Live-action map generation method and system
Technical Field
The invention relates to the technical field of data acquisition, in particular to a method and a system for generating a live-action map.
Background
At present, navigation and positioning services occupy an important place in people's lives. While bringing convenience to daily travel, they are increasingly integrated with the real economy, driving the adjustment of traditional industrial structures and the transformation of the economic growth model. Beyond traditional services such as geographic information, internet map services have given rise to a variety of new value-added services.
As a kind of three-dimensional live-action map, the city street view map has become a new development direction of the internet map industry. Its content is richer: it can display a 360-degree panoramic image of a selected city street, and the captured scenery is not limited to city streets but also covers scenic spots, indoor scenes, shopping malls, restaurants, museums and the like. Street view maps therefore have notable advantages in providing location-based information services and in integrating with various sectors of the real economy.
One of the key links in producing a street view map is live-action material acquisition, and the key equipment it requires is a professional capture device (camera) and a load platform. The street view map is produced by shooting live-action material with the capture device, measuring and recording information such as the precise position of each shooting point and the current orientation of the load through the load platform, and then synthesizing the live-action material with software in post-processing and matching it to geographic coordinates.
The technical defects of this approach are as follows: the equipment is expensive, the acquisition cost is high, the acquisition period is long, and the coverage area is narrow.
The prior art is therefore still subject to further development.
Disclosure of Invention
In view of the above technical problems, embodiments of the present invention provide a method and a system for generating a live-action map, which can solve the prior-art problems that the raw-data acquisition equipment for live-action maps is expensive, the acquisition cost is high, the acquisition period is long, and the coverage area is narrow.
A first aspect of an embodiment of the present invention provides a live-action map generation method, including:
obtaining in advance a target vehicle connected to an Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder;
acquiring shot video through the vehicle event data recorder of the target vehicle;
acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point;
and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
Optionally, the acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point includes:
acquiring, from the Internet of Vehicles operation platform, vehicle position data, vehicle altitude data, vehicle traveling direction and vehicle traveling speed corresponding to the video shooting point.
Optionally, the matching of the video with the vehicle information corresponding to the shooting point to generate the live-action map includes:
acquiring, when the target vehicle is detected to be running, vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if the live-action map needs to be updated, matching the video with the shooting point information to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
Optionally, the matching of the video with the vehicle information corresponding to the shooting point to generate the live-action map includes:
sending a frame-extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frame corresponding to the positioning time, extracted by the target vehicle from the shot video;
matching the positioning points with the collected pictures according to the frame information of the video frame;
and generating a live-action map according to the matched result.
Optionally, the generating a live-action map according to the matched result includes:
identifying the picture through artificial intelligence, and calculating the accuracy of the acquired picture;
if the accuracy of the picture is abnormal, sending the picture to a manual terminal for calibration and obtaining the manual calibration result, and if the manual calibration confirms correctness, generating an updated live-action map according to the matched result;
and if the accuracy of the picture is normal, generating an updated live-action map according to the matched result.
A second aspect of the embodiments of the present invention provides a live-action map generation system, where the system includes: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of:
obtaining in advance a target vehicle connected to an Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder;
acquiring shot video through the vehicle event data recorder of the target vehicle;
acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point;
and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
Optionally, the computer program, when executed by the processor, further implements the steps of:
acquiring, from the Internet of Vehicles operation platform, vehicle position data, vehicle altitude data, vehicle traveling direction and vehicle traveling speed corresponding to the video shooting point.
Optionally, the computer program when executed by the processor further implements the steps of:
acquiring, when the target vehicle is detected to be running, vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if the live-action map needs to be updated, matching the video with the shooting point information to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
Optionally, the computer program when executed by the processor further implements the steps of:
sending a frame-extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frame corresponding to the positioning time, extracted by the target vehicle from the shot video;
matching the positioning points with the collected pictures according to the frame information of the video frame;
and generating a live-action map according to the matched result.
A third aspect of the embodiments of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the live-action map generation method described above.
According to the technical scheme provided by the embodiment of the invention, a target vehicle connected to an Internet of Vehicles operation platform is obtained in advance, and a vehicle event data recorder is provided on the target vehicle; shot video is acquired through the vehicle event data recorder of the target vehicle; vehicle information corresponding to the video shooting point is acquired from the Internet of Vehicles operation platform; and the video is matched with the vehicle information corresponding to the shooting point to generate a live-action map. Compared with the prior art, the embodiment of the invention greatly enlarges the collection range of the live-action map, reduces the collection cost and shortens the collection period through the number of devices linked to the Internet of Vehicles platform.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of a live-action map generation method according to an embodiment of the present invention;
fig. 2 is a schematic hardware structure diagram of another embodiment of a live-action map generating system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following detailed description of embodiments of the invention refers to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a live-action map generation method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
S100, obtaining in advance a target vehicle connected to the Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder;
S200, acquiring shot video through the vehicle event data recorder of the target vehicle;
S300, acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point;
and S400, matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
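The core of steps S300–S400 can be sketched as pairing each timestamped recorder frame with the platform report nearest in time. This is a minimal illustration only; the data shapes (`frames`, `reports`) and the helper name are assumptions, not the actual platform API:

```python
def match_frames_to_vehicle_info(frames, reports):
    """frames: list of (timestamp_s, frame_id) from the event data recorder.
    reports: list of (timestamp_s, info_dict) from the IoV operation platform.
    Returns (frame_id, info_dict) pairs, the raw material of the map."""
    matched = []
    for t_frame, frame_id in frames:
        # S300/S400: pick the platform report nearest in time to the frame
        nearest = min(reports, key=lambda r: abs(r[0] - t_frame))
        matched.append((frame_id, nearest[1]))
    return matched

# Illustrative data: two frames, two position reports
frames = [(10.0, "f1"), (20.0, "f2")]
reports = [(9.8, {"lat": 22.54, "lon": 114.05, "speed_kmh": 31.0}),
           (20.3, {"lat": 22.55, "lon": 114.06, "speed_kmh": 28.0})]
pairs = match_frames_to_vehicle_info(frames, reports)
# f1 pairs with the 9.8 s report, f2 with the 20.3 s report
```

In practice the pairing would also carry altitude and heading from the platform, as listed in the optional step above.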
Specifically, the Internet of Vehicles means that on-board equipment on a vehicle effectively uses all vehicle dynamic information on an information network platform through wireless communication technology, and provides different functional services while the vehicle is running. The Internet of Vehicles exhibits the following features: it can help guarantee the distance between vehicles and reduce the probability of collision accidents; and it can help the vehicle owner navigate in real time and, through communication with other vehicles and the network system, improve the efficiency of traffic operation. The Internet of Vehicles operation platform in the embodiment of the invention is introduced taking the Mapgoo Internet of Vehicles operation service platform as an example. Based on this platform, video shot by the vehicle event data recorder is obtained, vehicle information of the shooting point is obtained through the system GPS, and in post-processing the live-action material is combined through big data and artificial intelligence and matched with geographic coordinates to generate a street view map, breaking the monopoly of GPS in the street view acquisition field and, together with equipment suppliers, realizing the localization and popularization of the street view acquisition system.
Specifically, the method for connecting a vehicle to the Internet of Vehicles operation platform is as follows: the device is registered in advance on the Internet of Vehicles operation platform, and the platform verifies the parameters; if verification fails, no operation is executed; if verification succeeds, the device information is obtained and the cache is read; whether the vehicle is registered is then judged: if it is registered, the registration result is returned and the route is updated; if not, the device is registered, the registration result is returned and the route is updated.
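The access flow just described can be sketched with an in-memory stand-in for the platform's device registry; the function name, parameter layout and return shape are illustrative assumptions, not the real platform interface:

```python
def register_device(registry, device_id, params):
    """Mimics the access flow: verify parameters; on failure, do nothing;
    on success, register the device if not yet known, then return the
    registration result (the route is marked as updated either way)."""
    if not params.get("device_id") or params["device_id"] != device_id:
        return {"ok": False}          # verification failed: no operation
    if device_id in registry:         # already registered:
        registry[device_id]["route_updated"] = True   # just refresh route
    else:                             # not registered: register the device
        registry[device_id] = {"params": params, "route_updated": True}
    return {"ok": True, "registered": True}

registry = {}
r1 = register_device(registry, "dev-1", {"device_id": "dev-1"})
r2 = register_device(registry, "dev-1", {"device_id": "dev-1"})  # repeat call
bad = register_device(registry, "dev-2", {"device_id": "wrong"})
```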
Further, the acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point includes:
acquiring, from the Internet of Vehicles operation platform, vehicle position data, vehicle altitude data, vehicle traveling direction and vehicle traveling speed corresponding to the video shooting point.
In specific implementation, the system GPS of the Mapgoo Internet of Vehicles operation service platform is used to obtain position information of the shooting point, such as its precise position, altitude, current direction and traveling speed. The position information of the shooting point is generally its longitude and latitude.
Further, the matching of the video with the vehicle information corresponding to the shooting point to generate a live-action map includes:
acquiring, when the target vehicle is detected to be running, vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if the live-action map needs to be updated, matching the video with the shooting point information to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
Specifically, while the vehicle is running, accurate longitude/latitude and positioning-time information is reported by positioning software, and whether the live-action map needs updating is computed and matched through big data. If it does, a frame-extraction instruction is sent to the device; the instruction content is the millisecond time offsets of the device's positioning points after calculation. Since the positioning times are consecutive, only the first time point is stored absolutely and each later point is stored as an increment, which effectively reduces the instruction data volume while preserving the instruction's quality; the video is then matched with the shooting-point information to generate the updated live-action map.
If the live-action map does not need updating, the vehicle position data and positioning time of the target vehicle continue to be acquired.
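The incremental time encoding described above, keeping only the first absolute time point and storing each later point as an offset from its predecessor, amounts to a simple delta encoding. A sketch follows; the real instruction's field layout is not specified in the text, so this is a generic illustration:

```python
def encode_offsets(times_ms):
    """[t0, t1, t2, ...] -> [t0, t1-t0, t2-t1, ...]. Since positioning
    times are monotonically increasing, the deltas are small numbers,
    which shrinks the frame-extraction instruction payload."""
    if not times_ms:
        return []
    return [times_ms[0]] + [b - a for a, b in zip(times_ms, times_ms[1:])]

def decode_offsets(encoded):
    """Inverse: rebuild the absolute millisecond timestamps."""
    times, acc = [], 0
    for v in encoded:
        acc += v
        times.append(acc)
    return times

fixes = [1_596_000_000_000, 1_596_000_001_000, 1_596_000_002_500]
enc = encode_offsets(fixes)   # [1596000000000, 1000, 1500]
```

Only the first entry is a full millisecond timestamp; the rest fit in a few bytes each, which is the data-volume saving the paragraph above refers to.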
The method by which the target vehicle reports its position is specifically as follows: the target vehicle reports a track, and the parameters are verified; if verification fails, no operation is executed; if verification succeeds, the device information is obtained and the cache is read; whether the vehicle is registered on the Internet of Vehicles service platform is judged; if it is registered, the track is written, the result is returned and the report succeeds; if not, the report fails. The third-party platform saves the track.
The track can be read by the owner of the target vehicle. The specific method is that the consumer reads the track, the route is updated, and the track is pushed to the consumer; the consumer can filter the track as needed.
The specific flow by which the Internet of Vehicles service platform issues a frame-extraction instruction to the third-party platform is: receive the instruction and check the parameters; if the check succeeds, issue the instruction, and if it fails, execute no operation; then receive the instruction, search for the route (without judging whether the route is offline), call back the partner instruction interface, and receive the instruction.
The specific process of third-party-platform instruction confirmation is: confirm the instruction and verify the parameters; if verification succeeds, obtain the instruction-confirmation interface, notify the third-party platform of the confirmation, and return the instruction result; if verification fails, perform no operation.
The process of transmitting a third-party-platform picture back to the Internet of Vehicles service platform is: return the data and check the parameters; if the check succeeds, obtain the data-return interface, notify the third-party platform of the data return, and the upload succeeds; if the check fails, perform no operation.
Further, the matching of the video with the vehicle information corresponding to the shooting point to generate a live-action map includes:
sending a frame-extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frame corresponding to the positioning time, extracted by the target vehicle from the shot video;
matching the positioning points with the collected pictures according to the frame information of the video frame;
and generating a live-action map according to the matched result.
Specifically, the target vehicle extracts the frame of the shot video corresponding to the positioning time and uploads it to the cloud storage center; the corresponding frame-extraction information of the cloud storage center is returned to the background, where the frame-extraction information includes, but is not limited to, the frame-extraction device, the frame-extraction time and the storage address. The background then matches positioning points with the collected pictures through big-data computation; the matched information includes, but is not limited to, the frame-extraction device, frame-extraction time, positioning device and positioning time.
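The background matching just described can be sketched as a join on device identity plus nearest positioning time. The field names and the 500 ms tolerance here are assumptions for illustration; the text only states that the matched information covers frame-extraction device/time and positioning device/time:

```python
def match_frames_to_fixes(frame_infos, fixes, tol_ms=500):
    """frame_infos: dicts with 'device', 'time_ms', 'storage_address'
    (as returned from the cloud storage center). fixes: dicts with
    'device', 'time_ms', 'lat', 'lon'. Joins each frame to the fix from
    the same device nearest in time, within tol_ms; unmatched frames
    are dropped (in practice they would go to calibration instead)."""
    matched = []
    for f in frame_infos:
        same_dev = [x for x in fixes if x["device"] == f["device"]]
        if not same_dev:
            continue
        best = min(same_dev, key=lambda x: abs(x["time_ms"] - f["time_ms"]))
        if abs(best["time_ms"] - f["time_ms"]) <= tol_ms:
            matched.append({"frame": f["storage_address"],
                            "lat": best["lat"], "lon": best["lon"]})
    return matched

frames = [{"device": "d1", "time_ms": 1000, "storage_address": "oss://a/1.jpg"},
          {"device": "d1", "time_ms": 9000, "storage_address": "oss://a/2.jpg"}]
fixes = [{"device": "d1", "time_ms": 1200, "lat": 22.54, "lon": 114.05}]
result = match_frames_to_fixes(frames, fixes)
# only the first frame is within the 500 ms tolerance
```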
Further, the generating a live-action map according to the matched result includes:
identifying the picture through artificial intelligence, and calculating the accuracy of the acquired picture;
if the accuracy of the picture is abnormal, sending the picture to a manual terminal for calibration and obtaining the manual calibration result, and if the manual calibration confirms correctness, generating an updated live-action map according to the matched result;
and if the accuracy of the picture is normal, generating an updated live-action map according to the matched result.
In specific implementation, artificial intelligence is used to identify the picture, and big data is used to calculate the accuracy of the collected picture (consistency between the video watermark time and the device positioning time). When the accuracy is abnormal, manual intervention is used (a person judges whether the video recording time and the device positioning time are consistent) to calibrate the acquisition accuracy. An accuracy anomaly generally means an accuracy below a preset threshold. For example, with a preset threshold of 90%, an accuracy above 90% indicates that the current accuracy is normal, and an accuracy below 90% indicates that it is abnormal.
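The accuracy check might be sketched as follows, assuming accuracy is the fraction of samples whose video watermark time and device positioning time agree within a tolerance. The 2-second tolerance is an assumption; the 90% threshold comes from the example above:

```python
def accuracy(samples, tol_s=2.0):
    """samples: list of (watermark_time_s, positioning_time_s) pairs.
    A sample is 'consistent' when the two clocks agree within tol_s.
    Returns the fraction of consistent samples."""
    if not samples:
        return 0.0
    ok = sum(1 for w, p in samples if abs(w - p) <= tol_s)
    return ok / len(samples)

def needs_manual_calibration(samples, threshold=0.9):
    # accuracy below the preset threshold is treated as abnormal
    return accuracy(samples) < threshold

good = [(100.0, 100.5), (101.0, 101.2), (102.0, 102.1)]
bad = [(100.0, 100.5), (101.0, 180.0), (102.0, 250.0)]
```

With `good`, all three samples agree within tolerance, so no manual calibration is triggered; with `bad`, only one of three does, so the batch is routed to the manual terminal.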
Compared with the traditional method, the above method embodiment greatly enlarges the collection range of the live-action map, reduces the collection cost and shortens the collection period through the number of devices linked to the Mapgoo Internet of Vehicles platform;
sending instructions to the devices through the background further improves live-action acquisition efficiency and reduces acquisition cost.
In some other embodiments, the current scheme is immediate, passive acquisition: no data can be acquired after the video expires or the device goes offline. At a later stage, a cloud-acquisition mode is adopted to collect device tracks and device video in the cloud. This reduces the performance consumption of the device, improves acquisition efficiency, and makes historical data traceable.
The method for generating a live-action map in the embodiment of the present invention is described above; a live-action map generating system in the embodiment of the present invention is described below. Referring to fig. 2, fig. 2 is a schematic hardware structure diagram of another embodiment of a live-action map generating system according to an embodiment of the present invention. As shown in fig. 2, the system 10 includes: a memory 101, a processor 102 and a computer program stored on the memory and executable on the processor, the computer program implementing the following steps when executed by the processor 102:
the method comprises the steps that a target vehicle connected to an Internet of vehicles operation platform is obtained in advance, and a vehicle event data recorder is arranged on the target vehicle;
acquiring a shot video through a vehicle event data recorder of a target vehicle;
acquiring vehicle information corresponding to the video shooting point according to the Internet of vehicles operation platform;
and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
The specific implementation steps are the same as those of the method embodiments, and are not described herein again.
Optionally, the computer program, when executed by the processor 102, further implements the steps of:
acquiring, from the Internet of Vehicles operation platform, vehicle position data, vehicle altitude data, vehicle traveling direction and vehicle traveling speed corresponding to the video shooting point.
The specific implementation steps are the same as those of the method embodiments, and are not described herein again.
Optionally, the computer program, when executed by the processor 102, further implements the steps of:
acquiring, when the target vehicle is detected to be running, vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if the live-action map needs to be updated, matching the video with the shooting point information to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
The specific implementation steps are the same as those of the method embodiments, and are not described herein again.
Optionally, the computer program, when executed by the processor 102, further implements the steps of:
sending a frame-extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frame corresponding to the positioning time, extracted by the target vehicle from the shot video;
matching the positioning points with the collected pictures according to the frame information of the video frame;
and generating a live-action map according to the matched result.
The specific implementation steps are the same as those of the method embodiments, and are not described herein again.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, for example to perform the method steps S100 to S400 of fig. 1 described above.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A live-action map generation method, characterized by comprising:
obtaining in advance a target vehicle connected to an Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder;
acquiring shot video through the vehicle event data recorder of the target vehicle;
acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point;
and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
2. The live-action map generation method according to claim 1, wherein the acquiring, from the Internet of Vehicles operation platform, vehicle information corresponding to the video shooting point comprises:
acquiring, from the Internet of Vehicles operation platform, vehicle position data, vehicle altitude data, vehicle traveling direction and vehicle traveling speed corresponding to the video shooting point.
3. The live-action map generation method according to claim 2, wherein the matching of the video with the vehicle information corresponding to the shooting point to generate the live-action map comprises:
acquiring, when the target vehicle is detected to be running, vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if the live-action map needs to be updated, matching the video with the shooting point information to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
4. The live-action map generation method according to claim 3, wherein the matching the video with the vehicle information corresponding to the shooting point to generate the live-action map comprises:
sending a frame extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frames that the target vehicle extracts from the captured video at the positioning times;
matching the positioning points with the collected pictures according to the frame information of the video frames;
and generating a live-action map according to the matched result.
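The frame extraction of claim 4 can be sketched as a timestamp-to-frame-index conversion (illustrative only; the constant frame rate and the function name are assumptions): each positioning time is mapped to the index of the frame recorded closest to that instant, so the recorder decodes and uploads only those frames.

```python
def extract_frames_for_fixes(fix_times, video_start, fps):
    """Translate each positioning time (seconds) into the index of the
    video frame recorded closest to that instant, assuming the recording
    starts at `video_start` and runs at a constant `fps`."""
    indices = []
    for t in fix_times:
        if t < video_start:
            continue  # positioning fix predates the recording; skip it
        indices.append(round((t - video_start) * fps))
    return indices
```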
5. The live-action map generation method according to claim 4, wherein the generating a live-action map according to the matched result comprises:
identifying the collected pictures through artificial intelligence and calculating the accuracy of each picture;
if the accuracy of a picture is abnormal, sending the picture to a manual terminal for calibration and acquiring the manual calibration result; if the manual calibration confirms the picture is correct, generating an updated live-action map according to the matched result;
and if the accuracy of the picture is normal, generating an updated live-action map according to the matched result.
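The accuracy gate of claim 5 can be sketched as follows (illustrative only, not part of the claims; the confidence score, the threshold value, and the `manual_check` callable standing in for the manual calibration terminal are all assumptions):

```python
def resolve_picture(picture_score, threshold=0.8, manual_check=None):
    """Gate a matched picture on its recognition confidence score.
    Scores at or above `threshold` are accepted automatically; lower
    scores are routed to `manual_check`, a callable standing in for the
    manual calibration terminal, which returns True to accept the
    picture or False to reject it."""
    if picture_score >= threshold:
        return "accepted"              # accuracy normal: use as matched
    if manual_check is not None and manual_check():
        return "accepted_after_calibration"
    return "rejected"                  # exclude from the updated map
```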
6. A live-action map generating system, characterized in that the system comprises: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of:
acquiring in advance a target vehicle connected to the Internet of Vehicles operation platform, the target vehicle being provided with a vehicle event data recorder;
acquiring a captured video through the vehicle event data recorder of the target vehicle;
acquiring vehicle information corresponding to the video shooting point from the Internet of Vehicles operation platform;
and matching the video with the vehicle information corresponding to the shooting point to generate a live-action map.
7. A live-action map generating system as claimed in claim 6, wherein the computer program when executed by the processor further performs the steps of:
acquiring vehicle position data, vehicle altitude data, vehicle running direction and vehicle running speed corresponding to the video shooting point from the Internet of Vehicles operation platform.
8. A live-action map generating system as claimed in claim 7, wherein the computer program when executed by the processor further performs the steps of:
when the target vehicle is detected to be running, acquiring vehicle position data and positioning time reported by the target vehicle;
matching the vehicle position data and the positioning time against background data, and judging whether the road section corresponding to the current vehicle position data needs its live-action map updated;
if an update is needed, matching the video with the vehicle information corresponding to the shooting point to generate an updated live-action map;
and if no update is needed, continuing to acquire the vehicle position data and the positioning time of the target vehicle.
9. A live-action map generating system as claimed in claim 8, wherein the computer program when executed by the processor further performs the steps of:
sending a frame extraction instruction to a target vehicle whose live-action map needs updating, and acquiring frame information of the video frames that the target vehicle extracts from the captured video at the positioning times;
matching the positioning points with the collected pictures according to the frame information of the video frames;
and generating a live-action map according to the matched result.
10. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the live-action map generation method of any one of claims 1-5.
CN202010743950.9A 2020-07-29 2020-07-29 Live-action map generation method and system Pending CN111966772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010743950.9A CN111966772A (en) 2020-07-29 2020-07-29 Live-action map generation method and system


Publications (1)

Publication Number Publication Date
CN111966772A true CN111966772A (en) 2020-11-20

Family

ID=73363408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010743950.9A Pending CN111966772A (en) 2020-07-29 2020-07-29 Live-action map generation method and system

Country Status (1)

Country Link
CN (1) CN111966772A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108306904A (en) * 2016-08-25 2018-07-20 大连楼兰科技股份有限公司 Car networking road conditions video acquisition and sharing method and system
CN110287276A (en) * 2019-05-27 2019-09-27 百度在线网络技术(北京)有限公司 High-precision map updating method, device and storage medium
CN111024115A (en) * 2019-12-27 2020-04-17 奇瑞汽车股份有限公司 Live-action navigation method, device, equipment, storage medium and vehicle-mounted multimedia system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565387A (en) * 2020-12-01 2021-03-26 北京罗克维尔斯科技有限公司 Method and device for updating high-precision map
CN112565387B (en) * 2020-12-01 2023-07-07 北京罗克维尔斯科技有限公司 Method and device for updating high-precision map
CN114623838A (en) * 2022-03-04 2022-06-14 智道网联科技(北京)有限公司 Map data acquisition method and device based on Internet of vehicles and storage medium

Similar Documents

Publication Publication Date Title
JP6675770B2 (en) Map update method and in-vehicle terminal
CN110146869B (en) Method and device for determining coordinate system conversion parameters, electronic equipment and storage medium
CN108413975B (en) Map acquisition method and system, cloud processor and vehicle
JP4321128B2 (en) Image server, image collection device, and image display terminal
CN102741900B (en) Road condition management system and road condition management method
CN109817022B (en) Method, terminal, automobile and system for acquiring position of target object
JP6398501B2 (en) In-vehicle camera diagnostic device
CN110164135B (en) Positioning method, positioning device and positioning system
CN110929703B (en) Information determination method and device and electronic equipment
US11680822B2 (en) Apparatus and methods for managing maps
US11589082B2 (en) Live view collection and transmission system
CN111966772A (en) Live-action map generation method and system
KR20190043396A (en) Method and system for generating and providing road weather information by using image data of roads
JP2020094956A (en) Information processing system, program, and method for information processing
CN114080537A (en) Collecting user contribution data relating to a navigable network
CN111323041B (en) Information processing system, storage medium, and information processing method
KR20110037045A (en) Image acquisition system by vehicle camera and controlling method thereof
KR102423653B1 (en) Road information collecting system using vehicle equipped with seprated camera
CN113063421A (en) Navigation method and related device, mobile terminal and computer readable storage medium
JP7046555B2 (en) In-vehicle device, server, display method, transmission method
KR20080019947A (en) Road information report method and system by using road information acquired from image
JP2003051021A (en) Information updating device, information acquisition device and information update processing device
JP7147791B2 (en) Tagging system, cache server, and control method of cache server
WO2023132147A1 (en) Information management system, center, information management method, and program
US20240135718A1 (en) Method and system for gathering image training data for a machine learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination