CN113419257A - Positioning calibration method, device, terminal equipment, storage medium and program product - Google Patents

Positioning calibration method, device, terminal equipment, storage medium and program product

Info

Publication number
CN113419257A
Authority
CN
China
Prior art keywords
information
current vehicle
positioning
video image
driving video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110732163.9A
Other languages
Chinese (zh)
Inventor
徐怀亮
徐怀修
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Luzhuo Technology Co ltd
Original Assignee
Shenzhen Luzhuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Luzhuo Technology Co ltd filed Critical Shenzhen Luzhuo Technology Co ltd
Priority to CN202110732163.9A priority Critical patent/CN113419257A/en
Publication of CN113419257A publication Critical patent/CN113419257A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/396Determining accuracy or reliability of position or pseudorange measurements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/40Correcting position, velocity or attitude
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/48Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/485Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an optical system or imaging system

Abstract

The invention discloses a positioning calibration method, which comprises the following steps: acquiring a driving video image and navigation information of a current vehicle; determining the positioning information of the current vehicle according to the navigation information, and extracting environmental characteristic information from the driving video image; and calibrating the positioning information of the current vehicle according to the environmental characteristic information. The invention also discloses a positioning calibration device, a terminal device, a storage medium and a program product. According to the invention, environmental characteristic information is extracted from driving video images acquired in real time, and the characteristics of the actual environment around the current vehicle are used to calibrate the vehicle's positioning information in the navigation information. This avoids erroneous navigation prompts caused by vehicle positioning that is not updated in time under poor communication conditions, and thereby improves driving safety.

Description

Positioning calibration method, device, terminal equipment, storage medium and program product
Technical Field
The present invention relates to the field of driving assistance technologies, and in particular, to a positioning calibration method, an apparatus, a terminal device, a storage medium, and a program product.
Background
With the development of GPS positioning technology, navigation software of all kinds, and vehicle navigation software for assisted driving in particular, is being used ever more widely. However, in signal-dense environments such as cities, various kinds of interference affect communication between the navigation software and its server. The resulting communication delays cause the assisted-driving navigation software to report a deviated vehicle position and to issue erroneous navigation prompts, which is detrimental to driving safety.
Disclosure of Invention
The invention mainly aims to provide a positioning calibration method, a positioning calibration device, a terminal device, a storage medium and a program product, and aims to solve the technical problem that deviated vehicle positioning in navigation software produces erroneous navigation prompts and compromises driving safety.
To achieve the above object, the present invention provides a positioning calibration method, including the following steps:
acquiring a driving video image and navigation information of a current vehicle;
determining the positioning information of the current vehicle according to the navigation information, and extracting environmental characteristic information from the driving video image;
and calibrating the positioning information of the current vehicle according to the environmental characteristic information.
Optionally, the environmental characteristic information includes distance information and text information, and the step of calibrating the positioning information of the current vehicle according to the environmental characteristic information includes:
determining a target reference object from the environment where the current vehicle is located according to the text information of the environment characteristic information;
determining a first distance between the current vehicle and the target reference object according to distance information in the environment characteristic information;
acquiring coordinate information of the target reference object in the navigation information, and calculating a second distance between the current vehicle and the target reference object in the navigation information according to the coordinate information and the positioning information;
and calibrating the positioning information according to the first distance and the second distance.
Optionally, the step of extracting environmental feature information from the driving video image includes:
carrying out target detection on the driving video image, extracting object information in the driving video image, and calculating distance information between each object in the object information and the current vehicle;
when detecting that the object information contains character information, extracting the character information;
and integrating the distance information and the character information to obtain environment characteristic information.
Optionally, after the step of performing target detection on the driving video image, the method further includes:
when an obstacle is detected, identifying a type of the obstacle;
and outputting prompt information according to the type of the obstacle, wherein the type of the obstacle comprises a movable obstacle and a static obstacle, and the prompt information comprises a driving strategy for avoiding the obstacle.
Optionally, before the step of extracting the environmental characteristic information from the driving video image, the method further includes:
acquiring environmental information of the current vehicle, and if the environmental information meets a preset condition, performing enhancement processing on the driving video image information;
the step of enhancing the driving video image comprises the following steps:
extracting vector characteristics of the driving video image, and performing Fourier transform on the driving image based on the vector characteristics;
and carrying out filtering processing and sharpening processing on the driving video image subjected to Fourier transform so as to eliminate noise in the driving video image and enhance the outline characteristics in the driving video image.
Optionally, after the step of calibrating the positioning information of the current vehicle according to the environmental characteristic information, the method further includes:
generating target navigation information according to the calibrated positioning information, and outputting and displaying the target navigation information and the driving video image;
and detecting road condition information of the running route of the current vehicle based on the target navigation information and the driving video image, and outputting early warning prompt information when detecting that the road condition indicated by the road condition information changes.
In addition, to achieve the above object, the present invention also provides a positioning calibration apparatus, including:
the data acquisition module is used for acquiring driving video images and navigation information of the current vehicle;
the characteristic extraction module is used for determining the positioning information of the current vehicle according to the navigation information and extracting environmental characteristic information from the driving video image;
and the positioning calibration module is used for calibrating the positioning information of the current vehicle according to the environmental characteristic information.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor, and a positioning calibration program stored on the memory and executable on the processor, wherein the positioning calibration program, when executed by the processor, implements the steps of the positioning calibration method described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, having a positioning calibration program stored thereon, where the positioning calibration program, when executed by a processor, implements the steps of the positioning calibration method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer program product comprising a computer program, which when executed by a processor, performs the steps of the positioning calibration method as described above.
The embodiments of the invention provide a positioning calibration method, a device, a terminal device, a storage medium and a program product. In the prior art, when the communication condition between the navigation software and its server is poor, the vehicle's positioning deviates, erroneous navigation prompt information is generated, and driving safety suffers. Based on this, in the embodiments of the invention, a driving video image and navigation information of the current vehicle are acquired; the positioning information of the current vehicle is determined according to the navigation information, and environmental characteristic information is extracted from the driving video image; and the positioning information of the current vehicle is calibrated according to the environmental characteristic information. Calibrating the vehicle's positioning information against real-time environmental characteristic information avoids erroneous navigation prompts and improves driving safety while assisting driving.
Drawings
Fig. 1 is a schematic hardware structure diagram of an implementation manner of a terminal device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a positioning calibration method according to a first embodiment of the present invention;
fig. 3 is a schematic functional block diagram of a positioning calibration apparatus according to another embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal (also called a device or terminal device) in the embodiments of the invention may be a driving recorder, or a mobile terminal device with display and data processing functions, such as a PC, a smartphone, a tablet computer or a portable computer.
As shown in fig. 1, the terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors may include light sensors, motion sensors and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when the terminal is stationary, and can be used for applications that recognize the attitude of the mobile terminal (such as landscape/portrait switching, related games, or magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The mobile terminal may of course also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer-readable storage medium, may include therein an operating system, a network communication module, a user interface module, and a positioning calibration program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke a positioning calibration program stored in the memory 1005, which when executed by the processor implements the operations in the positioning calibration method provided by the embodiments described below.
Based on the hardware structure of the equipment, the embodiment of the positioning calibration method is provided.
It should be noted that, with the development of positioning and navigation technology, most automobiles now have built-in positioning and navigation functions. However, when a vehicle is in a signal-dense environment, signal interference, or communication congestion caused by excessive load on the navigation software's server, may cause the vehicle's positioning to deviate and even produce erroneous navigation prompts, which is detrimental to driving safety. Furthermore, to protect driving safety, more and more vehicle owners choose to install a driving recorder with an assisted-driving function: on the one hand it assists driving, and on the other hand it makes it easier to collect evidence and determine responsibility after an accident. Based on this, the positioning calibration method provided by the invention aims to calibrate the positioning of the vehicle through a terminal device, such as a driving recorder, that can acquire streaming media data in real time, thereby improving driving safety.
Referring to fig. 2, in a first embodiment of the positioning calibration method of the present invention, the positioning calibration method includes:
step S10, acquiring driving video images and navigation information of the current vehicle;
in this embodiment, the positioning calibration method is applied to a terminal device with display and data processing functions, where the terminal device may be a PC, a tablet computer, or a vehicle event data recorder with a camera and a display, where the display of the vehicle event data recorder may be a streaming media rearview mirror, and the vehicle event data recorder with the camera and the streaming media rearview mirror is taken as an example for description below.
Driving video images of the current vehicle are acquired through the camera of the driving recorder. The acquired driving video images may cover the road in front of and/or behind the current vehicle as well as the roads on its left and right sides, depending on the installation position and angle of the camera; the driving recorder may have one camera or several. Taking the acquisition of driving video images of the roads in front of and behind the current vehicle as an example, the navigation information of the current vehicle is acquired at the same time. The navigation information may come from the vehicle's own positioning and navigation function, from positioning and navigation software on a mobile terminal (such as a mobile phone) of the driver, or from positioning and navigation software pre-installed in the driving recorder. Specifically, if the navigation information comes from the vehicle's own navigation function or from navigation software on the driver's mobile terminal, the driver can establish communication between the mobile terminal or the vehicle and the driving recorder, for example via Bluetooth, so that the recorder can obtain the navigation information of the current vehicle; if the navigation information comes from positioning and navigation software pre-installed in the driving recorder, it can be obtained directly by the recorder. In this embodiment, navigation information obtained from positioning and navigation software on the mobile phone of the driver (hereinafter referred to as the user) of the current vehicle is taken as an example.
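As a concrete illustration of step S10, the sketch below shows how a recorder-side program might pull one video frame together with the current navigation fix. It is a minimal sketch only: the camera index, the dictionary layout of the navigation data, and the get_navigation_info() helper (standing in for the Bluetooth or in-recorder navigation link described above) are assumptions and not part of the disclosed method.

```python
import cv2


def get_navigation_info():
    # Hypothetical placeholder for the Bluetooth link to the phone's navigation
    # software or for the recorder's own pre-installed navigation software.
    return {"lat": 22.5431, "lon": 114.0579, "heading_deg": 90.0, "fix_time": 0.0}


def acquire_inputs(camera_index=0):
    # Grab one driving video frame from the recorder's camera (front or rear,
    # depending on installation) together with the current navigation fix.
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    return (frame if ok else None), get_navigation_info()
```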
Step S20, determining the positioning information of the current vehicle according to the navigation information, and extracting environmental characteristic information from the driving video image;
after the navigation information of the current vehicle and the driving video images of the front and rear roads are acquired, the positioning information of the current vehicle is determined according to the acquired navigation information, and it should be noted that the positioning information is the position information of the current vehicle displayed in the positioning navigation software in the mobile phone of the user. And then extracting environment characteristic information in the environment where the current vehicle is located from the acquired driving video image, wherein the extracted environment characteristic information comprises various types of characteristic information, including object information with obvious characteristics such as buildings, bus stations and the like, and distance information between each object in the environment and the current vehicle, and the distance information represents the relative distance and the relative direction between the current vehicle and the environment object. Further, the extracted environmental characteristic information also includes text information, for example, text information exists on a certain building, the text information may be a name of the building, or a shop name in the building, or an advertisement, etc., and then the text information is extracted from the acquired driving video image, so as to obtain different types of environmental characteristic information extracted from the driving video image.
Further, in step S20, before the step of extracting the environmental characteristic information from the driving video image, the method includes:
step S21, acquiring the environmental information of the current vehicle, and if the environmental information meets the preset conditions, performing enhancement processing on the driving video image information;
before extracting the environmental characteristic information from the acquired driving video image, the acquired driving video image generally needs to be enhanced. The reason is that, for the current vehicle, the external environment is complicated and changeable, for example, severe weather such as rainstorm, fog or haze, and insufficient light such as rainy days or nights, so that the quality of the acquired driving video image can be affected, and when the quality of the acquired driving video image is not high, the acquired driving video image needs to be enhanced, so that the follow-up extraction processing of the environmental information characteristics is not affected. Whether the acquired driving video image needs to be enhanced is determined according to the environmental information of the current vehicle, wherein the environmental information mainly refers to the natural environment of the current vehicle, and the acquired driving video image is enhanced if the environmental information of the current vehicle meets the preset condition. Wherein, this preset condition can be light intensity among the natural environment, whether have thunderstorm, wind and snow, extreme bad weather influence video image's definition such as fog or haze.
In step S21, the step of performing enhancement processing on the driving video image includes:
step S211, extracting vector characteristics of the driving video image, and carrying out Fourier transform on the driving image based on the vector characteristics;
step S212, filtering and sharpening the driving video image after Fourier transformation so as to eliminate noise in the driving video image and enhance contour characteristics in the driving video image.
Specifically, when the acquired driving video image is enhanced, the vector features of each frame in the driving video image are extracted first, and a Fourier transform is applied to the frame based on the extracted vector features; the Fourier-transformed frame is then filtered and sharpened to eliminate image noise and, at the same time, strengthen the contour features of objects in the image, which facilitates target recognition.
The filtering process may include low-pass filtering for eliminating noise, high-pass filtering for enhancing high-frequency signals such as edges to make a blurred picture clearer, and median filtering and mean filtering for removing or reducing noise. Furthermore, which filtering mode is used during enhancement can be selected according to the environmental information: the information that needs to be enhanced differs between driving video images acquired under different environmental conditions, so different enhancement modes can be configured for different environmental information in order to obtain an optimal image and facilitate the subsequent extraction of environmental characteristic information.
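The following sketch illustrates one possible form of steps S211-S212, using a frequency-domain low-pass filter followed by unsharp-mask sharpening. The cut-off radius and sharpening strength are assumptions, and a real implementation would choose the filter type (low-pass, high-pass, median, mean) according to the environmental information as described above.

```python
import cv2
import numpy as np


def enhance_frame(gray, cutoff_radius=30, sharpen_amount=1.5):
    # Fourier transform of the frame (spatial domain -> frequency domain).
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    rows, cols = gray.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    # Ideal low-pass mask: keep frequencies near the centre to suppress noise.
    mask = np.sqrt((y - cy) ** 2 + (x - cx) ** 2) <= cutoff_radius
    denoised = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
    # Unsharp masking to restore and strengthen object contours after smoothing.
    blurred = cv2.GaussianBlur(denoised, (5, 5), 0)
    sharpened = cv2.addWeighted(denoised, 1 + sharpen_amount, blurred, -sharpen_amount, 0)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```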
In the embodiment, the quality of the driving video image is improved by enhancing the acquired driving video image, so that the accuracy of the extracted environmental characteristic information is improved, and a good foundation is laid for further processing the extracted environmental characteristic information.
Further, in step S20, the step of extracting the environmental characteristic information from the driving video image may include:
step S201, carrying out target detection on the driving video image, extracting object information in the driving video image, and calculating distance information between each object in the object information and the current vehicle;
step S202, when detecting that the object information contains character information, extracting the character information;
and step S203, integrating the distance information and the character information to obtain environment characteristic information.
After the acquired driving video image has been enhanced, the environmental characteristic information is extracted from the enhanced image. Specifically, object detection is first performed on the driving video image to determine the object information it contains, and the distance between each object and the current vehicle is calculated. When the driving video image is detected to contain text, the text in the image is recognized and extracted using technologies such as Optical Character Recognition (OCR), and the extracted text is integrated with the distance information of each object to obtain the final environmental characteristic information. An example of integrated environmental characteristic information is "a certain building, 200 m ahead on the right", where "a certain building" is the text extracted from the image of the building, "ahead on the right" and "200 m" are the calculated distance information, and the building itself was recognized by the object detection performed on the enhanced driving video image.
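A sketch of steps S201-S203 is given below. The detection results (label, bounding box, distance and bearing for each object) are assumed to come from whatever detector and ranging model the recorder uses; only the OCR step and the integration into a feature record are shown, and pytesseract is used purely as an example OCR backend.

```python
import pytesseract  # example OCR backend; any OCR engine could be substituted


def extract_environment_features(frame, detections):
    # detections: list of (label, (x, y, w, h), distance_m, bearing) tuples produced
    # by the object detector and distance estimator (assumed, not shown here).
    features = []
    for label, (x, y, w, h), distance_m, bearing in detections:
        crop = frame[y:y + h, x:x + w]
        text = pytesseract.image_to_string(crop).strip()  # text on the object, if any
        features.append({"label": label, "text": text,
                         "distance_m": distance_m, "bearing": bearing})
    return features
```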
Further, in step S201, after the target detection is performed on the driving video image, the method may further include:
a step a1 of, when an obstacle is detected, identifying a type of the obstacle;
step A2, outputting prompt information according to the type of the obstacle, wherein the type of the obstacle comprises a moving obstacle and a static obstacle, and the prompt information comprises a driving strategy for avoiding the obstacle.
After object detection has been performed on the enhanced driving video image, each piece of object information in the image can be identified and examined further. When an obstacle is detected, its type is identified and prompt information is output according to that type. Obstacle types include static obstacles and moving obstacles. From the streaming media data of the acquired driving video images, the motion state of each object can be determined by comparing adjacent frames; combined with prior knowledge, this determines which objects are obstacles and which are not, and what the motion state of each obstacle is. Moving obstacles include pedestrians, abnormally driven vehicles and the like; static obstacles include traffic barriers and fences that have been set out, unidentified objects on the road surface, and the like.
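As a sketch of the adjacent-frame comparison described above, the function below classifies a tracked obstacle as moving or static from the shift of its bounding-box centre between two frames. The frame interval and pixel-speed threshold are assumptions, and a real system would also have to compensate for the motion of the current vehicle itself.

```python
def classify_obstacle(box_prev, box_curr, frame_dt=1.0 / 30.0, speed_threshold_px_s=20.0):
    # box_prev / box_curr are (x, y, w, h) boxes of the same obstacle in adjacent frames.
    (x0, y0, w0, h0), (x1, y1, w1, h1) = box_prev, box_curr
    dx = (x1 + w1 / 2.0) - (x0 + w0 / 2.0)
    dy = (y1 + h1 / 2.0) - (y0 + h0 / 2.0)
    pixel_speed = (dx * dx + dy * dy) ** 0.5 / frame_dt
    return "moving" if pixel_speed > speed_threshold_px_s else "static"
```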
Further, the output prompt information includes at least one of a voice prompt, a text prompt and an image prompt, and it includes a driving strategy for avoiding the obstacle. For example, when object detection identifies a static obstacle consisting of an unknown object in the middle of the road, voice prompt information is output to remind the user that a suspected unknown obstacle has been found 50 m ahead and is causing congestion, and to switch to the left lane and slow down; at the same time, the obstacle is displayed graphically on the streaming media rearview mirror of the driving recorder, and its position is highlighted by a flashing or highlighted indicator. The displayed image may be the outline of the obstacle extracted from the driving video image, or it may be replaced by a special symbol or graphic.
Furthermore, the driving strategy for avoiding the obstacle can be generated by combining the distribution of the other vehicles around the current vehicle with their driving information. Specifically, the driving information includes driving direction and driving speed. The distribution of the vehicles around the current vehicle and the current driving direction and speed of each of them are determined from the acquired driving video images; the driving behaviour of each vehicle in the near future is predicted from its current direction and speed; and a driving strategy by which the current vehicle can avoid the obstacle is generated from the prediction results and the vehicle distribution.
Step S30, calibrating the positioning information of the current vehicle according to the environmental characteristic information.
Furthermore, after the environmental characteristic information has been extracted from the acquired driving video image, the positioning information of the current vehicle in the navigation information is calibrated according to the extracted environmental characteristic information. In practice, when a user navigates with the positioning and navigation software on a mobile phone, a poor phone network signal or a poor communication condition between the software and its server means that the positioning information of the current vehicle is not updated in time, so the positioning deviates considerably. For example, during rush hour the positioning and navigation software is heavily used, which overloads the software's server and degrades communication between the software and the server; or, when the user is in a congested or remote area and the phone's network signal is poor, positioning updates sent by the server cannot be received in time. The positioning information of the current vehicle then lags behind, and erroneous navigation prompts are easily generated.
For example, in one possible situation a user needs to turn at a relatively hidden small intersection, but because there are many small shops around the intersection, or the radio environment is noisy, the user's phone signal is poor. When the user navigates with the navigation software on the phone, the positioning is inaccurate: as the vehicle approaches the intersection the software outputs no prompt, and by the time it prompts "turn in 50 metres" the vehicle has already driven up to the intersection, so the late prompt is likely to make the user miss the turn. With the environmental characteristic information extracted from real-time streaming media data, when communication between the phone's navigation software and the server is delayed, the positioning of the current vehicle is calibrated using the real-time environmental characteristics around it, which assists the user's driving and improves driving safety.
It should be noted that, in this embodiment, the calibration of the positioning information of the current vehicle may be performed at regular intervals, or the positioning update pattern of the navigation software may be obtained and, when it is detected that the navigation software should have updated the positioning information of the current vehicle but has not, the positioning may be calibrated autonomously according to the real-time environment in which the current vehicle is located. The specific behaviour may be customised by the user and is not limited here.
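A sketch of both trigger modes follows, assuming the navigation software's expected update interval and the periodic calibration interval are user-configurable values as suggested above.

```python
import time


def should_calibrate(last_nav_update, last_calibration,
                     expected_update_interval_s=1.0, periodic_interval_s=10.0):
    # Calibrate when the navigation software has missed its expected positioning
    # update, or simply at a fixed period (both intervals are illustrative).
    now = time.time()
    missed_update = (now - last_nav_update) > expected_update_interval_s
    timer_due = (now - last_calibration) > periodic_interval_s
    return missed_update or timer_due
```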
In the embodiment, the driving video image and the navigation information of the current vehicle are acquired; determining the positioning information of the current vehicle according to the navigation information, and extracting environmental characteristic information from the driving video image; and calibrating the positioning information of the current vehicle according to the environmental characteristic information. Based on the environmental characteristic information extracted from the real-time streaming media data, when the current vehicle is not positioned accurately, the positioning of the vehicle is calibrated by using the real-time environmental characteristic of the current vehicle, so that the driving is assisted, and the driving safety is improved.
Furthermore, when the driving video image is obtained while the current vehicle is in an environment with insufficient light or is affected by severe weather, the obtained image is enhanced before the various kinds of characteristic information are extracted from it. This improves the quality of the driving video image, raises the recognition precision of the target detection performed on it, and thus improves the accuracy of the positioning calibration of the current vehicle.
Further, on the basis of the first embodiment of the present invention, a second embodiment of the positioning calibration method of the present invention is provided.
This embodiment refines step S30 of the first embodiment, and includes:
step S301, determining a target reference object from the environment where the current vehicle is located according to the text information of the environment characteristic information;
based on the above embodiments, in this embodiment, the extracted environmental characteristic information includes text information and distance information, and when the positioning information of the current vehicle is calibrated according to the extracted environmental characteristic information, a target reference object is determined from the environment where the current vehicle is located according to the extracted text information, where the target reference object is generally a stationary object with a landmark property, such as a large building like a hospital, a mall, and an office building, or an object with a special purpose such as a bus stop and a logo in navigation software.
Step S302, determining a first distance between the current vehicle and the target reference object according to distance information in the environment characteristic information;
and determining the actual distance between the current vehicle and the target reference object according to the selected target reference object and the distance information in the extracted environment characteristic information. For example, when the positioning information is to be calibrated during the current driving of the vehicle, an object having a character information mark and a landmark is selected from the extracted environmental characteristic information as a target reference object, and then the actual distance to the target reference object is determined.
Step S303, obtaining coordinate information of the target reference object in the navigation information, and calculating a second distance between the current vehicle and the target reference object in the navigation information according to the coordinate information and the positioning information;
after the actual distance between the current vehicle and the target reference object is determined, coordinate information of the target reference object in the navigation information is obtained, and it is known that in the navigation information, although a navigation interface interacting with a user is presented in the form of a map, each object in the navigation map is stored in the form of a coordinate point, and each coordinate point has a corresponding text identifier, such as a certain shopping mall, a certain restaurant or a certain public communication station, and the coordinate information of the target reference object in the navigation information selected according to the text information index, and according to the positioning information of the current vehicle in the navigation information, the positioning information also calculates the distance between the current vehicle and the target reference object in the navigation map according to the corresponding coordinate information and the coordinate information of the current vehicle and the target reference object.
Step S304, calibrating the positioning information according to the first distance and the second distance.
After the distance between the current vehicle and the target reference object in the navigation map has been calculated, it is compared with the actual distance between them; if the two are inconsistent, or the difference is large, the positioning information of the current vehicle in the navigation information is calibrated according to the actual distance. For example, if the actual distance between the current vehicle and the target reference object is 50 metres while the distance in the navigation map is 80 metres, the position of the current vehicle in the navigation map is calibrated according to the actual distance, so that erroneous navigation prompts are avoided.
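The sketch below illustrates steps S301-S304 under the assumption that the navigation map stores objects as WGS-84 latitude/longitude pairs, so the second distance can be computed with the haversine formula; the 15 m tolerance and the simple pull-toward-the-reference correction are illustrative assumptions rather than the disclosed calibration rule.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def calibrate_position(nav_position, reference_coord, first_distance_m, tolerance_m=15.0):
    # nav_position / reference_coord are (lat, lon); first_distance_m is the
    # video-derived actual distance to the target reference object.
    lat, lon = nav_position
    ref_lat, ref_lon = reference_coord
    second_distance_m = haversine_m(lat, lon, ref_lat, ref_lon)
    if abs(second_distance_m - first_distance_m) <= tolerance_m:
        return nav_position  # navigation fix is consistent with the video estimate
    # Scale the fix along the line towards the reference so that the map distance
    # matches the measured distance (a simple one-dimensional correction sketch).
    scale = first_distance_m / max(second_distance_m, 1e-6)
    return (ref_lat + (lat - ref_lat) * scale, ref_lon + (lon - ref_lon) * scale)
```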
Further, after step S30, the method may further include:
step S40, generating target navigation information according to the calibrated positioning information, and outputting and displaying the target navigation information and the driving video image;
after the positioning information of the current vehicle is calibrated, the navigation information is regenerated according to the calibrated positioning information, and the regenerated navigation information and the obtained enhanced driving video image are displayed on a streaming media rearview mirror of the driving recorder, so that a user can check the navigation information and the driving video image of the front road and the rear road in time according to the self requirement.
It should be noted that the navigation information and the driving video image may be displayed simultaneously in different areas of the streaming media rearview mirror of the driving recorder, or displayed alternately according to a switching instruction from the user, where the switching instruction may be a key press, a touch or a voice command; the switch may also be triggered automatically according to rules preset by the user so as to display different information. For example, when the current vehicle is detected to be reversing, the display automatically switches to the driving video image of the road behind; when the vehicle is detected to be driving normally, the navigation information and/or the video image of the road ahead is displayed and navigation prompts are output by voice. This is not specifically limited here.
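A minimal sketch of this display logic follows; the gear signal and command strings are assumptions standing in for whatever events the recorder actually exposes.

```python
def select_display(gear, user_command=None):
    # Explicit key / touch / voice commands take priority over the automatic rules.
    if user_command in ("navigation", "front", "rear"):
        return user_command
    if gear == "reverse":
        return "rear"          # auto-switch to the rear-road video when reversing
    return "navigation_front"  # normal driving: navigation plus front-road video
```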
Step S50, detecting road condition information of the driving route of the current vehicle based on the target navigation information and the driving video image, and outputting early-warning prompt information when detecting that the road condition indicated by the road condition information changes.
Furthermore, the road condition information of the current driving route is detected on the basis of the regenerated navigation information and the acquired driving video image, and when a change in the road condition is detected from this information, early-warning prompt information is output so that the user can prepare in time. Specifically, road-condition changes include areas with many bends, areas with many speed bumps, road construction, and the like. When it is detected that the current vehicle is about to enter an area with many bends or many speed bumps, early-warning prompt information is output to remind the user of the discomfort that may be caused while driving through it; when road construction is detected, early-warning prompt information is output to remind the user to slow down and detour. This improves the flexibility of the driving assistance and thus the user's experience of the assisted-driving function.
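A sketch of the early-warning step, assuming the fusion of the regenerated navigation information and the video analysis yields a list of upcoming road-condition changes with a type and a distance; the segment types and message wording are illustrative.

```python
def road_condition_warnings(upcoming_segments):
    # upcoming_segments: e.g. [{"type": "sharp_bends", "distance_m": 300}, ...]
    messages = []
    for seg in upcoming_segments:
        if seg["type"] == "sharp_bends":
            messages.append("Winding road in %d m, please slow down." % seg["distance_m"])
        elif seg["type"] == "speed_bumps":
            messages.append("Speed bumps in %d m, expect a bumpy stretch." % seg["distance_m"])
        elif seg["type"] == "construction":
            messages.append("Road works in %d m, slow down and detour if possible." % seg["distance_m"])
    return messages
```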
In this embodiment, the positioning information of the vehicle in the navigation information is calibrated against the vehicle's real-time environment using the streaming media data acquired in real time, the navigation information is regenerated from the calibrated positioning information, and the regenerated navigation information and the acquired driving video image are output and displayed together, so that the user can consult whichever information they need. Furthermore, the road condition information of the current vehicle's driving route is detected from the regenerated navigation information, and early-warning prompt information is output when the indicated road condition changes, which improves the user's experience of the assisted-driving function.
In addition, referring to fig. 3, an embodiment of the present invention further provides a positioning calibration apparatus, where the positioning calibration apparatus includes:
the data acquisition module 10 is used for acquiring driving video images and navigation information of the current vehicle;
the feature extraction module 20 is configured to determine positioning information of the current vehicle according to the navigation information, and extract environmental feature information from the driving video image;
and the positioning calibration module 30 is configured to calibrate the positioning information of the current vehicle according to the environmental characteristic information.
Optionally, the positioning calibration module 30 is further configured to:
determining a target reference object from the environment where the current vehicle is located according to the text information of the environment characteristic information;
determining a first distance between the current vehicle and the target reference object according to distance information in the environment characteristic information;
acquiring coordinate information of the target reference object in the navigation information, and calculating a second distance between the current vehicle and the target reference object in the navigation information according to the coordinate information and the positioning information;
and calibrating the positioning information according to the first distance and the second distance.
Optionally, the feature extraction module 20 is further configured to:
carrying out target detection on the driving video image, extracting object information in the driving video image, and calculating distance information between each object in the object information and the current vehicle;
when detecting that the object information contains character information, extracting the character information;
and integrating the distance information and the character information to obtain environment characteristic information.
Optionally, the feature extraction module 20 is further configured to:
when an obstacle is detected, identifying a type of the obstacle;
and outputting prompt information according to the type of the obstacle, wherein the type of the obstacle comprises a movable obstacle and a static obstacle, and the prompt information comprises a driving strategy for avoiding the obstacle.
Optionally, the positioning calibration apparatus further comprises an image enhancement module, configured to:
acquiring environmental information of the current vehicle, and if the environmental information meets a preset condition, performing enhancement processing on the driving video image information;
the step of enhancing the driving video image comprises the following steps:
extracting vector characteristics of the driving video image, and performing Fourier transform on the driving image based on the vector characteristics;
and carrying out filtering processing and sharpening processing on the driving video image subjected to Fourier transform so as to eliminate noise in the driving video image and enhance the outline characteristics in the driving video image.
Optionally, the positioning calibration apparatus further includes a road condition detection module, configured to:
generating target navigation information according to the calibrated positioning information, and outputting and displaying the target navigation information and the driving video image;
and detecting road condition information of the running route of the current vehicle based on the target navigation information and the driving video image, and outputting early warning prompt information when detecting that the road condition indicated by the road condition information changes.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a positioning calibration program is stored on the computer-readable storage medium, and when the positioning calibration program is executed by a processor, the positioning calibration program implements operations in the positioning calibration method provided in the foregoing embodiment.
In addition, an embodiment of the present invention further provides a computer program product, which includes a computer program, and when executed by a processor, the computer program implements the operations in the positioning calibration method provided in the foregoing embodiments.
The embodiments of the apparatus, the computer program product and the computer-readable storage medium of the present invention may refer to the embodiments of the positioning calibration method of the present invention, and are not described herein again.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects; the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, in that elements described as separate components may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the positioning calibration method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A positioning calibration method is characterized by comprising the following steps:
acquiring a driving video image and navigation information of a current vehicle;
determining the positioning information of the current vehicle according to the navigation information, and extracting environmental characteristic information from the driving video image;
and calibrating the positioning information of the current vehicle according to the environmental characteristic information.
2. The positioning calibration method according to claim 1, wherein the environment characteristic information includes distance information and text information, and the step of calibrating the positioning information of the current vehicle based on the environment characteristic information includes:
determining a target reference object from the environment where the current vehicle is located according to the text information of the environment characteristic information;
determining a first distance between the current vehicle and the target reference object according to distance information in the environment characteristic information;
acquiring coordinate information of the target reference object in the navigation information, and calculating a second distance between the current vehicle and the target reference object in the navigation information according to the coordinate information and the positioning information;
and calibrating the positioning information according to the first distance and the second distance.
3. The positioning calibration method according to claim 1, wherein the step of extracting the environmental feature information from the driving video image comprises:
carrying out target detection on the driving video image, extracting object information in the driving video image, and calculating distance information between each object in the object information and the current vehicle;
when detecting that the object information contains character information, extracting the character information;
and integrating the distance information and the character information to obtain environment characteristic information.
4. The positioning calibration method according to claim 3, wherein the step of performing the target detection on the driving video image further comprises:
when an obstacle is detected, identifying a type of the obstacle;
and outputting prompt information according to the type of the obstacle, wherein the type of the obstacle comprises a movable obstacle and a static obstacle, and the prompt information comprises a driving strategy for avoiding the obstacle.
5. The positioning calibration method according to claim 1, wherein the step of extracting the environmental feature information from the driving video image is preceded by:
acquiring environmental information of the current vehicle, and if the environmental information meets a preset condition, performing enhancement processing on the driving video image information;
the step of enhancing the driving video image comprises the following steps:
extracting vector characteristics of the driving video image, and performing Fourier transform on the driving image based on the vector characteristics;
and carrying out filtering processing and sharpening processing on the driving video image subjected to Fourier transform so as to eliminate noise in the driving video image and enhance the outline characteristics in the driving video image.
6. The method of claim 1, wherein the step of calibrating the location information of the current vehicle based on the environmental characteristic information is followed by further comprising:
generating target navigation information according to the calibrated positioning information, and outputting and displaying the target navigation information and the driving video image;
and detecting road condition information of the running route of the current vehicle based on the target navigation information and the driving video image, and outputting early warning prompt information when detecting that the road condition indicated by the road condition information changes.
7. A positioning calibration device, comprising:
the data acquisition module is used for acquiring driving video images and navigation information of the current vehicle;
the characteristic extraction module is used for determining the positioning information of the current vehicle according to the navigation information and extracting environmental characteristic information from the driving video image;
and the positioning calibration module is used for calibrating the positioning information of the current vehicle according to the environmental characteristic information.
8. A terminal device, characterized in that the terminal device comprises: memory, a processor and a positioning calibration program stored on the memory and executable on the processor, the positioning calibration program when executed by the processor implementing the steps of the positioning calibration method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program which, when being executed by a processor, carries out the steps of the positioning calibration method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 6 when executed by a processor.
CN202110732163.9A 2021-06-29 2021-06-29 Positioning calibration method, device, terminal equipment, storage medium and program product Pending CN113419257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110732163.9A CN113419257A (en) 2021-06-29 2021-06-29 Positioning calibration method, device, terminal equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110732163.9A CN113419257A (en) 2021-06-29 2021-06-29 Positioning calibration method, device, terminal equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN113419257A true CN113419257A (en) 2021-09-21

Family

ID=77717361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110732163.9A Pending CN113419257A (en) 2021-06-29 2021-06-29 Positioning calibration method, device, terminal equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN113419257A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780603A (en) * 2016-12-09 2017-05-31 宇龙计算机通信科技(深圳)有限公司 Vehicle checking method, device and electronic equipment
CN109214993A (en) * 2018-08-10 2019-01-15 重庆大数据研究院有限公司 A kind of haze weather intelligent vehicular visual Enhancement Method
CN110567475A (en) * 2019-09-19 2019-12-13 北京地平线机器人技术研发有限公司 Navigation method, navigation device, computer readable storage medium and electronic equipment
CN111060074A (en) * 2019-12-25 2020-04-24 深圳壹账通智能科技有限公司 Navigation method, device, computer equipment and medium based on computer vision
CN112985425A (en) * 2021-02-02 2021-06-18 恒大新能源汽车投资控股集团有限公司 Vehicle positioning method, device and system based on heterogeneous sensing data fusion

Similar Documents

Publication Publication Date Title
US11967109B2 (en) Vehicle localization using cameras
US11535155B2 (en) Superimposed-image display device and computer program
EP3315911B1 (en) Vehicle position determination device and vehicle position determination method
JP6411956B2 (en) Vehicle control apparatus and vehicle control method
US20160371983A1 (en) Parking assist system and method
JP6926976B2 (en) Parking assistance device and computer program
CN105608927A (en) Alerting apparatus
CN111144211A (en) Point cloud display method and device
KR20120079341A (en) Method, electronic device and recorded medium for updating map data
US11677930B2 (en) Method, apparatus, and system for aligning a vehicle-mounted device
US20170103271A1 (en) Driving assistance system and driving assistance method for vehicle
JP2019008709A (en) Vehicle, information processing system, information processing device, and data structure
JP2019109707A (en) Display control device, display control method and vehicle
CN112092809A (en) Auxiliary reversing method, device and system and vehicle
US20200307622A1 (en) System and method for oncoming vehicle warning
JP2008090654A (en) Driving operation support device
JP2019066440A (en) Navigation device, destination guidance system and program
JP2011232271A (en) Navigation device, accuracy estimation method for on-vehicle sensor, and program
JP6620378B2 (en) vehicle
JP2008090683A (en) Onboard navigation device
JP2008262481A (en) Vehicle control device
JP4798549B2 (en) Car navigation system
JP2023085254A (en) Display control device
CN113419257A (en) Positioning calibration method, device, terminal equipment, storage medium and program product
JP2019117214A (en) Object data structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination