WO2019037489A1 - Map display method, apparatus, storage medium and terminal - Google Patents
Map display method, apparatus, storage medium and terminal
- Publication number
- WO2019037489A1 · PCT/CN2018/087683 · CN2018087683W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- search
- prompt information
- virtual
- real scene
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3667—Display of a road map
- G01C21/367—Details, e.g. road map scale, orientation, zooming, illumination, level of detail, scrolling of road map or positioning of current position marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3602—Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3885—Transmission of map data to client devices; Reception of map data by client devices
- G01C21/3896—Transmission of map data from central databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/954—Navigation, e.g. using categorised browsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/16—Type of output information
- B60K2360/166—Navigation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/16—Type of output information
- B60K2360/177—Augmented reality
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/18—Information management
- B60K2360/186—Displaying information according to relevancy
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/28—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/29—Instruments characterised by the way in which information is handled, e.g. showing information on plural displays or prioritising information according to driving conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- the present invention relates to the field of Internet technologies, and in particular, to a map display technology.
- A map application can provide users with services such as map browsing, address lookup, point-of-interest search, transit transfer, driving navigation, pedestrian navigation, bus route inquiry, and stop inquiry, so it has been rapidly adopted by users.
- In the related art, after a destination is given, a navigation path from the current location to the destination is usually drawn on the map and serves as the navigation prompt.
- This display mode is relatively simple and lacks diversity.
- an embodiment of the present invention provides a map display method, apparatus, storage medium, and terminal.
- the technical solution is as follows:
- In a first aspect, a map display method is provided, which is applied to a terminal, and the method includes:
- acquiring a real scene image of a current location and target navigation data for navigating from the current location to a destination; determining, according to the current location and the target navigation data, virtual navigation prompt information to be superimposed and displayed in the real scene image; determining, according to device calibration parameters of the target device that captures the real scene image, a first position at which the virtual navigation prompt information is to be superimposed and displayed in the real scene image; performing verification detection on the current device calibration parameters of the target device; and, when the current device calibration parameters of the target device pass the verification detection, superimposing and displaying the virtual navigation prompt information at the first position to obtain an augmented reality image for map display.
- In a second aspect, a map display device is provided, the device comprising:
- An acquiring module configured to acquire a real scene image of a current location and target navigation data that is navigated to the destination by the current location;
- a determining module configured to determine, according to the current location and the target navigation data, virtual navigation prompt information to be superimposed and displayed in the real scene image
- the determining module is further configured to determine, according to the device calibration parameter of the target device that captures the real scene image, that the virtual navigation prompt information is superimposed and displayed in the first position in the real scene image;
- a display module configured to perform verification detection on the current device calibration parameters of the target device, and, when the current device calibration parameters of the target device pass the verification detection, superimpose and display the virtual navigation prompt information at the first position to obtain an augmented reality image for map display.
- A third aspect provides a computer-readable storage medium, where the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the map display method described in the first aspect.
- A fourth aspect provides a terminal, where the terminal includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the map display method described in the first aspect.
- By applying AR technology to the navigation field, a navigation mode combining the virtual scene and the real scene is realized, so the map display manner is more diverse. In addition, when the virtual navigation prompt information is to be superimposed and displayed in the real scene image, the embodiment of the invention also performs verification detection on the current device calibration parameters, so that the virtual navigation prompt information is superimposed on the real scene image only when the device calibration parameters pass the verification detection. This greatly improves the probability that the virtual navigation prompt information is displayed at the correct position, makes the real scene and the navigation prompt information more consistent, and improves navigation accuracy.
- FIG. 1 is a schematic diagram of a map display provided by the background art of the present invention.
- FIG. 2A is a schematic diagram of the execution flow of an AR navigation method according to an embodiment of the present invention;
- FIG. 2B is a schematic diagram of a vehicle-mounted multimedia terminal according to an embodiment of the present invention.
- FIG. 3A is a flowchart of a map display method according to an embodiment of the present invention.
- FIG. 3B is a schematic diagram of a point-to-point distance on a plane according to an embodiment of the present invention.
- FIG. 3C is a schematic diagram of a point-to-line distance on a plane according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a map display according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a map display according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of a map display according to an embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of a map display device according to an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of a vehicle-mounted multimedia terminal according to an embodiment of the present invention.
- AR: Augmented Reality.
- AR technology: a technology that enhances the user's perception of the real world through virtual information provided by a computer system. That is, AR technology applies virtual information to the real world, superimposing computer-generated virtual objects, virtual scenes, or system prompt information onto the real scene, thereby augmenting reality.
- The goal of this technology is to place the virtual world within the real world on the screen and allow the two to interact.
- After the real scene and the virtual information are superimposed into the same picture in real time and perceived by the human senses, a sensory experience beyond reality can be achieved.
- External parameters: taking a camera as the capturing device, the external parameters determine the position and attitude of the camera in the world coordinate system; that is, the external parameters define the rule by which the world coordinate system is transformed into the camera coordinate system.
- R refers to the rotation matrix, which is used to represent the rotation transformation of the camera and is usually of size 3×3.
- T refers to the translation matrix, which is used to represent the translation transformation of the camera and is usually of size 3×1.
- Internal parameters: again taking the camera as an example, the internal parameters are related only to the internal structure of the camera, such as its focal length and distortion parameters; the internal parameters determine the rule by which the camera coordinate system is transformed into the image coordinate system.
- the external parameters and internal parameters of the camera are collectively referred to as device calibration parameters.
- the above internal and external parameters can be uniformly represented by a projection matrix M.
- The projection formula for projecting a three-dimensional point P in the world coordinate system to a two-dimensional point p in the image coordinate system is as follows: s·p = M·P = K·[R|T]·P, where p and P are expressed in homogeneous coordinates, K is the internal reference matrix, [R|T] are the external parameters, and s is a scale factor.
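- For illustration only, the following is a minimal NumPy sketch of this projection, assuming a simple pinhole model without distortion; the function name, the example values of K, R, and T, and the sample point are not from the patent.

```python
import numpy as np

def project_to_image(P_world, K, R, T):
    """Project a 3D world point to pixel coordinates via M = K [R | T]."""
    P_cam = R @ P_world + T          # world coordinate system -> camera coordinate system
    uvw = K @ P_cam                  # camera coordinate system -> homogeneous image coordinates
    return uvw[:2] / uvw[2]          # divide by the scale factor s to get (u, v)

# Example: a point 20 m ahead of the camera and 1.5 m below its optical axis.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])      # assumed internal reference matrix
R, T = np.eye(3), np.zeros(3)        # identity pose, for illustration only
print(project_to_image(np.array([0.0, 1.5, 20.0]), K, R, T))  # -> [640. 420.]
```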
- Internal and external parameter calibration: the process of determining the transformation from the world coordinate system to the image coordinate system; calibrating the internal and external parameters corresponds to obtaining the projection matrix described above.
- The calibration process is divided into two parts: one is the conversion from the world coordinate system to the camera coordinate system, which is a three-dimensional-point-to-three-dimensional-point conversion and is used to calibrate the external parameters of the camera, such as the rotation matrix R and the translation matrix T; the other is the conversion from the camera coordinate system to the image coordinate system, which is a three-dimensional-point-to-two-dimensional-point conversion and is used to calibrate the internal parameters of the camera, such as the internal reference matrix K.
- In the map display method provided by the embodiment of the present invention, AR technology is applied to the field of map navigation, realizing AR map navigation beyond traditional two-dimensional map navigation. Because the scheme can superimpose and display virtual navigation prompt information in the real scene image, users can obtain a sensory experience that transcends reality.
- The AR navigation technology can be applied not only to driving scenarios but also to other scenarios such as walking, which is not specifically limited in this embodiment of the present invention.
- In addition, the map display method provided by the embodiment of the present invention provides an online verification method for the device calibration parameters, and can periodically perform verification detection on the current device calibration parameters of the target device. The virtual navigation prompt information is enhanced and displayed only when the position error between the first position, to which the virtual navigation prompt information is projected in the real scene image based on the current device calibration parameters, and the second position of the corresponding target detected in the real scene image is less than a preset threshold. This greatly improves the accuracy of navigation and makes the navigation prompt information better match the real scene, avoiding situations such as the virtual navigation prompt information being displayed at the wrong position, or the vehicle traveling in the wrong direction or entering the wrong road because the navigation prompt information is not accurate enough.
- In addition, the embodiment of the present invention can not only superimpose the virtual navigation prompt information of various targets appearing in the real scene onto the real scene image for enhanced display, but can also highlight the virtual navigation prompt information that has the greatest influence on the current navigation.
- The target may be any of a variety of road accessory facilities, such as a speed limit sign, a turn sign, a traffic light, an electronic eye, or a camera, and may also be a lane line, a curb line, or the like, which is not specifically limited in this embodiment of the present invention.
- The functional modules involved in the map display method according to the embodiment of the present invention include, in addition to a real scene acquisition module, a positioning and posture module, a calibration module, a map acquisition module, a path planning module, and a projection display module, an image detection module.
- the real scene obtaining module is configured to acquire a real scene image related to the real scene, and output the obtained real scene image to the projection display module and the image detecting module.
- the positioning and posture module is used to determine the current position and posture of the target.
- the calibration module is used to calibrate the external parameters and internal parameters of the device that captures the real scene, and outputs the result to the projection display module.
- the map acquisition module is configured to acquire map data to be displayed from the server according to the current position and posture of the target.
- The path planning module performs route planning to the destination according to the outputs of the positioning and posture module and the map acquisition module, thereby obtaining the target navigation data for navigating to the destination.
- the projection display module is configured to superimpose the virtual navigation prompt information in the real scene image based on the current device calibration parameter and the target navigation data.
- the image detecting module is configured to perform target detection in the real scene image periodically based on the candidate detection area output by the projection display module.
- The candidate detection area refers to an image area centered on the first position mentioned above; that is, the detection result of the image detection module is actually used for the verification detection of the device calibration parameters.
- the embodiment of the present invention performs enhanced display of the virtual navigation prompt information when the position error between the first position and the second position of the corresponding target detected in the candidate detection area is less than a preset threshold.
- the map display method provided by the embodiment of the present invention can be applied to an in-vehicle multimedia terminal integrated with a vehicle, and can also be applied to an intelligent mobile terminal mounted on a vehicle and independent of the vehicle.
- the in-vehicle multimedia terminal can be a multimedia device disposed in the center console of the vehicle.
- the vehicle-mounted multimedia terminal can support functions such as navigation, music playing, video playing, instant messaging, acquiring vehicle speed, sending and receiving, and parsing wireless broadcast messages.
- the radio broadcast packet may be a WiFi (Wireless Fidelity) packet or a Bluetooth packet, and is not specifically limited in this embodiment of the present invention.
- The positioning and posture module, the calibration module, the map acquisition module, the path planning module, the projection display module, and the image detection module shown in FIG. 2A can be placed in the vehicle-mounted multimedia terminal. If the vehicle-mounted multimedia terminal does not support image acquisition, the real scene acquisition module is placed in an imaging device that is mounted on the vehicle and establishes a data connection with the in-vehicle multimedia terminal; that is, the in-vehicle multimedia terminal indirectly acquires the real scene image through the imaging device.
- For the smart mobile terminal, since various types of intelligent mobile terminals basically support the image acquisition function, the real scene acquisition module, the positioning and posture module, the calibration module, the map acquisition module, the path planning module, the projection display module, and the image detection module shown in FIG. 2A can all be placed in the smart mobile terminal.
- FIG. 3A is a flowchart of a map display method according to an embodiment of the present invention.
- The driving scenario and the execution flow of the AR navigation method shown in FIG. 2A are taken as an example, and the execution subject of the method is, for example, an in-vehicle multimedia terminal that supports the image acquisition function. The method includes the following steps:
- 301. Acquire a real scene image of the current location and target navigation data for navigating from the current location to the destination.
- In the embodiment of the present invention, the positioning and posture module of FIG. 2A is used to determine the current position and posture of the vehicle.
- In one embodiment, the embodiment of the present invention adopts SLAM (Simultaneous Localization and Mapping) technology for positioning and posture determination; compared with traditional positioning based on GPS (Global Positioning System) and a gyroscope, the accuracy of SLAM-based positioning and posture determination is greatly improved.
- the target navigation data gives a navigation basis for traveling from the current location to the destination.
- The target navigation data is determined by the path planning module based on the current position and posture of the vehicle output by the positioning and posture module, the map data output by the map acquisition module, and the destination information input by the user.
- The map data mentioned in the embodiment of the present invention is high-precision map data with centimeter-level positioning accuracy, including road network information, point-of-interest information, and road accessory facility information (such as traffic lights, electronic eyes, and traffic signs).
- 302. Determine, according to the current location and the target navigation data, the virtual navigation prompt information to be superimposed and displayed in the real scene image, and determine, according to the device calibration parameter of the target device that captures the real scene image, that the virtual navigation prompt information is superimposed and displayed in the real The first position in the scene image.
- Because the target navigation data covers all navigation data from the current location to the destination while the real scene image is captured by the vehicle camera in real time, in order to determine which virtual navigation prompt information should be superimposed and displayed in the real scene image currently captured, the virtual navigation prompt information associated with the current location is first determined according to the target navigation data. For example, if it is determined according to the target navigation data that the current location or its vicinity includes targets such as traffic lights, electronic eyes, or traffic signs, the virtual information of these targets may be used as the virtual navigation prompt information associated with the current location.
- That is, the virtual navigation prompt information to be superimposed and displayed in the real scene image may be virtual information related to various road accessory facilities at or near the current location, or virtual information related to the lane lines or curb lines on the road on which the vehicle is currently traveling, which is not specifically limited in the embodiment of the present invention.
- In addition, the first position at which the virtual navigation prompt information is to be superimposed and displayed in the real scene image is further determined according to the device calibration parameters of the camera. The first position is the position, in the image coordinate system, of the corresponding target after the target in the world coordinate system is projected and transformed into the image coordinate system. Since the position of each target in the world coordinate system is known, after the current device calibration parameters of the camera (that is, the projection matrix described above) are acquired, the first position at which the virtual navigation prompt information is to be superimposed and displayed can be calculated according to the projection matrix.
- the acquisition of the device calibration parameters is performed by the calibration module in FIG. 2A.
- The calibration of the internal parameters among the device calibration parameters may be implemented by using a checkerboard calibration method, or the internal parameter values set when the device leaves the factory may be used directly.
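- As a non-authoritative illustration, offline checkerboard calibration of the internal parameters can be sketched with OpenCV roughly as follows; the board size, image file pattern, and variable names are assumptions, not taken from the patent.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of an assumed 9x6 checkerboard
obj_p = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_p[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("checkerboard_*.jpg"):       # assumed file naming
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_points.append(obj_p)
        img_points.append(corners)

# K is the internal reference matrix, dist the distortion parameters.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("internal reference matrix K:\n", K)
```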
- For the external parameters, the embodiment of the present invention provides an online external parameter calibration method. The external parameter calibration method is specifically a hierarchical search method based on the parameter space, and the specific calibration process is as follows:
- The idea of hierarchical search is to discretize the parameter value search range (also referred to as the parameter space) of the external parameters from coarse to fine: a search is first performed at the coarse granularity, and then, based on the search result at the coarse granularity, the search is gradually performed at finer granularities.
- each of the at least two search granularities is different in granularity. For example, taking the rotation angle and setting two search granularities as an example, one of the search granularities may be 1 degree and the other search granularity may be 0.1 degrees.
- the first search granularity is a search granularity with the largest granularity value among the at least two search granularities.
- the parameter search range is 0 degrees to 20 degrees, and the first search granularity is 1 degree.
- the search parameter values in each discrete state may be 0 degrees, 1 degree, 2 degrees, 3 degrees, ..., 20 degrees.
- the core of the external parameter calibration method is a calculation method for defining a cost function.
- the embodiment of the present invention adopts a calculation method of using the Euclidean distance in the image space as a cost function value.
- the embodiment of the present invention is implemented in the following manner when calculating the cost function value of each search parameter value:
- The first type is for a point target in the real scene image. The point target may be a traffic sign such as a traffic light, a turn sign, an electronic eye, or the like. For each search parameter value, the second position, in the real scene image, of the virtual navigation prompt information matching the point target is determined according to the search parameter value; the straight-line distance between the second position and the third position is then calculated and used as the cost function value of the search parameter value, where the third position is the position of the point target detected in the real scene image.
- The second type is for a line target in the real scene image. The line target may be, for example, a lane line or a curb line. For each search parameter value, the fourth position, in the real scene image, of the virtual navigation prompt information matching the line target is determined according to the search parameter value; the normal (perpendicular) distance between the fourth position and the fifth position is then calculated and used as the cost function value of the search parameter value, where the fifth position is the position of the line target detected in the real scene image.
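- The two cost functions can be sketched as follows (a minimal illustration of FIG. 3B and FIG. 3C: point-to-point Euclidean distance and point-to-line normal distance in image space); the function names and sample coordinates are illustrative only.

```python
import numpy as np

def point_cost(projected_pt, detected_pt):
    """Point target: Euclidean distance between the projected second position
    and the detected third position in the image."""
    return float(np.linalg.norm(np.asarray(projected_pt, float) - np.asarray(detected_pt, float)))

def line_cost(projected_pt, line_pt_a, line_pt_b):
    """Line target: normal (perpendicular) distance from the projected fourth
    position to the detected line (fifth position), given by two points on it."""
    p, a, b = (np.asarray(x, float) for x in (projected_pt, line_pt_a, line_pt_b))
    d, e = b - a, p - a
    return float(abs(d[0] * e[1] - d[1] * e[0]) / np.linalg.norm(d))

print(point_cost((640, 300), (652, 307)))              # ~13.9 pixels
print(line_cost((640, 300), (600, 700), (700, 100)))   # ~26.3 pixels
```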
- the embodiment of the present invention specifically determines the first search parameter value having the minimum cost function value under the first search granularity. Then, using the first search parameter value as the initial value, continue to search at a finer granularity, as described in step (d) below.
- the granularity value of the second search granularity is smaller than the first search granularity and larger than other search granularities.
- a parameter value search range under the second search granularity is determined according to the first search granularity and the first search parameter value.
- The search parameter values in each discrete state are then determined within the parameter value search range under the second search granularity, and the cost function value of each search parameter value within the current parameter value search range is calculated in the manner described above.
- For example, the search parameter values in each discrete state may be 2.1 degrees, 2.2 degrees, 2.3 degrees, ..., 4 degrees.
- To summarize, the above external parameter calibration method is as follows: for any search granularity, the cost function value of each search parameter value within the parameter value search range corresponding to that search granularity is calculated to determine the search parameter value with the minimum cost function value under that search granularity; after that, in order of search granularity from largest to smallest, the parameter value search range corresponding to the next search granularity is determined based on the search parameter value obtained by the current search, where the granularity value of the next search granularity is smaller than the current search granularity; the search parameter value with the minimum cost function value under the next search granularity is then determined in the same manner as the current search; and so on, the search is repeated until the target search parameter value with the minimum cost function value at the minimum search granularity is obtained, and the target search parameter value is determined as the current external parameters of the target device.
- In other words, the parameter space is discretized from coarse to fine: a search is first performed at the coarse granularity to obtain the search parameter value with the lowest cost function value; then, using that value as the initial value, the refined parameter space is searched again to obtain the search parameter value with the lowest cost function value at the current granularity; and this loop continues until the search parameter value with the lowest cost function value at the finest granularity is obtained, which is used as the final calibration parameter value.
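- The coarse-to-fine search itself can be sketched as follows for a single external-parameter dimension (for example, one rotation angle); the granularity values and the toy cost function are placeholders for the image-space cost described above.

```python
import numpy as np

def coarse_to_fine_search(cost_fn, lo, hi, granularities):
    """At each granularity, evaluate every discrete candidate in the current
    range, keep the minimizer, and narrow the range to one coarse step on
    either side of it before moving to the next, finer granularity."""
    best = None
    for step in granularities:                       # e.g. [1.0, 0.1, 0.01] degrees
        candidates = np.arange(lo, hi + step / 2, step)
        costs = [cost_fn(v) for v in candidates]
        best = float(candidates[int(np.argmin(costs))])
        lo, hi = best - step, best + step            # refined range for the next pass
    return best

# Toy cost: distance to an assumed "true" angle of 3.47 degrees.
print(coarse_to_fine_search(lambda a: abs(a - 3.47), 0.0, 20.0, [1.0, 0.1, 0.01]))
```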
- In the embodiment of the present invention, rather than directly superimposing and displaying the virtual navigation prompt information according to the calculated first position, the current device calibration parameters are further verified, and the virtual navigation prompt information is superimposed and displayed according to the calculated first position only after the current device calibration parameters pass the verification detection, thereby improving the accuracy of the superimposed display of the virtual navigation prompt information. For details, refer to step 303 below.
- 303. Perform verification detection on the current device calibration parameters of the target device.
- 304. When the current device calibration parameters of the target device pass the verification detection, the virtual navigation prompt information is superimposed and displayed at the first position, and an augmented reality image for map display is obtained.
- In the embodiment of the present invention, the verification detection of the current device calibration parameters mainly includes the following steps:
- the target object detection is performed in the target image region with the first position as the center point in the real scene image.
- It should be noted that the verification detection does not have to be performed every time the virtual navigation prompt information is superimposed on a frame of the real scene image; the verification detection of the device calibration parameters may be performed periodically.
- For example, the interval between two verification detections may be 10 s.
- Specifically, based on the first position calculated above, the projection display module shown in FIG. 2A delineates a target image area in the real scene image as the candidate detection area, and the target image area is centered on the first position.
- Then, detection of the target matching the virtual navigation prompt information is performed in the target image area, thereby determining whether the current device calibration parameters are still available.
- the image detecting module in FIG. 2A is responsible for detecting the target object matching the virtual navigation prompt information in the target image region.
- When detecting the target, the image detection module may use a convolutional neural network detection algorithm or a deep learning detection algorithm, which is not specifically limited in the embodiment of the present invention; the position of the target in the real scene image can be determined by using the above detection algorithm.
- If the position error between the sixth position of the target in the target image area and the first position is less than a preset threshold, it is proved that there is little difference between the first position theoretically calculated based on the current device calibration parameters and the sixth position actually detected; that is, the difference between the theoretical position and the real position is small, indicating that the accuracy of the current device calibration parameters is good and no recalibration is needed. The virtual navigation prompt information can therefore continue to be projected into the real scene image based on the current device calibration parameters, thereby obtaining an augmented display image, and the augmented display image can be output to the display screen as the map image for display.
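- A minimal sketch of this periodic verification detection is given below; detect_target stands in for the convolutional neural network / deep-learning detector mentioned above, and the window size and pixel threshold are assumed values.

```python
import numpy as np

WINDOW = 80              # half-size of the candidate detection area, assumed
PIXEL_THRESHOLD = 15.0   # preset threshold on the position error, assumed

def verify_calibration(frame, first_pos, detect_target):
    """Return (passed, sixth_pos). detect_target(roi) should return the target
    position inside the region of interest, or None if nothing is detected."""
    u, v = int(first_pos[0]), int(first_pos[1])
    u0, v0 = max(u - WINDOW, 0), max(v - WINDOW, 0)
    roi = frame[v0:v + WINDOW, u0:u + WINDOW]        # target image area around the first position
    hit = detect_target(roi)
    if hit is None:
        return False, None                           # no matching target: parameters suspect
    sixth_pos = np.array([u0 + hit[0], v0 + hit[1]], dtype=float)
    error = np.linalg.norm(sixth_pos - np.asarray(first_pos, dtype=float))
    return bool(error < PIXEL_THRESHOLD), sixth_pos
```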
- When superimposing and displaying the virtual navigation prompt information, the embodiment of the present invention may generally proceed in the following manners:
- If the virtual navigation prompt information is lane line information, the virtual lane line of the current driving lane and the virtual lane lines of the other lanes are displayed distinguishably at the first position, and the travelable range of the current driving lane is marked.
- embodiments of the present invention provide lane level navigation.
- the embodiment of the present invention superimposes and displays all the lane line information on the current traveling road in the real scene image.
- In addition, the virtual lane line of the current driving lane is displayed so as to be distinguished from the virtual lane lines of the other lanes.
- For example, the current driving road includes four lanes in total, and the current driving lane of the vehicle is the second lane from the left; the embodiment of the present invention displays the virtual lane line of the second lane from the left in a first display manner, while the virtual lane lines of the other three lanes are displayed in a second display manner.
- For example, the first display manner may be filling with a first color, and the second display manner may be filling with a second color, which is not specifically limited in the embodiment of the present invention.
- In addition, so that the vehicle does not mistakenly enter other lanes, the travelable range of the current driving lane can also be superimposed and displayed.
- For example, a single color or a single style may be used to fill or mark the image area defined by the current driving lane; for instance, a single yellow fill pattern is used to mark the travelable range of the current driving lane.
- the embodiment of the present invention may further display virtual direction indication information, such as a row of arrow indication information shown in FIG. 4, on the current traveling road.
- voice navigation may be performed in synchronization, which is not specifically limited in this embodiment of the present invention.
- If the virtual navigation prompt information is road accessory facility information, a virtual road accessory facility mark is displayed at the first position.
- the embodiments of the present invention can also mark various road accessory facilities on the current traveling road.
- For example, as shown in FIG. 5, the current location includes a total of five road accessory facilities, two of which are traffic lights and the other three of which are traffic signs.
- When the virtual navigation prompt information is road accessory facility information, the embodiment of the present invention may be implemented in the following manners:
- (a) A virtual frame is displayed at the position (that is, the first position) of each road accessory facility, and the virtual frame may enclose each road accessory facility. That is, the virtual navigation prompt information in this form of expression is a virtual frame for highlighting each road accessory facility. In other words, for manner (a), the virtual road accessory facility mark is a virtual frame.
- It should be noted that, for a traffic light, the virtual navigation prompt information may include, in addition to the virtual frame, virtual prompt information for indicating the current color state of the traffic light, such as virtual text information such as "currently red light" displayed near the frame.
- (b) Superimposed display of a virtual road access facility at the location of each road attachment.
- the virtual road attachment facility is marked as a virtual object that matches each road attachment facility.
- For example, a virtual traffic light can be separately generated, and the virtual traffic light can indicate the current color state; for instance, if the light is currently red, the red light in the virtual traffic light is highlighted while the other lights are not highlighted.
- For the traffic signs, a virtual traffic sign is likewise generated for each of them and superimposed and displayed at the corresponding position.
- It should be noted that manner (b) is obviously superior to manner (a) in the case where the road accessory facilities are far away from the camera, because with the frame-selection approach of manner (a) the framed road accessory facilities may appear too small or unclear due to the long distance, so that the user cannot see them clearly, whereas manner (b) solves this problem.
- If the virtual navigation prompt information is lane-merging reminder information, the virtual lane line of the current driving lane and the virtual lane line of the target merging lane are displayed so as to be distinguished from the other virtual lane lines.
- That is, the embodiment of the present invention can also provide a lane-merging reminder to the user.
- For example, if the vehicle is currently traveling in the second lane from the left, the embodiment of the present invention displays the virtual lane line of the second lane from the left and the virtual lane line of the first lane from the left so as to be distinguished from the remaining virtual lane lines.
- The manner of distinguishing the display may likewise adopt filling with different colors, which is not specifically limited in the embodiment of the present invention.
- In addition, the embodiment of the present invention can also provide a voice reminder synchronously; that is, on the basis of the above image display, a voice reminder message such as "the vehicle can currently merge into the left lane" can be output.
- If the virtual navigation prompt information is point-of-interest information, a virtual point-of-interest mark of the current location is displayed at the first position.
- In one manner, the virtual point-of-interest mark is specifically virtual text information. The virtual text information may include the name of the point of interest and its distance from the current location of the vehicle, etc., which is not specifically limited in the embodiment of the present invention.
- In another manner, the virtual point-of-interest mark is specifically a virtual object that matches the point of interest.
- For example, if the point of interest is a building, the virtual object may be a small virtual building; if the point of interest is a restaurant, the virtual object may be small virtual tableware.
- The above two manners may also be combined in a superimposed way; for example, the virtual object and the virtual text information may be superimposed and displayed at the same time, which is not specifically limited in the embodiment of the present invention.
- In addition, the embodiment of the present invention may also highlight the virtual navigation prompt information that has the greatest influence on the current navigation. That is, if the determined virtual navigation prompt information includes at least two pieces of virtual navigation prompt information, the target navigation prompt information that has the greatest influence on the current navigation is determined among the at least two pieces of virtual navigation prompt information, and the target navigation prompt information is superimposed and displayed in a manner that distinguishes it from the other virtual navigation prompt information.
- the target navigation prompt information that has the greatest impact on the current navigation refers to the most important virtual navigation prompt information in the current scene, and generally refers to the virtual navigation prompt information of the target closest to the vehicle.
- Taking FIG. 5 as an example, among the five targets shown in FIG. 5, the three traffic signs are closest to the vehicle, and in the current scene the information they indicate has a greater influence on navigation than the two traffic lights in the distance; therefore, the target navigation prompt information in FIG. 5 is the virtual navigation prompt information of the three traffic signs.
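- As a rough illustration of this selection rule, the prompt of the target nearest to the vehicle can be chosen as follows; the data structure and field names are hypothetical.

```python
def pick_target_prompt(prompts):
    """prompts: list of dicts such as {"text": ..., "distance_m": ...};
    returns the prompt of the target closest to the vehicle, or None."""
    return min(prompts, key=lambda p: p["distance_m"]) if prompts else None

prompts = [
    {"text": "speed limit 60", "distance_m": 35.0},
    {"text": "red light ahead", "distance_m": 120.0},
    {"text": "no left turn", "distance_m": 38.0},
]
print(pick_target_prompt(prompts))   # -> the speed-limit sign at 35 m
```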
- The foregoing steps 301 to 304 implement AR navigation. The embodiment of the present invention can ensure that when the vehicle rotates or moves and the field of view of the camera changes accordingly, the superimposed virtual navigation prompt information changes correspondingly and is displayed at the correct position on the display screen, so that the accuracy of AR navigation is greatly improved.
- The above describes the processing when the position error is less than the preset threshold; for other situations, the embodiment of the present invention also provides processing manners, as follows:
- In one case, if the target is not detected in the target image area centered on the first position in the real scene image, the device calibration parameters of the target device are recalibrated in a manner similar to the above, and the recalibrated device calibration parameters are obtained.
- After the recalibrated device calibration parameters are obtained, verification detection can also be performed on them; when the recalibrated device calibration parameters pass the verification detection, the determined virtual navigation prompt information is superimposed and displayed in the real scene image of the current location according to the recalibrated device calibration parameters.
- The root cause of triggering the recalibration of the device calibration parameters is that the camera itself may become loose or vibrate due to the motion of the vehicle, so that the camera position and posture change relative to the previous position and posture. In this case, if the virtual navigation prompt information is superimposed and displayed in the real scene image according to the current device calibration parameters, positional inaccuracy may occur, so the device calibration parameters need to be recalibrated.
- the recalibration method is the same as the external reference calibration method shown in the previous section, and will not be described here.
- It should be noted that the recalibration of the device calibration parameters here refers only to recalibration of the external parameters, and does not include the internal parameters.
- In another case, the embodiment of the present invention further supports calculating the average value of the position errors obtained within a preset time period, where the position error refers to the error between the sixth position of the target detected in the target image area centered on the first position in the real scene image and the first position. If the obtained average value is greater than the preset threshold, the device calibration parameters of the target device are recalibrated in a manner similar to the above. After the recalibrated device calibration parameters are obtained, verification detection can also be performed on them; when the recalibrated device calibration parameters pass the verification detection, the determined virtual navigation prompt information is superimposed and displayed in the real scene image of the current location according to the recalibrated device calibration parameters.
- The preset duration may be 1 s or 2 s, etc., which is not specifically limited in this embodiment of the present invention. Taking 50 ms per frame as an example, if the preset duration is 1 s, the position errors of 20 frames can be obtained, the average value of the position errors of these 20 frames is then calculated, and the decision is made based on the calculated average value.
- Because comprehensive statistics are gathered over a period of time, this manner of verifying the current device calibration parameters is more reasonable and accurate.
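- A minimal sketch of this windowed statistic, assuming 20 frames per window and a hypothetical pixel threshold:

```python
from collections import deque

FRAMES_PER_WINDOW = 20     # 1 s at 50 ms per frame, as in the example above
PIXEL_THRESHOLD = 15.0     # preset threshold on the average error, assumed

class ErrorMonitor:
    """Keeps the most recent per-frame position errors and reports whether
    the device calibration parameters should be recalibrated."""
    def __init__(self):
        self.errors = deque(maxlen=FRAMES_PER_WINDOW)

    def add(self, position_error):
        self.errors.append(float(position_error))

    def needs_recalibration(self):
        if len(self.errors) < FRAMES_PER_WINDOW:
            return False                          # not enough statistics yet
        return sum(self.errors) / len(self.errors) > PIXEL_THRESHOLD

monitor = ErrorMonitor()
for e in [4.0, 6.5, 30.2] * 7:                    # 21 simulated per-frame errors
    monitor.add(e)
print(monitor.needs_recalibration())              # False: average ~14.0 < threshold
```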
- In yet another case, if the target is detected in the target image area centered on the first position in the real scene image but the position error is greater than the preset threshold, recalibration of the device calibration parameters of the target device may also be triggered.
- Similarly, verification detection can be performed on the recalibrated device calibration parameters; when the recalibrated device calibration parameters pass the verification detection, the determined virtual navigation prompt information is superimposed and displayed in the real scene image of the current location according to the recalibrated device calibration parameters.
- When applied to a driving scenario, the map display method provided by the embodiment of the present invention uses the high-precision map data and high-precision positioning methods of automatic driving technology, together with real-time image information, to provide users with a rich and accurate AR navigation mode. On the one hand, this can promote the core technologies of automatic driving and form a closed loop from research and development to application; on the other hand, this navigation mode combining the virtual and the real can also stimulate users' desire to explore and use it, increase the fun of navigation, and thus increase navigation usage.
- The method provided by the embodiment of the invention applies AR technology to the navigation field, which not only realizes a navigation mode combining the virtual scene and the real scene and makes the map display manner more diverse, but also, when superimposing the virtual navigation prompt information in the real scene image, performs verification detection on the current device calibration parameters when the verification condition is met, and superimposes the virtual navigation prompt information on the real scene image only when the verification detection is passed. This greatly improves the probability that the virtual navigation prompt information is displayed at the correct position, makes the real scene and the navigation prompt information more consistent, and improves navigation accuracy.
- In addition, the embodiment of the present invention also proposes recalibrating the device calibration parameters, which is based on the more robust external parameter calibration method described above; in the subsequent process, the virtual navigation prompt information is superimposed and displayed based on the recalibrated device calibration parameters, further ensuring navigation accuracy.
- In addition, the embodiment of the present invention can also perform lane-level navigation based on the virtual navigation prompt information, making navigation more refined and greatly improving the user's navigation experience; the virtual navigation prompt information that has the greatest influence on the current navigation can also be highlighted.
- FIG. 7 is a schematic structural diagram of a map display device according to an embodiment of the present invention. Referring to Figure 7, the device includes:
- the obtaining module 701 is configured to acquire a real scene image of the current location and target navigation data that is navigated to the destination by the current location;
- a determining module 702 configured to determine, according to the current location and the target navigation data, virtual navigation prompt information to be superimposed and displayed in the real scene image;
- the determining module 702 is further configured to determine, according to the device calibration parameter of the target device that captures the real scene image, that the virtual navigation prompt information is superimposed and displayed in the first position in the real scene image;
- The display module 703 is configured to perform verification detection on the current device calibration parameters of the target device; when the current device calibration parameters of the target device pass the verification detection, the virtual navigation prompt information is superimposed and displayed at the first position, and an augmented reality image for map display is obtained.
- the device calibration parameter includes an external parameter
- the device further includes:
- a calibration module configured to calculate, for the current search granularity among the search granularities set for the external parameters, the cost function value of each search parameter value within the parameter value search range corresponding to the current search granularity, and to determine the search parameter value with the minimum cost function value under the current search granularity;
- the calibration module is further configured to determine the parameter value search range corresponding to the next search granularity based on the search parameter value obtained by the current search, and to determine, in the manner of the current search, the search parameter value with the minimum cost function value under the next search granularity, the granularity value of the next search granularity being less than the current search granularity; and so on, until a target search parameter value having the minimum cost function value at the minimum search granularity is obtained; and to determine the target search parameter value as the current external parameters of the target device.
- the calibration module is configured to acquire a point target in the real scene image; for each of the search parameter values, determine, according to the search parameter value, a second position, in the real scene image, of the virtual navigation prompt information matching the point target; calculate the straight-line distance between the second position and a third position of the point target, the third position being the position of the point target detected in the real scene image; and use the straight-line distance as the cost function value of the search parameter value.
- the calibration module is configured to acquire a line target in the real scene image; for each of the search parameter values, determine, according to the search parameter value, a fourth position, in the real scene image, of the virtual navigation prompt information matching the line target; calculate the normal distance between the fourth position and a fifth position of the line target, the fifth position being the position of the line target detected in the real scene image; and use the normal distance as the cost function value of the search parameter value.
- the apparatus further comprises:
- a calibration module configured to recalibrate the device calibration parameter of the target device if the target object is not detected in the target image region with the first location as a center point in the real scene image;
- the display module is further configured to perform verification detection on the recalibrated device calibration parameters, and, when the recalibrated device calibration parameters pass the verification detection, superimpose and display the determined virtual navigation prompt information in the real scene image of the current location according to the recalibrated device calibration parameters.
- the apparatus further comprises:
- a calibration module configured to calculate the average of the position errors obtained within a preset time period, a position error being the error between the first position and a sixth position at which the target object is detected in the target image region centered on the first position in the real scene image, and to recalibrate the device calibration parameters of the target device if the average is greater than a preset threshold;
- the display module is further configured to perform verification detection on the recalibrated device calibration parameters, and, when the recalibrated device calibration parameters pass the verification detection, to superimpose and display the determined virtual navigation prompt information in the real scene image of the current position according to the recalibrated device calibration parameters.
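The windowed-average trigger can be sketched as below; the 20-frame window and 15-pixel threshold are illustrative placeholders for the preset time period and preset threshold, not values taken from the disclosure.

```python
from collections import deque

class RecalibrationMonitor:
    """Accumulates per-frame position errors and signals when their average
    over the preset window exceeds the preset threshold."""

    def __init__(self, window_frames=20, threshold_px=15.0):
        self.errors = deque(maxlen=window_frames)   # e.g. one second of frames
        self.threshold_px = threshold_px

    def add_error(self, error_px):
        self.errors.append(float(error_px))

    def needs_recalibration(self):
        # Only decide once a full window of errors has been collected.
        if len(self.errors) < self.errors.maxlen:
            return False
        return sum(self.errors) / len(self.errors) > self.threshold_px
```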
- the apparatus further comprises:
- a calibration module configured to recalibrate the device calibration parameters of the target device if a target object is detected in the target image region centered on the first position in the real scene image and the position error is greater than a preset threshold, the position error being the error between the first position and a sixth position at which the target object is located in the target image region;
- the display module is further configured to perform verification detection on the recalibrated device calibration parameters, and, when the recalibrated device calibration parameters pass the verification detection, to superimpose and display the determined virtual navigation prompt information in the real scene image of the current position according to the recalibrated device calibration parameters.
- the display module is configured to: if the verification condition for the device calibration parameters is currently met, perform target detection in the target image region centered on the first position in the real scene image; and if a target object matching the virtual navigation prompt information is detected, and the position error between a sixth position at which the target object is located and the first position is less than a preset threshold, determine that the device calibration parameters pass the verification detection.
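The verification detection can be sketched as follows, assuming a hypothetical `detect_target` routine that searches an image crop centred on the first position and returns the detected target centre in crop coordinates, or `None` when nothing matching is found; the crop half-size and pixel threshold are illustrative.

```python
import numpy as np

def calibration_passes(frame, first_xy, detect_target, half_size=120, threshold_px=15.0):
    """Return True if the current device calibration parameters pass verification."""
    x, y = int(round(first_xy[0])), int(round(first_xy[1]))
    x0, y0 = max(0, x - half_size), max(0, y - half_size)
    crop = frame[y0:y + half_size, x0:x + half_size]    # region centred on the first position
    hit = detect_target(crop)
    if hit is None:
        return False                                    # no matching target: trigger recalibration
    sixth_xy = np.array([x0 + hit[0], y0 + hit[1]], float)  # back to full-image coordinates
    error = float(np.linalg.norm(sixth_xy - np.asarray(first_xy, float)))
    return error < threshold_px                         # small error: parameters remain valid
```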
- the display module is configured to: if the virtual navigation prompt information includes at least two pieces of virtual navigation prompt information, determine, among the at least two pieces of virtual navigation prompt information, the target navigation prompt information that has the greatest influence on the current navigation;
- superimpose and display the target navigation prompt information in a manner that distinguishes it from the other virtual navigation prompt information.
- the display module is configured to: if the virtual navigation prompt information is lane line information, display, at the first position, the virtual lane line of the current driving lane and the virtual lane lines of the other lanes in a distinguishable manner and mark the drivable range of the current driving lane; or, if the virtual navigation prompt information is road accessory facility information, display a virtual road accessory facility marker at the first position; or, if the virtual navigation prompt information is lane-merge reminder information, display the virtual lane line of the current driving lane and the virtual lane line of the target merge lane in a manner that distinguishes them from the other virtual lane lines; or, if the virtual navigation prompt information is point-of-interest information, display, at the first position, a virtual point-of-interest marker for the current position.
- the device provided by the embodiments of the present invention applies AR technology to the navigation field. It not only realizes a navigation mode that combines the virtual scene with the real scene, making the map display mode more varied and diverse, but also, when superimposing virtual navigation prompt information on the real scene image, performs verification detection on the current device calibration parameters whenever the verification condition is met, so that the virtual navigation prompt information is superimposed on the real scene image only after the verification detection is passed. This greatly increases the probability of displaying the virtual navigation prompt information at the correct position, makes the real scene and the navigation prompt information more consistent, and improves navigation accuracy.
- in addition, when the current device calibration parameters fail the verification detection, the embodiments of the present invention further propose an external parameter calibration method with good robustness for recalibrating the device calibration parameters, and the virtual navigation prompt information is subsequently superimposed and displayed based on the recalibrated device calibration parameters, further ensuring navigation accuracy.
- furthermore, the embodiments of the present invention can also perform lane-level navigation based on the virtual navigation prompt information, which makes navigation more refined and can greatly improve the user's navigation experience; moreover, the virtual navigation prompt information that has the greatest influence on the current navigation can be highlighted, and lane-merge reminders, nearby points of interest and the like can be displayed in an enhanced manner in the real scene image, so the functions are richer and more fine-grained.
- it should be noted that the map display device provided by the above embodiment is described, when performing map display, only by way of example with the above division of functional modules; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
- in addition, the map display device provided by the above embodiment belongs to the same concept as the map display method embodiment; its specific implementation process is described in detail in the method embodiment and is not repeated here.
- FIG. 8 is a schematic structural diagram of a vehicle-mounted multimedia terminal according to an embodiment of the present invention.
- the vehicle-mounted multimedia terminal can be used to execute the map display method provided in the foregoing embodiment.
- the in-vehicle multimedia terminal 800 includes:
- a transceiver 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a processor 170 including one or more processing cores, and other components.
- those skilled in the art will understand that the in-vehicle multimedia terminal structure shown in FIG. 8 does not constitute a limitation on the in-vehicle multimedia terminal; it may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Specifically:
- the transceiver 110 can be used to receive and send signals in the course of receiving and sending information.
- through the transceiver 110, the in-vehicle multimedia terminal 800 can communicate with other devices located in the vehicle.
- the communication method includes, but is not limited to, a Bluetooth wireless communication method, a WiFi wireless communication method, and the like.
- the memory 120 can be used to store software programs and modules, and the processor 170 executes various functional applications and data processing by running at least one instruction, at least one program, code set, or instruction set stored in the memory 120.
- the memory 120 mainly includes a storage program area and a storage data area, wherein the storage program area can store an operating system, at least one instruction, at least one program, a code set or an instruction set, and the like; the storage data area can be stored according to the usage of the in-vehicle multimedia terminal 800. Created data (such as audio data) and so on.
- the input unit 130 can be configured to receive input numeric or character information and to generate signal inputs related to user settings and function control.
- input unit 130 may include a touch-sensitive surface as well as other input devices.
- a touch-sensitive surface, also known as a touch screen or touchpad, collects touch operations performed by the user on or near it and drives the corresponding connection apparatus according to a preset program.
- optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 170, and can receive commands from the processor 170 and execute them.
- touch-sensitive surfaces can be implemented in a variety of types, including resistive, capacitive, infrared, and surface acoustic waves.
- the input unit 130 may also include other input devices.
- other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.).
- the display unit 140 can be used to display information input by the user or information provided to the user and various graphical user interfaces of the in-vehicle multimedia terminal 800, which can be composed of graphics, text, icons, video, and any combination thereof.
- the display unit 140 may include a display panel.
- the display panel may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
- further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, the operation is transmitted to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel according to the type of the touch event.
- although in FIG. 8 the touch-sensitive surface and the display panel are implemented as two separate components to realize the input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to realize the input and output functions.
- the in-vehicle multimedia terminal 800 can also include at least one type of sensor 150, such as a light sensor.
- the light sensor can include an ambient light sensor, wherein the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light.
- the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the in-vehicle multimedia terminal 800.
- the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after the audio data is output to the processor 170 for processing, it is sent via the transceiver 110 to other devices, such as devices in the vehicle, or output to the memory 120 for further processing.
- the processor 170 is the control center of the in-vehicle multimedia terminal 800; it connects the various parts of the entire in-vehicle multimedia terminal using various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, performs the various functions of the in-vehicle multimedia terminal 800 and processes data, thereby monitoring the in-vehicle multimedia terminal as a whole.
- the processor 170 may include one or more processing cores; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
- the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170.
- specifically, in this embodiment, the display unit of the in-vehicle multimedia terminal may be a touch-screen display, and the processor 170 of the in-vehicle multimedia terminal runs the at least one instruction, at least one program, code set or instruction set stored in the memory 120, thereby implementing the map display method described in the above method embodiment.
- an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores at least one instruction, at least one program, a code set, or a set of instructions.
- the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor of the in-vehicle multimedia terminal to implement the map display method described in the above embodiments.
- a person of ordinary skill in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium.
- the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.
Abstract
The present invention discloses a map display method and apparatus, a storage medium and a terminal. The method includes: acquiring a real scene image of a current position and target navigation data for navigating from the current position to a destination; determining, according to the current position and the target navigation data, virtual navigation prompt information to be superimposed and displayed in the real scene image; determining, according to device calibration parameters of a target device that captures the real scene image, a first position at which the virtual navigation prompt information is to be superimposed and displayed in the real scene image; performing verification detection on the current device calibration parameters; and, when the current device calibration parameters pass the verification detection, superimposing and displaying the virtual navigation prompt information at the first position to obtain an augmented reality image for map display. The present invention applies AR technology to the navigation field, realizes a navigation mode combining virtual and real scenes, and makes the map display mode more varied and diverse.
Description
本申请要求于2017年8月25日提交中国专利局、申请号为201710740562.3、发明名称为“地图显示方法、装置、存储介质及终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本发明涉及互联网技术领域,特别涉及一种地图显示技术。
随着科学技术的逐渐进步,时下可为人们日常生活提供便捷服务的各种应用产品不断推陈出新,比如地图应用便是其中一种。其中,地图应用由于可以为用户提供诸如地图浏览、地址查询、兴趣点搜索、公交换乘、驾车导航、步行导航、公交线路查询及站点查询等多项服务,因此得以在人群中迅速普及。
以驾车导航为例,参见图1,时下在进行地图显示时,除了实时基于车辆的当前位置点来显示有关于周边环境的平面路网信息外,通常还会给出一个由当前位置点到达目的地的导航路径作为导航提示。
在实现本发明的过程中,发明人发现相关技术至少存在以下问题:
由于在进行地图显示时,仅显示平面形式的路网信息以及导航路径,因此显示方式较为单一,缺乏多样性。
发明内容
为了解决相关技术的问题,本发明实施例提供了一种地图显示方法、装置、存储介质及终端。所述技术方案如下:
第一方面,提供了一种地图显示方法,应用于终端,所述方法包括:
获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据;
根据所述当前位置和所述目标导航数据,确定待叠加显示在所述真实场景图像中的虚拟导航提示信息;
根据拍摄所述真实场景图像的目标设备的设备标定参数,确定所述虚拟导 航提示信息叠加显示在所述真实场景图像中的第一位置;
对所述目标设备当前的设备标定参数进行校验检测;
当所述目标设备当前的设备标定参数通过所述校验检测时,将所述虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
第二方面,提供了一种地图显示装置,所述装置包括:
获取模块,用于获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据;
确定模块,用于根据所述当前位置和所述目标导航数据,确定待叠加显示在所述真实场景图像中的虚拟导航提示信息;
所述确定模块,还用于根据拍摄所述真实场景图像的目标设备的设备标定参数,确定所述虚拟导航提示信息叠加显示在所述真实场景图像中的第一位置;
显示模块,用于对所述目标设备当前的设备标定参数进行校验检测;当所述目标设备当前的设备标定参数通过所述校验检测时,将所述虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
第三方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如第一方面所述的地图显示方法。
第四方面,提供了一种终端,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如第一方面所述的地图显示方法。
本发明实施例提供的技术方案带来的有益效果是:
本发明实施例将AR技术运用到了导航领域中,不但实现了虚景和实景结合的导航方式,使得地图显示方式更加多元以及多样化,而且在真实场景图像中叠加显示虚拟导航提示信息时,本发明实施例还会对当前的设备标定参数进行校验检测,实现仅在设备标定参数通过校验检测的情况下才会将虚拟导航提 示信息叠加显示到真实场景图像中,大大提升了将虚拟导航提示信息显示到正确位置的概率,使得实景与导航提示信息更加吻合,提升了导航的精准度。
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明背景技术提供的一种地图显示的示意图;
图2A是本发明实施例提供的一种AR导航的方法执行流程示意图;
图2B是本发明实施例提供的一种车载多媒体终端的示意图;
图3A是本发明实施例提供的一种地图显示方法的流程图;
图3B是本发明实施例提供的一种平面上点到点的距离示意图;
图3C是本发明实施例提供的一种平面上点到线的距离示意图;
图4是本发明实施例提供的一种地图显示示意图;
图5是本发明实施例提供的一种地图显示示意图;
图6是本发明实施例提供的一种地图显示示意图;
图7是本发明实施例提供的一种地图显示装置的结构示意图;
图8是本发明实施例提供的一种车载多媒体终端的结构示意图。
为使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明实施方式作进一步地详细描述。
在对本发明实施例进行详细地解释说明之前,先对本发明实施例涉及的一些名词进行一下解释说明。
AR(Augmented Reality,增强现实)技术:是一种通过计算机系统提供的虚拟信息增加用户对真实世界感知的技术。即,AR技术可以将虚拟信息应用到真实世界,实现将计算机生成的虚拟物体、虚拟场景或系统提示信息叠加到真实场景中,从而实现对现实的增强。
换句话说,这种技术的目标是在屏幕上将虚拟世界套在现实世界中并进行互动,另外,由于真实场景和虚拟信息实时地叠加到了同一个画面中,因此在被人类感官所感知后,可以达到超越现实的感官体验。
外参(外部参数):以拍摄设备为摄像机为例,外参决定了摄像机在世界坐标系中的位置和姿态。即,外参限定了从世界坐标系变换到摄像机坐标系所依据的规则。
以P_w指代在世界坐标系的一个三维点，P_c指代这个三维点投影到摄像机坐标系中的三维点为例，则二者之间的关系可以描述为：P_c = R·P_w + T (1)
其中,R指代旋转矩阵,用来表征摄像机的旋转变换操作,通常为3*3大小;T指代平移矩阵,用于表征摄像机的平移变换操作,通常为3*1大小。
内参(内部参数):同样以拍摄设备为摄像机为例,内参仅与摄像机的内部结构有关,比如包括摄像机的焦距、畸变参数等。内参决定了从摄像机坐标系变换到图像坐标系所依据的规则。在本发明实施例中,将摄像机的外参和内参统称为设备标定参数。
其中,上述内参和外参可以统一用一个投影矩阵M来表示。比如,世界坐标系中的一个三维点投影到二维的图像坐标系的投影公式如下:
(x,y)=M(X,Y,Z) (2)
其中,(X,Y,Z)为这个点在世界坐标系中的三维坐标值,(x,y)为这个点在图像坐标系中的二维坐标值。M=K[R|T],混合了外参和内参,大小为3*4,其中K为内参数矩阵。在本发明实施例中,通过投影矩阵实现将三维空间中的点投影到了图像空间。
内外参标定:即是从世界坐标系变换到图像坐标系的过程,换句话说,内外参标定对应求上述投影矩阵的过程。一般来说,标定的过程分为两个部分:一个是从世界坐标系变换为摄像机坐标系,这一步是三维点到三维点的转换,用于标定旋转矩阵R以及平移矩阵T这样的摄像机外部参数;另一个是从摄像机坐标系变换为图像坐标系,这一步是三维点到二维点的转换,用于标定内参矩阵K这样的摄像机内部参数。
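As a worked illustration of formulas (1) and (2) above, the sketch below projects a 3D world point into pixel coordinates with the projection matrix M = K[R|T]; the intrinsic matrix, pose and sample point are illustrative values only, not parameters from the disclosure.

```python
import numpy as np

def project_point(K, R, T, P_w):
    """Project a 3D world point P_w to pixel coordinates using the intrinsic
    matrix K (3x3), rotation R (3x3) and translation T (3,), i.e. M = K[R|T]."""
    P_c = R @ np.asarray(P_w, float) + np.asarray(T, float).reshape(3)  # formula (1): world -> camera
    uvw = K @ P_c                                                       # camera -> homogeneous image point
    return uvw[:2] / uvw[2]                                             # perspective divide -> (x, y)

# Illustrative values: 1000 px focal length, principal point (640, 360),
# identity rotation, camera 2 m above the world origin.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
R, T = np.eye(3), np.array([0.0, -2.0, 0.0])
print(project_point(K, R, T, [1.0, 0.0, 10.0]))   # a point 10 m ahead, 1 m to the side
```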
在本发明实施例中,将AR技术应用到了地图导航领域中,实现了除了基于传统的二维地图导航之外的AR地图导航,由于本方案可以在真实场景图像中叠加显示虚拟导航提示信息,因此用户可以获得超越现实的感官体验。其中,这种AR导航技术既可以应用在驾驶场景下,也可以应用在诸如步行等其他有地图导航需求的场景下,本发明实施例对此不进行具体限定。
其中,本发明实施例提供的地图显示方法,给出了一种在线设备标定参数的标定方法,且可周期性地对目标设备当前的设备标定参数进行检验检测,仅在基于当前的设备标定参数将虚拟导航提示信息投影至真实场景图像的第一位置,与在真实场景图像中检测到的对应目标物的第二位置之间的位置误差小于预设阈值时,才将该虚拟导航提示信息进行增强显示,这大大提升了导航的精准度,使得导航提示信息与实时场景更加匹配和吻合,可以避免出现诸如虚拟导航提示信息显示在错误位置,或者按照不够精准的导航提示信息行进或行驶进入错误道路或死路的问题。
此外,本发明实施例除了可将真实场景中出现的各种目标物的虚拟导航提示信息投影至真实场景图像中进行增强显示外,还可以突出显示对当前导航影响最大的虚拟导航提示信息。其中,目标物既可为各种道路附属设施,比如限速牌、转向牌、红绿灯、电子眼、摄像头等,也可以为车道线、马路牙子线等,本发明实施例对此不进行具体限定。
下面结合图2A所示的AR导航的方法执行流程示意图,对本发明实施例所涉及的系统架构进行说明。
参见图2A,本发明实施例提供的地图显示方法所涉及的功能模块除了包括真实场景获取模块、定位定姿模块、标定模块、地图获取模块、路径规划模块以及投影显示模块之外,还包括图像检测模块。
其中,真实场景获取模块用于获取有关于真实场景的真实场景图像,并将获取到的真实场景图像输出给投影显示模块以及图像检测模块。
定位定姿模块用于确定目标物的当前位置以及姿态。
标定模块用于对拍摄真实场景的设备进行外参和内参的标定,并将结果输出给投影显示模块。
地图获取模块用于根据目标物的当前位置和姿态从服务器获取需要显示 的地图数据。而路径规划模块便根据定位定姿模块以及地图获取模块的输出进行导航至目的地的路径的规划,进而得到导航至目的地的目标导航数据。
投影显示模块用于基于当前的设备标定参数以及目标导航数据在真实场景图像中叠加虚拟导航提示信息。
在本发明实施例中,图像检测模块用于周期性地基于投影显示模块输出的候选检测区域,在真实场景图像中进行目标物检测。其中,候选检测区域指代以前文中提及的第一位置为中心点的一个图像区域,即图像检测模块的检测结果实际上用于对设备标定参数的校验检测。当上述第一位置与在候选检测区域中检测到的对应目标物的第二位置之间的位置误差小于预设阈值时,本发明实施例才将该虚拟导航提示信息进行增强显示。
以驾驶场景为例,本发明实施例提供的地图显示方法既可应用于与车辆一体化的车载多媒体终端,也可应用于安装在车辆上且独立于车辆的智能移动终端。如图2B所示,车载多媒体终端可为设置在车辆中控台的多媒体设备。该车载多媒体终端可支持导航、音乐播放、视频播放、即时通讯、获取车辆时速、收发以及解析无线广播报文等功能。其中,无线广播报文可以是WiFi(Wireless Fidelity,无线保真)报文或蓝牙报文等,本发明实施例对此不进行具体限定。
以上述车载多媒体终端为例,上述图2A中所示的定位定姿模块、标定模块、地图获取模块、路径规划模块、投影显示模块以及图像检测模块可置于车载多媒体终端内,若车载多媒体终端不支持图像获取,则真实场景获取模块实质上置于安装在车辆上且与车载多媒体终端建立数据连接的摄像设备内。即车载多媒体终端通过摄像设备来间接获取真实场景图像。
以上述智能移动终端为例,由于时下各种类型的智能移动终端基本均支持图像获取功能,因此上述图2A中所示的真实场景获取模块、定位定姿模块、标定模块、地图获取模块、路径规划模块、投影显示模块以及图像检测模块均可置于智能移动终端内。
图3A是本发明实施例提供的一种地图显示方法的流程图。结合驾驶场景以及图2A所示的AR导航的方法执行流程示意图,以该地图显示方法的执行主体为支持图像获取功能的车载多媒体终端为例,参见图3A,本发明实施例 提供的方法流程包括:
301、获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据。
在本发明实施例中,图2A中的定位定姿模块用于确定车辆的当前位置和姿态。其中,为了能够对车辆进行精准地定位定姿,本发明实施例采取SLAM(Simultaneous Localization And Mapping,同步定位与构图)技术实现,而基于SLAM技术的定位定姿方式,相较于采取传统的GPS(Global Positioning System,全球定位系统)以及陀螺仪的定位定姿方式,精度大大提高。
其中,目标导航数据给出了由当前位置行驶至目的地的导航依据,如图2A所示,目标导航数据由路径规划模块基于定位定姿模块输出的车辆当前位置和姿态、地图获取模块输出的地图数据、以及用户输入的目的地信息确定。其中,本发明实施例提及的地图数据为高精度地图数据,其是具有厘米级定位精度,包括路网信息、兴趣点信息、道路附属设施(如红绿灯、电子眼、交通路牌等)的相关信息和动态交通信息的下一代导航地图。不但可以告知用户前方的行驶方向和路况,而且可以实时地将动态交通信息传递给用户,以便于用户来判断前方的拥堵程度,协助选择最佳行驶路径。
302、根据当前位置和目标导航数据,确定待叠加显示在该真实场景图像中的虚拟导航提示信息,并根据拍摄该真实场景图像的目标设备的设备标定参数,确定虚拟导航提示信息叠加显示在真实场景图像中的第一位置。
以拍摄该真实场景图像的目标设备为车辆上安装的摄像机即车载摄像机为例,由于目标导航数据涵盖了由当前位置至目的地之间的全部导航数据,而真实场景图像是由车载摄像机实时拍摄的,因此为了确定应该在当前实时拍摄到的真实场景图像中叠加显示何种虚拟导航提示信息,还需先根据目标导航数据确定跟当前位置关联的虚拟导航提示信息。比如,通过目标导航数据确定当前位置或附近包括红绿灯、电子眼、交通路牌等目标物,则有关于这些目标物的虚拟信息均可作为与当前位置关联的虚拟导航提示信息。
换句话说,待叠加显示在该真实场景图像中的虚拟导航提示信息可为与当前位置或附近的各种道路附属设施相关的虚拟信息,与车辆当前行驶道路上的车道线或马路牙子线相关的虚拟信息等等,本发明实施例对此不进行具体限 定。
在本发明实施例中,在确定待叠加显示的虚拟导航提示信息后,为了将虚拟导航提示信息显示在显示屏幕上的正确位置,还会根据摄像机的设备标定参数来确定虚拟导航提示信息叠加显示在真实场景图像中的第一位置。其中,此处的第一位置即为将世界坐标系中的对应目标物投影变换到图像坐标系后,该目标物理论上在图像坐标系中的位置。
其中,各个目标物在世界坐标系中的位置已知,在获取到摄像机当前的设备标定参数,即前文所示的投影矩阵后,便可根据该投影矩阵来计算虚拟导航提示信息叠加显示在该真实场景图像中的第一位置。其中,设备标定参数的获取由图2A中的标定模块完成。在本发明实施例中,设备标定参数中内参的标定可以采用棋盘格标定方法实现,或者也可以直接采用设备出厂时设置的内参值。而对于设备标定参数中外参的标定,本发明实施例给出了一种在线外参标定方法。其中,该外参标定方法具体为一种基于参数空间的层次搜索方法,具体的标定过程如下:
(a)、为外参设置至少两个搜索粒度。
在本发明实施例中,层次搜索的概念是将外参的参数值搜索范围(又可称之为参数空间)进行由粗到细的离散划分,先在粗粒度上进行搜索,然后再基于粗粒度上的搜索结果再逐步在更细粒度上进行搜索。其中,至少两个搜索粒度中每一个搜索粒度都是不同的。举一个例来说,以旋转角度且设置两个搜索粒度为例,则其中一个搜索粒度可以为1度大小,另一个搜索粒度可以为0.1度大小。
(b)、以粒度值最大的第一搜索粒度,在外参的参数值搜索范围内确定各个呈离散状态的搜索参数值。
其中,第一搜索粒度为至少两个搜索粒度中粒度值最大的搜索粒度。以参数值搜索范围为0度至20度,第一搜索粒度为1度为例,则各个成离散状态的搜索参数值可为0度、1度、2度、3度、…、20度这样。
(c)、计算当前的参数值搜索范围内每一个搜索参数值的代价函数值,并确定在第一搜索粒度下具有最小代价函数值的第一搜索参数值。
对于该外参标定方法来说其核心是定义代价函数的计算方法,本发明实施 例采取了将图像空间中的欧式距离作为代价函数值的计算方法。其中,在计算每一个搜索参数值的代价函数值时,本发明实施例采取下述方式实现:
第一种、针对真实场景图像中的点目标物。
其中,点目标物可为诸如红绿灯、转向提示、电子眼等交通路牌。
首先,获取真实场景图像中的点目标物;然后,对于每一个搜索参数值,根据该搜索参数值,确定与该点目标物匹配的虚拟导航提示信息在真实场景图像中的第二位置,并如图3B所示计算第二位置与点目标物的第三位置之间的直线距离,将该直线距离作为该搜索参数值的代价函数值。
其中,第三位置为在真实场景图像中检测到的点目标物的位置。
第二种、针对真实场景图像中的线目标物。
其中,线目标物可为诸如车道线、马路牙子线等。
首先,获取真实场景图像中的线目标物;然后,对于每一个搜索参数值,根据该搜索参数值,确定与该线目标物匹配的第三虚拟导航提示信息在真实场景图像中的第四位置,并如图3C所示计算与第四位置与线目标物的第五位置之间的法线距离,将该法线距离作为该搜索参数值的代价函数值。
其中,第五位置为在真实场景图像中检测到的线目标物的位置。
综上所述,在计算得到在第一搜索粒度下各个搜索参数值的代价函数值后,本发明实施例具体是在第一搜索粒度下确定具有最小代价函数值的第一搜索参数值,接下来,以第一搜索参数值作为初值,继续在更细的粒度下进行搜索,详见下述步骤(d)。
(d)、基于第一搜索参数值,在第二搜索粒度下进行具有最小代价函数值的第二搜索参数值的搜索。
其中,第二搜索粒度的粒度值小于第一搜索粒度且大于其他搜索粒度。
作为一种可选的实施方式,在第二搜索粒度下进行具有最小代价函数值的第二搜索参数值的搜索时,可参考下述方式实现:
首先,根据第一搜索粒度和第一搜索参数值,确定在第二搜索粒度下的参数值搜索范围。
继续以第一搜索粒度为1度为例,假设第一搜索参数值为3度,则在第二搜索粒度下的参数值搜索范围可为3-1=2度至3+1=4度的范围。
接下来,以第二搜索粒度,在第二搜索粒度下的参数值搜索范围内确定各个呈离散状态的搜索参数值,并按照前文类似的方式计算当前的参数值搜索范围内每一个搜索参数值的代价函数值;之后,将在第二搜索粒度下具有最小代价函数值的搜索参数值作为第二搜索参数值。
假设第二搜索粒度为0.1度,在第二搜索粒度下的参数值搜索范围为2度至4度的范围,则各个呈离散状态的搜索参数值可为2.1度、2.2度、2.3度、…、4度这样。
(e)、在剩余的搜索粒度中按照粒度值由大到小的顺序,基于第二搜索粒度下的参数值搜索方式进行参数值搜索,直至在最小粒度值的搜索粒度下得到具有最小代价函数值的目标搜索参数值;将目标搜索参数值确定为目标设备当前的外参。
总结来说,上述外参标定方法即是对于任一个搜索粒度来说,均计算该搜索粒度对应的参数值搜索范围内每一个搜索参数值的代价函数值,以确定该搜索粒度下具有最小代价函数值的搜索参数值;之后,按照搜索粒度由大到小的顺序,基于这次搜索得到的搜索参数值,再确定下一个搜索粒度对应的参数值搜索范围,当然下一个搜索粒度的粒度值小于这一搜索粒度;之后,还是按照与这次搜索方式类似的方式,确定下一个搜索粒度下具有最小代价函数值的搜索参数值;依次类推,进行重复搜索,直至得到最小搜索粒度下具有最小代价函数值的目标搜索参数值。
换句话说,本发明实施例实现了将参数空间进行由粗到细的离散划分,先在粗粒度上进行搜索,得到代价函数值最小的搜索参数值;然后以此为初值,细化参数空间,再一次进行搜索,得到当前粒度下代价函数值最小的搜索参数值;依此循环,直至得到最细粒度下代价函数值最小的搜索参数值,将其作为最终的标定参数值。
需要说明的是,在得到虚拟导航提示信息叠加显示在真实场景图像中的第一位置后,本发明实施例除了立即根据计算得到的第一位置进行虚拟导航提示信息的叠加显示之外,还可进一步地对当前的设备标定参数进行校验检测,仅在当前的设备标定参数通过校验检测后,方可根据计算得到的第一位置进行虚拟导航提示信息的叠加显示,以提升对虚拟导航提示信息的叠加显示的精准 度,具体描述请参见下述步骤303。
303、对所述目标设备当前的设备标定参数进行校验检测;当所述目标设备当前的设备标定参数通过校验检测时,将虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
作为一种可选的实现方式,对当前的设备标定参数进行校验检测,主要包括下述几个步骤:
303(a)、若当前满足对设备标定参数的校验条件,则在真实场景图像中以第一位置为中心点的目标图像区域中进行目标物检测。
在本发明实施例中,为了降低计算量,并不是在对每一帧真实场景图像进行虚拟导航提示信息叠加时均进行校验检测,这种对设备标定参数的校验检测可以周期性地进行,比如进行校验检测的间隔时长可为10s一次。
其中,若根据间隔时长确定出当前时刻为周期性地进行校验检测的时刻,则确定当前满足对设备标定参数的校验条件,图2A所示的投影显示模块便会基于上述计算得到的第一位置在真实场景图像中划出一个目标图像区域作为候选检测区域。其中,目标图像区域以第一位置为中心点,接下来,本发明实施例便会在这个目标图像区域中进行与虚拟导航提示信息匹配的目标物的检测,以此来判断当前的设备标定参数是否还继续可用。
303(b)、若在目标图像区域中检测到与虚拟导航提示信息匹配的目标物,且目标物所在的第六位置与第一位置之间的位置误差小于预设阈值,则确定当前的设备标定参数通过校验检测。
其中,图2A中的图像检测模块负责在目标图像区域中进行与虚拟导航提示信息匹配的目标物的检测。其中,图像检测模块在进行目标物的检测时可采取卷积神经网络检测算法或深度学习检测算法实现,本发明实施例对此不进行具体限定,通过上述检测算法可以在真实场景图像中确定出目标物的位置。
若目标物在目标图像区域中的第六位置与第一位置之间的位置误差小于预设阈值,则证明基于当前的设备标定参数在理论上计算得到的第一位置与真实检测到的第六位置之间相差无几,即上述计算出的理论位置与真实位置之间的差异很小,表明当前的设备标定参数的精度较好,无需重新标定,可以继续基于当前的设备标定参数将确定的虚拟导航提示信息投影到真实场景图像中, 进而得到增强显示图像,这个增强显示图像便可作为一张地图图像输出到显示屏幕上进行显示。
作为一种可选的实施方式,本发明实施例在对虚拟导航提示信息进行叠加显示时,通常可采取下述方式进行:
第一种、若虚拟导航提示信息为车道线信息,则在第一位置对当前行驶车道的虚拟车道线以及其他车道的虚拟车道线进行区分显示,并对当前行驶车道的可行驶范围进行标记。
针对该种方式,本发明实施例提供了车道级导航。参见图4,本发明实施例在真实场景图像中叠加显示了当前行驶道路上的全部车道线信息。此外,如果当前行驶道路上包括了多个车道,则为了使得用户明确当前行驶车道,还会将当前行驶车道的虚拟车道线同其他车道的虚拟车道线进行区分显示。
以图4为例,当前行驶车道上总共包括4个车道,其中车辆的当前行驶车道为左边第二个车道,则本发明实施例会以第一显示方式对左边第二个车道的虚拟车道线进行显示,而对于其他三个车道的虚拟车道线则以第二显示方式进行显示。其中,第一显示方式可为采用第一颜色进行填充,第二显示方式可为采用第二颜色进行填充,本发明实施例对此不进行具体限定。
此外,为了进一步地使得用户明确当前行驶车道,不会误进入其他车道,还可以叠加显示当前车道的可行驶范围。比如,可采用一个单一颜色或单一样式对当前行驶车道所界定的图像区域进行色彩填充或标记。比如,在图4中采取了单一黄色的填充方式对当前行驶车道的可行驶范围进行了标记。
此外,除了上述虚拟导航提示信息外,本发明实施例还可以在当前行驶道路上叠加显示虚拟方向指示信息,比如图4中所示的一排箭头指示信息。此外,还可以同步进行语音导航,本发明实施例对此不进行具体限定。
第二种、若虚拟导航提示信息为道路附属设施信息,则在第一位置处显示虚拟道路附属设施标记。
针对该种方式,本发明实施例还可以对当前行驶道路上的各种道路附属设施进行标记。如图5所示,当前位置一共包括5个道路附属设施,其中两个为红绿灯,另外3个为交通路牌。需要说明的是,针对虚拟导航提示信息为道路附属设施信息的情况,本发明实施例在第一位置处显示虚拟道路附属设施标记 时,可采取下述几种方式实现:
(a)如图5所示,在各个道路附属设施所在位置(即第一位置处)显示虚拟框体,以该虚拟框体可以包含各个道路附属设施为准。即,针对该种方式虚拟导航提示信息的表现形式为一个个用于突出显示各个道路附属设施的虚拟框体。换句话说,针对方式(a)虚拟道路附属设施标记为虚拟框体。
此外,针对红绿灯而言,虚拟导航提示信息除了虚拟框体以外,还可包括用于提示红绿灯当前颜色状态的虚拟提示信息,比如在框体周围附近显示一个诸如“当前为红灯”的虚拟文本信息。
(b)、在各个道路附属设施所在位置叠加显示一个虚拟的道路附属设施。换句话说,针对方式(b)虚拟道路附属设施标记为与各个道路附属设施匹配的虚拟物体。比如针对图5中的2个红绿灯来说,可以分别生成一个虚拟的红绿灯,该虚拟红绿灯可以指示当前的颜色状态,比如当前为红灯,则该虚拟红绿灯中红灯以高亮显示,黄绿灯以非高亮显示。
而对于剩余的3个交通路牌来说,同样分别为其生成一个虚拟交通路牌,并叠加显示在相应位置处。
需要说明的一点是,对于各种道路附属设施距拍摄的摄像机较远距离的情况,方式(b)要明显优于方式(a),因为由于距离较远,方式(a)的框选方式可能会存在由于道路附属设施成像较小,用户看不清道路附属设施的缺陷,而方式(b)正好解决了这一问题。
第三种、若虚拟导航提示信息为并线提醒信息,则对当前行驶车道的虚拟车道线以及目标并线车道的虚拟车道线同其他虚拟车道线进行区分显示。
针对该种方式,本发明实施例还可以向用户提供并线提醒。继续以图4为例,若当前左边第一个车道的行车较少,可以进行并线,则本发明实施例会对左边第二个车道的虚拟车道线,以及左边第一个车道的虚拟车道线同剩余的虚拟车道线进行区分显示。其中区分显示的方式同样可以采取采用不同颜色进行填充的方式,本发明实施例对此不进行具体限定。
此外,针对并线提醒来说,本发明实施例还可同步提供语言提醒,即在上述图像显示的基础上,还可以输出诸如“当前可由左二车道并入左一车道”的语音提醒信息,以为用户提供更加精细化的并线服务。
第四种、若虚拟导航提示信息为兴趣点信息,则在第一位置显示当前位置的虚拟兴趣点标记。
针对该种方式,为了优化导航体验,提高用户体验度,还可以基于车辆行驶的位置不同,而实时地对当前位置附近的兴趣点信息进行显示,以对用户进行相关提示。其中,在对虚拟兴趣点标记进行显示时,可采取下述几种方式实现:
方式一、如图6所示,该虚拟兴趣点标记具体为虚拟文本信息,该虚拟文本信息中可包含兴趣点的名称信息以及与车辆当前位置的距离信息等,本发明实施例对此不进行具体限定。
方式二、该虚拟兴趣点标记具体为与该兴趣点匹配的虚拟物体。比如,若兴趣点为一个商场,则该虚拟物体可为一个虚拟小建筑物;若该兴趣点为一个餐馆,则该虚拟物体可为一个虚拟小餐具。
针对第二种方式,也可与第一种方式叠加使用,比如同时叠加显示上述虚拟物体以及虚拟文本信息,本发明实施例对此不进行具体限定。
在另一个实施例中,本发明实施例还可以突出显示对当前导航影响最大的虚拟导航提示信息。即,若确定的虚拟导航提示信息中包括至少两个虚拟导航提示信息,则在至少两个虚拟导航提示信息中,还会再确定对当前导航影响最大的目标导航提示信息,并将目标导航提示信息以区别与其他虚拟导航提示信息的方式进行叠加显示。
其中,对当前导航影响最大的目标导航提示信息即指代在当前场景下最重要的虚拟导航提示信息,其通常指代距离车辆最近的目标物的虚拟导航提示信息。继续以图5为例,在图5所示的5个目标物中,3个交通路牌距离车辆最近,且在当前场景下这些交通路牌所指信息的重要程度要大于远方的2个红绿灯,因此图5中的目标导航提示信息便为3个交通路牌的虚拟导航提示信息。
综上所述,上述步骤301至步骤304实现了AR导航,且每当车辆转动或移动进而导致摄像机的视野变动时,本发明实施例可以保证叠加显示的虚拟导航提示信息也随之做相应地变化,并将这些导航提示信息显示在显示屏幕上的正确位置,使得AR导航的精准度得以大大提升。
需要说明的一点是,上述仅是给出了当在目标图像区域中检测到与虚拟导 航提示信息匹配的目标物,且目标物所在的第六位置与第一位置之间的位置误差小于预设阈值时的处理方式,而对于除了该种情况之外的其他情形,本发明实施例同样给出了处理方式,详细如下:
在另一个实施例中,若在真实场景图像中以第一位置为中心点的目标图像区域中未检测到目标物,则重新对目标设备的设备标定参数进行标定,与前文类似,在得到了重新标定的设备标定参数后,同样也可对这一重新标定的设备标定参数进行校验检测;当重新标定的设备标定参数通过校验检测时,再根据重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
针对该种方式,由于在目标图像区域中未检测到与上述虚拟导航提示信息匹配的目标物,因此表明当前的设备标定参数已经不适用了,所以需要重新进行标定。
其中,触发重新进行设备标定参数进行标定的根本原因为摄像机本身可能由于车辆的运动而产生松动或震动,导致摄像机位置和姿态可能相对于之前的位置和姿态发生了改变,在这种情况下,如果还按照当前的设备标定参数在真实场景图像中进行虚拟导航提示信息的叠加显示,则很可能会出现位置不准确的情况,所以需要重新对设备标定参数进行标定。其中,重新进行标定的方式同前文所示的外参标定方法一致,此处不再赘述。此外,重新标定设备标定参数,仅指代的是重新标定外参,而不包括内参。
在另一个实施例中,本发明实施例还支持统计在预设时长内得到的各个位置误差的平均值;其中,位置误差是指在真实场景图像中以第一位置为中心点的目标图像区域中所检测到目标物的第六位置与第一位置之间的误差,若得到的平均值大于预设阈值,则重新对目标设备的设备标定参数进行标定,与前文类似,在得到了重新标定的设备标定参数后,同样也可对这一重新标定的设备标定参数进行校验检测;当重新标定的设备标定参数通过校验检测时,再根据重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
其中,预设时长可为1s或2s等等,本发明实施例对此不进行具体限定。以每帧间隔50ms为例,则若预设时长为1s,则便可得到有关于20帧的位置 误差,之后便可计算这20帧的位置误差的平均值,进而基于计算出的平均值进行后续处理。针对该种方式,由于对一段时间内的情况进行了综合统计,因此对当前的设备标定参数的校验检测也更合理和准确。
在另一个实施例中,若在真实场景图像中以第一位置为中心点的目标图像区域检测到目标物,且位置误差大于预设阈值,则也可触发重新标定目标设备的设备标定参数的过程;之后,与前文类似,同样也可对这一重新标定的设备标定参数进行校验检测;当重新标定的设备标定参数通过校验检测时,再根据重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
以上对本发明实施例提供的地图显示方法进行了详细地解释说明,还需要说明的一点是,上述地图显示方法在应用到驾驶场景下时,通过使用自动驾驶技术中的高精度地图数据、高精度定位定姿方式以及实时影像信息,为用户提供了丰富和精确的AR导航方式。一方面可以促进自动驾驶部分核心技术落地,形成研发到应用的闭环;另一方面,这种虚实景融合的导航方式还能够激发用户的探索欲和使用热情,增加导航的乐趣,从而提高了导航的使用率。
本发明实施例提供的方法,将AR技术运用到了导航领域中,不但实现了虚景和实景结合的导航方式,使得地图显示方式更加多元以及多样化,而且在真实场景图像中叠加显示虚拟导航提示信息时,本发明实施例还会在满足对设备标定参数进行校验的条件下,对当前的设备标定参数进行校验检测,实现仅在通过校验检测的情况下才会将虚拟导航提示信息叠加显示到真实场景图像中,大大提升了将虚拟导航提示信息显示到正确位置的概率,使得实景与导航提示信息更加吻合,提升了导航的精准度。
另外,在当前的设备标定参数未通过检验检测的情况下,本发明实施例还提出了一种鲁棒性能较好的外参标定方法来实现对设备标定参数的重新标定,并在后续过程基于重新标定后的设备标定参数进行虚拟导航提示信息的叠加显示,进一步地确保了导航的精准度。
另外,本发明实施例在进行地图显示时,还可基于虚拟导航提示信息进行车道线级别的导航,使得导航更为精细,可大幅提升用户的导航体验;而且,还可以突出显示对当前导航影响最大的虚拟导航提示消息,并在真实场景图像 中对并线提醒以及附近兴趣点等进行增强显示,所以功能更为丰富以及精细。
图7是本发明实施例提供的一种地图显示装置的结构示意图。参见图7,该装置包括:
获取模块701,用于获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据;
确定模块702,用于根据所述当前位置和所述目标导航数据,确定待叠加显示在所述真实场景图像中的虚拟导航提示信息;
所述确定模块702,还用于根据拍摄所述真实场景图像的目标设备的设备标定参数,确定所述虚拟导航提示信息叠加显示在所述真实场景图像中的第一位置;
显示模块703,用于对所述目标设备当前的设备标定参数进行校验检测;当所述目标设备当前的设备标定参数通过所述校验检测时,将所述虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
在另一个实施例中,所述设备标定参数中包括外参,该装置还包括:
标定模块,用于对于为所述外参设置的搜索粒度,计算当前搜索粒度对应的参数值搜索范围内每一个搜索参数值的代价函数值,确定当前搜索粒度下具有最小代价函数值的搜索参数值;
所述标定模块,还用于基于本次搜索得到的搜索参数值,确定下一个搜索粒度对应的参数值搜索范围,按照本次搜索方式,确定所述下一个搜索粒度下具有最小代价函数值的搜索参数值,依次类推,直至得到最小搜索粒度下具有最小代价函数值的目标搜索参数值,所述下一个搜索粒度的粒度值小于所述当前搜索粒度;将所述目标搜索参数值确定为所述目标设备当前的外参。
在另一个实施例中,所述标定模块,用于获取真实场景图像中的点目标物;对于所述每一个搜索参数值,根据所述搜索参数值,确定与所述点目标物匹配的虚拟导航提示信息在所述真实场景图像中的第二位置;计算所述第二位置与所述点目标物的第三位置之间的直线距离,所述第三位置为在所述真实场景图像中检测到的所述点目标物的位置;将所述直线距离作为所述搜索参数值的代价函数值。
在另一个实施例中,所述标定模块,用于获取真实场景图像中的线目标物; 对于所述每一个搜索参数值,根据所述搜索参数值,确定与所述线目标物匹配的虚拟导航提示信息在所述真实场景图像中的第四位置;计算与所述第四位置与所述线目标物的第五位置之间的法线距离,所述第五位置为在所述真实场景图像中检测到的所述线目标物的位置;将所述法线距离作为所述搜索参数值的代价函数值。
在另一个实施例中,该装置还包括:
标定模块,用于若在所述真实场景图像中以所述第一位置为中心点的目标图像区域中未检测到所述目标物,则重新对所述目标设备的设备标定参数进行标定;
所述显示模块,还用于对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
在另一个实施例中,该装置还包括:
标定模块,用于统计在预设时长内得到的各个所述位置误差的平均值;所述位置误差是指在所述真实场景图像中以所述第一位置为中心点的目标图像区域中所检测到目标物的第六位置与所述第一位置之间的误差;若所述平均值大于所述预设阈值,则重新对所述目标设备的设备标定参数进行标定;
所述显示模块,还用于对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
在另一个实施例中,该装置还包括:
标定模块,用于若在所述真实场景图像中以所述第一位置为中心点的目标图像区域检测到目标物,且位置误差大于预设阈值,则重新对所述目标设备的设备标定参数进行标定;所述位置误差是指目标物在所述目标图像区域中所在的第六位置与所述第一位置之间的误差;
所述显示模块,还用于对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述当前的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
在另一个实施例中,所述显示模块,用于若当前满足对所述设备标定参数 的校验条件,则在所述真实场景图像中以所述第一位置为中心点的目标图像区域进行目标物检测;若检测到与所述虚拟导航提示信息匹配的目标物,且所述目标物所在的第六位置与所述第一位置之间的位置误差小于预设阈值,则确定设备标定参数通过所述校验检测。
在另一个实施例中,所述显示模块,用于若所述虚拟导航提示信息中包括至少两个虚拟导航提示信息,则在所述至少两个虚拟导航提示信息中,确定对当前导航影响最大的目标导航提示信息;
将所述目标导航提示信息以区别与其他虚拟导航提示信息的方式进行叠加显示。
在另一个实施例中,所述显示模块,用于若所述虚拟导航提示信息为车道线信息,则在所述第一位置对当前行驶车道的虚拟车道线以及其他车道的虚拟车道线进行区分显示,并对所述当前行驶车道的可行驶范围进行标记;或,若所述虚拟导航提示信息为道路附属设施信息,则在所述第一位置处显示虚拟道路附属设施标记;或,若所述虚拟导航提示信息为并线提醒信息,则对所述当前行驶车道的虚拟车道线以及目标并线车道的虚拟车道线同其他虚拟车道线进行区分显示;或,若所述虚拟导航提示信息为兴趣点信息,则在所述第一位置显示所述当前位置的虚拟兴趣点标记。
本发明实施例提供的装置,将AR技术运用到了导航领域中,不但实现了虚景和实景结合的导航方式,使得地图显示方式更加多元以及多样化,而且在真实场景图像中叠加显示虚拟导航提示信息时,本发明实施例还会在满足对设备标定参数进行校验的条件下,对当前的设备标定参数进行校验检测,实现仅在通过校验检测的情况下才会将虚拟导航提示信息叠加显示到真实场景图像中,大大提升了将虚拟导航提示信息显示到正确位置的概率,使得实景与导航提示信息更加吻合,提升了导航的精准度。
另外,在当前的设备标定参数未通过检验检测的情况下,本发明实施例还提出了一种鲁棒性能较好的外参标定方法来实现对设备标定参数的重新标定,并在后续过程基于重新标定后的设备标定参数进行虚拟导航提示信息的叠加显示,进一步地确保了导航的精准度。
另外,本发明实施例在进行地图显示时,还可基于虚拟导航提示信息进行 车道线级别的导航,使得导航更为精细,可大幅提升用户的导航体验;而且,还可以突出显示对当前导航影响最大的虚拟导航提示消息,并在真实场景图像中对并线提醒以及附近兴趣点等进行增强显示,所以功能更为丰富以及精细。
需要说明的是:上述实施例提供的地图显示装置在进行地图显示时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的地图显示装置与地图显示方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图8是本发明实施例提供的一种车载多媒体终端的结构示意图,该车载多媒体终端可以用于执行上述实施例中提供的地图显示方法。参见图8,该车载多媒体终端800包括:
收发器110、包括有一个或一个以上计算机可读存储介质的存储器120、输入单元130、显示单元140、传感器150、音频电路160、包括有一个或者一个以上处理核心的处理器170等部件。本领域技术人员可以理解,图8中示出的车载多媒体终端结构并不构成对车载多媒体终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
收发器110可用于收发信息过程中信号的接收和发送。通过收发器110车载多媒体终端800可以和位于车辆内的其他设备进行通信。其中,通信方式包括但不限于蓝牙无线通信方式、WiFi无线通信方式等。
存储器120可用于存储软件程序以及模块,处理器170通过运行存储在存储器120的至少一条指令、至少一段程序、代码集或指令集,从而执行各种功能应用以及数据处理。存储器120主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一条指令、至少一段程序、代码集或指令集等;存储数据区可存储根据车载多媒体终端800的使用所创建的数据(比如音频数据)等。
输入单元130可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的信号输入。具体地,输入单元130可包括触敏表面以及其他输入设备。触敏表面,也称为触摸显示屏或者触控板,可收集用户在其上或附 近的触摸操作,并根据预先设定的程式驱动相应的连接装置。可选的,触敏表面可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器170,并能接收处理器170发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触敏表面。除了触敏表面,输入单元130还可以包括其他输入设备。具体地,其他输入设备可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)等中的一种或多种。
显示单元140可用于显示由用户输入的信息或提供给用户的信息以及车载多媒体终端800的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元140可包括显示面板,可选的,可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等形式来配置显示面板。进一步的,触敏表面可覆盖显示面板,当触敏表面检测到在其上或附近的触摸操作后,传送给处理器170以确定触摸事件的类型,随后处理器170根据触摸事件的类型在显示面板上提供相应的视觉输出。虽然在图8中,触敏表面与显示面板是作为两个独立的部件来实现输入和输出功能,但是在某些实施例中,可以将触敏表面与显示面板集成而实现输入和输出功能。
车载多媒体终端800还可包括至少一种传感器150,比如光传感器。具体地,光传感器可包括环境光传感器,其中环境光传感器可根据环境光线的明暗来调节显示面板的亮度。
音频电路160、扬声器161,传声器162可提供用户与车载多媒体终端800之间的音频接口。音频电路160可将接收到的音频数据转换后的电信号,传输到扬声器161,由扬声器161转换为声音信号输出;另一方面,传声器162将收集的声音信号转换为电信号,由音频电路160接收后转换为音频数据,再将音频数据输出处理器170处理后,经收发器110发送给诸如车辆内的其他设备,或者将音频数据输出至存储器120以便进一步处理。
处理器170是车载多媒体终端800的控制中心,利用各种接口和线路连接整个车载多媒体终端的各个部分,通过运行或执行存储在存储器120内的软件 程序和/或模块,以及调用存储在存储器120内的数据,执行车载多媒体终端800的各种功能和处理数据,从而对车载多媒体终端进行整体监控。可选的,处理器170可包括一个或多个处理核心;优选的,处理器170可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器170中。
具体在本实施例中,车载多媒体终端的显示单元可以是触摸屏显示器,车载多媒体终端的处理器170会运行存储在存储器120中的至少一条指令、至少一段程序、代码集或指令集,从而实现上述方法实施例所述的地图显示方法。
在另一个示例性的实施例中,本发明实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述车载多媒体终端的处理器加载并执行以实现上述实施例所述的地图显示方法。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本发明的较佳实施例,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。
Claims (13)
- 一种地图显示方法,其特征在于,应用于终端,所述方法包括:获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据;根据所述当前位置和所述目标导航数据,确定待叠加显示在所述真实场景图像中的虚拟导航提示信息;根据拍摄所述真实场景图像的目标设备当前的设备标定参数,确定所述虚拟导航提示信息叠加显示在所述真实场景图像中的第一位置;对所述目标设备当前的设备标定参数进行校验检测;当所述目标设备当前的设备标定参数通过所述校验检测时,将所述虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
- 根据权利要求1所述的方法,其特征在于,所述设备标定参数中包括外参和内参,所述方法还包括:对于为所述外参设置的搜索粒度,计算当前搜索粒度对应的参数值搜索范围内每一个搜索参数值的代价函数值,确定当前搜索粒度下具有最小代价函数值的搜索参数值;基于本次搜索得到的搜索参数值,确定下一个搜索粒度对应的参数值搜索范围,并按照本次搜索方式,确定所述下一个搜索粒度下具有最小代价函数值的搜索参数值,依此类推,直至得到最小搜索粒度下具有最小代价函数值的目标搜索参数值,所述下一个搜索粒度的粒度值小于所述当前搜索粒度;将所述目标搜索参数值确定为所述目标设备当前的外参。
- 根据权利要求2所述的方法,其特征在于,所述计算当前搜索粒度对应的参数值搜索范围内每一个搜索参数值的代价函数值,包括:获取真实场景图像中的点目标物;对于所述每一个搜索参数值,根据所述搜索参数值,确定与所述点目标物匹配的虚拟导航提示信息在所述真实场景图像中的第二位置;计算所述第二位置与所述点目标物的第三位置之间的直线距离,所述第三位置为在所述真实场景图像中检测到的所述点目标物的位置;将所述直线距离作为所述搜索参数值的代价函数值。
- 根据权利要求2所述的方法,其特征在于,所述计算当前搜索粒度对应的参数值搜索范围内每一个搜索参数值的代价函数值,包括:获取真实场景图像中的线目标物;对于所述每一个搜索参数值,根据所述搜索参数值,确定与所述线目标物匹配的虚拟导航提示信息在所述真实场景图像中的第四位置;计算与所述第四位置与所述线目标物的第五位置之间的法线距离,所述第五位置为在所述真实场景图像中检测到的所述线目标物的位置;将所述法线距离作为所述搜索参数值的代价函数值。
- 根据权利要求1至4中任一权利要求所述的方法,其特征在于,所述方法还包括:若在所述真实场景图像中以所述第一位置为中心点的目标图像区域中未检测到目标物,则重新对所述目标设备的设备标定参数进行标定;对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
- 根据权利要求1至4中任一权利要求所述的方法,其特征在于,所述方法还包括:统计在预设时长内得到的各个位置误差的平均值;所述位置误差是指在所述真实场景图像中以所述第一位置为中心点的目标图像区域中所检测到目标物的第六位置与所述第一位置之间的误差;若所述平均值大于预设阈值,则重新对所述目标设备的设备标定参数进行标定;对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
- 根据权利要求1至4中任一权利要求所述的方法,其特征在于,所述 方法还包括:若在所述真实场景图像中以所述第一位置为中心点的目标图像区域检测到目标物,且位置误差大于预设阈值,则重新对所述目标设备的设备标定参数进行标定;所述位置误差是指目标物在所述目标图像区域中所在的第六位置与所述第一位置之间的误差;对重新标定的设备标定参数进行校验检测;当所述重新标定的设备标定参数通过所述校验检测时,根据所述重新标定的设备标定参数,将确定的虚拟导航提示信息叠加显示在当前位置的真实场景图像中。
- 根据权利要求1、5至7中任一权利要求所述的方法,其特征在于,所述校验检测,包括:若当前满足对所述设备标定参数的校验条件,则在所述真实场景图像中以所述第一位置为中心点的目标图像区域进行目标物检测;若检测到与所述虚拟导航提示信息匹配的目标物,且所述目标物所在的第六位置与所述第一位置之间的位置误差小于预设阈值,则确定设备标定参数通过所述校验检测。
- 根据权利要求1所述的方法,其特征在于,所述将所述虚拟导航提示信息叠加显示在所述第一位置,包括:若所述虚拟导航提示信息中包括至少两个虚拟导航提示信息,则在所述至少两个虚拟导航提示信息中,确定对当前导航影响最大的目标导航提示信息;将所述目标导航提示信息以区别与其他虚拟导航提示信息的方式进行叠加显示。
- 根据权利要求1所述的方法,其特征在于,所述将所述虚拟导航提示信息叠加显示在所述第一位置,包括:若所述虚拟导航提示信息为车道线信息,则在所述第一位置对当前行驶车道的虚拟车道线以及其他车道的虚拟车道线进行区分显示,并对所述当前行驶车道的可行驶范围进行标记;或,若所述虚拟导航提示信息为道路附属设施信息,则在所述第一位置处显示虚拟道路附属设施标记;或,若所述虚拟导航提示信息为并线提醒信息,则对所述当前行驶车道的虚拟车道线以及目标并线车道的虚拟车道线同其他虚拟车道线进行区分显示;或,若所述虚拟导航提示信息为兴趣点信息,则在所述第一位置显示所述当前位置的虚拟兴趣点标记。
- 一种地图显示装置,其特征在于,所述装置包括:获取模块,用于获取当前位置的真实场景图像以及由所述当前位置导航至目的地的目标导航数据;确定模块,用于根据所述当前位置和所述目标导航数据,确定待叠加显示在所述真实场景图像中的虚拟导航提示信息;所述确定模块,还用于根据拍摄所述真实场景图像的目标设备的设备标定参数,确定所述虚拟导航提示信息叠加显示在所述真实场景图像中的第一位置;显示模块,用于对所述目标设备当前的设备标定参数进行校验检测;当所述目标设备当前的设备标定参数通过所述校验检测时,将所述虚拟导航提示信息叠加显示在所述第一位置,得到用于进行地图显示的增强现实图像。
- 一种计算机可读存储介质,其特征在于,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至10中任一权利要求所述的地图显示方法。
- 一种终端,其特征在于,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至10中任一权利要求所述的地图显示方法。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18848064.4A EP3675084A4 (en) | 2017-08-25 | 2018-05-21 | MAP DISPLAY METHOD, DEVICE, STORAGE MEDIUM AND TERMINAL |
US16/781,817 US11578988B2 (en) | 2017-08-25 | 2020-02-04 | Map display method, device, storage medium and terminal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710740562.3A CN110019580B (zh) | 2017-08-25 | 2017-08-25 | 地图显示方法、装置、存储介质及终端 |
CN201710740562.3 | 2017-08-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/781,817 Continuation US11578988B2 (en) | 2017-08-25 | 2020-02-04 | Map display method, device, storage medium and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019037489A1 true WO2019037489A1 (zh) | 2019-02-28 |
Family
ID=65438343
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/087683 WO2019037489A1 (zh) | 2017-08-25 | 2018-05-21 | 地图显示方法、装置、存储介质及终端 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11578988B2 (zh) |
EP (1) | EP3675084A4 (zh) |
CN (1) | CN110019580B (zh) |
MA (1) | MA49961A (zh) |
WO (1) | WO2019037489A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111639975A (zh) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | 一种信息推送方法及装置 |
CN112577488A (zh) * | 2020-11-24 | 2021-03-30 | 腾讯科技(深圳)有限公司 | 导航路线确定方法、装置、计算机设备和存储介质 |
CN113483774A (zh) * | 2021-06-29 | 2021-10-08 | 阿波罗智联(北京)科技有限公司 | 导航方法、装置、电子设备及可读存储介质 |
CN115278095A (zh) * | 2022-05-11 | 2022-11-01 | 岚图汽车科技有限公司 | 一种基于融合感知的车载摄像头控制方法及装置 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102018213634A1 (de) * | 2018-08-13 | 2020-02-13 | Audi Ag | Verfahren zum Betreiben einer in einem Kraftfahrzeug angeordneten Anzeigeeinrichtung und Anzeigeeinrichtung zum Verwenden in einem Kraftfahrzeug |
CN112446915B (zh) * | 2019-08-28 | 2024-03-29 | 北京初速度科技有限公司 | 一种基于图像组的建图方法及装置 |
CN110793544B (zh) * | 2019-10-29 | 2021-12-14 | 北京百度网讯科技有限公司 | 路侧感知传感器参数标定方法、装置、设备及存储介质 |
CN111124128B (zh) * | 2019-12-24 | 2022-05-17 | Oppo广东移动通信有限公司 | 位置提示方法及相关产品 |
KR20220125148A (ko) * | 2020-01-06 | 2022-09-14 | 엘지전자 주식회사 | 영상 출력 장치 및 그것의 제어 방법 |
CN111337015B (zh) * | 2020-02-28 | 2021-05-04 | 重庆特斯联智慧科技股份有限公司 | 一种基于商圈聚合大数据的实景导航方法与系统 |
CN111627114A (zh) * | 2020-04-14 | 2020-09-04 | 北京迈格威科技有限公司 | 室内视觉导航方法、装置、系统及电子设备 |
CN113781554A (zh) * | 2020-06-10 | 2021-12-10 | 富士通株式会社 | 目标物位置的确定装置、交通动态图的建立装置及方法 |
CN111795688B (zh) * | 2020-07-17 | 2023-11-17 | 南京邮电大学 | 一种基于深度学习和增强现实的图书馆导航系统实现方法 |
CN112330819B (zh) * | 2020-11-04 | 2024-02-06 | 腾讯科技(深圳)有限公司 | 基于虚拟物品的交互方法、装置及存储介质 |
CN114972494B (zh) * | 2021-02-26 | 2024-09-10 | 魔门塔(苏州)科技有限公司 | 一种记忆泊车场景的地图构建方法及装置 |
CN113192210A (zh) * | 2021-03-19 | 2021-07-30 | 深圳市慧鲤科技有限公司 | 信息展示方法及装置、电子设备和存储介质 |
CN113240816B (zh) * | 2021-03-29 | 2022-01-25 | 泰瑞数创科技(北京)有限公司 | 基于ar和语义模型的城市精确导航方法及其装置 |
KR20220141667A (ko) * | 2021-04-13 | 2022-10-20 | 현대자동차주식회사 | 내비게이션 단말의 통합 제어 방법 및 그 방법을 제공하는 자동차 시스템 |
CN113221359B (zh) * | 2021-05-13 | 2024-03-01 | 京东鲲鹏(江苏)科技有限公司 | 一种仿真场景生成方法、装置、设备及存储介质 |
CN112991752A (zh) * | 2021-05-20 | 2021-06-18 | 武汉纵横智慧城市股份有限公司 | 基于ar与物联网道路车辆可视化展示方法、装置及设备 |
CN113306392B (zh) * | 2021-06-29 | 2022-12-13 | 广州小鹏汽车科技有限公司 | 显示方法、车载终端、车辆和计算机可读存储介质 |
CN113608614A (zh) * | 2021-08-05 | 2021-11-05 | 上海商汤智能科技有限公司 | 展示方法、增强现实装置、设备及计算机可读存储介质 |
CN113961065B (zh) * | 2021-09-18 | 2022-10-11 | 北京城市网邻信息技术有限公司 | 导航页面的显示方法、装置、电子设备及存储介质 |
CN117723070B (zh) * | 2024-02-06 | 2024-07-02 | 合众新能源汽车股份有限公司 | 地图匹配初值的确定方法及装置、电子设备及存储介质 |
CN118072390B (zh) * | 2024-02-23 | 2024-09-03 | 金锐同创(北京)科技股份有限公司 | 基于ar设备的目标检测方法、装置、计算机设备及介质 |
CN118042417B (zh) * | 2024-03-25 | 2024-08-27 | 天津大学 | 一种面向Wi-Fi低分组速率的室内被动跟踪方法 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110153198A1 (en) * | 2009-12-21 | 2011-06-23 | Navisus LLC | Method for the display of navigation instructions using an augmented-reality concept |
US8457355B2 (en) * | 2011-05-05 | 2013-06-04 | International Business Machines Corporation | Incorporating video meta-data in 3D models |
US9324151B2 (en) * | 2011-12-08 | 2016-04-26 | Cornell University | System and methods for world-scale camera pose estimation |
US9031782B1 (en) * | 2012-01-23 | 2015-05-12 | The United States Of America As Represented By The Secretary Of The Navy | System to use digital cameras and other sensors in navigation |
CN103335657B (zh) * | 2013-05-30 | 2016-03-02 | 佛山电视台南海分台 | 一种基于图像捕获和识别技术增强导航功能的方法和系统 |
KR20150087619A (ko) * | 2014-01-22 | 2015-07-30 | 한국전자통신연구원 | 증강 현실 기반의 차로 변경 안내 장치 및 방법 |
US10996473B2 (en) * | 2014-03-26 | 2021-05-04 | Atheer, Inc. | Method and apparatus for adjusting motion-based data space manipulation |
US20150317057A1 (en) * | 2014-05-02 | 2015-11-05 | Electronics And Telecommunications Research Institute | Navigation apparatus for providing social network service (sns) service based on augmented reality, metadata processor, and metadata processing method in augmented reality navigation system |
US10198865B2 (en) * | 2014-07-10 | 2019-02-05 | Seiko Epson Corporation | HMD calibration with direct geometric modeling |
EP3259704B1 (en) * | 2015-02-16 | 2023-08-23 | University Of Surrey | Three dimensional modelling |
KR101714185B1 (ko) * | 2015-08-05 | 2017-03-22 | 엘지전자 주식회사 | 차량 운전 보조장치 및 이를 포함하는 차량 |
CN106996795B (zh) * | 2016-01-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | 一种车载激光外参标定方法和装置 |
CN106092121B (zh) * | 2016-05-27 | 2017-11-24 | 百度在线网络技术(北京)有限公司 | 车辆导航方法和装置 |
KR102581359B1 (ko) * | 2016-09-02 | 2023-09-20 | 엘지전자 주식회사 | 차량용 사용자 인터페이스 장치 및 차량 |
CN106931961B (zh) * | 2017-03-20 | 2020-06-23 | 成都通甲优博科技有限责任公司 | 一种自动导航方法及装置 |
CN110573369B (zh) * | 2017-04-19 | 2022-05-17 | 麦克赛尔株式会社 | 平视显示器装置及其显示控制方法 |
US10168174B2 (en) * | 2017-05-09 | 2019-01-01 | Toyota Jidosha Kabushiki Kaisha | Augmented reality for vehicle lane guidance |
-
2017
- 2017-08-25 CN CN201710740562.3A patent/CN110019580B/zh active Active
-
2018
- 2018-05-21 MA MA049961A patent/MA49961A/fr unknown
- 2018-05-21 EP EP18848064.4A patent/EP3675084A4/en active Pending
- 2018-05-21 WO PCT/CN2018/087683 patent/WO2019037489A1/zh unknown
-
2020
- 2020-02-04 US US16/781,817 patent/US11578988B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102980570A (zh) * | 2011-09-06 | 2013-03-20 | 上海博路信息技术有限公司 | 一种实景增强现实导航系统 |
CN104520675A (zh) * | 2012-08-03 | 2015-04-15 | 歌乐株式会社 | 摄像机参数运算装置、导航系统及摄像机参数运算方法 |
CN105335969A (zh) * | 2015-10-16 | 2016-02-17 | 凌云光技术集团有限责任公司 | 一种彩色线阵相机空间校正参数的获取方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3675084A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111639975A (zh) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | 一种信息推送方法及装置 |
CN112577488A (zh) * | 2020-11-24 | 2021-03-30 | 腾讯科技(深圳)有限公司 | 导航路线确定方法、装置、计算机设备和存储介质 |
CN113483774A (zh) * | 2021-06-29 | 2021-10-08 | 阿波罗智联(北京)科技有限公司 | 导航方法、装置、电子设备及可读存储介质 |
CN113483774B (zh) * | 2021-06-29 | 2023-11-03 | 阿波罗智联(北京)科技有限公司 | 导航方法、装置、电子设备及可读存储介质 |
CN115278095A (zh) * | 2022-05-11 | 2022-11-01 | 岚图汽车科技有限公司 | 一种基于融合感知的车载摄像头控制方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110019580A (zh) | 2019-07-16 |
MA49961A (fr) | 2021-04-28 |
EP3675084A1 (en) | 2020-07-01 |
US11578988B2 (en) | 2023-02-14 |
CN110019580B (zh) | 2022-07-12 |
EP3675084A4 (en) | 2021-04-28 |
US20200173804A1 (en) | 2020-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110019580B (zh) | 地图显示方法、装置、存储介质及终端 | |
JP6763448B2 (ja) | 視覚強化ナビゲーション | |
US10677596B2 (en) | Image processing device, image processing method, and program | |
TWI574223B (zh) | 運用擴增實境技術之導航系統 | |
US10169923B2 (en) | Wearable display system that displays a workout guide | |
WO2016017254A1 (ja) | 情報処理装置、および情報処理方法、並びにプログラム | |
CN106663338A (zh) | 信息处理装置、信息处理方法和程序 | |
WO2017126172A1 (ja) | 情報処理装置、情報処理方法、及び記録媒体 | |
JP2010123121A (ja) | シースルー・ディスプレイに現実世界の対象物の位置をマークする方法及び装置 | |
US20220026981A1 (en) | Information processing apparatus, method for processing information, and program | |
EP3848674B1 (en) | Location signaling with respect to an autonomous vehicle and a rider | |
WO2020114214A1 (zh) | 导盲方法和装置,存储介质和电子设备 | |
JP2015118442A (ja) | 情報処理装置、情報処理方法およびプログラム | |
TW201200846A (en) | Global positioning device and system | |
CN109559382A (zh) | 智能导游方法、装置、终端和介质 | |
CN114608591B (zh) | 车辆定位方法、装置、存储介质、电子设备、车辆及芯片 | |
KR20200134401A (ko) | 모바일 디바이스와 연동하는 스마트 안경 작동 방법 | |
JP6487545B2 (ja) | 認知度算出装置、認知度算出方法及び認知度算出プログラム | |
KR101153127B1 (ko) | 스마트 폰의 지리정보 표시장치 | |
CN116974497A (zh) | 增强现实显示方法、装置、设备及存储介质 | |
CN118295613A (zh) | 信息显示方法及其处理装置与信息显示系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18848064; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2018848064; Country of ref document: EP; Effective date: 20200325 |