CN116242370A - Navigation processing method, device, equipment and program product based on augmented reality


Info

Publication number
CN116242370A
Authority
CN
China
Prior art keywords
vanishing point
point coordinates
frame image
current
uncertainty
Prior art date
Legal status
Pending
Application number
CN202310119560.8A
Other languages
Chinese (zh)
Inventor
Chen Hao (陈浩)
Han Bing (韩冰)
Zhang Tao (张涛)
Zhuang Hao (庄浩)
Li Jiadong (李佳栋)
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202310119560.8A
Publication of CN116242370A
Status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/265 Constructional aspects of navigation devices, e.g. housings, mountings, displays
    • G01C21/28 Navigation with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/34 Route searching; Route guidance
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Navigation (AREA)

Abstract

The present disclosure relates to augmented reality-based navigation processing methods, apparatus, devices, and program products. The method comprises the following steps: acquiring a current frame image and historical vanishing point coordinates in a historical frame image; predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, and determining predicted vanishing point coordinates; detecting the road vanishing point in the current frame image, and determining detected vanishing point coordinates; and determining current vanishing point coordinates in the current frame image based on the predicted vanishing point coordinates and the detected vanishing point coordinates. In this way, the accuracy, smoothness and anti-interference capability of vanishing point detection are improved, and more accurate and more continuous basic data are provided for subsequently calculating the yaw angle corresponding to the current frame image. This in turn greatly reduces jumps in the guiding direction of the virtual navigation mark and improves the accuracy and continuity of AR navigation.

Description

Navigation processing method, device, equipment and program product based on augmented reality
Technical Field
The disclosure relates to the technical field of maps, and in particular to a navigation processing method, device, equipment and program product based on augmented reality.
Background
In an augmented reality (Augmented Reality, AR) navigation scene, virtual navigation marks need to be continuously and correctly displayed at corresponding positions in real-world images to provide navigation guidance for a user. The pointing direction of the virtual navigation mark is determined mainly by the yaw angle of the camera, and the yaw angle of the camera is in turn obtained mainly from a magnetometer.
However, magnetometers are susceptible to magnetic field interference. When AR navigation is performed in a region with magnetic field interference, the yaw angle corresponding to each frame of image therefore contains errors and lacks smooth continuity, so that the direction indicated by the virtual navigation mark deviates from the actual direction, and the guiding direction may jump between consecutive frames.
Disclosure of Invention
In order to solve the technical problem of low accuracy of yaw angle in an AR navigation scene caused by susceptibility of magnetometers to magnetic field interference, the disclosure provides a navigation processing method, device, equipment and program product based on augmented reality.
In a first aspect, an embodiment of the present disclosure provides a navigation processing method based on augmented reality, including:
acquiring a current frame image and historical vanishing point coordinates in a historical frame image;
predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, and determining predicted vanishing point coordinates;
detecting a road vanishing point in the current frame image, and determining the detected vanishing point coordinates;
and determining the current vanishing point coordinates in the current frame image based on the predicted vanishing point coordinates and the detected vanishing point coordinates.
In a second aspect, an embodiment of the present disclosure further provides a navigation processing device based on augmented reality, including:
the data acquisition module is used for acquiring the current frame image and the historical vanishing point coordinates in the historical frame image;
the predicted vanishing point coordinate determining module is used for predicting the road vanishing point in the current frame image based on the historical vanishing point coordinate and determining the predicted vanishing point coordinate;
the detected vanishing point coordinate determining module is used for detecting the road vanishing point in the current frame image and determining the detected vanishing point coordinate;
and the current vanishing point coordinate determining module is used for determining the current vanishing point coordinate in the current frame image based on the predicted vanishing point coordinate and the detected vanishing point coordinate.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
A memory and a processor, the memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the augmented reality-based navigation processing method provided by any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the navigation processing method based on augmented reality provided by any embodiment of the present disclosure.
In a fifth aspect, the disclosed embodiments also provide a computer program product for performing the augmented reality-based navigation processing method provided by any embodiment of the disclosure.
Compared with the prior art, the technical scheme provided by the embodiments of the disclosure has at least the following advantages: on one hand, the vanishing point coordinates in the real-world image are calculated to provide a data basis for calculating the yaw angle, so that the problem of the yaw angle being interfered with by a magnetic field is avoided, and the accuracy of the yaw angle is improved by improving the accuracy of the vanishing point coordinates; on the other hand, by fusing the vanishing point coordinates detected in the current frame image with the vanishing point coordinates predicted from the historical vanishing point coordinates in the historical frame image, the current vanishing point coordinates in the current frame image can be obtained, and the smoothness and continuity of the vanishing point coordinates across consecutive frames can be improved, providing a data basis for obtaining a smooth and continuous yaw angle and thereby improving the accuracy and continuity of the subsequent guiding direction of the virtual navigation mark.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a navigation processing method based on augmented reality according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a processing procedure of an augmented reality-based navigation processing method that considers uncertainty of vanishing points according to an embodiment of the present disclosure;
fig. 3 is a flow chart of another navigation processing method based on augmented reality according to an embodiment of the present disclosure;
fig. 4 is a schematic view showing a display effect of an augmented reality navigation mark before and after yaw angle correction according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a navigation processing device based on augmented reality according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
In an augmented reality (Augmented Reality, AR) navigation scenario, a yaw angle of a navigation device may be determined from a magnetometer, and a virtual navigation mark may be rendered in a real-world image according to the yaw angle to navigate and guide a user. However, magnetometers are susceptible to magnetic field disturbances, such that yaw angle accuracy of the navigation device is low, resulting in poor rendering of virtual navigation markers.
In order to improve the accuracy of the yaw angle, vanishing point detection can be performed on the image, and a more accurate yaw angle can be calculated from the correspondence between vanishing points in the image and the yaw angle, so as to improve how well the virtual navigation mark in the AR navigation scene fits the ground features in the image. However, vanishing point detection in the related art not only requires considerable computing resources and increases the power consumption of the navigation device, but is also mainly aimed at single-frame images and lacks the capability of processing multiple consecutive frames, so that vanishing point smoothness in continuous images (such as video) is poor and anti-interference capability is lacking. Thus, in AR navigation, virtual navigation mark stuttering, guiding-direction jumps and similar conditions easily occur, reducing the accuracy and continuity of AR navigation.
Based on the above situation, the embodiments of the disclosure provide a navigation processing method based on augmented reality. The method detects a road vanishing point in a current frame image and determines detected vanishing point coordinates; predicts the road vanishing point in the current frame image using historical vanishing point coordinates in a historical frame image and determines predicted vanishing point coordinates; and fuses the detected vanishing point coordinates with the predicted vanishing point coordinates to obtain current vanishing point coordinates that incorporate the historical information. This improves the accuracy, smoothness and anti-interference capability of continuously obtaining vanishing points in images as the user travels, provides more accurate and more continuous basic data for subsequently calculating the yaw angle corresponding to the current frame image, greatly reduces jumps in the guiding direction of the virtual navigation mark, and improves the accuracy and continuity of AR navigation.
Fig. 1 is a flow chart of a navigation processing method based on augmented reality, which is provided in an embodiment of the present disclosure, and may be applied to a scene of AR navigation based on an electronic map. The electronic map can be a high-precision map/high-definition map/three-dimensional map with higher map precision, a standard-precision map/navigation map/two-dimensional map with relatively lower map precision, and the like. The navigation processing method based on the augmented reality can be executed by a navigation processing device based on the augmented reality, and the device can be realized by software and/or hardware and can be integrated on electronic equipment with certain computing capability and provided with an electronic map client. The electronic device may be, for example, a smart phone, tablet computer, palm top computer, smart wearable device, notebook computer, vehicle-mounted device, etc.
As shown in fig. 1, the navigation processing method based on augmented reality provided by the embodiment of the disclosure may include:
s110, acquiring the historical vanishing point coordinates in the current frame image and the historical frame image.
The current frame image is an image shot at the current moment. The historical frame image is an image captured at a historical moment before the current moment; it may be, for example, the frame immediately preceding the current frame image, or a frame several frames earlier. The historical vanishing point coordinates are the coordinates of the road vanishing point in the historical frame image, obtained when the historical frame image was processed. A vanishing point is the projection point, in the imaging plane, at which parallel lines in real space visually intersect under perspective deformation. In the disclosed embodiments, the vanishing point is the imaging point of the intersection of the two edge lines of the road on which the user is traveling.
Specifically, in the AR navigation scene, the electronic device generates a virtual navigation identifier by combining the map data and the navigation route, and acquires an image of a road in which the user is traveling, so that the virtual navigation identifier is projected on the road in which the user is traveling in the image through the AR imaging technology, and the user is guided in a navigation manner. In order to improve the accuracy of navigation guidance, the embodiment of the disclosure can improve the accuracy and smoothness of the yaw angle by processing the vanishing points of the road in the image before rendering the navigation mark.
In the process of processing the road vanishing point of each frame image, the electronic device fuses the history information in the history frame image before the current frame image, so as to improve the continuity and smoothness of the road vanishing point. Therefore, the electronic device acquires the current frame image, and simultaneously acquires the history frame image and the history vanishing point coordinates in the history frame image, and the history vanishing point coordinates and the history frame image are used as input data for subsequent processing.
S120, predicting the road vanishing point in the current frame image based on the historical vanishing point coordinates, and determining the predicted vanishing point coordinates.
The predicted vanishing point coordinates are coordinates of a position where a road vanishing point in the current frame image may exist, which are predicted from the history information.
Specifically, there is information continuity between the previous and subsequent frame images, so the electronic device can predict the predicted vanishing point coordinates in the current frame image by means of the relevant information in the current frame image and the historical frame image according to the historical vanishing point coordinates in the historical frame image.
In some embodiments, the electronic device may predict the road vanishing point in the current frame image by means of the motion relationship of the pixels. Namely, S120 includes: and performing sparse optical flow tracking processing based on the current frame image, the historical frame image and the historical vanishing point coordinates, and determining predicted vanishing point coordinates.
Optical flow is the instantaneous velocity with which pixels on the imaging plane move due to objects moving in space. Optical flow methods use the temporal change of pixels in an image sequence and the correlation between adjacent frames to find correspondences between the previous frame and the current frame, and thereby calculate the motion of objects between adjacent frames; the result is a motion vector describing how a moving object in three-dimensional space maps to pixel displacements in the two-dimensional image. A sparse optical flow tracking algorithm (such as the KLT algorithm) performs optical flow tracking on a sparse set of feature points in each frame of image.
Specifically, the electronic device may take the current frame image I_t, the historical frame image I_{t-1} and the historical vanishing point coordinate x_{t-1} as input data, and calculate the predicted vanishing point coordinate x'_t with the KLT algorithm as shown in the following formula:

x'_t = KLT(I_{t-1}, I_t, x_{t-1}).
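As an illustration, here is a minimal sketch of this prediction step using OpenCV's pyramidal Lucas-Kanade tracker; the function and parameter names below belong to OpenCV, while treating the vanishing point as the single tracked feature point is an assumption of this sketch rather than a detail taken from the patent:

```python
import cv2
import numpy as np

def predict_vanishing_point_klt(prev_gray, curr_gray, prev_vp):
    """Track the historical vanishing point x_{t-1} from frame I_{t-1} into
    frame I_t with pyramidal Lucas-Kanade (KLT) sparse optical flow."""
    prev_pts = np.array([[prev_vp]], dtype=np.float32)   # shape (1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    if status[0][0] == 1:                # tracking succeeded
        return tuple(next_pts[0][0])     # predicted coordinate x'_t
    return prev_vp                       # fall back to the old coordinate
```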
In other embodiments, the electronic device may predict the road vanishing point in the current frame image by means of the imaging laws of the neighboring frame images. Namely, S120 includes: and carrying out camera projection transformation on the basis of the historical vanishing point coordinates, the camera internal reference matrix and the camera relative pose from the historical frame image to the current frame image, and determining predicted vanishing point coordinates.
Wherein the camera intrinsic matrix is a matrix describing a correspondence between points in the camera coordinate system to points in the image coordinate system, which is characterized by a camera focal length and principal point coordinates (offset of the camera optical center with respect to the image plane). The camera internal parameter matrix can be determined by factory parameters of a camera for shooting images, and can also be determined by calibrating parameters of the camera. The camera relative pose is used for describing the relative relation between the pose of the camera at the current moment and the pose of the camera at the historical moment, and can be obtained through calculation of the two poses, and the pose of each moment can be obtained through measurement of a sensor associated with the camera or can be obtained through calibration of external parameters of the camera.
Specifically, the current frame image and the history frame image are at least two frame images among continuous images photographed in a short time, and it can be considered that some of the same feature points exist therein. Due to camera pose changes at different moments, these feature points may exhibit differentiated image projection characteristics when projected onto the current frame image and the historical frame image. Therefore, the transformation rule of the ground feature points in the images at different moments can be calculated according to the camera internal and external parameter differences between the current moment and the historical moment, namely the camera internal parameter matrix and the camera relative pose, and then the predicted vanishing point coordinates in the current frame image can be estimated by combining the historical vanishing point coordinates in the historical frame image.
In particular, the electronic device may use the historical vanishing point coordinate x_{t-1} (in homogeneous form), the camera intrinsic matrix C and the relative camera pose from the historical frame image to the current frame image, whose rotation part is denoted R_{t-1→t}, to calculate the predicted vanishing point coordinate x'_t according to a camera projective transformation of the form:

x'_t ~ C · R_{t-1→t} · C^{-1} · x_{t-1}.

Since a vanishing point is the image of a direction at infinity, only the rotation part of the relative pose affects its position.
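A minimal numpy sketch of this geometric prediction, under the standard projective-geometry assumption stated above (only the rotation part R of the relative pose is applied); all names here are illustrative:

```python
import numpy as np

def predict_vanishing_point_projection(prev_vp, C, R_prev_to_curr):
    """Warp the historical vanishing point through the relative camera
    rotation: x'_t ~ C @ R @ C^{-1} @ x_{t-1} in homogeneous coordinates."""
    x_prev = np.array([prev_vp[0], prev_vp[1], 1.0])
    x_curr = C @ R_prev_to_curr @ np.linalg.inv(C) @ x_prev
    return (x_curr[0] / x_curr[2], x_curr[1] / x_curr[2])  # dehomogenize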
In still other embodiments, the electronic device may integrate the motion relationship of the pixel points and the imaging law of the adjacent frame image to predict the road vanishing point in the current frame image. Namely, S120 includes: and (3) based on the current frame image, the historical vanishing point coordinates, the camera internal reference matrix and the camera relative pose, performing sparse optical flow tracking processing and camera projection transformation, and determining predicted vanishing point coordinates.
Specifically, the electronic device may use the KLT algorithm and the camera projective transformation algorithm to perform comprehensive prediction of the vanishing point of the road in the current image.
In an example, the electronic device may calculate the initial vanishing point coordinates using the KLT algorithm and the camera projective transformation algorithm, respectively, and calculate the average of the two initial vanishing point coordinates as the predicted vanishing point coordinates.
In another example, the electronic device may first execute either of the two algorithms described above, resulting in initial vanishing point coordinates; then, the electronic device uses the initial vanishing point coordinates to replace the historical vanishing point coordinates as input data, and executes another algorithm to obtain predicted vanishing point coordinates. For example, the electronic device may first use the historical vanishing point coordinates, the current frame image and the historical frame image as input data, and operate a sparse optical flow tracking algorithm (such as KLT algorithm) to obtain initial vanishing point coordinates; and then, the camera projective transformation is operated according to the initial vanishing point coordinates, the camera internal reference matrix and the camera relative pose, so as to obtain final predicted vanishing point coordinates. For another example, the electronic device may first operate the projective transformation of the camera with the historical vanishing point coordinates, the camera reference matrix and the camera relative pose to obtain initial vanishing point coordinates; then, the initial vanishing point coordinates, the current frame image and the historical frame image are used as input data, and a sparse optical flow tracking algorithm (such as a KLT algorithm) is operated to obtain predicted vanishing point coordinates. Thus, the accuracy of predicting the vanishing point coordinates can be improved to a certain extent.
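A sketch of the chained variant described above, reusing the two illustrative functions from the previous sketches: the camera-projection step supplies the initial estimate, which then replaces the historical coordinate as the input point for KLT tracking.

```python
def predict_vanishing_point_combined(prev_gray, curr_gray, prev_vp,
                                     C, R_prev_to_curr):
    # Stage 1: geometric prediction from the relative camera pose.
    initial_vp = predict_vanishing_point_projection(prev_vp, C, R_prev_to_curr)
    # Stage 2: refine by tracking the initial estimate with optical flow.
    return predict_vanishing_point_klt(prev_gray, curr_gray, initial_vp)
```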
S130, detecting a road vanishing point in the current frame image, and determining the detected vanishing point coordinates.
The detected vanishing point coordinates refer to coordinates of a road vanishing point obtained by performing image processing on the current frame image.
Specifically, the electronic device may perform corresponding image processing on the current frame image by using a related technology of vanishing point detection, to obtain detected vanishing point coordinates. For example, the detected vanishing point coordinates in the current frame image may be estimated using a Gabor filter in combination with a voting method; for another example, the current frame image can be input into a pre-trained deep neural network model for determining the vanishing point of the road to detect the vanishing point, so as to obtain the detected vanishing point coordinates; for another example, the current frame image may be subjected to processing such as line detection and intersection screening, so as to obtain the detected vanishing point coordinates.
And S140, determining the current vanishing point coordinates in the current frame image based on the predicted vanishing point coordinates and the detected vanishing point coordinates.
The coordinates of the current vanishing point refer to the coordinates of the road vanishing point in the current frame image which are finally determined.
Specifically, according to the above description, if only the detected vanishing point coordinates in the current frame image are utilized, a problem of discontinuity and non-smoothness between vanishing point coordinates in each frame image may occur. The predicted vanishing point coordinates are calculated by utilizing the historical vanishing point coordinates and the information continuity in the front and back adjacent frame images. Therefore, in the embodiment of the present disclosure, the predicted vanishing point coordinate and the detected vanishing point coordinate may be fused to obtain the current vanishing point coordinate. For example, average filtering, weighted filtering, or the like may be performed on the predicted vanishing point coordinates and the detected vanishing point coordinates, and a final vanishing point coordinate may be output as the current vanishing point coordinate in the current frame image. The current vanishing point coordinates not only keep the information of vanishing points in the current frame image, but also fuse the continuity information in the history frame image, thereby ensuring the smoothness between the current vanishing point coordinates and the history vanishing point coordinates in the history frame image and improving the anti-interference capability.
In some embodiments, in order to further improve the accuracy and reliability of the vanishing point coordinates, an uncertainty can be attached to each calculated result. On one hand, this uncertainty characterizes how reliable the vanishing point coordinates are; on the other hand, the weights for fusing vanishing point coordinates can be determined from the uncertainties, further improving the accuracy of the current vanishing point coordinates.
On the basis of this embodiment, referring to fig. 2, the augmented reality-based navigation processing method may include:
s210, acquiring the coordinates and the history uncertainty of the history vanishing point in the current frame image and the history frame image.
Here, the historical uncertainty characterizes the degree of reliability of the historical vanishing point coordinates: the smaller the uncertainty, the higher the reliability. The uncertainty can be characterized by confidence, standard deviation, covariance, or a similar measure.
Specifically, the electronic device may acquire its uncertainty, i.e., the history uncertainty, while acquiring the history vanishing point coordinates.
In an example, when the historical frame image is the first frame image, the historical uncertainty may be set empirically, or taken from a value assigned when evaluating the image quality of the first frame image.
In another example, the historical uncertainty may be calculated from relevant feature data in the historical frame image based on the manner in which the measure of uncertainty is calculated and the requirements of the input data. For example, when the history frame image is the first frame image, corresponding processing can be performed in this manner to obtain the history uncertainty. It will be appreciated that when the historical frame image is not the first frame image, the historical uncertainty may also be processed in this manner.
In yet another example, the historical uncertainty may also be derived in a manner that calculates the uncertainty of the current vanishing point coordinates (i.e., the current uncertainty) in accordance with embodiments of the present disclosure. For example, when the history frame image is not the first frame image and the history frame image is processed as the current frame image at the corresponding time, the history frame image may be calculated by fusing the history information in the embodiment of the disclosure. In this example, the history uncertainty may be recorded in association information of the history frame image together with the history vanishing point coordinates, and then the electronic device may directly read the association information to obtain the history uncertainty.
S220, predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, determining predicted vanishing point coordinates, and determining the uncertainty of prediction corresponding to the predicted vanishing point coordinates based on the historical uncertainty.
Wherein the uncertainty of the prediction is used to characterize the degree of reliability of the predicted vanishing point coordinates.
Specifically, because the predicted vanishing point coordinates are derived from the historical vanishing point coordinates, the uncertainty of the prediction is largely related to the historical uncertainty. Based on this, the electronic device may calculate the reliability of the predicted vanishing point coordinates, that is, the uncertainty of the prediction, from the historical uncertainty after calculating the predicted vanishing point coordinates in the current frame image from the historical vanishing point coordinates.
Taking covariance as the uncertainty measure as an example, on the basis of the historical covariance P_{t-1}, the electronic device can calculate the prediction uncertainty of the predicted vanishing point coordinates, i.e. the prediction covariance P'_t, according to the following formula:

P'_t = P_{t-1} + Q.

Here Q is a preset uncertainty increment. Q may be set empirically (e.g., as an identity matrix), or obtained by solving for the unknown parameter from the rule of information change between consecutive frames, using the sparse optical flow tracking process and/or the camera projection transformation process.
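In code the prediction step is a single addition; a sketch with Q defaulting to the identity matrix, the empirical setting mentioned above:

```python
import numpy as np

def predict_covariance(P_prev, Q=None):
    """P'_t = P_{t-1} + Q: propagating the vanishing point forward can only
    add uncertainty, never remove it."""
    if Q is None:
        Q = np.eye(2)   # empirical default for the uncertainty increment
    return P_prev + Q
```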
S230, detecting a road vanishing point in the current frame image, determining a detected vanishing point coordinate, and determining the detected uncertainty corresponding to the detected vanishing point coordinate based on a preset uncertainty index and image features associated with the detected vanishing point coordinate in the current frame image.
Wherein the uncertainty of the detection is used to characterize the degree of reliability of the detected vanishing point coordinates. The preset uncertainty index refers to a measurement of a preselected uncertainty, and may be any one of confidence, standard deviation, covariance, and uncertainty, for example.
Specifically, in the process of detecting the vanishing point of the current frame image, the electronic device may extract corresponding image feature data from the current frame image according to a calculation mode of a preset uncertainty index and a requirement of input data, and calculate the uncertainty of detection of the detected vanishing point coordinate.
In an example, when the vanishing point detection mode combines a Gabor filter with a voting method, or uses straight line detection with intersection point screening, the corresponding feature data may be extracted from the current frame image according to the probability-statistics calculation of the preset uncertainty index, and the detection uncertainty calculated in that probability-statistics manner.
In another example, in the case of vanishing point detection using a deep neural network model, the uncertainty of the detection may be taken as one of the output parameters of the model to participate in training of the neural network model. Thus, the uncertainty of the detection can be output simultaneously when the deep neural network model is run to output the detected vanishing point coordinates.
In some embodiments, the real-world images involved in walking and riding navigation scenes contain more noise, for example because the environment around the road is more complex, so the images contain more, and more complex, lines. The computing resource requirements of related-art vanishing point detection methods grow with the number and complexity of lines in the image, and their accuracy decreases as line complexity grows. Therefore, in order to reduce the computing resources consumed by vanishing point detection, so that it can run smoothly on a mobile terminal or vehicle-mounted terminal with relatively weak computing power, and in order to further improve detection accuracy, vanishing point detection can be performed through the following steps S231 to S233.
S231, performing straight line detection on the current frame image to generate at least one initial straight line.
Specifically, the electronic device may perform line extraction on the current frame image using a line extractor such as EDLines, LSD, or the Hough transform, resulting in a plurality of extracted lines, i.e., initial straight lines.
S232, screening all the initial straight lines based on preset conditions to obtain target straight lines.
The preset conditions are conditions set in advance for line filtering. They include at least one of: the line lies on the ground; the included angle between the line and the camera optical axis is within a preset angle range; the length of the line is within a preset length range; and the horizontal distance between the line and the camera optical center is within a preset distance range. The preset angle range, preset length range and preset distance range are thresholds of the respective dimensions set in advance, and can all be set empirically.
Specifically, the initial straight lines may include lines from non-road areas, which not only increase the data processing load and cause redundant resource consumption, but also interfere with subsequent vanishing point detection and reduce its accuracy. Therefore, the electronic device filters each initial straight line using the preset conditions as the filtering criteria, and the initial straight lines that remain after filtering are the target straight lines. Performing vanishing point detection on the target straight lines reduces resource consumption, improves detection efficiency, reduces the influence of noise lines, and improves detection accuracy.
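A sketch of steps S231 and S232 together, using OpenCV's probabilistic Hough transform (one of the extractors named above). The length and angle thresholds are illustrative placeholders, and the 2-D orientation test stands in for the ground/optical-axis conditions, which would require camera extrinsics:

```python
import cv2
import numpy as np

def detect_and_filter_lines(gray, min_len=40.0, angle_range=(15.0, 75.0)):
    """S231: extract initial line segments from the current frame image.
    S232: keep only segments whose length and orientation fall inside the
    preset ranges, yielding the target straight lines."""
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=min_len, maxLineGap=5)
    if segments is None:
        return []
    lo, hi = angle_range
    target_lines = []
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        length = np.hypot(x2 - x1, y2 - y1)
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        angle = min(angle, 180.0 - angle)   # fold into [0, 90] degrees
        # Road edge lines converging to the vanishing point are typically
        # diagonal; near-horizontal and near-vertical segments are noise.
        if length >= min_len and lo <= angle <= hi:
            target_lines.append((x1, y1, x2, y2))
    return target_lines
```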
S233, detecting vanishing points based on the target straight lines, determining detected vanishing point coordinates, and determining the detected uncertainty based on the preset uncertainty index, the target straight lines and the detected vanishing point coordinates.
Specifically, any two non-parallel target straight lines intersect at a point, so the set of target straight lines yields a number of intersection points. The electronic device can screen out the most reliable intersection points that meet certain requirements. If one intersection point is selected, it is directly taken as the detected vanishing point; if several are selected, one may be chosen at random as the detected vanishing point, or the selected intersection points may be sorted by reliability and the one in second or middle position chosen, so as to suppress errors in the reliability calculation. Reliability here may be characterized by the number of target straight lines passing through an intersection point: the more target straight lines pass through a given intersection point, the higher its reliability.
In addition, the electronic device can determine required input data according to the calculation requirement of the preset uncertainty index by using each target straight line and the detected vanishing point coordinates, and calculate the detected uncertainty according to the calculation mode of the preset uncertainty index.
For example, when the preset uncertainty index is an uncertainty measure, the perpendicular distance between the detected vanishing point and each target straight line can be calculated, the average of these perpendicular distances computed, and the maximum difference between any perpendicular distance and the average determined as the detection uncertainty.
For another example, when the uncertainty index is set as covariance, the uncertainty of the detection may be calculated by using the calculation formula of the perpendicular distance and covariance.
In some embodiments, to further improve the accuracy and computational efficiency of the detected vanishing point and the detection uncertainty, vanishing point detection may be performed using the random sample consensus (RANSAC) algorithm. That is, S233 includes: screening the intersection points formed by the target straight lines using the RANSAC algorithm, and determining the detected vanishing point coordinates; screening, from the target straight lines, the associated straight lines within a preset area range of the detected vanishing point coordinates; and determining the covariance corresponding to the detected vanishing point coordinates as the detection uncertainty, based on the perpendicular distances between the detected vanishing point coordinates and each associated straight line.
The preset area range is a preset intersection screening range, and can be determined by a preset distance radius, wherein the preset distance radius can be empirically set.
Specifically, the electronic device calculates a number of intersection points from the target straight lines. A RANSAC procedure is then run over the intersection points obtained by intersecting any two target straight lines, to screen out the most reliable intersection point meeting the requirements, which is taken as the detected vanishing point coordinates. The electronic device may then determine whether each of the above intersection points lies within the preset area range of the detected vanishing point coordinates; if so, the target straight lines corresponding to that intersection point are determined to be associated straight lines (also called inlier lines) correlated with the detected vanishing point coordinates. The electronic device then calculates the perpendicular distance between the detected vanishing point coordinates and each associated straight line; for example, with 9 associated straight lines, 9 perpendicular distances are obtained. These perpendicular distances are then used to compute a covariance as the detection uncertainty: for example, by applying the covariance formula to the 9 perpendicular distances, or by computing their average and determining the product of the average and the identity matrix as the detection uncertainty.
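A compact sketch of this RANSAC selection and the covariance estimate from the inlier perpendicular distances; the inlier radius, iteration count and the scaled-identity covariance are assumptions of the sketch, not values from the patent:

```python
import numpy as np

def line_params(seg):
    """Implicit form ax + by + c = 0 with (a, b) normalized to unit length,
    so |ax + by + c| is the perpendicular distance of point (x, y)."""
    x1, y1, x2, y2 = seg
    a, b = y2 - y1, x1 - x2
    norm = np.hypot(a, b)
    return a / norm, b / norm, -(a * x1 + b * y1) / norm

def ransac_vanishing_point(segments, inlier_radius=10.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    params = [line_params(s) for s in segments]
    best_vp, best_dists = None, []
    for _ in range(iters):
        i, j = rng.choice(len(params), size=2, replace=False)
        (a1, b1, c1), (a2, b2, c2) = params[i], params[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:            # near-parallel pair, skip
            continue
        x = (b1 * c2 - b2 * c1) / det  # intersection of the two lines
        y = (a2 * c1 - a1 * c2) / det
        # Associated (inlier) lines: those within the preset radius.
        dists = [abs(a * x + b * y + c) for a, b, c in params]
        inliers = [d for d in dists if d <= inlier_radius]
        if len(inliers) > len(best_dists):
            best_vp, best_dists = (x, y), inliers
    # Detection covariance: identity scaled by the mean inlier distance.
    scale = np.mean(best_dists) if best_dists else inlier_radius
    return best_vp, np.eye(2) * scale
```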
S240, determining the current vanishing point coordinate and the current uncertainty in the current frame image based on the predicted vanishing point coordinate, the predicted uncertainty, the detected vanishing point coordinate and the detected uncertainty.
Wherein the current uncertainty is used to characterize the reliability of the current vanishing point coordinates.
Specifically, according to the above description, the uncertainty of prediction characterizes the degree of reliability of the predicted vanishing point coordinates, and the uncertainty of detection characterizes the degree of reliability of the detected vanishing point coordinates. Then, the electronic device may determine, as the current vanishing point coordinate, one of the predicted vanishing point coordinate and the detected vanishing point coordinate having a smaller uncertainty, based on the magnitude relation of the values of the predicted uncertainty and the detected uncertainty. Likewise, a smaller value of uncertainty may be determined as the current uncertainty.
Alternatively, the electronic device can fuse the predicted vanishing point coordinates and the detected vanishing point coordinates, using the prediction uncertainty and the detection uncertainty as weights, to obtain the current vanishing point coordinates; the current uncertainty can be calculated from the prediction uncertainty and the detection uncertainty in the same fusion manner.
In some embodiments, S240 includes: and respectively taking the predicted uncertainty and the detected uncertainty as weights, and carrying out weighted fusion processing on the predicted vanishing point coordinates and the detected vanishing point coordinates by using Kalman filtering to generate current vanishing point coordinates and the current uncertainty.
Specifically, according to the calculation mode of the kalman filtering, the electronic device respectively takes the predicted uncertainty and the detected uncertainty as weighting weights, performs weighted filtering processing on the predicted vanishing point coordinates and the detected vanishing point coordinates, and can calculate and output the current vanishing point coordinates and the current uncertainty. By such weighted filtering processing, the accuracy of the current vanishing point coordinates and the current uncertainty can be further improved.
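Because both the state transition and the measurement model are the identity here, the fusion reduces to the standard Kalman update; a sketch, where P_pred comes from the prediction step above and R_det from the detector:

```python
import numpy as np

def fuse_vanishing_points(x_pred, P_pred, x_det, R_det):
    """Kalman update: weight the predicted and detected coordinates by their
    uncertainties; returns the current coordinate and current covariance."""
    x_pred = np.asarray(x_pred, dtype=float)
    x_det = np.asarray(x_det, dtype=float)
    K = P_pred @ np.linalg.inv(P_pred + R_det)   # Kalman gain
    x_curr = x_pred + K @ (x_det - x_pred)       # current vanishing point
    P_curr = (np.eye(2) - K) @ P_pred            # current uncertainty
    return x_curr, P_curr
```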
In some embodiments, the current vanishing point coordinates and current uncertainty calculated in the above embodiments can serve as the basis for rendering the virtual navigation identifier (i.e., the augmented reality navigation mark) in AR navigation, so as to improve the rendering effect and display accuracy of the augmented reality navigation mark. As shown in fig. 3, the navigation processing method based on augmented reality provided in this embodiment includes:
s310, acquiring the coordinates and the history uncertainty of the history vanishing point in the current frame image and the history frame image.
S320, predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, determining predicted vanishing point coordinates, and determining the predicted uncertainty corresponding to the predicted vanishing point coordinates based on the historical uncertainty.
S330, detecting the road vanishing point in the current frame image, determining the detected vanishing point coordinates, and determining the detection uncertainty corresponding to the detected vanishing point coordinates based on the preset uncertainty index and the image features associated with the detected vanishing point coordinates in the current frame image.
S340, determining the current vanishing point coordinate and the current uncertainty in the current frame image based on the predicted vanishing point coordinate, the predicted uncertainty, the detected vanishing point coordinate and the detected uncertainty.
S350, determining a current yaw angle and a current pitch angle corresponding to the current frame image based on the current vanishing point coordinates.
Specifically, during AR navigation, the augmented reality navigation mark (such as a virtual stereoscopic arrow or a virtual guide line) is positioned and displayed in the current frame image according to the yaw angle, the pitch angle, and so on. However, the yaw angle (i.e., the initial yaw angle) and pitch angle (i.e., the initial pitch angle) in the camera pose at the moment the current frame image was shot may carry a certain deviation, causing the augmented reality navigation mark to be displayed inaccurately. By the principles of projective geometry, a definite computational relation exists between the road vanishing point in the image and the camera's yaw and pitch angles. Therefore, in this embodiment, the current vanishing point coordinates and the current uncertainty may be used to calculate a new, more accurate yaw angle (i.e., the current yaw angle) and pitch angle (i.e., the current pitch angle) for the current frame image, providing a more accurate and reliable rendering basis for the augmented reality navigation mark.
The initial yaw angle and the initial pitch angle can be measured by a sensor (such as a magnetometer) associated with the camera, and can also be calculated by a camera external parameter calibration mode.
In some embodiments, the electronic device may determine the current yaw angle from the current vanishing point coordinates according to a correspondence of the yaw angle and the vanishing point coordinates in the image. For example, the correspondence between the yaw angle and the vanishing point in the image is: if the yaw angle of the camera is larger than 0, the vanishing point of the road moves rightwards from the center of the road, and the larger the yaw angle is, the larger the distance that the vanishing point moves rightwards is; if the yaw angle of the camera is less than 0, the vanishing point of the road moves leftwards from the center of the road, and the smaller the yaw angle is, the larger the distance the vanishing point moves leftwards. Then, the electronic device can calculate the current yaw angle according to the deviation direction and the deviation distance of the current vanishing point coordinate relative to the road center.
Likewise, the correspondence between the pitch angle and the vanishing point coordinates in the image is: if the pitch angle of the camera is greater than 0, the road vanishing point moves upwards from the vertical reference position it occupies when the pitch angle is 0, and the greater the pitch angle, the greater the upward displacement of the vanishing point; if the pitch angle of the camera is smaller than 0, the road vanishing point moves downwards from the road vertical reference, and the smaller the pitch angle, the greater the downward displacement. The electronic device can then calculate the current pitch angle from the deviation direction and deviation distance of the current vanishing point coordinates relative to the road vertical reference.
In other embodiments, S350 includes: if the current uncertainty is smaller than the preset threshold, determining a current yaw angle and a current pitch angle based on the current vanishing point coordinates and the camera internal reference matrix; and if the current uncertainty is greater than or equal to the preset threshold value, respectively determining an initial yaw angle and an initial pitch angle corresponding to the current frame image as the current yaw angle and the current pitch angle.
The preset threshold is a preset uncertainty critical value, which can be determined according to the recall rate required by the service. The preset threshold may be a single numerical value or a threshold matrix matching the covariance dimensions.
Specifically, in view of the fact that the current vanishing point coordinates may also be inaccurate, the determination manner of the current yaw angle and the current pitch angle is determined according to the current uncertainty in the embodiment, so as to further ensure the accuracy of the angle according to which the augmented reality navigation mark is rendered.
Illustratively, taking uncertainty as a two-dimensional covariance matrix as an example, the electronic device compares the current uncertainty to a preset threshold.
For example, when the preset threshold is a single value, a determinant of the two-dimensional covariance matrix may be calculated, and the determinant calculation result may be compared with the preset threshold.
For another example, when the preset threshold is a single value, another threshold may be calculated from the preset threshold and an empirically set threshold coefficient; the two values are then placed on the diagonal of a two-dimensional matrix to form a two-dimensional threshold matrix. Alternatively, the preset threshold may itself be a two-dimensional threshold matrix containing element values on the diagonal. In this case, the diagonal elements of the two-dimensional covariance matrix can be compared with the corresponding elements of the two-dimensional threshold matrix, representing the degrees of reliability of the current vanishing point coordinates in the x and y directions respectively.
If the comparison shows that the current uncertainty is greater than or equal to the preset threshold, the current vanishing point coordinates are not sufficiently reliable, and the yaw angle calculated from them may likewise be inaccurate. In this case, the initial yaw angle and initial pitch angle can be used directly as the rendering basis of the augmented reality navigation mark; that is, the initial yaw angle and initial pitch angle are determined as the current yaw angle and current pitch angle respectively.
If the comparison shows that the current uncertainty is smaller than the preset threshold, the current vanishing point coordinates are sufficiently reliable, and the current yaw angle and current pitch angle can be calculated from them. At this time, the electronic device may calculate the current yaw angle and the current pitch angle from the current vanishing point coordinate x_t (in homogeneous form) and the camera intrinsic matrix C according to the following formulas:

(α, β, γ)^T = C^{-1} · x_t,

yaw = arctan(α / γ), pitch = arctan(β / γ),

where α, β and γ denote the first, second and third components, respectively, of the vector obtained by the first formula.
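A sketch of that back-projection; the homogeneous pixel convention and the sign of the pitch component depend on the camera coordinate frame and are assumptions here:

```python
import numpy as np

def angles_from_vanishing_point(vp, C):
    """Back-project the current vanishing point into a viewing ray
    (alpha, beta, gamma) = C^{-1} x_t, then read off yaw and pitch (radians)."""
    alpha, beta, gamma = np.linalg.inv(C) @ np.array([vp[0], vp[1], 1.0])
    yaw = np.arctan2(alpha, gamma)    # left/right deviation from the road axis
    pitch = np.arctan2(beta, gamma)   # up/down deviation
    return yaw, pitch
```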
And S360, rendering the augmented reality navigation mark based on the current yaw angle and the current pitch angle, and displaying a rendering result in the current frame image.
Specifically, the electronic device determines the rendering and displaying directions of the augmented reality navigation mark according to the current yaw angle and the current pitch angle, renders the rendering and displaying directions into the current frame image by utilizing the related rendering settings, and displays the rendering results in the visual screen. Therefore, the rendering azimuth of the augmented reality navigation mark of the AR navigation can be calculated by using vanishing point coordinates with higher accuracy, better smoothness and stronger anti-interference capability, so that the phenomenon of jump of the guiding direction of the augmented reality navigation mark is reduced to a great extent, and the accuracy and the continuity of the AR navigation are improved.
Referring to fig. 4, the voice and text labels of the AR navigation display a "go straight" guidance. However, in the left diagram (a), because the initial yaw angle carries an erroneous offset, the augmented reality navigation mark 410, whose yaw angle has not been corrected, does not point along the straight direction of the road but deviates toward the right side of the road, so incorrect AR navigation guidance is likely to occur.
After the processing of the above embodiments, the current vanishing point coordinates can be used to obtain a more accurate current yaw angle for rendering. As shown in the right diagram (b) of fig. 4, the yaw-corrected augmented reality navigation mark 420 points along the straight direction of the road, consistent with the navigation guidance displayed by the text label, thereby improving the correctness and intuitiveness of the AR navigation guidance.
Fig. 5 is a schematic structural diagram of a navigation processing device based on augmented reality according to an embodiment of the present disclosure. As shown in fig. 5, an augmented reality-based navigation processing device 500 provided by an embodiment of the present disclosure may include:
a data acquisition module 510, configured to acquire a current frame image and historical vanishing point coordinates in a historical frame image;
the predicted vanishing point coordinate determining module 520 is configured to predict a road vanishing point in the current frame image based on the historical vanishing point coordinates, and determine predicted vanishing point coordinates;
the detected vanishing point coordinate determining module 530 is configured to detect a road vanishing point in the current frame image, and determine detected vanishing point coordinates;
the current vanishing point coordinate determining module 540 is configured to determine the current vanishing point coordinate in the current frame image based on the predicted vanishing point coordinate and the detected vanishing point coordinate.
In some embodiments, the augmented reality-based navigation processing device 500 further involves an uncertainty that characterizes the degree of reliability of vanishing point coordinates;
accordingly, the data acquisition module 510 is further configured to:
acquiring the historical uncertainty corresponding to the historical vanishing point coordinates;
the predicted vanishing point coordinate determining module 520 is configured to:
predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, determining predicted vanishing point coordinates, and determining predicted uncertainty corresponding to the predicted vanishing point coordinates based on the historical uncertainty;
the detected vanishing point coordinate determining module 530 is configured to:
detecting a road vanishing point in the current frame image, determining a detected vanishing point coordinate, and determining a detected uncertainty corresponding to the detected vanishing point coordinate based on a preset uncertainty index and image features associated with the detected vanishing point coordinate in the current frame image;
the current vanishing point coordinate determining module 540 is configured to:
the current vanishing point coordinates and the current uncertainty in the current frame image are determined based on the predicted vanishing point coordinates, the predicted uncertainty, the detected vanishing point coordinates and the detected uncertainty.
In some embodiments, the current vanishing point coordinate determining module 540 is specifically configured to:
taking the predicted uncertainty and the detected uncertainty as weights, respectively, and performing weighted fusion of the predicted vanishing point coordinates and the detected vanishing point coordinates using Kalman filtering, to generate the current vanishing point coordinates and the current uncertainty.
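A minimal sketch of this fusion as a single Kalman update step, assuming a 2D state (the vanishing point coordinates), an identity observation model, and the predicted and detected covariances playing the roles of process and measurement uncertainty:

```python
import numpy as np

def fuse_vanishing_points(x_pred, P_pred, x_det, R_det):
    # Kalman gain: a small predicted covariance P_pred pulls the estimate
    # toward the prediction, a small detected covariance R_det pulls it
    # toward the detection, i.e. the uncertainties act as fusion weights.
    K = P_pred @ np.linalg.inv(P_pred + R_det)
    x_cur = np.asarray(x_pred) + K @ (np.asarray(x_det) - np.asarray(x_pred))
    P_cur = (np.eye(2) - K) @ P_pred   # current (fused) uncertainty
    return x_cur, P_cur                # current vanishing point coordinates and uncertainty
```

With an identity observation model, this update reduces to a covariance-weighted average of the predicted and detected coordinates, which is exactly the weighted fusion described above.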
In some embodiments, the augmented reality-based navigation processing device 500 further includes:
the current yaw angle determining module is used for determining a current yaw angle and a current pitch angle corresponding to the current frame image based on the current vanishing point coordinate after determining the current vanishing point coordinate and the current uncertainty in the current frame image based on the predicted vanishing point coordinate, the predicted uncertainty, the detected vanishing point coordinate and the detected uncertainty;
and the AR rendering module is used for rendering the augmented reality navigation mark based on the current yaw angle and the current pitch angle and displaying a rendering result in the current frame image.
Further, the current yaw angle determination module is specifically configured to:
if the current uncertainty is smaller than the preset threshold, determining the current yaw angle and the current pitch angle based on the current vanishing point coordinates and the camera intrinsic matrix;
if the current uncertainty is greater than or equal to a preset threshold value, respectively determining an initial yaw angle and an initial pitch angle corresponding to the current frame image as a current yaw angle and a current pitch angle; the initial yaw angle and the initial pitch angle are respectively the yaw angle and the pitch angle when the current frame image is shot.
In some embodiments, the predicted vanishing point coordinate determining module 520 is specifically configured to predict the road vanishing point in the current frame image based on the historical vanishing point coordinates and determine the predicted vanishing point coordinates in any one of the following ways:
performing sparse optical flow tracking processing based on the current frame image, the historical frame image and the historical vanishing point coordinates, and determining predicted vanishing point coordinates;
performing camera projection transformation based on the historical vanishing point coordinates, the camera intrinsic matrix and the camera relative pose from the historical frame image to the current frame image, and determining the predicted vanishing point coordinates;
combining sparse optical flow tracking processing and camera projection transformation based on the current frame image, the historical vanishing point coordinates, the camera intrinsic matrix and the camera relative pose, and determining the predicted vanishing point coordinates (a sketch of the first two strategies follows this list).
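A minimal Python sketch of the first two prediction strategies, assuming OpenCV and 8-bit grayscale frames; the third strategy would combine the two results (for example by cross-checking them), which is omitted here:

```python
import cv2
import numpy as np

def predict_by_optical_flow(hist_img, cur_img, vp_hist):
    # Strategy 1: track the historical vanishing point into the current
    # frame with sparse (pyramidal Lucas-Kanade) optical flow.
    pts = np.array([[vp_hist]], dtype=np.float32)   # shape (1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(hist_img, cur_img, pts, None)
    return tuple(nxt[0, 0]) if status[0, 0] == 1 else None

def predict_by_projection(vp_hist, C, R_hist_to_cur):
    # Strategy 2: a vanishing point is the image of a 3D direction (a point
    # at infinity), so only the rotation part of the relative camera pose
    # moves it: x_pred ~ C * R * C^{-1} * x_hist (the infinite homography).
    x = np.array([vp_hist[0], vp_hist[1], 1.0])
    x_pred = C @ R_hist_to_cur @ np.linalg.inv(C) @ x
    return (x_pred[0] / x_pred[2], x_pred[1] / x_pred[2])
```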
In some embodiments, the detected vanishing point coordinate determining module 530 includes:
the initial straight line generation sub-module is used for carrying out straight line detection on the current frame image and generating at least one initial straight line;
the target straight line obtaining sub-module is used for screening the initial straight lines based on preset conditions to obtain target straight lines (a sketch of this detection and screening follows this list); the preset conditions include at least one of the following: the straight line is located on the ground, the included angle between the straight line and the camera optical axis is within a preset angle range, the length of the straight line is within a preset length range, and the horizontal distance between the straight line and the camera optical center is within a preset distance range;
The detected vanishing point coordinate determining submodule is used for detecting the vanishing point based on each target straight line, determining the detected vanishing point coordinate, and determining the detected uncertainty based on a preset uncertainty index, each target straight line and the detected vanishing point coordinate.
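A minimal sketch of the line detection and screening, using a Canny/probabilistic-Hough pipeline as a stand-in for the line detector; the "on the ground" test is approximated by keeping lines in the lower half of the image, the 2D segment slope stands in for the 3D angle to the optical axis, and all thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

def detect_and_screen_lines(gray, C, min_len=40, max_angle_deg=60.0, max_dx=400.0):
    # Initial straight lines from an edge map via the probabilistic Hough transform.
    edges = cv2.Canny(gray, 50, 150)
    raw = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                          minLineLength=min_len, maxLineGap=10)
    if raw is None:
        return []
    cx = C[0, 2]                        # principal point x (image of the optical center)
    h = gray.shape[0]
    target = []
    for x1, y1, x2, y2 in raw[:, 0, :]:
        length = np.hypot(x2 - x1, y2 - y1)
        ang = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1)))  # 2D proxy angle
        mid_x, mid_y = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if (mid_y > h / 2                       # crude "line on the ground" proxy
                and length >= min_len           # length within preset range
                and ang <= max_angle_deg        # angle within preset range
                and abs(mid_x - cx) <= max_dx): # horizontal distance to optical center
            target.append((float(x1), float(y1), float(x2), float(y2)))
    return target
```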
Further, the detected vanishing point coordinate determining submodule is specifically configured to:
screening the intersection points formed by the target straight lines using a random sample consensus (RANSAC) algorithm, and determining the detected vanishing point coordinates;
screening each associated straight line in a preset area range of the detected vanishing point coordinates from each target straight line;
and determining the covariance corresponding to the detected vanishing point coordinates as the detected uncertainty, based on the perpendicular distance between the detected vanishing point coordinates and each associated straight line (a sketch of this covariance estimation is given below).
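A minimal sketch of this covariance estimation, assuming each target straight line is given by two endpoints and reading "within a preset area range" as a perpendicular distance below a radius; the radius and floor values are illustrative assumptions:

```python
import numpy as np

def perpendicular_distance(vp, line):
    # Perpendicular distance from the vanishing point to the infinite line
    # through the segment endpoints (x1, y1, x2, y2).
    x1, y1, x2, y2 = line
    a, b = y2 - y1, x1 - x2            # line normal (a, b); line: a*x + b*y + c = 0
    c = -(a * x1 + b * y1)
    return abs(a * vp[0] + b * vp[1] + c) / np.hypot(a, b)

def detection_covariance(vp, target_lines, radius=30.0, floor=1.0):
    # Keep only the associated lines near the detected vanishing point, then
    # turn their mean squared perpendicular distance into an isotropic 2x2
    # covariance: larger residuals mean a less reliable detection.
    d = np.array([perpendicular_distance(vp, l) for l in target_lines])
    assoc = d[d < radius]
    sigma2 = float(np.mean(assoc ** 2)) if assoc.size else radius ** 2
    return max(sigma2, floor) * np.eye(2)
```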
The augmented reality-based navigation processing device provided by the embodiments of the present disclosure can execute the augmented reality-based navigation processing method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method. For details not described in the device embodiments of the present disclosure, reference may be made to the description of any method embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, which is used to exemplarily illustrate an electronic device implementing an augmented reality-based navigation processing method in any embodiment of the present disclosure, and should not be construed as specifically limiting the embodiment of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processor 601 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processor 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 608 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. Although fig. 6 shows an electronic device 600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The computer program, when executed by the processor 601, may perform the functions defined in the augmented reality-based navigation processing method provided by any embodiment of the present disclosure.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take any of a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical fiber cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the augmented reality based navigation processing method provided by any embodiment of the present disclosure.
In embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (11)

1. An augmented reality-based navigation processing method is characterized by comprising the following steps:
acquiring a current frame image and historical vanishing point coordinates in a historical frame image;
predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, and determining predicted vanishing point coordinates;
detecting a road vanishing point in the current frame image, and determining the detected vanishing point coordinates;
and determining the current vanishing point coordinates in the current frame image based on the predicted vanishing point coordinates and the detected vanishing point coordinates.
2. The method of claim 1, wherein the method further involves an uncertainty characterizing the degree of reliability of vanishing point coordinates;
the step of predicting the road vanishing point in the current frame image based on the historical vanishing point coordinates, and the step of determining the predicted vanishing point coordinates includes:
acquiring the historical uncertainty corresponding to the historical vanishing point coordinates;
predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates, determining the predicted vanishing point coordinates, and determining a predicted uncertainty corresponding to the predicted vanishing point coordinates based on the historical uncertainty;
the detecting a road vanishing point in the current frame image and determining the detected vanishing point coordinates comprises:
detecting a road vanishing point in the current frame image, determining a detected vanishing point coordinate, and determining a detected uncertainty corresponding to the detected vanishing point coordinate based on a preset uncertainty index and image features associated with the detected vanishing point coordinate in the current frame image;
the determining the current vanishing point coordinates in the current frame image based on the predicted vanishing point coordinates and the detected vanishing point coordinates includes:
determining a current vanishing point coordinate and a current uncertainty in the current frame image based on the predicted vanishing point coordinate, the predicted uncertainty, the detected vanishing point coordinate and the detected uncertainty.
3. The method of claim 2, wherein the determining the current vanishing point coordinates and the current uncertainty in the current frame image based on the predicted vanishing point coordinates, the predicted uncertainty, the detected vanishing point coordinates and the detected uncertainty comprises:
and respectively taking the predicted uncertainty and the detected uncertainty as weights, and carrying out weighted fusion processing on the predicted vanishing point coordinates and the detected vanishing point coordinates by using Kalman filtering to generate the current vanishing point coordinates and the current uncertainty.
4. The method of claim 2, wherein after the determining the current vanishing point coordinates and the current uncertainty in the current frame image based on the predicted vanishing point coordinates, the predicted uncertainty, the detected vanishing point coordinates and the detected uncertainty, the method further comprises:
determining a current yaw angle and a current pitch angle corresponding to the current frame image based on the current vanishing point coordinates;
and rendering an augmented reality navigation mark based on the current yaw angle and the current pitch angle, and displaying a rendering result in the current frame image.
5. The method of claim 4, wherein the determining a current yaw angle and a current pitch angle corresponding to the current frame image based on the current vanishing point coordinates comprises:
if the current uncertainty is smaller than a preset threshold, determining the current yaw angle and the current pitch angle based on the current vanishing point coordinates and a camera intrinsic matrix;
if the current uncertainty is larger than or equal to the preset threshold value, determining an initial yaw angle and an initial pitch angle corresponding to the current frame image as the current yaw angle and the current pitch angle respectively; the initial yaw angle and the initial pitch angle are respectively the yaw angle and the pitch angle when the current frame image is shot.
6. The method of claim 1 or 2, wherein the predicting a road vanishing point in the current frame image based on the historical vanishing point coordinates and determining the predicted vanishing point coordinates includes any one of:
performing sparse optical flow tracking processing based on the current frame image, the historical frame image and the historical vanishing point coordinates, and determining the predicted vanishing point coordinates;
performing camera projection transformation based on the historical vanishing point coordinates, the camera intrinsic matrix and the camera relative pose from the historical frame image to the current frame image, and determining the predicted vanishing point coordinates;
determining the predicted vanishing point coordinates by performing sparse optical flow tracking processing and camera projection transformation based on the current frame image, the historical vanishing point coordinates, the camera intrinsic matrix and the camera relative pose.
7. The method of claim 2, wherein the detecting a road vanishing point in the current frame image, determining the detected vanishing point coordinates, and determining a detected uncertainty corresponding to the detected vanishing point coordinates based on a preset uncertainty index and image features associated with the detected vanishing point coordinates in the current frame image comprises:
performing straight line detection on the current frame image to generate at least one initial straight line;
screening each initial straight line based on preset conditions to obtain a target straight line; the preset conditions comprise at least one of that the straight line is positioned on the ground, the included angle between the straight line and the optical axis of the camera is in a preset angle range, the length of the straight line is in a preset length range, and the horizontal distance between the straight line and the optical center of the camera is in a preset distance range;
and detecting vanishing points based on the target straight lines, determining the detected vanishing point coordinates, and determining the detected uncertainty based on the preset uncertainty index, the target straight lines and the detected vanishing point coordinates.
8. The method of claim 7, wherein, in the case where the preset uncertainty index is a covariance, the detecting a vanishing point based on each of the target straight lines, determining the detected vanishing point coordinates, and determining the detected uncertainty based on the preset uncertainty index, each of the target straight lines and the detected vanishing point coordinates comprises:
screening the intersection points formed by the target straight lines using a random sample consensus (RANSAC) algorithm, and determining the detected vanishing point coordinates;
screening, from the target straight lines, the associated straight lines within a preset area range of the detected vanishing point coordinates;
and determining the covariance corresponding to the detected vanishing point coordinates as the detected uncertainty based on the perpendicular distance between the detected vanishing point coordinates and each associated straight line.
9. An augmented reality-based navigation processing device, comprising:
the data acquisition module is used for acquiring the historical vanishing point coordinates in the current frame image and the historical frame image;
the predicted vanishing point coordinate determining module is used for predicting the road vanishing point in the current frame image based on the historical vanishing point coordinate and determining the predicted vanishing point coordinate;
The detected vanishing point coordinate determining module is used for detecting the road vanishing point in the current frame image and determining the detected vanishing point coordinate;
and the current vanishing point coordinate determining module is used for determining the current vanishing point coordinate in the current frame image based on the predicted vanishing point coordinate and the detected vanishing point coordinate.
10. An electronic device, comprising:
a memory and a processor, the memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the augmented reality-based navigation processing method according to any one of claims 1 to 8.
11. A computer program product for performing the augmented reality based navigation processing method of any one of claims 1 to 8.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination