CN107830869B - Information output method and apparatus for vehicle - Google Patents


Info

Publication number
CN107830869B
CN107830869B (application CN201711139956.XA)
Authority
CN
China
Prior art keywords
image information
historical
dimensional image
dimensional
lane line
Prior art date
Legal status
Active
Application number
CN201711139956.XA
Other languages
Chinese (zh)
Other versions
CN107830869A (en)
Inventor
饶先拓
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201711139956.XA
Publication of CN107830869A
Priority to PCT/CN2018/099164 (WO2019095735A1)
Application granted
Publication of CN107830869B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3635 Guidance using 3D or perspective road maps
    • G01C21/3658 Lane guidance

Abstract

An embodiment of the application discloses an information output method and apparatus for a vehicle. The vehicle includes a camera, and one specific embodiment of the method comprises the following steps: acquiring, through the camera, a current environment image containing a current lane line; determining whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold, where the reliability is determined based on at least one of the following: the degree to which the lane line is occluded by obstacles, and the camera parameters of the camera; and in response to the reliability being greater than or equal to the reliability threshold, inputting the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and outputting the obtained three-dimensional image information. This embodiment outputs three-dimensional image information of the lane line with higher accuracy, thereby further improving the safety of vehicle driving.

Description

Information output method and apparatus for vehicle
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the technical field of vehicle navigation, and particularly relates to an information output method and device for a vehicle.
Background
At present, automobiles have become increasingly important vehicles in people's lives, and in order to improve the safety and comfort of drivers during driving, making automobiles intelligent has become the mainstream of automobile development. For example, Augmented Reality (AR) technology, which calculates the position and angle of a camera image in real time and overlays corresponding images, videos, and 3D models, may be applied to the field of vehicle navigation.
Disclosure of Invention
The embodiment of the application provides an information output method and device for a vehicle.
In a first aspect, an embodiment of the present application provides an information output method for a vehicle, where the vehicle includes a camera, and the method includes: acquiring, through the camera, a current environment image containing a current lane line; determining whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold, where the reliability is determined based on at least one of the following: the degree to which the lane line is occluded by obstacles, and the camera parameters of the camera; and in response to the reliability being greater than or equal to the reliability threshold, inputting the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and outputting the obtained three-dimensional image information.
In some embodiments, the vehicle further comprises a display screen, and the method further comprises: in response to the reliability being less than the reliability threshold, inputting the two-dimensional image information of the current navigation guide line displayed on the display screen into the spatial model to obtain three-dimensional image information of the current navigation guide line; and determining the three-dimensional image information of the current lane line based on the three-dimensional image information of lane lines within a first preset historical time period and the three-dimensional image information of the current navigation guide line, and outputting the determined three-dimensional image information.
In some embodiments, the spatial model is obtained by the following steps: acquiring two-dimensional image information of the historical navigation guide line displayed on the display screen within a second preset historical time period, and inputting the two-dimensional image information of the historical navigation guide line into an initial spatial model to obtain three-dimensional image information of the historical navigation guide line; in response to determining that the reliability of the lane line identified from a historical environment image within the second preset historical time period is greater than or equal to the reliability threshold, inputting the two-dimensional image information of the historical lane line in the historical environment image into the initial spatial model to obtain three-dimensional image information of the historical lane line; determining a loss function of the initial spatial model based on the two-dimensional image information of the historical lane line, the three-dimensional image information of the historical lane line, and the three-dimensional image information of the historical navigation guide line; and adjusting parameters of the initial spatial model based on the loss function, and taking the parameter-adjusted initial spatial model as the spatial model.
In some embodiments, determining the loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guideline includes: determining whether a two-dimensional historical lane line indicated by the two-dimensional image information of the historical lane line is a straight line and whether a three-dimensional historical lane line indicated by the three-dimensional image information of the historical lane line is a straight line; in response to determining that the two-dimensional historical lane lines are straight lines and the three-dimensional historical lane lines are not straight lines, a first loss function of the initial spatial model is determined.
In some embodiments, determining the loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line includes: in response to determining that there are at least two three-dimensional historical lane lines and that they are straight lines, further determining whether the at least two three-dimensional historical lane lines are parallel; and in response to determining that the at least two three-dimensional historical lane lines are not parallel, determining a second loss function of the initial spatial model.
In some embodiments, determining the loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guideline includes: in response to determining that the three-dimensional history lane line is a straight line, further determining whether a vehicle traveling direction guided by the three-dimensional history lane line coincides with a vehicle traveling direction guided by the three-dimensional history navigation guide line indicated by the three-dimensional image information of the history navigation guide line; in response to determining that the direction of travel of the vehicle guided by the three-dimensional historical lane lines is not consistent with the direction of travel of the vehicle guided by the three-dimensional historical navigation guideline, determining a third loss function for the initial spatial model.
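The loss-function triggers above all reduce to simple geometric consistency checks on the reconstructed lane lines: whether a lane line is straight, and whether two lane lines are parallel. A minimal sketch of such checks follows; the point-list representation of a lane line, the tolerance values, and the function names are illustrative assumptions rather than anything specified by the application.

```python
import numpy as np

def is_straight(points, tol=1e-2):
    """A polyline is 'straight' if every point lies near the line
    through its endpoints (maximum perpendicular deviation within tol)."""
    p0, p1 = np.asarray(points[0], float), np.asarray(points[-1], float)
    d = p1 - p0
    d = d / np.linalg.norm(d)
    deviations = []
    for p in points:
        v = np.asarray(p, float) - p0
        # Perpendicular distance from the point to the endpoint line.
        deviations.append(np.linalg.norm(v - np.dot(v, d) * d))
    return max(deviations) <= tol

def are_parallel(dir_a, dir_b, tol=1e-2):
    """Two direction vectors are parallel if the sine of the angle
    between them (norm of the cross product of unit vectors) is within tol."""
    a = np.asarray(dir_a, float)
    b = np.asarray(dir_b, float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.linalg.norm(np.cross(a, b)) <= tol
```

A two-dimensional lane line that passes `is_straight` while its three-dimensional reconstruction fails it would trigger the first loss function; reconstructed lines failing `are_parallel` would trigger the second.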
In a second aspect, an embodiment of the present application provides an information output apparatus for a vehicle, the vehicle including a camera, the apparatus including: an acquisition unit configured to acquire, through the camera, a current environment image containing a current lane line; a determining unit configured to determine whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold, where the reliability is determined based on at least one of the following: the degree to which the lane line is occluded by obstacles, and the camera parameters of the camera; and a first output unit configured to, in response to the reliability being greater than or equal to the reliability threshold, input the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and output the obtained three-dimensional image information.
In some embodiments, the vehicle further comprises a display screen, and the apparatus further comprises: an input unit configured to, in response to the reliability being less than the reliability threshold, input the two-dimensional image information of the current navigation guide line displayed on the display screen into the spatial model to obtain three-dimensional image information of the current navigation guide line; and a second output unit configured to determine the three-dimensional image information of the current lane line based on the three-dimensional image information of lane lines within a first preset historical time period and the three-dimensional image information of the current navigation guide line, and output the determined three-dimensional image information.
In some embodiments, the apparatus further comprises a spatial model building unit, which comprises: a first input module configured to acquire two-dimensional image information of the historical navigation guide line displayed on the display screen within a second preset historical time period, and input the two-dimensional image information of the historical navigation guide line into an initial spatial model to obtain three-dimensional image information of the historical navigation guide line; a second input module configured to, in response to determining that the reliability of the lane line identified from a historical environment image within the second preset historical time period is greater than or equal to the reliability threshold, input the two-dimensional image information of the historical lane line in the historical environment image into the initial spatial model to obtain three-dimensional image information of the historical lane line; a determining module configured to determine a loss function of the initial spatial model based on the two-dimensional image information of the historical lane line, the three-dimensional image information of the historical lane line, and the three-dimensional image information of the historical navigation guide line; and an adjusting module configured to adjust parameters of the initial spatial model based on the loss function, and take the parameter-adjusted initial spatial model as the spatial model.
In some embodiments, the determining module comprises: a first determining submodule configured to determine whether the two-dimensional historical lane line indicated by the two-dimensional image information of the historical lane line is a straight line, and whether the three-dimensional historical lane line indicated by the three-dimensional image information of the historical lane line is a straight line; and a second determining submodule configured to determine a first loss function of the initial spatial model in response to determining that the two-dimensional historical lane line is a straight line and the three-dimensional historical lane line is not a straight line.
In some embodiments, the determining module comprises: a third determining submodule configured to, in response to determining that there are at least two three-dimensional historical lane lines and that they are straight lines, further determine whether the at least two three-dimensional historical lane lines are parallel; and a fourth determining submodule configured to determine a second loss function of the initial spatial model in response to determining that the at least two three-dimensional historical lane lines are not parallel.
In some embodiments, the determining module comprises: a fifth determining submodule configured to, in response to determining that the three-dimensional historical lane line is a straight line, further determine whether the vehicle traveling direction guided by the three-dimensional historical lane line is consistent with the vehicle traveling direction guided by the three-dimensional historical navigation guide line indicated by the three-dimensional image information of the historical navigation guide line; and a sixth determining submodule configured to determine a third loss function of the initial spatial model in response to determining that the vehicle traveling direction guided by the three-dimensional historical lane line is not consistent with the vehicle traveling direction guided by the three-dimensional historical navigation guide line.
According to the information output method and apparatus for a vehicle provided by the embodiments of the application, a current environment image containing the current lane line is obtained through the camera of the vehicle, and it is determined whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold. If the reliability is greater than or equal to the reliability threshold, the two-dimensional image information of the current lane line is input into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and the obtained three-dimensional image information is output. The current environment image containing the current lane line is thereby used effectively, the accuracy of the output three-dimensional image information of the lane line is higher, and the safety of vehicle driving is further improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of an information output method for a vehicle according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an information output method for a vehicle according to the present application;
FIG. 4 is a flowchart of still another embodiment of an information output method for a vehicle according to the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an information output apparatus for a vehicle according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a terminal device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information output method for a vehicle or the information output apparatus for a vehicle of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, a server 105, and an image information acquisition apparatus 106. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages and the like. Various communication client applications, such as navigation applications, map applications, and music and video applications, may be installed on the terminal devices 101, 102, 103. The terminal devices 101, 102, and 103 may acquire a current environment image including a current lane line through the image information acquisition device 106; then determine whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold; and, in response to the reliability being greater than or equal to the reliability threshold, input the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line and output the obtained three-dimensional image information. A three-dimensional image determined by the obtained three-dimensional image information may be presented on the display of the terminal devices 101, 102, 103, or the obtained three-dimensional image information may be transmitted to the server 105.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and a camera and supporting information interaction, including but not limited to in-vehicle terminals, smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for the spatial model in the terminal devices 101, 102, 103. The background server may also receive three-dimensional image information output by the terminal devices 101, 102, 103.
It should be noted that the information output method for the vehicle provided by the embodiment of the present application may be executed by the terminal devices 101, 102, 103, and accordingly, the information output apparatus for the vehicle may be provided in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks, servers, and image information acquisition devices in fig. 1 are merely illustrative. There may be any number of terminal devices, networks, servers, and image information acquisition devices according to implementation needs.
With continued reference to FIG. 2, a flow 200 of one embodiment of an information output method for a vehicle according to the present application is shown. The information output method for a vehicle includes the steps of:
step 201, acquiring a current environment image containing a current lane line through a camera.
In the present embodiment, an electronic device (e.g., the in-vehicle terminal shown in fig. 1) on which the information output method for a vehicle operates can acquire a current environment image including a current lane line through a camera mounted on the vehicle. Lane lines, also called road lane markings, may include guide lane lines and variable guide lane lines. A guide lane line is a lane marking with a guiding direction, used at the entrance section of an intersection to indicate that a vehicle should travel in the direction indicated by the marking; guide lane lines are generally painted at traffic intersections with heavy traffic flow to fix the travel direction and relieve traffic pressure. If a vehicle enters a lane with a variable guide lane line, more than one direction of travel is permitted: for example, some intersections allow right turns and straight travel in one lane (right turn and straight travel merged into one lane), and some allow U-turns and left turns in one lane (U-turn and left turn merged into one lane). The current environment image may be an image of the road ahead of the vehicle acquired by the camera at the current time while the vehicle is driving.
Step 202, determining whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold.
In this embodiment, the electronic device may first determine the reliability of identifying the current lane line from the current environment image. The electronic device may determine the reliability based on the degree to which the lane line is occluded by obstacles. Specifically, the highest value (e.g., 0.9) and the lowest value (e.g., 0.1) of the reliability may be set first. If the lane line is completely occluded by an obstacle, the reliability may be the set lowest value; if an obstacle occludes half of the lane line, the reliability may be an intermediate value between the set highest and lowest values (e.g., 0.5, midway between the highest value 0.9 and the lowest value 0.1); and if the lane line is not occluded at all, the reliability may be the set highest value.
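The occlusion-based rule just described (lowest value when fully occluded, an intermediate value when half occluded, highest value when unoccluded) is consistent with a simple linear interpolation. The sketch below assumes such a linear mapping; the function name and the strictly linear form are illustrative assumptions, as the application only fixes the three stated points.

```python
def occlusion_confidence(occluded_fraction, lowest=0.1, highest=0.9):
    """Map the fraction of the lane line hidden by obstacles (0.0 to 1.0)
    to a reliability value by linear interpolation between the preset
    lowest and highest reliability values."""
    occluded_fraction = min(max(occluded_fraction, 0.0), 1.0)
    return highest - occluded_fraction * (highest - lowest)
```

With the example bounds from the text, full occlusion yields 0.1, half occlusion yields 0.5, and no occlusion yields 0.9.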
In this embodiment, the electronic device may determine the reliability based on camera parameters of the camera, where the camera parameters may include at least one of the following: resolution, texture restoration degree, dynamic speed, and autofocus speed. Resolution generally refers to the camera's ability to resolve an image, that is, the number of pixels of the camera's image sensor. Texture restoration degree refers to how faithfully the camera can reproduce the texture of fine objects. Dynamic speed may refer to the response speed of the camera, that is, whether the image shown on the display is synchronized with the image captured by the camera, without a noticeable lag. Automatic focusing determines how to adjust the focal length (longer or shorter) according to an image sharpness evaluation algorithm so as to make the image sharpness optimal, and the autofocus speed is how quickly optimal sharpness is reached by adjusting the focal length.
Specifically, the electronic device may first look up a first score corresponding to the resolution of the camera in a preset correspondence table of resolutions and first scores; look up a second score corresponding to the texture restoration degree in a preset correspondence table of texture restoration degrees and second scores; look up a third score corresponding to the dynamic speed in a preset correspondence table of dynamic speeds and third scores; and look up a fourth score corresponding to the autofocus speed in a preset correspondence table of autofocus speeds and fourth scores. Then, the weights corresponding to resolution, texture restoration degree, dynamic speed, and autofocus speed can be obtained respectively. Finally, the first score is multiplied by the weight for resolution to obtain a first product, the second score by the weight for texture restoration degree to obtain a second product, the third score by the weight for dynamic speed to obtain a third product, and the fourth score by the weight for autofocus speed to obtain a fourth product; the sum of the four products is determined as the reliability.
In this embodiment, the electronic device may further determine the reliability by combining the degree to which the lane line is occluded by obstacles with the camera parameters of the camera. Specifically, the electronic device may determine a first reliability using the occlusion-based method above, and a second reliability using the camera-parameter-based method above; then obtain the weights corresponding to the first reliability and the second reliability respectively; and finally multiply the first reliability by its weight, multiply the second reliability by its weight, and determine the weighted sum of the two products as the reliability of the current lane line identified from the current environment image.
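The two weighted sums described above (a score-times-weight sum over the four camera parameters, and a weighted combination of the occlusion-based and camera-based reliabilities) can be sketched as follows. The dictionary keys, example values, and function names are illustrative assumptions, and the per-parameter scores are assumed to have already been looked up in the preset correspondence tables.

```python
def camera_confidence(scores, weights):
    """Weighted sum of per-parameter scores (resolution, texture
    restoration degree, dynamic speed, autofocus speed)."""
    return sum(scores[key] * weights[key] for key in scores)

def combined_confidence(occlusion_rel, camera_rel, w_occlusion, w_camera):
    """Weighted sum of the occlusion-based and camera-based reliabilities."""
    return occlusion_rel * w_occlusion + camera_rel * w_camera
```

For example, with scores {resolution: 0.8, texture: 0.6, dynamic: 0.7, autofocus: 0.9} and weights {0.4, 0.2, 0.2, 0.2}, the camera-based reliability is 0.76; combining it with an occlusion-based reliability of 0.5 at equal weights gives 0.63, which would pass a 0.5 threshold but fail a 0.7 one.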
Then, the electronic device may determine whether the reliability is greater than or equal to a preset reliability threshold (e.g., 0.7), and if the reliability is greater than or equal to the reliability threshold, step 203 may be executed.
Step 203, inputting the two-dimensional image information of the current lane line in the current environment image into a pre-established space model to obtain the three-dimensional image information of the current lane line, and outputting the obtained three-dimensional image information.
In this embodiment, if the electronic device determines that the reliability of identifying the current lane line from the current environment image is greater than or equal to the reliability threshold, the two-dimensional image information of the current lane line in the current environment image may be input into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and the obtained three-dimensional image information may be output. The two-dimensional image generally refers to a planar image containing height information and width information without containing depth information, and thus, the two-dimensional image information may include the height information and the width information, or two-dimensional point coordinates in an image coordinate system. The digital images acquired by the camera can be stored in the computer as an array, and the value of each element (pixel) in the array is the brightness (gray scale) of an image point. A rectangular coordinate system u-v is defined on the image, and the coordinate (u, v) of each pixel is the column number and the row number of the pixel in the array respectively. The three-dimensional image generally refers to a stereoscopic image containing height information, width information, and depth information, and thus, the three-dimensional image information may include the height information, the width information, and the depth information, or include three-dimensional point coordinates in a world coordinate system.
In this embodiment, the spatial model may be a model that converts two-dimensional image information of an object into three-dimensional image information by using a three-dimensional reconstruction technique, and may also be used to convert two-dimensional point coordinates of the object in an image coordinate system into three-dimensional point coordinates in a world coordinate system. As an example, the spatial model may be a conversion matrix for converting two-dimensional image information into three-dimensional image information, which is determined by a technician based on camera parameters of a camera acquiring the image; the correspondence table in which the two-dimensional image information and the three-dimensional image information of the object are stored may be prepared in advance by a technician based on the result of imaging the object at each angle and each distance. The three-dimensional reconstruction technology is to acquire a two-dimensional image of a scene object through a camera, analyze and process information of the two-dimensional image, and deduce three-dimensional image information of the object in a real environment by combining computer vision knowledge.
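The application leaves the spatial model abstract (a conversion matrix determined from camera parameters, or a prepared correspondence table). One common concrete realization of such a 2D-to-3D conversion for lane lines assumes a pinhole camera with known parameters and a locally flat road plane, and intersects each pixel's viewing ray with that plane. The sketch below is only that assumed realization, not the model specified by the application.

```python
import numpy as np

def backproject_to_ground(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) onto the road plane z = ground_z,
    recovering 3D world coordinates from 2D image coordinates.
    K: 3x3 camera intrinsics; R, t: extrinsics (world -> camera)."""
    # Viewing-ray direction through the pixel, in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ np.asarray(t, float)
    # Intersect the ray with the plane z = ground_z.
    s = (ground_z - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world
```

Applying this to each two-dimensional point of the current lane line yields the three-dimensional image information that the spatial model is described as producing.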
In this embodiment, the electronic device may generate a three-dimensional image of the current lane line by using the obtained three-dimensional image information, and superimpose the generated three-dimensional image on a navigation map of the electronic device.
In some optional implementations of the present embodiment, the vehicle on which the electronic device operates may further include a sensor. The spatial model can be obtained by the following steps:
first, the electronic device may obtain the two-dimensional image information of a historical navigation guide line displayed on the display screen within a second preset historical time period (for example, the past day), and then input that two-dimensional image information into an initial spatial model to obtain the three-dimensional image information of the historical navigation guide line. Like the spatial model, the initial spatial model may be a model that converts two-dimensional image information of an object into three-dimensional image information by using a three-dimensional reconstruction technique, and may also convert two-dimensional point coordinates of the object in an image coordinate system into three-dimensional point coordinates in a world coordinate system. The electronic device may initialize the parameters of the initial spatial model based on the camera parameters of the camera. When the camera photographs an object, it produces a two-dimensional image of a three-dimensional object; that is, it converts the three-dimensional point coordinates of the object in the world coordinate system into two-dimensional point coordinates in the image coordinate system. A conversion matrix for converting a three-dimensional image into a two-dimensional image can therefore be determined from the camera parameters (focal length, optical center, optical axis, etc.), and since the initial spatial model performs the opposite conversion, its parameters can be determined based on this conversion matrix.
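The 3D-to-2D projection determined by the camera parameters, and the inverse mapping that could seed the initial spatial model, can be sketched as follows (a pinhole model without distortion is assumed, and the focal length and optical center are hypothetical values):

```python
import numpy as np

# Hypothetical intrinsics: focal length f and optical center (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f,   0.0, cx],
              [0.0, f,   cy],
              [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """3D -> 2D: project a point in camera coordinates to pixels (u, v)."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[0] / p[2], p[1] / p[2]

def back_project(K, u, v, Z):
    """2D -> 3D: invert the projection for a point known to lie at depth Z.
    This inverse mapping is what the initial spatial model is seeded with."""
    x = (u - K[0, 2]) * Z / K[0, 0]
    y = (v - K[1, 2]) * Z / K[1, 1]
    return np.array([x, y, Z])
```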
Then, the electronic device may determine the reliability of identifying the lane line from the historical environment image in the second preset historical time period, where the determination method of the reliability is substantially the same as the determination method of the reliability of identifying the current lane line from the current environment image, and details are not repeated here. In response to determining that the reliability of the lane line identified from the historical environmental image is greater than or equal to the reliability threshold, the electronic device may input two-dimensional image information of the historical lane line in the historical environmental image into the initial spatial model to obtain three-dimensional image information of the historical lane line.
Then, the electronic device may determine a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line. The loss function estimates the degree of disparity between a predicted value f(x) of the model and the true value Y; it is a non-negative real-valued function, usually written L(Y, f(x)). The loss function may be, for example, a logarithmic loss function (as in logistic regression), a quadratic loss function (as in least squares), or an exponential loss function (as in AdaBoost). Solving for the optimal solution of a function using logarithmic, quadratic, and exponential loss functions is widely studied and applied common knowledge, and is not described here again.
In some optional implementations of this embodiment, the electronic device may first determine whether the two-dimensional historical lane line indicated by the two-dimensional image information of the historical lane line is a straight line, and whether the three-dimensional historical lane line indicated by the three-dimensional image information of the historical lane line is a straight line. In response to determining that the two-dimensional historical lane line is a straight line and the three-dimensional historical lane line is not, the electronic device may determine a first loss function of the initial spatial model: if the two-dimensional historical lane line is straight, the three-dimensional historical lane line is predicted to be straight as well, so when the actual three-dimensional historical lane line is not straight, the degree of its deviation from a straight line can be measured and used as the first loss function of the initial spatial model.
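One plausible way to measure the degree of inconsistency between a set of three-dimensional lane-line points and a straight line, shown here as an assumed sketch rather than the patent's actual formulation, is the residual off the best-fit line:

```python
import numpy as np

def straightness_loss(points):
    """Sum of squared deviations of 3-D points from their best-fit line:
    zero when the points are collinear, positive otherwise."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Singular values beyond the first capture variance off the principal
    # axis, i.e. how far the point set departs from a straight line.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return float(np.sum(s[1:] ** 2))
```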
In some optional implementations of this embodiment, in response to determining that the three-dimensional historical lane lines are straight lines and that there are at least two of them, the electronic device may further determine whether the at least two three-dimensional historical lane lines are parallel to each other. Specifically, the electronic device may determine whether the slopes of the at least two three-dimensional historical lane lines are the same: if the slopes are the same, the lane lines are parallel to each other; if not, they are not parallel. In response to determining that the at least two three-dimensional historical lane lines are not parallel, the electronic device may determine a second loss function of the initial spatial model: if there are at least two three-dimensional historical lane lines, they are predicted to be parallel to each other, so when the actual lane lines are not parallel, the degree of their non-parallelism (the difference between the slopes of the lane lines) can be measured and used as the second loss function of the initial spatial model.
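A hedged sketch of such a non-parallelism measure: fit a direction to each lane line and penalize the disagreement between the two directions (this SVD-based formulation is an illustrative assumption, not taken from the patent):

```python
import numpy as np

def line_direction(points):
    """Unit direction vector of the best-fit line through 3-D points."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # principal axis of the point set

def parallelism_loss(line_a, line_b):
    """Zero when the two lane lines are parallel (directions agree up to
    sign); approaches 1 as the lines become perpendicular."""
    da, db = line_direction(line_a), line_direction(line_b)
    return float(1.0 - abs(np.dot(da, db)))
```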
In some optional implementations of this embodiment, in response to determining that the three-dimensional historical lane line is a straight line, the electronic device may further determine whether the vehicle traveling direction guided by the three-dimensional historical lane line coincides with the vehicle traveling direction guided by the three-dimensional historical navigation guide line indicated by the three-dimensional image information of the historical navigation guide line. In response to determining that the two directions do not coincide, the electronic device may determine a third loss function of the initial space model: if the three-dimensional historical lane line is straight, the vehicle traveling direction it guides is predicted to coincide with the direction guided by the three-dimensional historical navigation guide line, so when the two actual directions do not coincide, the degree of their disparity can be measured and used as the third loss function of the initial space model. The electronic device may determine the degree of disparity between the actual traveling direction of the vehicle and the traveling direction guided by the three-dimensional historical navigation guide line in combination with the actual traveling parameters (traveling distance, steering angle, etc.) of the vehicle during the second preset historical time period.
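A direction-consistency penalty of this kind might be sketched as follows, assuming the two travel directions are available as vectors (the cosine-based form is an illustrative choice, not the patent's):

```python
import numpy as np

def direction_consistency_loss(lane_dir, guide_dir):
    """Penalty on the disparity between the travel direction guided by the
    lane line and that guided by the navigation guide line: zero when the
    directions coincide, growing with the angle between them."""
    a = np.asarray(lane_dir, dtype=float)
    b = np.asarray(guide_dir, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(1.0 - np.dot(a, b))
```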
Finally, the electronic device may adjust the parameters of the initial space model based on its loss function, and use the parameter-adjusted initial space model as the space model. Specifically, the electronic device may adjust the parameters of the initial space model based on the degree of inconsistency between the three-dimensional historical lane line and a straight line, so as to minimize that inconsistency; it may also adjust the parameters based on the degree of non-parallelism between the at least two three-dimensional historical lane lines, so as to minimize that non-parallelism; and it may further adjust the parameters based on the degree of disparity between the vehicle traveling direction guided by the three-dimensional historical lane line and the vehicle traveling direction guided by the three-dimensional historical navigation guide line, so as to minimize that disparity.
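The adjust-to-minimize step can be illustrated with a generic finite-difference gradient descent; the toy quadratic `total_loss` below is a placeholder for the actual sum of the first, second, and third loss functions evaluated on the historical lane lines:

```python
import numpy as np

def total_loss(params):
    # Placeholder loss: a quadratic bowl standing in for the combined
    # straightness, parallelism, and direction-consistency losses.
    target = np.array([1.0, -2.0])
    return float(np.sum((params - target) ** 2))

def adjust_parameters(params, lr=0.1, steps=200, eps=1e-6):
    """Repeatedly nudge each parameter in the direction that decreases
    the loss (numerical gradient descent)."""
    params = np.asarray(params, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(params)
        base = total_loss(params)
        for i in range(params.size):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (total_loss(bumped) - base) / eps
        params -= lr * grad
    return params
```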
With continued reference to fig. 3, fig. 3 is a schematic view of an application scenario of the information output method for a vehicle according to the present embodiment. In the application scenario of fig. 3, the vehicle-mounted terminal 301 first obtains a current environment image 303 including the current lane line through the camera 302. Then, since the lane line in front of the vehicle is not shielded by any obstacle, the vehicle-mounted terminal 301 determines that the reliability of the current lane line identified from the current environment image 303 is 0.9, which is greater than the preset reliability threshold of 0.7. Next, the vehicle-mounted terminal 301 acquires the two-dimensional image information 304 of the current lane line, namely the coordinate values of the two-dimensional coordinate points representing the lane line, and inputs the two-dimensional image information 304 into a pre-established space model 305 to obtain the three-dimensional image information 306 of the current lane line, namely the coordinate values of the three-dimensional coordinate points representing the lane line. Finally, the vehicle-mounted terminal 301 may output the three-dimensional image information 306 of the current lane line.
According to the method provided by the embodiment of the application, the three-dimensional image information of the current lane line is determined by using the current environment image containing the current lane line, so that the three-dimensional image information of the lane line with higher accuracy is output, and the safety of vehicle driving is further improved.
With further reference to fig. 4, a flow 400 of yet another embodiment of an information output method for a vehicle is shown. The flow 400 of the information output method for a vehicle includes the steps of:
step 401, acquiring a current environment image including a current lane line through a camera.
Step 402, determining whether the reliability of the current lane line identified from the current environment image is greater than or equal to a preset reliability threshold.
Step 403, inputting the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain the three-dimensional image information of the current lane line, and outputting the obtained three-dimensional image information.
In the present embodiment, the operations of steps 401-403 are substantially the same as those of steps 201-203, and are not described here again.
Step 404, inputting the two-dimensional image information of the current navigation guide line displayed on the display screen into the spatial model to obtain the three-dimensional image information of the current navigation guide line.
In this embodiment, the vehicle on which the electronic device operates may further include a display screen. If the electronic device determines that the reliability of identifying the current lane line from the current environment image is less than the reliability threshold, the two-dimensional image information of the current navigation guide line displayed on the display screen may be input into the spatial model to obtain the three-dimensional image information of the current navigation guide line. The navigation guidance line may be route guidance indicating a direction of a destination, or may refer to a trajectory to be traveled by the vehicle within a preset time period (e.g., 10 seconds).
Step 405, determining the three-dimensional image information of the current lane line based on the three-dimensional image information of the lane line in the first preset historical time period and the three-dimensional image information of the current navigation guide line, and outputting the determined three-dimensional image information.
In this embodiment, the electronic device may determine the three-dimensional image information of the current lane line based on the three-dimensional image information of the lane line in the first preset history period (for example, a period formed from 10 seconds before the current time to the current time) and the three-dimensional image information of the current navigation guidance line obtained in step 404, and may output the determined three-dimensional image information. Specifically, the electronic device may determine the position information of the historical lane lines in the three-dimensional image information of the lane lines in the first preset historical time period, and if there are at least two lane lines, may determine the distance between the historical lane lines; then, the length and the turning angle of the current navigation guide line can be acquired from the three-dimensional image information of the current navigation guide line; finally, the electronic device may determine a position indicated by the position information of the historical lane lines as a start position of the current lane line, may determine a length and a turning angle of the current navigation guide line as a length and a turning angle of the current lane line, respectively, and may determine a distance between the historical lane lines as a distance between the current lane lines.
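A sketch of how these pieces might be combined, with the start position taken from the historical lane line and the length and turning angle taken from the current navigation guide line (the point-spacing scheme and function signature are illustrative assumptions):

```python
import numpy as np

def estimate_current_lane(start, heading_deg, length, turn_deg, n_points=5):
    """Lay out current lane-line points from a start position (taken from
    the historical lane line) along the length and turning angle taken
    from the current navigation guide line, on the ground plane."""
    start = np.asarray(start, dtype=float)
    points = [start]
    step = length / (n_points - 1)
    heading = np.radians(heading_deg)
    turn_per_step = np.radians(turn_deg) / (n_points - 1)
    for _ in range(n_points - 1):
        heading += turn_per_step  # spread the turn evenly along the line
        points.append(points[-1] + step * np.array(
            [np.cos(heading), np.sin(heading), 0.0]))
    return np.array(points)
```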
In this embodiment, the electronic device may generate a three-dimensional image of a current lane line by using the determined three-dimensional image information, and superimpose the generated three-dimensional image on a navigation map of the electronic device.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the information output method for a vehicle in the present embodiment highlights the step of determining the three-dimensional image information of the current lane line when the reliability of identifying the current lane line from the current environment image is less than the reliability threshold. Thus, the scheme described in this embodiment may determine the three-dimensional image information of the current lane line based on the three-dimensional image information of the historical lane line and the three-dimensional image information of the current navigation guide line, so as to output more accurate three-dimensional lane-line image information even when the reliability of identifying the current lane line from the current environment image is low.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an information output apparatus for a vehicle, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the information output apparatus 500 for a vehicle of the present embodiment includes: an acquisition unit 501, a determination unit 502, and a first output unit 503. The acquisition unit 501 is configured to obtain a current environment image including the current lane line through a camera; the determination unit 502 is configured to determine whether the reliability of identifying the current lane line from the current environment image is greater than or equal to a preset reliability threshold, where the reliability is determined based on at least one of the following: the degree to which the lane line is shielded by an obstacle, and the camera parameters of the camera; the first output unit 503 is configured to, in response to the reliability being greater than or equal to the reliability threshold, input the two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain the three-dimensional image information of the current lane line, and output the obtained three-dimensional image information.
In the present embodiment, specific processing of the acquisition unit 501, the determination unit 502, and the first output unit 503 of the information output apparatus 500 for a vehicle may refer to step 201, step 202, and step 203 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the vehicle may further include a display screen. The information output apparatus 500 for a vehicle may further include an input unit (not shown in the drawings) and a second output unit (not shown in the drawings). If the determination unit 502 determines that the reliability of identifying the current lane line from the current environment image is less than the reliability threshold, the input unit may input the two-dimensional image information of the current navigation guide line displayed on the display screen into the spatial model to obtain the three-dimensional image information of the current navigation guide line. The navigation guide line may be route guidance indicating the direction of a destination, or may refer to a trajectory to be traveled by the vehicle within a preset time period. The second output unit may determine the three-dimensional image information of the current lane line based on the three-dimensional image information of the lane line within the first preset historical time period and the three-dimensional image information of the current navigation guide line obtained by the input unit, and may output the determined three-dimensional image information.
Specifically, the second output unit may determine position information of a historical lane line in the three-dimensional image information of the lane line in the first preset historical time period, and if there are at least two lane lines, may determine a distance between the historical lane lines; then, the length and the turning angle of the current navigation guide line can be acquired from the three-dimensional image information of the current navigation guide line; finally, the second output unit may determine a position indicated by the position information of the historical lane lines as a start position of the current lane line, may determine a length and a turning angle of the current navigation guide line as a length and a turning angle of the current lane line, respectively, and may determine a distance between the historical lane lines as a distance between the current lane lines. The second output unit may generate a three-dimensional image of the current lane line using the determined three-dimensional image information, and superimpose the generated three-dimensional image on the navigation map.
In some optional implementations of the present embodiment, the information output apparatus 500 for a vehicle described above may further include a spatial model building unit (not shown in the drawings). The spatial model building unit may include a first input module (not shown), a second input module (not shown), a determination module (not shown), and an adjustment module (not shown).
The spatial model establishing unit may establish the spatial model by:
first, the first input module may obtain two-dimensional image information of a historical navigation guideline displayed on the display screen within a second preset historical time period, and then may input the two-dimensional image information of the historical navigation guideline into an initial space model to obtain three-dimensional image information of the historical navigation guideline, where the initial space model may be a model that converts two-dimensional image information of an object into three-dimensional image information by using a three-dimensional reconstruction technique, and may also be used to convert two-dimensional point coordinates of the object in an image coordinate system into three-dimensional point coordinates in a world coordinate system. The first input module may initialize a parameter of an initial spatial model based on a camera parameter of the camera. Since the camera obtains a two-dimensional image of a three-dimensional object when shooting the object, that is, three-dimensional point coordinates of the object in a world coordinate system are converted into two-dimensional point coordinates in an image coordinate system, a conversion matrix for converting the three-dimensional image into the two-dimensional image can be determined from the camera parameters, and the initial space model can be used for converting the two-dimensional image into the three-dimensional image, so that parameters of the initial space model can be determined based on the conversion matrix.
Then, the second input module may determine the reliability of identifying the lane line from the historical environmental image within the second preset historical time period, where the determination method of the reliability is substantially the same as the determination method of the reliability of identifying the current lane line from the current environmental image, and is not described herein again. In response to determining that the reliability of the lane line identified from the historical environmental image is greater than or equal to the reliability threshold, the second input module may input the two-dimensional image information of the historical lane line in the historical environmental image into the initial spatial model to obtain the three-dimensional image information of the historical lane line.
Then, the determining module may determine a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guideline, where the loss function may be a non-negative real-valued function for estimating a degree of disparity between a predicted value f (x) and a real value Y of the model, and is usually expressed by L (Y, f (x)). The loss function may include a logarithmic loss function, a quadratic loss function, an exponential loss function, and the like. How to solve the function optimal solution by using the logarithmic loss function, the quadratic loss function and the exponential loss function is common knowledge which is widely researched and applied at present, and is not described herein again.
Finally, the adjusting module may adjust parameters of the initial spatial model based on a loss function of the initial spatial model, and may use the initial spatial model after parameter adjustment as the spatial model. Specifically, the adjusting module may adjust parameters of the initial space model based on a degree of inconsistency between the three-dimensional historical lane lines and the straight lines so as to minimize the degree of inconsistency between the three-dimensional historical lane lines and the straight lines; the adjusting module may also adjust parameters of the initial space model based on a degree of non-parallelism between the at least two three-dimensional historical lane lines so as to minimize the degree of non-parallelism between the at least two three-dimensional historical lane lines; the adjusting module may further adjust parameters of the initial space model based on a degree of disparity between a vehicle traveling direction guided by the three-dimensional historical lane line and a vehicle traveling direction guided by the three-dimensional historical navigation guideline, so that the degree of disparity between the vehicle traveling direction guided by the three-dimensional historical lane line and the vehicle traveling direction guided by the three-dimensional historical navigation guideline is minimized. The adjusting module may use the initial space model after the parameter adjustment as the space model.
In some optional implementations of the present embodiment, the determining module may include a first determining sub-module (not shown in the figure) and a second determining sub-module (not shown in the figure). The first determination sub-module may first determine whether the two-dimensional history lane line indicated by the two-dimensional image information of the history lane line is a straight line, and may determine whether the three-dimensional history lane line indicated by the three-dimensional image information of the history lane line is a straight line. In response to determining that the two-dimensional historical lane lines are straight lines and the three-dimensional historical lane lines are not straight lines, the second determination submodule may determine a first loss function for the initial spatial model. If the two-dimensional historical lane line is a straight line, it can be predicted that the three-dimensional historical lane line should also be a straight line, and if the real three-dimensional historical lane line is not a straight line, the degree of inconsistency between the real three-dimensional historical lane line and the straight line can be determined, so as to determine the first loss function of the initial space model.
In some optional implementations of the present embodiment, the determining module may include a third determining sub-module (not shown in the figure) and a fourth determining sub-module (not shown in the figure). The third determination submodule may further determine whether the at least two three-dimensional history lane lines are parallel to each other in response to determining that the three-dimensional history lane lines are straight lines and the three-dimensional history lane lines are at least two. Specifically, the third determining submodule may determine whether slopes of the at least two three-dimensional historical lane lines are the same, and if the slopes of the at least two three-dimensional historical lane lines are the same, the at least two three-dimensional historical lane lines are parallel to each other, and if the slopes of the at least two three-dimensional historical lane lines are not the same, the at least two three-dimensional historical lane lines are not parallel to each other. In response to determining that the at least two three-dimensional historical lane lines are not parallel, the fourth determination submodule may determine a second loss function for the initial spatial model. If at least two three-dimensional historical lane lines exist, the at least two three-dimensional historical lane lines can be predicted to be parallel to each other, and the real at least two three-dimensional historical lane lines are not parallel to each other, so that the non-parallel degree of the real at least two three-dimensional historical lane lines can be determined, and the second loss function of the initial space model can be determined.
In some optional implementations of the present embodiment, the determining module may include a fifth determining sub-module (not shown in the figure) and a sixth determining sub-module (not shown in the figure). In response to determining that the three-dimensional historical lane line is a straight line, the fifth determining sub-module may further determine whether the vehicle traveling direction guided by the three-dimensional historical lane line coincides with the vehicle traveling direction guided by the three-dimensional historical navigation guide line indicated by the three-dimensional image information of the historical navigation guide line. The sixth determining sub-module may determine a third loss function of the initial space model in response to determining that the two directions do not coincide: if the three-dimensional historical lane line is straight, the vehicle traveling direction it guides is predicted to coincide with the direction guided by the three-dimensional historical navigation guide line, so when the two actual directions do not coincide, the degree of their disparity can be measured and used as the third loss function of the initial space model.
The sixth determining submodule may determine, in combination with the actual traveling parameter of the vehicle in the second preset history time period, a degree of disagreement between the actual traveling direction of the vehicle and the traveling direction of the vehicle guided by the three-dimensional history navigation guideline.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing a terminal device of an embodiment of the present application is shown. The electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a determining unit, and a first output unit. The names of these units do not, in some cases, constitute a limitation on the units themselves. For example, the acquisition unit may also be described as a "unit that acquires, through a camera, a current environment image containing a current lane line".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a current environment image containing a current lane line through a camera; determine whether the credibility of the current lane line identified from the current environment image is greater than or equal to a preset credibility threshold, wherein the credibility is determined based on at least one of the following: a degree of occlusion of the lane line by an obstacle, and camera parameters of the camera; and in response to the credibility being greater than or equal to the credibility threshold, input two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and output the obtained three-dimensional image information.
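The control flow just described can be sketched in a few lines. Everything here is an assumption made for illustration: the threshold value, the function names, and the callable `detector` (returning a two-dimensional lane line and its credibility) and `spatial_model` (mapping two-dimensional image information to three-dimensional image information) stand in for components the application does not specify at this level of detail:

```python
CREDIBILITY_THRESHOLD = 0.8  # preset threshold; the actual value is not given in the application

def output_lane_info(image, detector, spatial_model, threshold=CREDIBILITY_THRESHOLD):
    """Sketch of the program carried by the computer-readable medium:
    identify the current lane line, check its credibility, and project
    its two-dimensional image information into three dimensions."""
    lane_2d, credibility = detector(image)   # credibility reflects occlusion and camera parameters
    if credibility >= threshold:
        return spatial_model(lane_2d)        # three-dimensional image information
    return None                              # fall back to the navigation-guide-line branch
```

When the credibility falls below the threshold, the embodiments instead derive the three-dimensional information from the current navigation guide line and historical lane lines, which the `None` branch stands in for here.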
The foregoing description presents only preferred embodiments of the present application and illustrates the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the present application.
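One of the geometric consistency checks used for the loss function, the straightness check of claims 4 and 10, can be sketched as follows. The tolerance, the loss weight of 1.0, and the function names are assumptions for illustration; the application itself does not specify how straightness is tested:

```python
import numpy as np

def is_straight(pts: np.ndarray, tol: float = 1e-3) -> bool:
    """Treat a polyline as straight when every point lies within `tol`
    of the chord between its endpoints (works for 2-D and 3-D points)."""
    d = pts[-1] - pts[0]
    d = d / np.linalg.norm(d)
    rel = pts - pts[0]
    residual = rel - np.outer(rel @ d, d)   # component perpendicular to the chord
    return float(np.abs(residual).max()) < tol

def straightness_loss(lane_2d: np.ndarray, lane_3d: np.ndarray) -> float:
    """First loss: penalize a three-dimensional lane line that is not
    straight although its two-dimensional counterpart is."""
    return 1.0 if is_straight(lane_2d) and not is_straight(lane_3d) else 0.0
```

The parallelism check of claims 5 and 11 and the direction-agreement check of claims 6 and 12 would follow the same pattern: compare unit direction vectors of the three-dimensional lines and emit a nonzero loss when they diverge.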

Claims (14)

1. An information output method for a vehicle, wherein the vehicle includes a camera, the method comprising:
acquiring a current environment image containing a current lane line through the camera;
determining whether the credibility of the current lane line identified from the current environment image is greater than or equal to a preset credibility threshold, wherein the credibility is determined based on at least one of the following: a degree of occlusion of the lane line by an obstacle, and camera parameters of the camera;
and in response to the credibility being greater than or equal to the credibility threshold, inputting two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and outputting the obtained three-dimensional image information.
2. The method of claim 1, wherein the vehicle further comprises a display screen, the method further comprising:
in response to the credibility being less than the credibility threshold, inputting two-dimensional image information of a current navigation guide line displayed on the display screen into the spatial model to obtain three-dimensional image information of the current navigation guide line;
and determining three-dimensional image information of the current lane line based on three-dimensional image information of lane lines within a first preset historical time period and the three-dimensional image information of the current navigation guide line, and outputting the determined three-dimensional image information.
3. The method of claim 2, wherein the spatial model is obtained by:
acquiring two-dimensional image information of a historical navigation guide line displayed on the display screen within a second preset historical time period, and inputting the two-dimensional image information of the historical navigation guide line into an initial spatial model to obtain three-dimensional image information of the historical navigation guide line;
in response to determining that the credibility of a lane line identified from a historical environment image within the second preset historical time period is greater than or equal to the credibility threshold, inputting two-dimensional image information of the historical lane line in the historical environment image into the initial spatial model to obtain three-dimensional image information of the historical lane line;
determining a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line;
and adjusting parameters of the initial spatial model based on the loss function, and taking the parameter-adjusted initial spatial model as the spatial model.
4. The method of claim 3, wherein the determining a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line comprises:
determining whether a two-dimensional historical lane line indicated by the two-dimensional image information of the historical lane line is a straight line and whether a three-dimensional historical lane line indicated by the three-dimensional image information of the historical lane line is a straight line;
in response to determining that the two-dimensional historical lane lines are straight lines and the three-dimensional historical lane lines are not straight lines, determining a first loss function for the initial spatial model.
5. The method of claim 3, wherein the determining a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line comprises:
in response to determining that three-dimensional historical lane lines indicated by the three-dimensional image information of the historical lane lines are straight lines and that there are at least two three-dimensional historical lane lines, further determining whether the at least two three-dimensional historical lane lines are parallel;
in response to determining that the at least two three-dimensional historical lane lines are not parallel, determining a second loss function for the initial spatial model.
6. The method of claim 3, wherein the determining a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line comprises:
in response to determining that the three-dimensional historical lane line is a straight line, further determining whether a vehicle traveling direction guided by the three-dimensional historical lane line coincides with a vehicle traveling direction guided by a three-dimensional historical navigation guide line indicated by the three-dimensional image information of the historical navigation guide line;
and determining a third loss function of the initial spatial model in response to determining that the vehicle traveling direction guided by the three-dimensional historical lane line does not coincide with the vehicle traveling direction guided by the three-dimensional historical navigation guide line.
7. An information output apparatus for a vehicle, wherein the vehicle includes a camera, the apparatus comprising:
the acquisition unit is configured to acquire a current environment image containing a current lane line through the camera;
a determining unit configured to determine whether the credibility of the current lane line identified from the current environment image is greater than or equal to a preset credibility threshold, wherein the credibility is determined based on at least one of the following: a degree of occlusion of the lane line by an obstacle, and camera parameters of the camera;
and a first output unit configured to, in response to the credibility being greater than or equal to the credibility threshold, input two-dimensional image information of the current lane line in the current environment image into a pre-established spatial model to obtain three-dimensional image information of the current lane line, and output the obtained three-dimensional image information.
8. The apparatus of claim 7, wherein the vehicle further comprises a display screen, the apparatus further comprising:
an input unit configured to, in response to the credibility being less than the credibility threshold, input two-dimensional image information of a current navigation guide line displayed on the display screen into the spatial model to obtain three-dimensional image information of the current navigation guide line;
and the second output unit is configured to determine the three-dimensional image information of the current lane line based on the three-dimensional image information of the lane line in the first preset historical time period and the three-dimensional image information of the current navigation guide line, and output the determined three-dimensional image information.
9. The apparatus of claim 8, wherein the apparatus further comprises a spatial model building unit comprising:
a first input module configured to acquire two-dimensional image information of a historical navigation guide line displayed on the display screen within a second preset historical time period, and to input the two-dimensional image information of the historical navigation guide line into an initial spatial model to obtain three-dimensional image information of the historical navigation guide line;
a second input module configured to, in response to determining that the credibility of a lane line identified from a historical environment image within the second preset historical time period is greater than or equal to the credibility threshold, input two-dimensional image information of the historical lane line in the historical environment image into the initial spatial model to obtain three-dimensional image information of the historical lane line;
a determining module configured to determine a loss function of the initial spatial model based on the two-dimensional image information of the historical lane lines, the three-dimensional image information of the historical lane lines, and the three-dimensional image information of the historical navigation guide line;
and an adjusting module configured to adjust parameters of the initial spatial model based on the loss function, and to take the parameter-adjusted initial spatial model as the spatial model.
10. The apparatus of claim 9, wherein the determining module comprises:
a first determining submodule configured to determine whether a two-dimensional historical lane line indicated by the two-dimensional image information of the historical lane line is a straight line and whether a three-dimensional historical lane line indicated by the three-dimensional image information of the historical lane line is a straight line;
a second determination submodule configured to determine a first loss function of the initial spatial model in response to determining that the two-dimensional historical lane lines are straight lines and the three-dimensional historical lane lines are not straight lines.
11. The apparatus of claim 9, wherein the determining module comprises:
a third determining sub-module configured to, in response to determining that three-dimensional historical lane lines indicated by the three-dimensional image information of the historical lane lines are straight lines and that there are at least two three-dimensional historical lane lines, further determine whether the at least two three-dimensional historical lane lines are parallel;
a fourth determination submodule configured to determine a second loss function for the initial spatial model in response to determining that the at least two three-dimensional historical lane lines are not parallel.
12. The apparatus of claim 9, wherein the determining module comprises:
a fifth determining sub-module configured to, in response to determining that the three-dimensional historical lane line is a straight line, further determine whether a vehicle traveling direction guided by the three-dimensional historical lane line coincides with a vehicle traveling direction guided by a three-dimensional historical navigation guide line indicated by the three-dimensional image information of the historical navigation guide line;
and a sixth determining sub-module configured to determine a third loss function of the initial spatial model in response to determining that the vehicle traveling direction guided by the three-dimensional historical lane line does not coincide with the vehicle traveling direction guided by the three-dimensional historical navigation guide line.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201711139956.XA 2017-11-16 2017-11-16 Information output method and apparatus for vehicle Active CN107830869B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711139956.XA CN107830869B (en) 2017-11-16 2017-11-16 Information output method and apparatus for vehicle
PCT/CN2018/099164 WO2019095735A1 (en) 2017-11-16 2018-08-07 Information output method and device for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711139956.XA CN107830869B (en) 2017-11-16 2017-11-16 Information output method and apparatus for vehicle

Publications (2)

Publication Number Publication Date
CN107830869A CN107830869A (en) 2018-03-23
CN107830869B true CN107830869B (en) 2020-12-11

Family

ID=61651848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711139956.XA Active CN107830869B (en) 2017-11-16 2017-11-16 Information output method and apparatus for vehicle

Country Status (2)

Country Link
CN (1) CN107830869B (en)
WO (1) WO2019095735A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107830869B (en) * 2017-11-16 2020-12-11 百度在线网络技术(北京)有限公司 Information output method and apparatus for vehicle
CN109460739A (en) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 Method for detecting lane lines and device
CN113743228B (en) * 2018-12-10 2023-07-14 百度在线网络技术(北京)有限公司 Obstacle existence detection method and device based on multi-data fusion result
CN109703569B (en) * 2019-02-21 2021-07-27 百度在线网络技术(北京)有限公司 Information processing method, device and storage medium
CN109765902B (en) * 2019-02-22 2022-10-11 阿波罗智能技术(北京)有限公司 Unmanned vehicle driving reference line processing method and device and vehicle
CN112183415A (en) * 2020-09-30 2021-01-05 上汽通用五菱汽车股份有限公司 Lane line processing method, vehicle, and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3121760A1 (en) * 2015-07-20 2017-01-25 Dura Operating, LLC System and method for generating and communicating lane information from a host vehicle to a vehicle-to-vehicle network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102806913B (en) * 2011-05-31 2015-04-15 德尔福电子(苏州)有限公司 Novel lane line deviation detection method and device
CN105260699B (en) * 2015-09-10 2018-06-26 百度在线网络技术(北京)有限公司 A kind of processing method and processing device of lane line data
CN107025432B (en) * 2017-02-28 2018-08-21 合肥工业大学 A kind of efficient lane detection tracking and system
CN107830869B (en) * 2017-11-16 2020-12-11 百度在线网络技术(北京)有限公司 Information output method and apparatus for vehicle

Also Published As

Publication number Publication date
CN107830869A (en) 2018-03-23
WO2019095735A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
CN107830869B (en) Information output method and apparatus for vehicle
CN108961327B (en) Monocular depth estimation method and device, equipment and storage medium thereof
CN107941226B (en) Method and device for generating a direction guideline for a vehicle
CN109118532B (en) Visual field depth estimation method, device, equipment and storage medium
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN110231832B (en) Obstacle avoidance method and obstacle avoidance device for unmanned aerial vehicle
CN111860227A (en) Method, apparatus, and computer storage medium for training trajectory planning model
CN113483774B (en) Navigation method, navigation device, electronic equipment and readable storage medium
CN111353453B (en) Obstacle detection method and device for vehicle
US20230326055A1 (en) System and method for self-supervised monocular ground-plane extraction
CN115817463B (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN115565158B (en) Parking space detection method, device, electronic equipment and computer readable medium
US11741671B2 (en) Three-dimensional scene recreation using depth fusion
EP3842757B1 (en) Verification method and device for modeling route, unmanned vehicle, and storage medium
CN113902047A (en) Image element matching method, device, equipment and storage medium
JP7324792B2 (en) Method and apparatus for generating location information
US11620831B2 (en) Register sets of low-level features without data association
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device
CN111461982B (en) Method and apparatus for splice point cloud
CN111383337B (en) Method and device for identifying objects
CN116051832A (en) Three-dimensional labeling method and device for vehicle
CN114581602A (en) 3D imaging method and device, electronic equipment and storage medium
CN116958761A (en) Point cloud image fusion method, device, related equipment and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant