CN118004035B - Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment

Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment

Info

Publication number
CN118004035B
CN118004035B (application CN202410418832.9A)
Authority
CN
China
Prior art keywords
coordinate
vehicle
direction vector
projector
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410418832.9A
Other languages
Chinese (zh)
Other versions
CN118004035A (en)
Inventor
Wang Huanlong (王环龙)
Sun Mingfang (孙明芳)
Huang Zhihui (黄志辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanbo Technology Co ltd
Original Assignee
Wanbo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanbo Technology Co ltd
Priority to CN202410418832.9A
Publication of CN118004035A
Application granted
Publication of CN118004035B


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used
    • B60R2300/205: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used, using a head-up display
    • B60R2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an auxiliary driving method and device based on a vehicle-mounted projector, and an electronic device, relating to the technical field of vehicle navigation. The auxiliary driving method comprises the following steps: receiving a first driving video acquired by a projector camera, wherein the first driving video comprises lane lines of a road on which a preset vehicle is driving; mapping the first driving video to obtain a second driving video under the viewing angle of a driver of the preset vehicle; identifying a lane line in the second driving video, and calculating a guide line function of the lane line; generating a guide line for projection display by the vehicle-mounted projector according to the guide line function; and adjusting projection parameters of the vehicle-mounted projector, the projection parameters comprising a projection angle and a projection focal length, so that the guide line is reflected into the field of view of the driver of the preset vehicle by an electronically controlled mirror. The application can reduce the influence of lane guidance information on the driver.

Description

Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment
Technical Field
The application relates to the technical field of vehicle navigation, in particular to an auxiliary driving method and device based on a vehicle-mounted projector and electronic equipment.
Background
With the rapid development of automobile technology, driving assistance systems have become an integral part of modern automobiles. These systems integrate various sensors and cameras that are capable of sensing environmental conditions around the vehicle in real time. From vehicle status to road conditions to potential hazards, the assisted driving system can accurately convey this information to the driver, helping them to better address driving challenges.
However, although driving assistance systems are powerful, the layout of conventional in-vehicle display devices imposes limits. Typically, these display devices are located on the center console or dashboard of the vehicle, so the driver has to divert his or her line of sight to check the lane guidance, which increases the risk of distraction while driving. As vehicle-mounted technology continues to advance, how to effectively reduce the influence of lane guidance information on the driver has become an urgent problem to be solved.
Disclosure of Invention
The application provides an auxiliary driving method, an auxiliary driving device and electronic equipment based on a vehicle-mounted projector, which can reduce the influence of lane guiding information on a driver.
In a first aspect, the present application provides a driving assistance method based on a vehicle-mounted projector, applied to a driving assistance control device, wherein the driving assistance control device is connected with the vehicle-mounted projector, an electronically controlled mirror and a projector camera;
the driving assistance method includes:
receiving a first driving video acquired by the projector camera, wherein the first driving video comprises lane lines of a road on which a preset vehicle runs;
mapping the first driving video to obtain a second driving video under the viewing angle of a driver of the preset vehicle;
identifying a lane line in the second driving video, and calculating a guide line function of the lane line;
generating a guide line for projection display by the vehicle-mounted projector according to the guide line function;
and adjusting projection parameters of the vehicle-mounted projector, wherein the projection parameters comprise a projection angle and a projection focal length of the vehicle-mounted projector, so that the guide line is reflected into the field of view of the driver of the preset vehicle by the electronically controlled mirror.
By adopting the above technical solution, the lane guidance information is projected in virtual form into the driver's field of view, so that the driver can obtain it without taking attention away from the road. First, a first driving video acquired by the projector camera is received, and the lane lines of the road on which the preset vehicle is driving are mapped to the driver's viewing angle to obtain a second driving video, so that road conditions can be analyzed and subsequent driving guidance performed from the driver's viewing angle. Then, a lane line in the second driving video is identified, and a guide line function of the lane line is calculated. Next, a guide line for projection display by the vehicle-mounted projector is generated according to the guide line function, and the projection parameters of the projector are adjusted so that the guide line is reflected into the driver's field of view by the electronically controlled mirror. In this way, the driver can see the guide line directly within the driving field of view and obtain lane guidance information without moving the line of sight away from the road, which reduces the influence of the lane guidance information on the driver and improves driving safety.
Optionally, the mapping of the first driving video to obtain a second driving video under the viewing angle of the driver of the preset vehicle specifically includes:
converting the installation position of the projector camera into a preset coordinate system to obtain camera coordinates;
acquiring the focal coordinates of the projector camera under the preset coordinate system and the coordinates of the optical center of the projector camera on a first video frame;
mapping first coordinates of a first pixel point to obtain second coordinates of a corresponding second pixel point, wherein the first pixel point is any one of a plurality of pixel points contained in the first video frame, the second pixel point is any one of a plurality of pixel points contained in a second video frame, the first video frame is any one of a plurality of video frames contained in the first driving video, and the second video frame is any one of a plurality of video frames contained in the second driving video; the mapping is specifically performed on the first coordinates through the following formula:

$$ w \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & T \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad T = \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix} $$

wherein (u, v) is the second coordinate, w is a perspective scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, and R is a rotation matrix determined from (X_e, Y_e, Z_e), the binocular coordinates of the driver under the preset coordinate system, and (X_p, Y_p, Z_p), the camera coordinates.
By adopting the above technical solution, the first driving video collected by the projector camera installed on the vehicle is mapped to the viewing angle of the driver of the preset vehicle to obtain the second driving video. Through this mapping, the second driving video reflects the road conditions from the driver's viewing angle, so that the guide line generated from the lane line can be conveniently projected into the driver's field of view and fits the actual lane line more closely.
Optionally, the adjusting the projection parameters of the vehicle-mounted projector specifically includes:
acquiring light source point coordinates of the vehicle-mounted projector, first mirror coordinates of the mirror center of the electronically controlled mirror, second mirror coordinates of the center of a reflection area, and binocular coordinates of the driver, wherein the reflection area is located on the front windshield of the preset vehicle;
calculating a first direction vector according to the light source point coordinates and the first mirror coordinates:

$$ \overrightarrow{PM_1} = (x_{M1} - x_p,\; y_{M1} - y_p,\; z_{M1} - z_p) $$

wherein PM_1 is the first direction vector, (x_{M1}, y_{M1}, z_{M1}) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate;
calculating a second direction vector according to the first mirror coordinates and the second mirror coordinates:

$$ \overrightarrow{M_1M_2} = (x_{M2} - x_{M1},\; y_{M2} - y_{M1},\; z_{M2} - z_{M1}) $$

wherein M_1M_2 is the second direction vector, (x_{M1}, y_{M1}, z_{M1}) is the first mirror coordinate, and (x_{M2}, y_{M2}, z_{M2}) is the second mirror coordinate;
calculating a third direction vector according to the first direction vector and the second direction vector:

$$ \overrightarrow{PM_2} = \overrightarrow{PM_1} + \overrightarrow{M_1M_2} $$

wherein PM_2 is the third direction vector, PM_1 is the first direction vector, and M_1M_2 is the second direction vector;
calculating a fourth direction vector according to the second mirror coordinates and the binocular coordinates:

$$ \overrightarrow{M_2E} = (X_e - x_{M2},\; Y_e - y_{M2},\; Z_e - z_{M2}) $$

wherein M_2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinates, and (x_{M2}, y_{M2}, z_{M2}) is the second mirror coordinate;
calculating the projection angle according to the third direction vector and the fourth direction vector:

$$ \theta = \arccos\!\left( \frac{\overrightarrow{PM_2} \cdot \overrightarrow{M_2E}}{\lVert \overrightarrow{PM_2} \rVert \, \lVert \overrightarrow{M_2E} \rVert} \right) $$

wherein θ is the projection angle, PM_2 is the third direction vector, and M_2E is the fourth direction vector.
By adopting the above technical solution, the projection angle is calculated from parameters such as the light source point coordinates of the vehicle-mounted projector, the first mirror coordinates of the mirror center of the electronically controlled mirror, the second mirror coordinates of the center of the reflection area, and the binocular coordinates of the driver, so that the projection parameters of the vehicle-mounted projector can be adjusted and the projected guide line can be accurately reflected into the field of view of the driver of the preset vehicle.
Optionally, the adjusting the projection parameter of the vehicle-mounted projector specifically further includes:
based on the first direction vector, the second direction vector and the fourth direction vector, calculating the projection focal length according to the following formula:

$$ f = \lVert \overrightarrow{PM_1} \rVert + \lVert \overrightarrow{M_1M_2} \rVert + \lVert \overrightarrow{M_2E} \rVert $$

wherein f is the projection focal length, PM_1 is the first direction vector, M_1M_2 is the second direction vector, and M_2E is the fourth direction vector (the focal length thus corresponds to the total optical path length from the projector to the driver's eyes).
By adopting the above technical solution and calculating the projection focal length, the focal length can be adjusted according to the position of the vehicle-mounted projector and the installation angle of the mirror, ensuring that the guide line remains clearly visible when projected into the field of view of the driver of the preset vehicle. Properly adjusting the projection focal length optimizes the projection effect, so that the driver obtains clearer and more accurate information when observing the guide line, improving driving safety and comfort.
Optionally, the identifying the lane line in the second driving video specifically includes:
Preprocessing the second driving video to obtain a processed video;
extracting features of the processed video to obtain lane line color features;
performing edge detection on the region corresponding to the lane line color characteristics, and extracting a lane line region;
and extracting the inner side line of the lane line area to obtain the lane line.
By adopting the technical scheme, the second driving video is preprocessed, so that the accuracy of subsequent processing is improved. Through feature extraction, the region with lane line color features is extracted, and then the position information of the lane line can be effectively extracted by combining edge detection. And finally, obtaining final lane line information by extracting the inner line of the lane line region. Such a process flow can accurately and rapidly identify the lane line.
Optionally, the calculating of the guide line function of the lane line specifically includes:
Placing the lane line into a coordinate system to obtain a lane line curve;
extracting a plurality of coordinate points from the lane line curve;
performing fitting operation on a plurality of coordinate points through a fitting model to obtain fitting parameters;
And determining the guide line function according to the fitting parameters.
By adopting the technical scheme, the detected lane line is placed in a coordinate system to obtain a lane line curve, and a plurality of coordinate points are extracted from the lane line curve. And then, carrying out fitting operation on the coordinate points through a fitting model to obtain fitting parameters, and further determining a guide line function. Therefore, the accurate description of the shape of the lane line can be realized, a guide line which accords with the actual road condition is generated, accurate guide information is provided for a driver, and the driving safety is improved.
Optionally, the generating a guide line for projection display of the vehicle-mounted projector according to the guide line function specifically includes:
generating a plurality of drawing points according to the guide line function, wherein the drawing points are positioned on a curve corresponding to the guide line function;
And connecting a plurality of drawing points to obtain the guide line.
By adopting the technical scheme, a plurality of drawing points on the curve are calculated according to the guide line function, the points can accurately describe the shape and the direction of the guide line, and the drawing points are connected, so that a continuous guide line is formed.
In a second aspect of the present application, there is provided a driving assistance apparatus based on a vehicle-mounted projector, the apparatus being a driving assistance control device comprising an acquisition module, a processing module, an identification module, a generating module, and a projection control module, wherein:
the acquisition module is used for receiving a first driving video acquired by the projector camera, wherein the first driving video comprises lane lines of a road on which a preset vehicle runs;
The processing module is used for carrying out mapping processing on the first driving video to obtain a second driving video under the view angle of a driver of a preset vehicle;
The identification module is used for identifying a lane line in the second driving video and calculating a guide line function of the lane line;
the generating module is used for generating a guide line for projection display of the vehicle-mounted projector according to the guide line function;
the projection control module is used for adjusting projection parameters of the vehicle-mounted projector, wherein the projection parameters comprise a projection angle and a projection focal length of the vehicle-mounted projector.
Optionally, the processing module is configured to convert the installation position of the projector camera into a preset coordinate system to obtain camera coordinates;
the acquisition module is configured to acquire the focal coordinates of the projector camera under the preset coordinate system and the coordinates of the optical center of the projector camera on the first video frame;
the generating module is configured to map first coordinates of a first pixel point to obtain second coordinates of a corresponding second pixel point, wherein the first pixel point is any one of a plurality of pixel points contained in the first video frame, the second pixel point is any one of a plurality of pixel points contained in the second video frame, the first video frame is any one of a plurality of video frames contained in the first driving video, and the second video frame is any one of a plurality of video frames contained in the second driving video; the mapping is specifically performed on the first coordinates through the following formula:

$$ w \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & T \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad T = \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix} $$

wherein (u, v) is the second coordinate, w is a perspective scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, and R is a rotation matrix determined from (X_e, Y_e, Z_e), the binocular coordinates of the driver under the preset coordinate system, and (X_p, Y_p, Z_p), the camera coordinates.
Optionally, the acquisition module is configured to acquire light source point coordinates of the vehicle-mounted projector, first mirror coordinates of the mirror center of the electronically controlled mirror, second mirror coordinates of the center of a reflection area, and binocular coordinates of the driver, wherein the reflection area is located on the front windshield of the preset vehicle;
the processing module is configured to calculate a first direction vector according to the light source point coordinates and the first mirror coordinates:

$$ \overrightarrow{PM_1} = (x_{M1} - x_p,\; y_{M1} - y_p,\; z_{M1} - z_p) $$

wherein PM_1 is the first direction vector, (x_{M1}, y_{M1}, z_{M1}) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate;
the processing module is configured to calculate a second direction vector according to the first mirror coordinates and the second mirror coordinates:

$$ \overrightarrow{M_1M_2} = (x_{M2} - x_{M1},\; y_{M2} - y_{M1},\; z_{M2} - z_{M1}) $$

wherein M_1M_2 is the second direction vector, (x_{M1}, y_{M1}, z_{M1}) is the first mirror coordinate, and (x_{M2}, y_{M2}, z_{M2}) is the second mirror coordinate;
the processing module is configured to calculate a third direction vector according to the first direction vector and the second direction vector:

$$ \overrightarrow{PM_2} = \overrightarrow{PM_1} + \overrightarrow{M_1M_2} $$

wherein PM_2 is the third direction vector, PM_1 is the first direction vector, and M_1M_2 is the second direction vector;
the processing module is configured to calculate a fourth direction vector according to the second mirror coordinates and the binocular coordinates:

$$ \overrightarrow{M_2E} = (X_e - x_{M2},\; Y_e - y_{M2},\; Z_e - z_{M2}) $$

wherein M_2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinates, and (x_{M2}, y_{M2}, z_{M2}) is the second mirror coordinate;
the processing module is configured to calculate the projection angle according to the third direction vector and the fourth direction vector:

$$ \theta = \arccos\!\left( \frac{\overrightarrow{PM_2} \cdot \overrightarrow{M_2E}}{\lVert \overrightarrow{PM_2} \rVert \, \lVert \overrightarrow{M_2E} \rVert} \right) $$

wherein θ is the projection angle, PM_2 is the third direction vector, and M_2E is the fourth direction vector.
Optionally, the processing module is configured to calculate the projection focal length based on the first direction vector, the second direction vector and the fourth direction vector, the specific formula being:

$$ f = \lVert \overrightarrow{PM_1} \rVert + \lVert \overrightarrow{M_1M_2} \rVert + \lVert \overrightarrow{M_2E} \rVert $$

wherein f is the projection focal length, PM_1 is the first direction vector, M_1M_2 is the second direction vector, and M_2E is the fourth direction vector.
Optionally, the processing module is configured to perform preprocessing on the second driving video to obtain a processed video;
the recognition module is used for extracting the characteristics of the processed video to obtain lane line color characteristics;
the recognition module is used for carrying out edge detection on the area corresponding to the lane line color characteristics and extracting a lane line area;
The recognition module is used for extracting the inner line of the lane line area to obtain the lane line.
Optionally, the generating module is configured to put the lane line into a coordinate system to obtain a lane line curve;
the identification module is used for extracting a plurality of coordinate points from the lane line curve;
The processing module is used for carrying out fitting operation on a plurality of coordinate points through a fitting model to obtain fitting parameters;
and the processing module is used for determining the guide line function according to the fitting parameters.
Optionally, the generating module is configured to generate a plurality of drawing points according to the guide line function, where the drawing points are located on a curve corresponding to the guide line function;
The generating module is used for connecting a plurality of drawing points to obtain the guide line.
In a third aspect, the application provides an electronic device comprising a processor, a memory for storing instructions, a user interface and a network interface, the user interface and the network interface both being used for communicating with other devices; the processor is configured to execute the instructions stored in the memory so as to cause the electronic device to perform any one of the methods described above.
In a fourth aspect of the application, there is provided a computer-readable storage medium storing instructions which, when executed, perform any one of the methods described above.
In summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
The lane guidance information is projected in virtual form into the driver's field of view, so that the driver can obtain it without taking attention away from the road. First, a first driving video acquired by the projector camera is received, and the lane lines of the road on which the preset vehicle is driving are mapped to the driver's viewing angle to obtain a second driving video, so that road conditions can be analyzed and subsequent driving guidance performed from the driver's viewing angle. Then, a lane line in the second driving video is identified, and a guide line function of the lane line is calculated. Next, a guide line for projection display by the vehicle-mounted projector is generated according to the guide line function, and the projection parameters of the projector are adjusted so that the guide line is reflected into the driver's field of view by the electronically controlled mirror. In this way, the driver can see the guide line directly within the driving field of view and obtain lane guidance information without moving the line of sight away from the road, which reduces the influence of the lane guidance information on the driver and improves driving safety.
Drawings
FIG. 1 is a schematic diagram of a vehicle projector and an electronically controlled mirror according to an embodiment of the present application;
FIG. 2 is a schematic diagram of another vehicle projector and electronically controlled mirror according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an auxiliary driving method based on a vehicle-mounted projector according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a vehicle HUD effect disclosed in an embodiment of the application;
FIG. 5 is a schematic diagram of a light reflection path of a vehicle projector according to an embodiment of the present application;
FIG. 6 is a schematic block diagram of a driving assistance device based on a vehicle projector according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 601. an acquisition module; 602. a processing module; 603. an identification module; 604. a generating module; 605. a projection control module; 701. a processor; 702. a communication bus; 703. a user interface; 704. a network interface; 705. a memory.
Detailed Description
In order that those skilled in the art will better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments.
In describing embodiments of the present application, words such as "such as" or "for example" are used to indicate examples, illustrations, or descriptions. Any embodiment or design described with "such as" or "for example" in the embodiments of the application should not be construed as preferred or more advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of embodiments of the application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The auxiliary driving system integrates various sensors and cameras, can sense the surrounding environment of the vehicle in real time, and provides information such as vehicle state, road condition and potential danger for a driver. However, limitations of the conventional in-vehicle display device layout result in the need for the driver to divert the line of sight while viewing the navigation information, increasing the risk of distraction. The key to solving this problem is how to effectively reduce the influence of the lane guidance information on the driver, thereby improving the safety and comfort of the driving process.
A head-up display (HUD), also called a head-up display system, is a multifunctional instrument display centered on the vehicle driver that can be read without looking down. It projects important vehicle information in a transparent or semi-transparent manner into the driver's field of view, usually onto the front windshield. A HUD can display driving information such as vehicle speed, navigation instructions, steering prompts, and lane departure warnings, so that the driver can obtain important information without moving the line of sight from the road, improving driving safety and comfort. HUD technology is widely applied in automobiles, airplanes, and other vehicles, and has become an important mode of modern driving assistance and information display. Typically, HUD technology requires factory assembly at the time of vehicle manufacture, because a HUD system involves multiple aspects of vehicle design, electronics, and the windshield, and must be matched to the overall design and construction of the vehicle. Therefore, for most conventional fuel vehicles that leave the factory without a HUD function, it is generally impossible to retrofit an existing head-up display device at a later stage.
This embodiment discloses an auxiliary driving method based on a vehicle-mounted projector, applied to an auxiliary driving control device, wherein the auxiliary driving control device is connected with the vehicle-mounted projector, an electronically controlled mirror and a projector camera. Referring to FIG. 1, the vehicle-mounted projector is arranged on the roof of a preset vehicle with its lens facing the front of the vehicle. Referring to FIG. 2, the vehicle-mounted projector is disposed between the driver's seat and the front passenger seat, which prevents the driver's head from blocking the projector light and degrading the projection effect.
The auxiliary driving control device is the control system that controls the vehicle-mounted projector in the present application, and can also be understood as a central controller. It controls the projection angle and projection focal length of the vehicle-mounted projector according to the video collected by the projector camera and positioning data from a global navigation satellite system such as GPS. The specific auxiliary driving method is described in detail below.
A base is fixedly mounted on the roof, the vehicle-mounted projector is connected with the base through a movable connecting structure, and the overall orientation of the vehicle-mounted projector is controlled through an electronically controlled structure, thereby steering the projector lens. The projector camera is mounted on the vehicle-mounted projector with its lens facing the front of the vehicle. The electronically controlled mirror is mounted on the instrument panel of the preset vehicle with its mirror surface generally facing the driver, and is used to reflect the light of the vehicle-mounted projector. Referring to FIG. 1, the light of the vehicle-mounted projector is first reflected by the electronically controlled mirror to a reflection area on the front windshield of the preset vehicle, and is then reflected a second time by the front windshield into the driver's field of view. Preferably, a semi-transparent film with low light transmittance is adhered to the inner side of the glass in the reflection area of the front windshield to increase the brightness of the reflected light and enhance the projection effect. In this way, the vehicle-mounted projector, connected to the navigation system, can achieve a HUD effect even when the vehicle leaves the factory without a built-in HUD function.
Referring to FIG. 3, the auxiliary driving method includes the following steps S110 to S150:
s110, receiving a first driving video acquired by a projector camera.
Referring to FIG. 1, since the vehicle-mounted projector is fixed to the roof of the preset vehicle and the projector camera is mounted on the vehicle-mounted projector, the projector camera collects driving video of the road directly in front of the preset vehicle in real time while the vehicle is driving; when the preset vehicle travels along a road on which lane lines are drawn, the lane lines of that road will be included in the first driving video collected by the projector camera. During acquisition, the projector camera transmits the first driving video to the auxiliary driving control device.
S120, mapping the first driving video to obtain a second driving video under the viewing angle of the driver of the preset vehicle.
Referring to FIG. 2, since the vehicle-mounted projector is installed between the driver's seat and the front passenger seat and the projector camera is installed above the vehicle-mounted projector, there is a slight distortion between the lane lines in the first driving video collected by the projector camera and the lane lines as seen from the driver's viewing angle. For example, the curvature of a lane line from the driver's viewing angle may differ slightly from its curvature in the first driving video captured by the projector camera. It is therefore necessary to first apply a perspective transformation to the first driving video, projecting the image from the projector camera's viewing angle to the driver's viewing angle to correct the viewing angle of the image.
If a camera were installed at the driver's position, the video collected by that camera could be directly compared with the video collected by the projector camera, and the parameters of the perspective transformation could be constructed from the shape difference of the same feature in the two videos. In the present application, because no camera is installed at the driver's position, video from the driver's viewing angle cannot be obtained, and the parameters of the perspective transformation can only be constructed from the relative positional relationship between the projector camera and the driver's eyes.
Specifically, when installing the vehicle-mounted projector and the electronically controlled mirror, the coordinates of both must be manually measured under a certain reference coordinate system; under this reference coordinate system, the position of the vehicle-mounted projector, the position of the electronically controlled mirror, and the position of the driver are all fixed. The reference coordinate system is preferably established based on the preset vehicle, for example with a certain point on the roof as the origin; the directions of the coordinate axes can be selected according to the actual situation.
First, the space inside the vehicle is scanned by a lidar to determine detailed dimensional data of the interior, including the relative positions and distances between different points in the vehicle. After the vehicle-mounted projector is installed, a laser rangefinder is used to measure the distance from a certain point on the projector (such as a point on a corner contour) to a plurality of feature points in the vehicle, so as to mark the installation position of the vehicle-mounted projector in the model. The installation position may be taken as the coordinates of any point on the vehicle-mounted projector under the reference coordinate system. To ensure accuracy, measurements are taken against multiple feature points and repeated several times to obtain an average value, reducing errors. The installation position of the electronically controlled mirror can be acquired in the same way, which yields the relative position of the vehicle-mounted projector and the electronically controlled mirror. Further, since the projector camera is mounted on the vehicle-mounted projector, the coordinates of the projector camera under the reference coordinate system can also be obtained by measuring the relative position of the two. It should be noted that although the coordinates of each device obtained by the above method may contain a certain error, this error is within the allowable range for the calculation because the subsequent projectable range is large.
In one possible implementation, the mapping of the first driving video to obtain the second driving video under the viewing angle of the driver of the preset vehicle needs to be calculated by combining the internal parameters and the external parameters of the projector camera.
Specifically, the installation position of the projector camera first needs to be converted into a preset coordinate system to obtain the corresponding camera coordinates, where the preset coordinate system is the unified coordinate system used in subsequent calculations. The preset coordinate system may also be the reference coordinate system established based on the preset vehicle, in which case no coordinate conversion of the projector camera is needed.
The optical center of the projector camera refers to the intersection point of the optical axis of the camera and the imaging plane, i.e., the center point of the optical system. In an ideal case, the optical center is located at the center of the imaging plane, and the optical axis is perpendicular to the imaging plane. The coordinates of the optical center are usually expressed in pixel units, with the upper left corner of the image as the origin, the right direction as the positive X-axis direction, and the downward direction as the positive Y-axis direction. The coordinates of the optical center on the first video frame may be obtained by means of camera calibration. Camera calibration is the estimation of the camera's internal and external parameters, including the coordinates of the optical center on the image, by taking a set of calibration plate images of known world coordinates and analyzing these images.
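For illustration only (this sketch is not part of the patent disclosure), the chessboard-based calibration alluded to above could look roughly as follows in Python with OpenCV; the board dimensions, square size, and image source are assumptions:

```python
import cv2
import numpy as np

def calibrate_camera(images, board=(9, 6), square_m=0.025):
    """Estimate camera intrinsics, including the optical center, from chessboard views."""
    # 3D corner positions of the calibration board in its own plane (Z = 0)
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m

    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # K holds the focal coordinates (f_x, f_y) on its diagonal and the
    # optical center (c_x, c_y) in its last column
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```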
The focal point of the projector camera is a specific point in its optical system. It generally refers to the position at which the camera's lens or optics converge light: when light passes through the lens or mirror, it is focused to this point, forming a sharp image. An object at the focal point appears sharp, while objects away from it appear blurred. The focal coordinates under the preset coordinate system are calculated from the actual relative position of the focal point with respect to the projector camera and the coordinates of the projector camera under the preset coordinate system. Since this calculation only involves the relative position of two points and a coordinate transformation, it is not further described here.
To obtain the binocular coordinates of the driver under the preset coordinate system, the position of the driver's head must first be marked in the model constructed from the preset vehicle. Because drivers differ in height and their seating positions also vary to some extent, a centered average position can be selected when calibrating the position of the driver's head, and the approximate position of the driver's eyes in the preset vehicle (in the reference coordinate system) is then determined from the head position. Similarly, the binocular coordinates of the driver under the preset coordinate system are calculated according to the conversion parameters between the preset coordinate system and the reference coordinate system.
A projection matrix is constructed based on the internal and external parameters of the projector camera, and the second driving video is obtained by applying a perspective transformation to the first driving video, calculated according to the following mapping formula:

$$ w \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & T \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} $$

wherein (u, v) is the second coordinate, w is a perspective scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, T = (T_x, T_y, T_z)^T is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, and R is a rotation matrix determined from (X_e, Y_e, Z_e), the binocular coordinates of the driver under the preset coordinate system, and (X_p, Y_p, Z_p), the camera coordinates.
The first coordinate is the coordinate of a first pixel point in the first driving video, and the second coordinate is the coordinate of the corresponding second pixel point in the second driving video; the first pixel point is any one of the plurality of pixel points contained in a first video frame, the second pixel point is any one of the plurality of pixel points contained in a second video frame, the first video frame is any one of the plurality of video frames contained in the first driving video, and the second video frame is any one of the plurality of video frames contained in the second driving video.
The camera projection model describes how a camera projects points in three-dimensional space onto a two-dimensional image plane. The most common projection model is the perspective projection model, which can be described by the focal length and optical center of the camera. The perspective projection model assumes that light rays originate from the optical center, pass through the focal point, and are then projected onto the image plane for imaging. Based on the camera projection model, points in the world coordinate system are first transformed into the camera coordinate system through the camera extrinsic matrix, then projected through the camera intrinsic matrix, finally yielding points on the image plane.
However, since the second pixel point is a point on the image plane, normally a coordinate point in an image coordinate system, it must be converted to a point in the normalized camera coordinate system before points in three-dimensional space can be related to the two-dimensional image plane. In projector camera imaging, the image coordinate system is the pixel coordinate system on the camera sensor, and the camera coordinate system is the normalized coordinate system of the camera; that is, pixel coordinates are converted to points in a camera coordinate system whose origin is the optical center and whose unit length is the focal length. The optical center of the camera is usually taken as the origin, the optical axis of the camera is the z-axis, and the u' and v' coordinates on the image plane are parallel to the X and Y axes respectively. Specifically, the conversion is performed by the following formula:

$$ X = \frac{u' - c_x}{f_x}\, Z, \qquad Y = \frac{v' - c_y}{f_y}\, Z $$

wherein (X, Y, Z) is the first coordinate, (u', v') is the coordinate of the second pixel point in the second driving video, (f_x, f_y) is the focal coordinate, and (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame.
The intrinsic (internal reference) matrix contains the internal parameters of the projector camera, including the focal length and the optical center, and usually takes the following form:

$$ K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} $$

where f_x and f_y are the horizontal and vertical focal lengths of the camera, and c_x and c_y are the position of the optical center on the image plane.
In the above mapping formula, w is the scaling factor of the perspective transformation, generally used to convert homogeneous coordinates into non-homogeneous coordinates. Dividing by w when converting homogeneous coordinates into non-homogeneous coordinates ensures that no information is lost. Typically w is normalized to 1, so the transformed image coordinates (u, v) can be obtained from (u, v, w) by dividing out w.
The mapping formula includes the following rotation matrix:

$$ R = \begin{pmatrix} X_c \cdot X_p & Y_c \cdot X_p & Z_c \cdot X_p \\ X_c \cdot Y_p & Y_c \cdot Y_p & Z_c \cdot Y_p \\ X_c \cdot Z_p & Y_c \cdot Z_p & Z_c \cdot Z_p \end{pmatrix} $$

wherein R is the rotation matrix, whose entries are dot products between the basis vectors (X_c, Y_c, Z_c) of the camera coordinate system and the basis vectors (X_p, Y_p, Z_p) of the preset coordinate system. The rotation matrix R is a 3x3 matrix representing the rotation relationship from one coordinate system to another; here it describes the conversion of the camera coordinates in the preset coordinate system to the driver's binocular viewpoint in the preset coordinate system. Each row of the rotation matrix is the representation of one basis vector of the camera coordinate system in the preset coordinate system: for example, the first row (X_c · X_p, Y_c · X_p, Z_c · X_p) represents the X-axis unit vector of the camera coordinate system expressed in the preset coordinate system; similarly, the second and third rows represent the Y-axis and Z-axis unit vectors. Through this rotation matrix, the coordinates of a point in the camera frame can be converted into the driver's binocular frame under the preset coordinate system.
During camera calibration, the position of the camera in the world coordinate system can be estimated by aligning the camera with known spatial points and observing their projected positions in the image. This process may be accomplished using calibration plates or other objects of known shape. By observing spatial points at a plurality of different positions, the position and pose information of the camera, including the translation vector, can be obtained. The translation vector is typically a three-dimensional vector describing the position of the camera in the world coordinate system. In the projector camera extrinsic matrix, the translation vector is typically represented as follows:

$$ T = \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix} $$

wherein (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera.
Based on the above mapping formula, the first coordinates of each first pixel point in each first video frame of the first driving video are converted to obtain the second coordinates of each second pixel point in the corresponding second video frame, and the second driving video is then synthesized.
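As an illustrative aid rather than the patented implementation itself, the following is a minimal NumPy sketch of the per-point mapping under the pinhole model reconstructed above; the intrinsic values, rotation matrix, and installation offset are hypothetical placeholders:

```python
import numpy as np

def map_first_to_second(K, R, T, p):
    """Map a first coordinate (X, Y, Z) to a second coordinate (u, v).

    Implements w * [u, v, 1]^T = K [R | T] [X, Y, Z, 1]^T, then divides out w.
    """
    p_cam = R @ p + T        # apply extrinsics: rotation plus installation offset
    uvw = K @ p_cam          # apply intrinsics: homogeneous image coordinates
    return uvw[:2] / uvw[2]  # divide by the scaling factor w to get (u, v)

# Hypothetical parameter values for illustration only
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])      # focal coordinates (f_x, f_y) and optical center (c_x, c_y)
R = np.eye(3)                        # assumed rotation from camera frame to driver viewpoint
T = np.array([0.10, -0.45, 0.00])    # assumed installation offset (T_x, T_y, T_z)

print(map_first_to_second(K, R, T, np.array([2.0, 0.0, 10.0])))
```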
S130, identifying the lane line in the second driving video, and calculating a guide line function of the lane line.
In one possible implementation manner, the identifying the lane line in the second driving video specifically includes: preprocessing the second driving video to obtain a processed video; extracting features of the processed video to obtain lane line color features; performing edge detection on the region corresponding to the lane line color characteristics, and extracting a lane line region; and extracting an inner side line of the lane line area to obtain the lane line.
Specifically, in real conditions the lane line is usually a line whose color or brightness clearly differs from the road background: lane lines are usually white or yellow while the road background is black or gray, so the lane lines in the second driving video can be quickly identified by color feature extraction. Before feature extraction, the second driving video usually needs to be preprocessed to reduce noise and optimize image quality. The preprocessing steps may include denoising, color space conversion (e.g. converting an RGB image to a grayscale image), image enhancement, and the like. This is a conventional technical means and is not further described here.
Color features of the lane lines are extracted from the video frames of the preprocessed video using image processing techniques. A color space conversion (such as to HSV) can be adopted, and pixel areas with lane-line color features are extracted by setting a threshold between the lane-line color values and the road color values, so that the computer can automatically filter out the road areas and retain the areas corresponding to the lane-line color features. Edge detection is then performed on each image area obtained by the feature extraction (the area corresponding to each lane line). A commonly used edge detection algorithm is the Canny edge detector, which can find the edge pixels in the image. From the result of the edge detection, edge regions that may represent lane lines are found; straight lines can be detected with the Hough transform, lines with lane-line characteristics are selected from the Hough transform output, and the lane line is further extracted from the lane-line region obtained by edge detection. This can be achieved by fitting a straight line or a curve: for straight lane lines, a least squares fit can be used and the best-fitting line found from the edge detection results; for curved lane lines, a polynomial curve can be fitted through the continuous edge pixel points in the image to obtain the curvature and shape information of the lane line. Since a lane-line marking usually comprises two long sides and two short sides, with the long sides forming an inner side line and an outer side line, either side line can be used to calculate the guide line function; the inner side line is preferably used as the lane line.
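A minimal OpenCV sketch of this pipeline (preprocessing, HSV color thresholding, Canny edge detection, and a probabilistic Hough transform) follows; all threshold values are illustrative assumptions, not values specified by the patent:

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Return candidate lane-line segments (x1, y1, x2, y2) from one video frame."""
    # Preprocessing: denoise before color analysis
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # Color feature extraction: masks for white and yellow lane markings
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)

    # Edge detection restricted to the color-feature regions
    edges = cv2.Canny(mask, 50, 150)

    # Hough transform to find straight segments with lane-line characteristics
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines[:, 0]
```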
In one possible implementation, calculating the guide line function of the lane line specifically includes: placing the lane line into a coordinate system to obtain a lane line curve; extracting a plurality of coordinate points from the lane line curve; performing a fitting operation on the plurality of coordinate points through a fitting model to obtain fitting parameters; and determining the guide line function according to the fitting parameters.
Specifically, the detected lane lines are first converted from the image coordinate system to the real-world coordinate system using the internal and external parameters of the camera and the motion state of the vehicle. Once the lane lines are placed in the real-world coordinate system, the lane line curve can be obtained. For a straight lane line, the equation of the line can be obtained directly; for a curved lane line, a series of coordinate points representing the shape of the curve in the real world can be obtained by sampling the curve at equal intervals. These coordinate points represent the shape of the lane line; for a straight lane line, two points on the line can be determined from the coordinates of its start point and end point. A fitting operation is then performed on the coordinate points through a fitting model to obtain fitting parameters: for curved lane lines, polynomial fitting, spline fitting, or other curve-fitting models may be used, and common fitting methods include least squares fitting and optimization algorithms that minimize the sum of squared residuals; for straight lane lines, the two-point form or the intercept form can be used directly to fit the line. From the fitting parameters, the fitting function of the lane line is obtained: for a curved lane line the fitting function is a polynomial or other curve equation, and for a straight lane line it is a line equation. The guide line function is obtained from the fitting function; it describes the course of the lane line in front of the vehicle and can subsequently be used to generate the corresponding virtual guide line.
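As a sketch of the fitting step, assuming a quadratic polynomial as the fitting model (the patent does not fix the model or the polynomial order):

```python
import numpy as np

# Coordinate points extracted from the lane-line curve (illustrative values, metres)
xs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
ys = np.array([0.00, 0.12, 0.50, 1.10, 2.00])

# Least-squares fitting operation; the coefficients are the fitting parameters
fit_params = np.polyfit(xs, ys, deg=2)

# Guide line function determined by the fitting parameters
guide_line = np.poly1d(fit_params)

print(guide_line(12.0))  # lateral offset of the guide line 12 m ahead
```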
S140, generating a guide line for projection display by the vehicle-mounted projector according to the guide line function.
In one possible implementation, a plurality of drawing points are generated according to the guide line function, and the drawing points are positioned on a curve corresponding to the guide line function; and connecting the plurality of drawing points to obtain a guide line.
Specifically, a series of discrete abscissa (X) values is determined from the guide line function; these can be sampled uniformly over a range in front of the vehicle, or adjusted as desired, and represent horizontal positions on the guide line. Each abscissa value is substituted into the guide line function to calculate the corresponding ordinate (Y) value, i.e. the vertical position on the guide line, which yields the coordinates of a series of drawing points. The drawing points are connected in order to obtain the track of the guide line: for a curved guide line, a curve connection method such as a Bezier curve or spline curve can be used to achieve a smooth connection; for a straight guide line, the drawing points are connected directly. At this stage the guide line is a straight line or curve with no width, so the drawing operation can be further realized with a graphics drawing library (such as OpenCV or Matplotlib): according to the coordinates of the drawing points, a line or curve drawing function is used to draw a guide line with width, which facilitates subsequent projection.
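A short sketch of generating drawing points from the guide line function and drawing a guide line with width using OpenCV; the sampling range and the scaling from road coordinates to projector pixels are placeholders:

```python
import cv2
import numpy as np

def render_guide_line(guide_line, width_px=1280, height_px=720):
    """Sample drawing points on the guide-line curve and connect them."""
    xs = np.linspace(0.0, 20.0, num=50)  # sampled abscissa values (assumed range, metres)
    ys = guide_line(xs)                  # ordinate values from the guide line function

    # Placeholder mapping from road coordinates to projector pixel coordinates
    px = (xs * 40 + 300).astype(np.int32)
    py = (height_px - ys * 100 - 100).astype(np.int32)
    pts = np.stack([px, py], axis=1).reshape(-1, 1, 2)

    canvas = np.zeros((height_px, width_px, 3), dtype=np.uint8)
    cv2.polylines(canvas, [pts], isClosed=False, color=(0, 255, 0), thickness=8)
    return canvas

frame = render_guide_line(np.poly1d([0.005, 0.0, 0.0]))  # hypothetical fitted function
```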
S150, adjusting the projection parameters of the vehicle-mounted projector so that the guide line is reflected into the field of view of the driver of the preset vehicle by the electronically controlled mirror.
Referring to FIG. 4, a HUD generates virtual guide lines and virtual guide arrows corresponding to the actual lane lines and guide arrows based on navigation information, and displays them in the HUD display area of the vehicle's front windshield. When there are many lane lines and the road conditions are complex, it is difficult for the driver to quickly select a suitable lane according to the guide lines, which may distract the driver and affect driving safety.
Therefore, after the virtual guide line is generated according to the actual lane line, the projection parameters of the vehicle-mounted projector must also be adjusted, including the projection angle and the projection focal length, so that the guide line at the driver's viewing angle can be almost superimposed on the lane line.
Referring to FIG. 1, since the light of the vehicle-mounted projector is first reflected by the electronically controlled mirror to the reflection area of the front windshield and then reflected a second time to the driver's eyes, it is necessary, before calculating the projection angle, to acquire the light source point coordinates of the vehicle-mounted projector, the first mirror coordinates of the mirror center of the electronically controlled mirror, the second mirror coordinates of the center of the reflection area, and the binocular coordinates of the driver.
Referring to fig. 5, after the light of the vehicle-mounted projector is reflected multiple times, its reflection path forms a triangle, so the direction vectors of the light path can be calculated from the coordinates of these points, and the projection angle of the vehicle-mounted projector is finally obtained through an inverse trigonometric function.
First, according to the light source point coordinates and the first mirror coordinates, a first direction vector (representing the direction of the light emitted by the vehicle-mounted projector) is calculated:

PM1 = (x_M1 - x_p, y_M1 - y_p, z_M1 - z_p)

wherein PM1 is the first direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate;
then, a second direction vector (representing the direction of the light after being reflected by the electronically controlled mirror) is calculated according to the first mirror coordinates and the second mirror coordinates:

M1M2 = (x_M2 - x_M1, y_M2 - y_M1, z_M2 - z_M1)

wherein M1M2 is the second direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
then, a third direction vector (representing the overall direction in which the light reaches the reflection area after being reflected by the electronically controlled mirror) is calculated according to the first direction vector and the second direction vector:

PM2 = PM1 + M1M2

wherein PM2 is the third direction vector, PM1 is the first direction vector, and M1M2 is the second direction vector;
then, a fourth direction vector (representing the direction in which the light reaches the driver after being reflected by the reflection area) is calculated according to the second mirror coordinates and the binocular coordinates:

M2E = (X_e - x_M2, Y_e - y_M2, Z_e - z_M2)

wherein M2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
The light source point coordinates of the vehicle-mounted projector refer to the position coordinates of the light source inside the projector under the preset coordinate system. The light source point is typically a bulb or LED inside the projector that generates light and projects it out through an optical system to form the projection image. The location of the light source point is usually determined by the projector manufacturer during design and manufacturing, and is recorded and used as an internal parameter of the projector. This coordinate is typically expressed relative to a reference or center point of the projector's internal optics rather than as an absolute coordinate in the outside world, so it must be combined with the position of the vehicle-mounted projector under the preset coordinate system to determine the light source point coordinates under the preset coordinate system. The first mirror coordinates are the coordinates of the mirror center of the electronically controlled mirror under the preset coordinate system, the second mirror coordinates are the coordinates of the center of the reflection area under the preset coordinate system, and the binocular coordinates are the coordinates of the driver's eyes under the preset coordinate system. All of these are coordinates under the same reference coordinate system; the specific acquisition principles and methods are described in detail in the above steps and are not repeated here.
Finally, the inner product of the direction vector PM2 from the vehicle-mounted projector to the reflection area and the direction vector M2E from the reflection area to the driver's eyes is calculated using the vector inner product formula; the moduli of the two vectors are calculated using the vector modulus formula; and the inner product is divided by the product of the two moduli to obtain the cosine of the included angle.
The projection angle is calculated according to the third direction vector and the fourth direction vector:

cos θ = (PM2 · M2E) / (|PM2| · |M2E|); θ = arccos((PM2 · M2E) / (|PM2| · |M2E|))

wherein PM2 is the third direction vector, M2E is the fourth direction vector, and θ is the projection angle.
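As a concrete check of the vector chain and angle computation above, the following Python sketch reproduces the four direction vectors and the arccos step; all coordinates are invented placeholder values chosen only for illustration.

```python
import numpy as np

# Illustrative positions in a common reference coordinate system (metres).
P  = np.array([0.60, 0.00, 0.80])   # light source point of the projector
M1 = np.array([0.50, 0.20, 0.95])   # mirror centre of the electronically controlled mirror
M2 = np.array([0.40, 0.60, 1.20])   # centre of the reflection area on the windshield
E  = np.array([0.45, -0.30, 1.25])  # driver's binocular (eye) position

PM1  = M1 - P      # first direction vector: projector -> electronically controlled mirror
M1M2 = M2 - M1     # second direction vector: mirror -> reflection area
PM2  = PM1 + M1M2  # third direction vector: composite path from projector to reflection area
M2E  = E - M2      # fourth direction vector: reflection area -> driver's eyes

# Projection angle from the inner product divided by the product of the moduli.
cos_theta = np.dot(PM2, M2E) / (np.linalg.norm(PM2) * np.linalg.norm(M2E))
theta = np.degrees(np.arccos(cos_theta))
print(f"projection angle: {theta:.2f} degrees")
```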
Further, the projection focal length is calculated based on the first direction vector, the second direction vector and the fourth direction vector, with the specific calculation formula:

1/f = 1/|PM1| + 1/|M1M2| + 1/|M2E|

where f is the projection focal length, PM1 is the first direction vector, M1M2 is the second direction vector, and M2E is the fourth direction vector.
Based on the convex lens formula of optical imaging, the projection focal length f can be obtained from PM1, M1M2 and M2E. The magnitude of PM1 represents the distance from the vehicle-mounted projector to the electronically controlled mirror; the magnitude of M1M2 represents the path length of the light between the electronically controlled mirror and the reflection area; and the magnitude of M2E represents the distance of the light from the reflection area to the driver's eyes. The projection focal length f is the distance at which the lens focuses the light, and it depends on the path lengths from the projector to the electronically controlled mirror, from the electronically controlled mirror to the reflection area, and from the reflection area to the driver. The reciprocals of the three path lengths are summed because, in the convex lens formula, the reciprocal of the focal length equals the sum of the reciprocals of these distances.
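The reciprocal relation for the projection focal length can be evaluated in the same way; the vectors below are the illustrative placeholders from the angle sketch, not values from the patent.

```python
import numpy as np

# Illustrative path-segment vectors (same placeholder geometry as above).
PM1  = np.array([-0.10, 0.20, 0.15])  # projector -> electronically controlled mirror
M1M2 = np.array([-0.10, 0.40, 0.25])  # mirror -> reflection area
M2E  = np.array([0.05, -0.90, 0.05])  # reflection area -> driver's eyes

# Convex-lens-style relation: the reciprocal of the projection focal length
# is the sum of the reciprocals of the three path lengths.
f = 1.0 / (1.0 / np.linalg.norm(PM1)
           + 1.0 / np.linalg.norm(M1M2)
           + 1.0 / np.linalg.norm(M2E))
print(f"projection focal length: {f:.3f} m")
```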
By adopting the technical scheme of the application, lane guidance information is projected into the driver's field of view in virtual form, so that the driver can obtain it without taking attention away from the road. First, a first driving video captured by the projector camera is received, and the lane lines of the road on which the preset vehicle is traveling are mapped to the driver's viewing angle to obtain a second driving video, so that road conditions can be analyzed and subsequent driving guidance performed from the driver's perspective. Then, the lane lines in the second driving video are identified and their guide line function is calculated. Next, a guide line for projection display by the vehicle-mounted projector is generated according to the guide line function, and the projection parameters of the projector are adjusted so that the guide line is reflected into the driver's field of view via the electronically controlled mirror. In this way, the driver sees the guide line directly within the driving field of view and can obtain lane guidance without moving the line of sight away from the road, reducing the distraction caused by lane guidance information and improving driving safety.
The embodiment also discloses a driving assistance device based on a vehicle-mounted projector, the device being a driving assistance control apparatus; referring to fig. 6, the device includes an acquisition module 601, a processing module 602, an identification module 603, a generation module 604 and a projection control module 605, wherein:
the acquiring module 601 is configured to receive a first driving video acquired by a projector camera, where the first driving video includes a lane line of a road on which a preset vehicle is driving.
The processing module 602 is configured to map the first driving video to obtain a second driving video under a preset viewing angle of a driver of the vehicle.
The identifying module 603 is configured to identify a lane line in the second driving video, and calculate a guide line function of the lane line.
And the generating module 604 is used for generating a guide line for the projection display of the vehicle-mounted projector according to the guide line function.
The projection control module 605 is configured to adjust projection parameters of the vehicle-mounted projector, where the projection parameters include a projection angle and a projection focal length of the vehicle-mounted projector.
In a possible implementation, the processing module 602 is configured to convert the installation position of the projector camera under a preset coordinate system to obtain the camera coordinates.
The acquiring module 601 is configured to acquire a focal coordinate of the projector camera under a preset coordinate system and a coordinate of an optical center of the projector camera on the first video frame.
The generating module 604 is configured to map the first coordinate of the first pixel to obtain a second coordinate corresponding to a second pixel, where the first pixel is any one of a plurality of pixels included in the first video frame, the second pixel is any one of a plurality of pixels included in the second video frame, the first video frame is any one of a plurality of video frames included in the first driving video, and the second video frame is any one of a plurality of video frames included in the second driving video. The mapping process is specifically performed on the first coordinate through the following formula:
wherein (u, v) is the second coordinate, w is a scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, (X_e, Y_e, Z_e) is the binocular coordinate of the driver under the preset coordinate system, and (X_p, Y_p, Z_p) is the camera coordinate.
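The exact mapping formula referenced above appears as an image in the original publication and is not reproduced in this text. Purely as a hedged illustration of how the named quantities could interact, the following sketch re-projects a point through a standard pinhole model with the viewpoint shifted from the camera position to the driver's eyes; the composition of the offsets is an assumption for this example, not the patent's formula.

```python
import numpy as np

# Illustrative values for the quantities named in the text.
fx, fy = 800.0, 800.0                 # focal coordinates
cx, cy = 320.0, 240.0                 # optical centre on the video frame
T = np.array([0.10, 0.00, 1.20])      # camera installation translation
E = np.array([0.45, -0.30, 1.25])     # driver's binocular coordinates
P_cam = np.array([0.60, 0.55, 1.40])  # camera coordinates

def remap_point(X, Y, Z):
    """Re-project a world point into a virtual camera placed at the
    driver's eye position (hypothetical stand-in for the patent's own
    mapping formula)."""
    p = np.array([X, Y, Z]) + T  # express the point in the reference frame
    p_eye = p + (E - P_cam)      # shift the viewpoint from camera to eyes
    w = p_eye[2]                 # scaling factor (depth)
    u = fx * p_eye[0] / w + cx
    v = fy * p_eye[1] / w + cy
    return u, v

print(remap_point(2.0, 0.5, 10.0))
```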
In one possible implementation, the acquiring module 601 is configured to acquire a light source point coordinate of the vehicle-mounted projector, a first mirror coordinate of the mirror center of the electronically controlled mirror, a second mirror coordinate of the center of a reflection area, and a binocular coordinate of the driver, where the reflection area is located on a front windshield of the preset vehicle.
The processing module 602 is configured to calculate a first direction vector according to the light source point coordinates and the first mirror coordinates:

PM1 = (x_M1 - x_p, y_M1 - y_p, z_M1 - z_p)

wherein PM1 is the first direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate.
The processing module 602 is configured to calculate a second direction vector according to the first mirror coordinates and the second mirror coordinates:

M1M2 = (x_M2 - x_M1, y_M2 - y_M1, z_M2 - z_M1)

wherein M1M2 is the second direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate.
The processing module 602 is configured to calculate a third direction vector according to the first direction vector and the second direction vector:

PM2 = PM1 + M1M2

wherein PM2 is the third direction vector, PM1 is the first direction vector, and M1M2 is the second direction vector.
The processing module 602 is configured to calculate a fourth direction vector according to the second mirror coordinates and the binocular coordinates:

M2E = (X_e - x_M2, Y_e - y_M2, Z_e - z_M2)

wherein M2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate.
The processing module 602 is configured to calculate the projection angle according to the third direction vector and the fourth direction vector:

cos θ = (PM2 · M2E) / (|PM2| · |M2E|); θ = arccos((PM2 · M2E) / (|PM2| · |M2E|))

wherein PM2 is the third direction vector, M2E is the fourth direction vector, and θ is the projection angle.
In a possible implementation, the processing module 602 is configured to calculate the projection focal length based on the first direction vector, the second direction vector and the fourth direction vector, with the specific calculation formula:

1/f = 1/|PM1| + 1/|M1M2| + 1/|M2E|

where f is the projection focal length, PM1 is the first direction vector, M1M2 is the second direction vector, and M2E is the fourth direction vector.
In a possible implementation manner, the processing module 602 is configured to perform preprocessing on the second driving video to obtain a processed video.
The recognition module 603 is configured to perform feature extraction on the processed video, so as to obtain lane line color features.
The recognition module 603 is configured to perform edge detection on an area corresponding to the lane line color feature, and extract a lane line area.
The recognition module 603 is configured to extract an inner line of the lane line area, and obtain a lane line.
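As an illustration of this recognition pipeline, the sketch below runs a colour threshold followed by edge detection and line extraction with OpenCV on a synthetic frame; the colour bounds and thresholds are arbitrary example values, not parameters from the patent.

```python
import cv2
import numpy as np

# Synthetic processed frame with one white lane marking for demonstration.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (60, 239), (150, 120), (255, 255, 255), thickness=5)

# Colour-feature extraction: keep white-ish lane markings via an HSV
# threshold (bounds are example values).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))

# Edge detection on the colour-feature region to extract lane line areas.
edges = cv2.Canny(mask, 50, 150)

# Extract line segments from the edges, e.g. with a probabilistic Hough
# transform, as candidates for the lane line.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=20)
print(0 if lines is None else len(lines), "line segments found")
```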
In one possible implementation, the generating module 604 is configured to put the lane line into a coordinate system to obtain a lane line curve.
The recognition module 603 is configured to extract a plurality of coordinate points from the lane line curve.
The processing module 602 is configured to perform a fitting operation on the plurality of coordinate points through a fitting model, so as to obtain fitting parameters.
The processing module 602 is configured to determine the guide line function according to the fitting parameters.
In one possible implementation, the generating module 604 is configured to generate a plurality of drawing points according to the guide line function, where the drawing points are located on a curve corresponding to the guide line function.
And the generating module 604 is used for connecting the plurality of drawing points to obtain a guide line.
It should be noted that, when the device provided in the above embodiment implements its functions, the division into the above functional modules is merely used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the device embodiment provided above and the method embodiments belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
The embodiment also discloses an electronic device, referring to fig. 7, the electronic device may include: at least one processor 701, at least one communication bus 702, a user interface 703, a network interface 704, at least one memory 705.
The communication bus 702 is used to enable connection and communication between these components.
The user interface 703 may include a display screen (Display) and a camera (Camera); optionally, the user interface 703 may further include a standard wired interface and a wireless interface.
The network interface 704 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
The processor 701 may include one or more processing cores. Using various interfaces and lines to connect various parts of the overall device, the processor 701 performs various functions and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 705 and by invoking data stored in the memory 705. Optionally, the processor 701 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 701 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface and application programs; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 701 and may be implemented by a separate chip.
The memory 705 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 705 includes a non-transitory computer-readable storage medium. The memory 705 may be used to store instructions, programs, code, code sets or instruction sets. The memory 705 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function and an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 705 may also be at least one storage device located remotely from the processor 701. As a computer storage medium, the memory 705 may include an operating system, a network communication module, a user interface module, and an application program of the driving assistance method based on the vehicle-mounted projector.
In the electronic device shown in fig. 7, the user interface 703 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 701 may be configured to invoke the application of the vehicle projector-based driving assistance method stored in the memory 705, which when executed by the one or more processors 701, causes the electronic device to perform the method as in one or more of the embodiments described above.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a division of logical functions, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some service interfaces, devices or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a mobile hard disk, a magnetic disk or an optical disk.
The application also discloses a computer-readable storage medium storing instructions which, when executed by the one or more processors 701, cause the electronic device to perform the method as described in one or more of the embodiments above.
The foregoing are merely exemplary embodiments of the present disclosure and are not intended to limit its scope; equivalent changes and modifications made according to the teachings of this disclosure fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (8)

1. An auxiliary driving method based on a vehicle-mounted projector, characterized in that the auxiliary driving method is applied to an auxiliary driving control device, and the auxiliary driving control device is connected with the vehicle-mounted projector, an electronically controlled reflector and a projector camera;
the driving assistance method includes:
receiving a first driving video acquired by the projector camera, wherein the first driving video comprises lane lines of a road on which a preset vehicle runs;
Mapping the first driving video to obtain a second driving video under a viewing angle of a driver of the preset vehicle;
identifying a lane line in the second driving video, and calculating a guide line function of the lane line;
Generating a guide line for projection display of the vehicle-mounted projector according to the guide line function;
Adjusting projection parameters of the vehicle-mounted projector, wherein the projection parameters comprise a projection angle and a projection focal length of the vehicle-mounted projector, so that the guide line is reflected into the driver's field of view of the preset vehicle based on the electronically controlled reflector;
wherein the mapping processing performed on the first driving video to obtain the second driving video under the viewing angle of the driver of the preset vehicle specifically comprises the following steps:
converting the installation position of the projector camera under a preset coordinate system to obtain a camera coordinate;
acquiring a focal coordinate of the projector camera under the preset coordinate system and a coordinate of an optical center of the projector camera on a first video frame;
Mapping the first coordinates of a first pixel point to obtain second coordinates of a corresponding second pixel point, wherein the first pixel point is any one of a plurality of pixel points contained in the first video frame, the second pixel point is any one of a plurality of pixel points contained in the second video frame, the first video frame is any one of a plurality of video frames contained in the first driving video, and the second video frame is any one of a plurality of video frames contained in the second driving video; the mapping process is specifically performed on the first coordinate through the following formula:
wherein (u, v) is the second coordinate, w is a scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, (X_e, Y_e, Z_e) is the binocular coordinate of the driver under the preset coordinate system, and (X_p, Y_p, Z_p) is the camera coordinate;
the adjusting the projection parameters of the vehicle-mounted projector specifically comprises:
Acquiring a light source point coordinate of the vehicle-mounted projector, a first mirror coordinate of the mirror center of the electronically controlled reflector, a second mirror coordinate of the center of a reflection area, and a binocular coordinate of the driver, wherein the reflection area is located on a front windshield of the preset vehicle;
calculating a first direction vector according to the light source point coordinates and the first mirror coordinates:
PM1 = (x_M1 - x_p, y_M1 - y_p, z_M1 - z_p)

wherein PM1 is the first direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate;
calculating a second direction vector according to the first mirror coordinates and the second mirror coordinates:

M1M2 = (x_M2 - x_M1, y_M2 - y_M1, z_M2 - z_M1)

wherein M1M2 is the second direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
calculating a third direction vector according to the first direction vector and the second direction vector:

PM2 = PM1 + M1M2

wherein PM2 is the third direction vector, PM1 is the first direction vector, and M1M2 is the second direction vector;
calculating a fourth direction vector according to the second mirror coordinates and the binocular coordinates:

M2E = (X_e - x_M2, Y_e - y_M2, Z_e - z_M2)

wherein M2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
calculating the projection angle according to the third direction vector and the fourth direction vector:
cos θ = (PM2 · M2E) / (|PM2| · |M2E|); θ = arccos((PM2 · M2E) / (|PM2| · |M2E|))

wherein PM2 is the third direction vector, M2E is the fourth direction vector, and θ is the projection angle.
2. The vehicle projector-based driving assistance method according to claim 1, wherein the adjusting the projection parameters of the vehicle projector specifically further comprises:
Based on the first direction vector, the second direction vector and the fourth direction vector, the projection focal length is calculated according to the following specific calculation formula:
1/f = 1/|PM1| + 1/|M1M2| + 1/|M2E|

wherein f is the projection focal length, PM1 is the first direction vector, M1M2 is the second direction vector, and M2E is the fourth direction vector.
3. The vehicle-mounted projector-based driving assistance method according to claim 1, wherein the identifying the lane line in the second driving video specifically includes:
Preprocessing the second driving video to obtain a processed video;
extracting features of the processed video to obtain lane line color features;
performing edge detection on the region corresponding to the lane line color characteristics, and extracting a lane line region;
and extracting the inner side line of the lane line area to obtain the lane line.
4. The vehicle projector-based driving assistance method according to claim 1, wherein the calculating a guide line function of the lane line specifically comprises:
Placing the lane line into a coordinate system to obtain a lane line curve;
Extracting a plurality of coordinate points from the lane line curve;
performing fitting operation on a plurality of coordinate points through a fitting model to obtain fitting parameters;
And determining the guide line function according to the fitting parameters.
5. The vehicle projector-based driving assistance method according to claim 1, wherein the generating a guide line for the vehicle projector to project and display according to the guide line function specifically includes:
generating a plurality of drawing points according to the guide line function, wherein the drawing points are positioned on a curve corresponding to the guide line function;
And connecting a plurality of drawing points to obtain the guide line.
6. A driving assistance device based on a vehicle-mounted projector, characterized in that the device is a driving assistance control device, the driving assistance control device is connected with the vehicle-mounted projector, an electronically controlled reflector and a projector camera, and the device comprises an acquisition module (601), a processing module (602), an identification module (603), a generation module (604) and a projection control module (605), wherein:
the acquisition module (601) is configured to receive a first driving video acquired by the projector camera, where the first driving video includes a lane line of a road on which a preset vehicle is traveling;
The processing module (602) is configured to map the first driving video to obtain a second driving video under a preset viewing angle of a driver of the vehicle;
the identification module (603) is used for identifying a lane line in the second driving video and calculating a guide line function of the lane line;
The generating module (604) is used for generating a guide line for projection display of the vehicle-mounted projector according to the guide line function;
the projection control module (605) is used for adjusting projection parameters of the vehicle-mounted projector, wherein the projection parameters comprise a projection angle and a projection focal length of the vehicle-mounted projector;
the processing module (602) is configured to convert the installation position of the projector camera under a preset coordinate system to obtain a camera coordinate;
the acquisition module (601) is configured to acquire a focal coordinate of the projector camera under the preset coordinate system, and a coordinate of an optical center of the projector camera on a first video frame;
the generating module (604) is configured to map a first coordinate of a first pixel to obtain a second coordinate of a corresponding second pixel, wherein the first pixel is any one of a plurality of pixels included in the first video frame, the second pixel is any one of a plurality of pixels included in the second video frame, the first video frame is any one of a plurality of video frames included in the first driving video, and the second video frame is any one of a plurality of video frames included in the second driving video; the mapping is specifically performed on the first coordinate through the following formula:

wherein (u, v) is the second coordinate, w is a scaling factor, (f_x, f_y) is the focal coordinate, (c_x, c_y) is the coordinate of the optical center of the projector camera on the first video frame, (T_x, T_y, T_z) is the coordinate corresponding to the installation position of the projector camera, (X, Y, Z) is the first coordinate, (X_e, Y_e, Z_e) is the binocular coordinate of the driver under the preset coordinate system, and (X_p, Y_p, Z_p) is the camera coordinate;
the acquisition module (601) is configured to acquire a light source point coordinate of the vehicle-mounted projector, a first mirror coordinate of the mirror center of the electronically controlled reflector, a second mirror coordinate of the center of a reflection area, and a binocular coordinate of the driver, wherein the reflection area is located on a front windshield of the preset vehicle;
the processing module (602) is configured to calculate a first direction vector according to the light source point coordinates and the first mirror coordinates:

PM1 = (x_M1 - x_p, y_M1 - y_p, z_M1 - z_p)

wherein PM1 is the first direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_p, y_p, z_p) is the light source point coordinate;
the processing module (602) is configured to calculate a second direction vector according to the first mirror coordinates and the second mirror coordinates:

M1M2 = (x_M2 - x_M1, y_M2 - y_M1, z_M2 - z_M1)

wherein M1M2 is the second direction vector, (x_M1, y_M1, z_M1) is the first mirror coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
the processing module (602) is configured to calculate a third direction vector according to the first direction vector and the second direction vector:

PM2 = PM1 + M1M2

wherein PM2 is the third direction vector, PM1 is the first direction vector, and M1M2 is the second direction vector;
the processing module (602) is configured to calculate a fourth direction vector according to the second mirror coordinates and the binocular coordinates:

M2E = (X_e - x_M2, Y_e - y_M2, Z_e - z_M2)

wherein M2E is the fourth direction vector, (X_e, Y_e, Z_e) is the binocular coordinate, and (x_M2, y_M2, z_M2) is the second mirror coordinate;
the processing module (602) is configured to calculate the projection angle according to the third direction vector and the fourth direction vector:

cos θ = (PM2 · M2E) / (|PM2| · |M2E|); θ = arccos((PM2 · M2E) / (|PM2| · |M2E|))

wherein PM2 is the third direction vector, M2E is the fourth direction vector, and θ is the projection angle.
7. An electronic device, comprising a processor (701), a memory (705), a user interface (703) and a network interface (704), wherein the memory (705) is configured to store instructions, the user interface (703) and the network interface (704) are configured to communicate with other devices, and the processor (701) is configured to execute the instructions stored in the memory (705) to cause the electronic device to perform the driving assistance method according to any one of claims 1-5.
8. A computer-readable storage medium storing instructions that, when executed, perform the driving assistance method according to any one of claims 1 to 5.
CN202410418832.9A 2024-04-09 2024-04-09 Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment Active CN118004035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410418832.9A CN118004035B (en) 2024-04-09 2024-04-09 Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment


Publications (2)

Publication Number Publication Date
CN118004035A CN118004035A (en) 2024-05-10
CN118004035B true CN118004035B (en) 2024-06-07

Family

ID=90950424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410418832.9A Active CN118004035B (en) 2024-04-09 2024-04-09 Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment

Country Status (1)

Country Link
CN (1) CN118004035B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101442618A (en) * 2008-12-31 2009-05-27 葛晨阳 Method for synthesizing 360 DEG ring-shaped video of vehicle assistant drive
CN108437896A (en) * 2018-02-08 2018-08-24 深圳市赛格导航科技股份有限公司 Vehicle drive assisting method, device, equipment and storage medium
CN111231833A (en) * 2020-01-30 2020-06-05 华东交通大学 Automobile auxiliary driving system based on combination of holographic projection and AR
CN111829549A (en) * 2020-07-30 2020-10-27 吉林大学 Snow road surface virtual lane line projection method based on high-precision map
CN112954309A (en) * 2021-02-05 2021-06-11 的卢技术有限公司 Test method for target tracking effect on vehicle based on AR-HUD augmented reality
CN115984122A (en) * 2022-11-23 2023-04-18 深圳市瀚达美电子有限公司 HUD backlight display system and method
KR20230052326A (en) * 2021-10-12 2023-04-20 박유천 Dangerous situation notification and forward gaze assistance solution while driving through lidar, camera sensor and ar hud


Also Published As

Publication number Publication date
CN118004035A (en) 2024-05-10

Similar Documents

Publication Publication Date Title
US10726576B2 (en) System and method for identifying a camera pose of a forward facing camera in a vehicle
JP6866440B2 (en) Object identification methods, devices, equipment, vehicles and media
JP4803449B2 (en) On-vehicle camera calibration device, calibration method, and vehicle production method using this calibration method
US10732412B2 (en) Display device for vehicle
CN109941277A (en) The method, apparatus and vehicle of display automobile pillar A blind image
US20200012097A1 (en) Head-up display device, display control method, and control program
CN111819571B (en) Panoramic all-around system with adapted projection surface
WO2020012879A1 (en) Head-up display
US20140085409A1 (en) Wide fov camera image calibration and de-warping
CN111664839B (en) Vehicle-mounted head-up display virtual image distance measuring method
CN109447901B (en) Panoramic imaging method and device
JP6614754B2 (en) Method for converting an omnidirectional image from an omnidirectional camera placed on a vehicle into a rectilinear image
US11145112B2 (en) Method and vehicle control system for producing images of a surroundings model, and corresponding vehicle
JP5624370B2 (en) Moving body detection apparatus and moving body detection method
WO2018146048A1 (en) Apparatus and method for controlling a vehicle display
US9849835B2 (en) Operating a head-up display of a vehicle and image determining system for the head-up display
WO2018222122A1 (en) Methods for perspective correction, computer program products and systems
CN112242009A (en) Display effect fusion method, system, storage medium and main control unit
JP2008037118A (en) Display for vehicle
JP7074546B2 (en) Image processing equipment and methods
KR101351911B1 (en) Apparatus and method for processing image of camera
CN118004035B (en) Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment
Gao et al. A calibration method for automotive augmented reality head-up displays using a chessboard and warping maps
EP4016444A1 (en) Method for rectification of images and/or image points, camera-based system and vehicle
JP2018113622A (en) Image processing apparatus, image processing system, and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant