CN113112413B - Image generation method, image generation device and vehicle-mounted head-up display system - Google Patents


Info

Publication number
CN113112413B
CN113112413B
Authority
CN
China
Prior art keywords
image
vehicle
target object
motion
variation
Prior art date
Legal status
Active
Application number
CN202010034379.3A
Other languages
Chinese (zh)
Other versions
CN113112413A (en)
Inventor
蔡正波
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN202010034379.3A
Publication of CN113112413A
Application granted
Publication of CN113112413B

Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Abstract

An image generation method, an image generation device and a vehicle-mounted head-up display system are disclosed. According to one embodiment, the method comprises: acquiring image sequence data of a vehicle running environment acquired by an imaging device; identifying a target object in the vehicle driving environment according to the image sequence data; calculating the motion variation of the vehicle in a predetermined time interval according to the acquired vehicle motion data; and generating an image based on the target object and the motion variation. The method addresses the poor display effect that arises when, because of system delay or vehicle body shake during driving, the pattern on the display screen fails to overlap the actual target, and thereby improves the user experience.

Description

Image generation method, image generation device and vehicle-mounted head-up display system
Technical Field
The present disclosure relates to the field of image display, and more particularly, to an image generation method and apparatus for vehicle-mounted head-up display, and a vehicle-mounted head-up display system including the same.
Background
The augmented reality vehicle-mounted head-up display (ARHUD) is an important interface for human-vehicle interaction. It uses optical projection onto the windshield so that the human eye sees the recognized target pattern on the windshield overlapping the actual target, achieving augmented reality. Combined with navigation and ADAS technologies, the ARHUD can greatly improve user experience and safety. However, because the optical-mechanical system of the ARHUD is embedded in the vehicle and displays on the windshield, when the vehicle encounters bumpy road conditions or other sources of system delay while driving, the virtual image and the actual target no longer overlap: the displayed AR effect cannot indicate the actual position of the target, the user experience is poor, and safety hazards may even arise.
Disclosure of Invention
The present disclosure has been made in order to solve the above-mentioned technical problems occurring in the prior art. Embodiments of the present disclosure provide an image generation method, an image generation apparatus, a vehicle-mounted head-up display system, a computer program product, and a computer-readable storage medium, which improve the overlap ratio of a display pattern and an actual target in consideration of factors such as jitter, system delay, and the like in image generation.
According to one aspect of the present disclosure, there is provided an image generation method including: acquiring image sequence data of a vehicle running environment acquired by an imaging device; identifying a target object in the vehicle driving environment according to the image sequence data; calculating the motion variation of the vehicle in a preset time interval according to the acquired vehicle motion data; and generating an image based on the target object and the motion variation.
According to another aspect of the present disclosure, there is provided an image generating apparatus including: an image acquisition unit for acquiring image sequence data of the vehicle running environment acquired by the imaging device; a target recognition unit for recognizing a target object from the image sequence data; a compensation calculation unit for calculating a motion variation in a predetermined time interval according to the collected vehicle motion data; and an image generation unit configured to generate an image based on the target object and the motion variation.
According to another aspect of the present disclosure, there is provided a head-up display system for a vehicle, including: the image generating apparatus described above; and an image display unit for receiving the generated image and projecting the image on a windshield of the vehicle in an enhanced display manner.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the above-described image generation method.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the above-described image generation method.
Therefore, compared with the prior art, the image generation method and apparatus, the vehicle-mounted head-up display system, the computer program product and the computer-readable storage medium according to the embodiments of the present disclosure can predict the relative displacement of the vehicle caused by shake or system delay, and fuse that displacement into the image generation process. This solves the problem of the target pattern on the display screen failing to overlap the actual target due to vehicle body shake while driving, and thus markedly improves the user experience.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing embodiments thereof in more detail with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates a schematic diagram of an application scenario of an image generation method provided according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of an image generation method provided in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart for determining the position of a target object provided in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates a flow chart for calculating motion variance provided in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of how system delays and vehicle jitter affect a target object's observed location, provided in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates a composition schematic of a time interval provided in accordance with an embodiment of the present disclosure;
FIG. 7 illustrates a flow chart for correcting a predetermined time interval provided in accordance with an embodiment of the present disclosure;
FIG. 8 illustrates a flow chart for generating a virtual image of a target provided in accordance with an embodiment of the present disclosure;
FIG. 9 illustrates a block diagram of an image generation apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 10 illustrates a block diagram of an electronic device provided in accordance with an embodiment of the present disclosure;
fig. 11 illustrates a block diagram of a vehicle head-up display system provided in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
SUMMARY
In view of the problems of the prior art, the basic concept of the present disclosure is to provide a new image generating method, apparatus, vehicle-mounted head-up display system, computer program product and computer readable storage medium, which can acquire image sequence data of a vehicle driving environment, identify a target object in the vehicle driving environment, calculate a motion variation of the vehicle within a predetermined time interval, and fuse the motion variation into an image generating process of the target object, so that a position indicated by a virtual image is an actual position of the target object for a driver, thereby meeting requirements of accurate positioning, automatic driving and the like.
Embodiments of the present disclosure may be applied to various scenarios. For example, they may be used to display, in real time and in an enhanced manner, road conditions such as pedestrians, roadblocks and lane lines in the driving environment where the vehicle is located. The carrier may be of various types, such as a road vehicle, an aircraft or a watercraft. Such information can provide effective driving references for vehicle drivers, or assist aircraft pilots in landing accurately on a designated runway. For ease of explanation, the description proceeds with a road vehicle as the example carrier.
Fig. 1 illustrates a schematic diagram of an application scenario of an image generation method according to an embodiment of the present disclosure.
As shown in fig. 1, a heads-up display system 10 for a vehicle 20 may include one or more imaging devices 110, an image generation module 120, and an image display module 130. The image generation module 120 is configured to generate an actual image of the target object, and the image display module 130 includes an optical module such as a liquid crystal display, a mirror, etc., so as to amplify and reflect an optical signal of the actual image, and then project the amplified optical signal onto a windshield of the vehicle 20 to generate the virtual image 140.
In the case of a vehicle traveling on a bumpy road or the like, vehicle body shake is inevitable; it shifts the imaging position of the target object in the driver's field of view from 140 to 140', so that the displayed pattern and the actual object seen by the human eye no longer align. In addition, system delays such as the transmission time of the image data and the processing time of the image algorithm likewise cause misalignment between the image display position and the actual target object. These display deviations prevent the virtual image from being correctly "matched" with the real road conditions, greatly affecting the user experience and even creating potential safety hazards.
For this reason, in an embodiment of the present disclosure, image sequence data of the vehicle running environment is acquired, a target object in the running environment is identified from the image sequence data, the motion variation of the vehicle within a predetermined time interval is calculated from the acquired vehicle motion data, and an image is generated based on the target object and the motion variation. Embodiments according to this basic concept can thus cancel the display deviation caused by vehicle body shake and/or system delay, thereby reliably forming the virtual image of the target object.
Of course, although the embodiments of the present disclosure are described above by taking vehicles as examples, the present disclosure is not limited thereto. Embodiments of the present disclosure may be applied to various devices such as mobile robots, flight training machines, and the like.
Various non-limiting embodiments according to the present disclosure will be described in more detail below in conjunction with the application scenario of fig. 1, with reference to the accompanying drawings.
Exemplary method
Fig. 2 illustrates a flowchart of an image generation method according to an embodiment of the present disclosure.
As shown in fig. 2, an image generation method according to an embodiment of the present disclosure may include:
In step S210, image sequence data of the vehicle running environment acquired by the imaging device is acquired.
For example, the imaging device may be an image sensor for capturing image information, which may be a front-facing camera or a camera array. For example, the image information acquired by the image sensor may be a continuous image frame sequence (i.e., a video stream) or a discrete image frame sequence (i.e., a set of image data sampled at a predetermined sampling time point), or the like. For example, the camera may be a monocular camera, a binocular camera, a multi-view camera, or the like, and may be used to capture a gray scale image, or may capture a color image with color information. Of course, any other type of camera known in the art and which may appear in the future may be applied to the present disclosure, and the present disclosure is not particularly limited in the manner in which an image is captured, as long as gray-scale or color information of an input image can be obtained. In order to reduce the amount of computation in subsequent operations, in one embodiment, the color map may be grayed out prior to analysis and processing.
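By way of illustration, such a graying step is a single OpenCV call (a minimal sketch; the use of OpenCV and the BGR channel order are assumptions of this example, not requirements of the disclosure):

```python
import cv2

def to_gray(color_frame):
    """Reduce a BGR color frame to one luminance channel so that
    subsequent recognition steps process a third of the pixel data."""
    return cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
```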
Referring to fig. 1, the imaging device 110 may be mounted in a vehicle. In an application scene where the imaging device is equipped on a vehicle, image information of a road environment where the current vehicle is located can be acquired through the imaging device, and objects in the image can be monitored and identified.
In a specific embodiment, the imaging device may be integrated in an Advanced Driving Assistance System (ADAS). In addition to imaging devices, embodiments of the present disclosure may include other types of sensors, such as rear cameras, lidar, acoustic radar, infrared detectors, ambient light sensors, etc., that may collect environmental data inside and outside of a vehicle to assist the imaging devices in more clearly and accurately acquiring information about the vehicle's driving environment.
In step S220, a target object in the vehicle running environment is identified from the image sequence data.
After the image information of the running environment in which the present vehicle is located acquired by the imaging device is acquired, a target object in the running environment may be identified by various methods.
For example, the collected image information is transmitted to a target recognition module through an interface such as DVP, MIPI, LVDS or AHD; this module can recognize target objects of interest in the collected image information, including road boundaries, surrounding vehicles, obstacle regions, pedestrians, traffic signs, and the like.
In a specific embodiment, the target object may be detected and identified from the image information of the driving environment using machine learning, with the learning model trained in advance on sample data of the road driving environment. For example, sample images of the various target objects that may appear on a road surface can be collected in advance to build a sample library; features are then extracted for each class of target object, and classifiers for the target objects (road boundary, lane line, pedestrian, vehicle and traffic sign classifiers, and the like) are obtained by training with a machine learning method.
In order to be able to display the identified target object in a virtual imaging manner, after identifying the type of target object, the position of the target object in the real physical world needs to be determined.
In addition to identifying the type and location of the target object, in some embodiments the motion state information of the identified object may be further determined in step S220 by image analysis and/or other sensors. The motion state information includes the speed and azimuth angle of the target object (a pedestrian, another vehicle, etc.) relative to the current vehicle. From this information, the relative positions of the vehicle and the target object can be predicted, and the position of the virtual image adjusted in time to prevent deviation between the imaging position and the actual position of the target object.
In step S230, a motion variation amount of the vehicle in a predetermined time interval is calculated from the acquired vehicle motion data.
In order to correct the error due to vehicle body shake or system delay, the error must first be quantified. According to one embodiment of the present disclosure, the error may be measured by the amount of change in the motion of the vehicle over a predetermined time interval, which may be the difference between a time t1 (e.g., when an image frame is acquired) and a later time t2 (e.g., when the corresponding virtual image is displayed). If the vehicle body shakes after time t1, the virtual image presented to the driver deviates considerably from the actual position of the target object; in addition, image processing and final projection onto the windshield take a system delay time, which causes a further deviation. Thus, the amount of change in the vehicle's motion from t1 to t2 reflects, to a certain extent, the influence of vehicle body shake and system delay on the positional deviation of the virtual image.
In one embodiment, the amount of change in motion of the vehicle may be calculated by acquiring motion data of the vehicle. For example, referring to fig. 1, the motion data of the vehicle may be acquired by a motion sensor 150 in the vehicle, the motion sensor 150 may be an inertial measurement unit and a motion encoder (including an accelerometer and a gyroscope, etc.) built in the vehicle for measuring motion parameters of the vehicle, such as speed, acceleration, displacement, etc., to determine the position and direction of the vehicle in the road surface environment, and may also be a built-in magnetometer, etc., to calibrate the accumulated error of the attitude sensor in real time. Thus, more accurate vehicle motion data can be obtained by more input conditions.
In order to accurately evaluate deviations caused by vehicle body jerk, the motion data of the vehicle may include three-axis acceleration and three-axis angular velocity of the vehicle, for example, an accelerometer may detect acceleration signals of the vehicle 20 in three mutually independent coordinate axis directions along a vehicle body coordinate system, and a gyroscope may detect angular velocity signals of the vehicle 20 with respect to the three coordinate axes. Each acceleration value and angular velocity value has a corresponding time stamp and the motion data and corresponding time value may be transmitted to the image generation module via a protocol such as I2C, SPI, UART. Because the vehicle body shakes and changes rapidly, the time period for collecting the acceleration and the angular velocity is preferably less than or equal to 1ms.
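By way of illustration, the timestamped motion samples described above could be represented as follows (a minimal sketch; the field names and units are assumptions of this example):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImuSample:
    """One timestamped reading delivered over I2C, SPI or UART."""
    t: float                           # timestamp in seconds
    accel: Tuple[float, float, float]  # (ax, ay, az) in m/s^2, body-frame axes
    gyro: Tuple[float, float, float]   # (wx, wy, wz) in rad/s, body-frame axes

def window(samples: List[ImuSample], t1: float, t2: float) -> List[ImuSample]:
    """Select the samples falling inside the predetermined interval [t1, t2]."""
    return [s for s in samples if t1 <= s.t <= t2]
```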
After determining the target object and the amount of change in vehicle motion within a predetermined time interval in the image sequence data, the method may proceed to step S240 to generate an image based on the target object and the amount of change in motion.
The image sequence data of the target object collected by the imaging device may be transmitted to the image generation module 120 through a CAN bus or the like. The image sequence data of the target object, together with the calculated motion variation of the vehicle (or the motion variation on a predetermined plane), may be used to generate the actual image of the target object to be displayed, and the generated image signal may be transmitted to the image display module 130, processed by the optical module, and projected onto the windshield of the vehicle to form a virtual image. In this embodiment, since the image generation process computationally compensates for vehicle body shake and system delay, the displayed image overlaps the target object well.
As previously described, in order to be able to image-display an identified target object, it is necessary to first know the position of the target object in the real physical world. Fig. 3 illustrates a flow chart for determining the position of a target object provided in accordance with an embodiment of the present disclosure.
As shown in fig. 3, step S220 may further include:
in sub-step S221, image position coordinates of the target object are identified from the image sequence data.
Since the target object such as a lane line, a pedestrian or the like generally has a spatial volume, the position coordinates of the target object in the acquired image can be determined from the contour of the identified target object.
In sub-step S222, first spatial position coordinates of the target object are acquired from image position coordinates of the target object.
For example, based on the calibration parameters of the imaging device, first spatial position coordinates of the target object relative to the vehicle may be identified from the first image in the image sequence data, acquired at time t1.
Due to manufacturing tolerances, after the imaging device is mounted, each vehicle must undergo an independent end-of-line camera calibration or a subsequent aftermarket camera adjustment to determine calibration parameters, such as the pitch angle of the imaging device on the vehicle, for final use in driving assistance and the like. For example, the calibration parameters may refer to the extrinsic matrix of the imaging device, which may include one or more of the pitch angle, tilt angle, etc. of the imaging device with respect to the traveling direction of the current vehicle. For example, after a target object such as a road surface boundary is detected in the image information, the distance and angle between the road surface boundary and the current vehicle may be calculated from its position coordinates in the image, using the calibrated pitch angle and a preset algorithm, as sketched below.
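For instance, under a flat-road assumption the distance to a road-surface point can be recovered from its image row, the calibrated pitch angle and the camera mounting height. The sketch below illustrates the geometry only; the pinhole model and the parameter names are assumptions of this example rather than the disclosure's specific preset algorithm:

```python
import math

def ground_distance(v, fy, cy, pitch_rad, cam_height_m):
    """Distance (m) along a flat road to the point imaged at pixel row v.

    Assumes a forward-looking pinhole camera tilted down by pitch_rad and
    mounted cam_height_m above the road; fy and cy are intrinsics in pixels.
    """
    # Angle of the viewing ray below horizontal: camera pitch plus the
    # ray's angle below the optical axis for image row v.
    ray_angle = pitch_rad + math.atan2(v - cy, fy)
    if ray_angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height_m / math.tan(ray_angle)

# e.g. a point 80 rows below the principal point, fy = 1000 px,
# camera 1.3 m high, pitched down by 2 degrees:
d = ground_distance(v=560, fy=1000.0, cy=480.0,
                    pitch_rad=math.radians(2.0), cam_height_m=1.3)
```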
After determining the relative distance and angle of the target object with respect to the vehicle, it may be converted into coordinate values under a predetermined rectangular or spherical coordinate system, and the coordinate values may be transmitted to the following image generation module 120 together with the type of the target object.
In one embodiment of the present disclosure, the influence of factors such as vehicle body shake and system delay on imaging is obtained by calculating the vehicle motion variation within the time from t1 to t2. Fig. 4 illustrates a flowchart of calculating a motion variation according to an embodiment of the present disclosure.
As shown in fig. 4, the step S230 of calculating the motion variation may include:
in sub-step S231, the acquired vehicle motion data is subjected to a filtering process to obtain corrected motion data.
Because of temperature drift of gyroscopes, accelerometers and the like and fluctuation of sampling sensitivity, motion data needs to be filtered. In one embodiment, a complementary filtering algorithm may be used to modify the acquired motion data.
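A complementary filter blends the gyroscope's smooth but drifting short-term angle with the accelerometer's noisy but unbiased gravity-referenced angle. The following single-axis update is a minimal sketch (the blend coefficient of 0.98 is an assumption of this example; a practical system filters all three axes):

```python
import math

def complementary_update(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """One filter step for a tilt angle, in radians.

    angle     -- previous fused estimate
    gyro_rate -- angular rate about the filtered axis (rad/s)
    ax, az    -- accelerometer components observing gravity (m/s^2)
    dt        -- time since the previous sample (s)
    """
    gyro_angle = angle + gyro_rate * dt   # low noise, but drifts over time
    accel_angle = math.atan2(ax, az)      # noisy, but does not drift
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```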
In sub-step S232, a displacement variation and an attitude variation within the predetermined time interval are calculated from the corrected motion data.
To obtain the displacement variation within the predetermined time interval $\Delta t = t_2 - t_1$, the values obtained from the accelerometer are integrated on the three axes, giving the relative displacement within one period (relative to the previous frame time). For example, with instantaneous triaxial acceleration $a_t$, the displacement within the predetermined time interval is obtained by double integration:

$$\Delta d = \int_{t_1}^{t_2} \int_{t_1}^{t} a_{\tau} \, d\tau \, dt$$

To obtain the vehicle attitude variation within the predetermined time interval, the rotation angles $\varphi$, $\theta$ and $\psi$ are obtained by integrating the data acquired by the gyroscope in the three axis directions over $\Delta t$. The rotation matrices are then calculated according to the following formulas:

$$R_x(\varphi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}, \quad R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \quad R_z(\psi) = \begin{pmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

where $\varphi$, $\theta$ and $\psi$ denote the rotation angles about the x, y and z axes, and $R_x(\varphi)$, $R_y(\theta)$ and $R_z(\psi)$ are the corresponding single-axis rotation matrices, so that the rotation matrix in three dimensions is obtained:

$$R = R_z(\psi) \, R_y(\theta) \, R_x(\varphi)$$

Assuming that the pose at the last frame time is $pos_t$, the vehicle attitude of the current frame is obtained as $pos_{t'} = pos_t \cdot R$. Since the predetermined time interval $\Delta t = t_2 - t_1$ is generally longer than the sampling period of motion data such as acceleration and angular velocity, the above operation must be repeated over successive samples to accumulate the vehicle attitude variation within the $\Delta t$ interval.
In sub-step S233, the displacement variation and the attitude variation are mapped to a predetermined plane to obtain a motion variation on the predetermined plane.
Referring to fig. 5, assume that the target object observed by the driver at time t1 is at P1, and that the position of the target object actually observed by the driver at time t2 is P2. If the deviation α between the viewing angles of P1 and P2 as seen by the human eye exceeds a certain value, the displayed image visibly fails to overlap the actual target object, degrading the user experience.
In order to compensate for such misalignment, in sub-step S233 the displacement variation (Δd) and the attitude variation (spatial deflection β) determined in sub-step S232 are mapped to a predetermined two-dimensional plane, which may be parallel to a predetermined virtual imaging plane, to determine the motion variation of the deviation on that plane. Compensating for and correcting the deviation caused by this motion variation in the subsequent image generation process improves the degree of overlap between the imaged pattern and the actual target object.
In order to obtain the motion variation of the vehicle, the time interval between the times t1 and t2 must also be specifically determined.
As shown in fig. 6, the time interval may be determined based on at least one of an image acquisition delay (T1), a target recognition algorithm delay (T2) and a display delay (T3); for example, the time interval is given by T1+T2+T3, where the display delay T3 is generally determined by the hardware and the algorithm and may be obtained through actual measurement or estimation.
In one embodiment, the image acquisition delay and the target recognition algorithm delay are dynamically acquired based on the sampling period of the image sequence data and the sampling period of the vehicle motion data. The image information acquired by the imaging device may be a sequence of discrete image frames, i.e., image frames sampled at a predetermined period, typically with a timing accuracy on the order of 0.2 ms. Different image sampling periods change the amount of computation required for the displacement variation and the attitude variation described above, and higher-frequency sampling generally increases the computation load and computation time of the recognition algorithm. Likewise, the computation load of the recognition algorithm grows as the sampling frequency of the vehicle motion data increases. Dynamically acquiring the image acquisition delay and the target recognition algorithm delay therefore allows the motion variation of the vehicle to be predicted more accurately.
Meanwhile, in one embodiment, the sampling period of the image sequence data and the sampling period of the vehicle motion data can be dynamically adjusted according to the running environment of the vehicle. For example, on a flat road surface the sampling period can be increased (the sampling frequency reduced) to cut the computation load and improve efficiency, while on a bumpy road surface the sampling period can be reduced (the sampling frequency increased) to obtain more accurate vehicle motion data, so that the calculated expected position of the vehicle's movement coincides with the actual position.
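One way such an adjustment might be realized is to grade road roughness by the recent variance of the vertical acceleration and select the sampling period accordingly (the thresholds and period values below are purely illustrative assumptions):

```python
import statistics

def sampling_period_ms(recent_az, smooth_var=0.2, rough_var=1.0):
    """Pick a motion-data sampling period from a short sliding window of
    vertical-axis accelerometer readings (m/s^2)."""
    var = statistics.pvariance(recent_az)
    if var < smooth_var:   # flat road: sample less often, save computation
        return 1.0
    if var < rough_var:    # moderately uneven road
        return 0.5
    return 0.2             # bumpy road: sample faster for accuracy
```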
In another embodiment, the image acquisition delay T1 and the target recognition algorithm delay T2 may instead be taken as an average value measured under typical system operating conditions; being a measured average, however, this value may deviate from the actual delay. To correct this deviation, referring to fig. 7, the method of this embodiment may further include the following steps:
in sub-step S231', a second image frame of the current image frame after the predetermined time interval is acquired in the image sequence data.
The image acquisition delay T1 and the target recognition algorithm delay T2 are determined from the measured mean (e.g., T1+T2 is typically 30 ms to 60 ms) and, as before, the display delay T3 is obtained by actual measurement, thereby determining the predetermined time interval. The second image frame acquired at time t2, that is, the frame following the current image frame by the determined predetermined time interval, is then obtained from the image sequence acquired by the imaging device.
In sub-step S232', the target object is identified in the second image frame and second spatial position coordinates of the target object are acquired.
This step is the same as the aforementioned step S221 and step S222, and will not be described here again.
In sub-step S233', the predetermined time interval is adjusted according to the first spatial position coordinates, the second spatial position coordinates, and the motion variation.
For example, from the spatial difference between the first spatial position coordinates determined from the current frame at time t1 and the second spatial position coordinates determined from the second image frame at time t2, the actual amount of change in the vehicle's motion within the predetermined time interval can be determined. Comparing this actual amount with the motion variation calculated from the sensor-acquired motion data indicates whether the predetermined time interval is appropriate, and the predetermined time interval can then be adjusted according to the comparison value and the vehicle speed so that it approaches the real system delay. Preferably, the target object used for this comparison is a stationary object, for example a traffic sign.
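As a sketch of that comparison (the proportional update rule and its gain are assumptions of this example; the disclosure only requires that the interval approach the real system delay):

```python
import numpy as np

def adjust_interval(dt, p1, p2, predicted_delta, speed_mps, gain=0.5):
    """Nudge the predetermined interval dt (s) toward the real system delay.

    p1, p2          -- first/second spatial positions of a stationary target (m)
    predicted_delta -- motion variation computed from sensor data over dt (m)
    speed_mps       -- vehicle speed, used to convert a distance error to time
    """
    if speed_mps < 0.1:
        return dt  # too slow for a displacement to be observable
    actual = np.linalg.norm(np.asarray(p2, float) - np.asarray(p1, float))
    predicted = np.linalg.norm(np.asarray(predicted_delta, float))
    # A positive error means the true delay is longer than assumed.
    error_s = (actual - predicted) / speed_mps
    return max(0.0, dt + gain * error_s)
```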
After determining the motion variation of the vehicle, the acquired image frames can be modeled and rendered according to the parameters to generate an image to be displayed. Fig. 8 illustrates a flowchart of generating an actual image of a target according to an embodiment of the present disclosure.
As shown in fig. 8, the step S240 of generating an image may include:
in sub-step S241, a display model of the vehicle running environment is established under a predetermined coordinate system.
Since the vehicle is typically in motion, a coordinate system may be established at a fixed location on the vehicle; building the display model in this coordinate system facilitates determining the actual image of the target object. The fixed position may be, for example, the position of the imaging device, or another position such as the center of gravity of the vehicle.
Alternatively, in an embodiment, the coordinate system may be set up at a non-fixed position, for example, the coordinate system is set up with the eye position of the driver as the origin, so that the generated image corresponds to the viewing angle of the driver, and further, the virtual image and the target object are highly overlapped.
In sub-step S242, the motion variance and the spatial position coordinates of the target object are fused, and image pixel information of the target object is generated based on the display model.
With reference to the foregoing description, the spatial position coordinates of the target object at time t1 in the predetermined coordinate system can be determined based on the image sequence data acquired by the imaging device, and the motion variation data of the vehicle within the predetermined time can be obtained based on the motion data acquired by the motion sensor; the position coordinates of the target object at time t2 in the predetermined coordinate system can then be calculated using algorithms such as ray tracing.
Referring back to FIG. 5, fusing the motion variation of the vehicle with the spatial position coordinates of the target object yields the expected spatial coordinates (x, y, z) of the target object's position P2 on the predetermined plane at time t2. The line OP2 from the human eye O to the position P2 intersects the virtual imaging plane S at the position F2, which has two-dimensional coordinates (x', y') on the imaging plane; from these two-dimensional coordinates, the corresponding pixel coordinates of the projected image can be determined.
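The intersection of the sight line OP2 with the imaging plane S is a standard ray-plane computation; the sketch below assumes the plane is given by a point on it, its unit normal, and two orthonormal in-plane axes (a parametrization chosen for this example):

```python
import numpy as np

def to_imaging_plane(eye, p2, plane_pt, normal, u_axis, v_axis):
    """Return (x', y'), where the ray from the eye O through P2 pierces S.

    All arguments are 3-vectors (numpy arrays) in the predetermined
    coordinate system; normal, u_axis and v_axis are unit length.
    """
    eye = np.asarray(eye, float)
    ray = np.asarray(p2, float) - eye
    denom = ray @ normal
    if abs(denom) < 1e-9:
        raise ValueError("sight line is parallel to the imaging plane")
    t = ((np.asarray(plane_pt, float) - eye) @ normal) / denom
    f2 = eye + t * ray                   # 3D intersection point F2 on S
    rel = f2 - plane_pt
    return rel @ u_axis, rel @ v_axis    # 2D coordinates (x', y') on S
```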
The above steps are repeated for the contour features of the target object, which are then rendered and otherwise processed to generate the image features of the target object.
Since windshields are generally curved in shape, in one embodiment, the image generation method described above may further comprise:
in sub-step S243, the generated image is subjected to distortion correction, thereby canceling distortion caused by factors such as a glass curved surface.
For example, when the display area of the windshield or a mirror of the projection system has curvature, the displayed image will deviate from the actual object if this factor is not calibrated. For this reason, before transmitting the image signal to the display module, the image generation module 120 also performs distortion correction using image warping or the like, thereby ensuring that the driver sees an image overlapping the actual object through the display screen, as sketched below.
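For example, if the combined distortion of the curved windshield and mirrors has been characterized offline as a per-pixel remapping table, the pre-warp can be applied with OpenCV's remap (a sketch; producing map_x and map_y from calibration is assumed to happen elsewhere):

```python
import cv2

def prewarp_for_windshield(image, map_x, map_y):
    """Warp the generated image so it appears undistorted after projection.

    map_x, map_y -- float32 arrays from offline calibration giving, for each
    output pixel, the source coordinate to sample (the inverse of the
    optical distortion along the projection path).
    """
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```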
In sub-step S244, the distortion-corrected image is projected on the windshield of the vehicle in an enhanced display manner.
Referring to fig. 1, after receiving the image signal from the image generating module 120, the image display module 130 may convert the image signal into an optical signal by an LED or a laser, and project the optical signal onto the display area of the windshield through optical modules such as a liquid crystal display and a mirror, so as to form a virtual image, thereby presenting important information in the driving environment to the driver in an enhanced display manner and effectively helping the driver grasp the road conditions.
Therefore, by adopting the image generation method of the embodiments of the present disclosure, image sequence data of the vehicle running environment can be acquired, a target object in that environment identified, the motion variation of the vehicle within a predetermined time interval calculated, and the motion variation fused into the image generation process of the target object, so that even when the vehicle jolts and shakes, the position indicated by the generated virtual image is, for the driver, the actual position of the target object, meeting the requirements of accurate positioning, automatic driving and the like.
Exemplary apparatus
Next, an image generating apparatus according to an embodiment of the present disclosure is described with reference to fig. 1 and 9.
Fig. 9 illustrates a block diagram of an image generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 9, an image generating apparatus 300 according to an embodiment of the present disclosure may include: an image acquisition unit 310 for acquiring image sequence data of the vehicle running environment acquired by the imaging device; a target recognition unit 320 for recognizing a target object based on the image sequence data; a compensation calculating unit 330 for calculating a motion variation within a predetermined time interval according to the collected vehicle motion data; and an image processing unit 340 for generating an image based on the target object and the motion variation.
In one example, the object recognition unit 320 may include: the image position coordinate recognition module is used for recognizing the image position coordinate of the target object according to the image sequence data; and the first space position coordinate acquisition module is used for acquiring the first space position coordinate of the target object according to the image position coordinate of the target object.
In a specific embodiment, the object identifying unit 320 may be integrated in an ADAS, which identifies image information, identifies information such as a road boundary, surrounding vehicles, pedestrians, traffic signs, and the like therein, and outputs parameters obtained after analysis processing of the information to a subsequent image processing unit, where the parameters include, but are not limited to, a position of a drivable road boundary relative to a vehicle, a position of a pedestrian from the vehicle, a distance from the surrounding vehicle, a specific meaning of a traffic sign, and the like.
In one example, the compensation calculating unit 330 may calculate the motion variation within the predetermined time interval as follows: filter the acquired vehicle motion data to obtain corrected motion data; calculate the displacement variation and the attitude variation within the predetermined time interval from the corrected motion data; and map the displacement variation and the attitude variation to a predetermined plane to obtain the motion variation on that plane.
In one example, the vehicle motion data includes tri-axial acceleration and tri-axial angular velocity, which may be obtained by motion sensors within the vehicle, which may be inertial measurement units and motion encoders (including accelerometers and gyroscopes, etc.) built into the vehicle.
In one example, generating an image using the image processing unit 340 may include: establishing a display model of the vehicle running environment under a predetermined coordinate system; and fusing the motion variation and the spatial position coordinates of the target object, and generating image pixel information of the target object under the predetermined coordinate system based on the display model.
In one example, the image processing unit 340 may be further configured to distortion correct the generated image, which may be projected on the windshield of the vehicle in an enhanced display manner.
In one example, the image generating apparatus 300 may further include: a predetermined time interval determining unit 350 configured to determine a predetermined time interval based on at least one of the image acquisition delay, the target recognition algorithm delay, and the display delay.
In one example, the image acquisition delay and the target recognition algorithm delay are dynamically acquired based on a sampling period of the image sequence data and a sampling period of the vehicle motion data, and the display delay is acquired based on actual measurements.
In one example, the sampling period of the image sequence data and/or the sampling period of the vehicle motion data is dynamically adjusted according to the vehicle driving environment.
In one example, the predetermined time interval determination unit may be configured to adjust a predetermined time interval, which includes: acquiring a second image frame of the current image frame after the preset time interval from the image sequence data; identifying the target object in the second image frame and acquiring second spatial position coordinates of the target object; and adjusting the preset time interval according to the first space position coordinate, the second space position coordinate and the motion variation.
The specific functions and operations of the respective units and modules in the above-described image generating apparatus 300 have been described in detail in the image generating method described above with reference to fig. 1 to 8, and thus are only briefly described herein, and unnecessary repetitive descriptions are omitted.
Exemplary electronic devices and systems
Next, an electronic device according to an embodiment of the present disclosure will be described with reference to fig. 10. As shown in fig. 10, electronic device 400 includes one or more processors 410 and memory 420.
Processor 410 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in electronic device 400 to perform desired functions.
Memory 420 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 410 to implement the image generation methods and/or other desired functions of the various embodiments of the present disclosure described above.
In one example, electronic device 400 may also include an input device 430 and an output device 440, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 430 may include an image sensor, which may be a camera or an array of cameras. As another example, the input device 430 may also include an Inertial Measurement Unit (IMU) and motion encoders (including accelerometers and gyroscopes, etc.) built into the vehicle for measuring motion parameter data of the vehicle, such as speed, acceleration, displacement, etc., to determine the amount of change in motion of the vehicle over a predetermined time interval.
The output device 440 may output various information to the outside, including the determined virtual image, etc. The output device 440 may also include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
For simplicity, only some of the components of the electronic device 400 that are relevant to the present disclosure are shown in fig. 10; some peripheral or auxiliary components are omitted. For example, the electronic device 400 may further include one or more interfaces for connecting to the imaging device that acquires images; these may be conventional AHD, MIPI or other interfaces. In addition, the electronic device 400 may include any other suitable components depending on the particular application.
Further, although not shown, the electronic apparatus 400 may also include a communication device or the like, which may communicate with other devices (e.g., personal computers, servers, mobile stations, base stations, etc.) through a network, which may be the internet, a wireless local area network, a mobile communication network, or the like, or other technologies, which may include, for example, bluetooth communication, infrared communication, or the like.
An in-vehicle head-up display system is described below with reference to fig. 11. As shown in fig. 11, the in-vehicle head-up display system 600 may include at least the image generating apparatus 300, and the image display apparatus 500.
The image generating apparatus 300 is configured to receive data from the imaging device, the motion sensor, etc. and to generate an image signal; see fig. 9 and the related description, which is not repeated here. The image display device 500 is configured to receive the generated image and project it onto the windshield of the vehicle in an enhanced display manner. As shown in fig. 11, the image display device 500 generally includes a light source, a display screen, a mirror and the like. The light source may be an LED or a laser source, whose light is projected onto the display screen; the display screen may be a liquid crystal or fluorescent display, which receives the image signal of the image generating apparatus 300 and converts it into a visible-light signal. The visible light is projected onto the mirror to be reflected and magnified, and is finally projected onto the display area of the windshield, forming a virtual image presented in the driver's field of view.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
The computer program product may include program code for performing the operations of embodiments of the present disclosure, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in an image generation method according to various embodiments of the present disclosure described in the above "exemplary method" section of the present disclosure.
A computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including", "comprising" and "having" are open-ended, mean "including but not limited to", and are used interchangeably. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that in the apparatus, devices and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (12)

1. An image generation method, comprising:
acquiring image sequence data of a vehicle running environment acquired by an imaging device;
identifying a target object in the vehicle driving environment according to the image sequence data;
calculating the motion variation of the vehicle in a preset time interval according to the acquired vehicle motion data; and
generating an image based on the target object and the motion variation;
performing distortion correction on the generated image; and
projecting the distortion-corrected image on the windshield of the vehicle in an enhanced display manner.
2. The image generating method according to claim 1, wherein identifying the target object in the vehicle running environment from the image sequence data includes:
identifying image position coordinates of the target object according to the image sequence data; and
acquiring first spatial position coordinates of the target object according to the image position coordinates of the target object.
3. The image generating method according to claim 1, wherein calculating the motion variation amount of the vehicle within a predetermined time interval from the acquired vehicle motion data comprises:
filtering the acquired vehicle motion data to obtain corrected motion data;
calculating a displacement variation and an attitude variation within the predetermined time interval according to the corrected motion data; and
mapping the displacement variation and the attitude variation to a predetermined plane to obtain the motion variation on the predetermined plane.
4. The image generation method according to claim 3, wherein the vehicle motion data includes a triaxial acceleration and a triaxial angular velocity.
5. The image generation method according to claim 1, wherein a sampling period of the image series data and/or a sampling period of the vehicle motion data is dynamically adjusted according to the vehicle running environment.
6. The image generation method of claim 1, wherein the predetermined time interval is determined based on at least one of an image acquisition delay, a target recognition algorithm delay, and a display delay.
7. The image generation method according to claim 6, wherein the image acquisition delay and the target recognition algorithm delay are dynamically acquired based on a sampling period of the image sequence data and a sampling period of the vehicle motion data, and the display delay is acquired based on an actual measurement.
8. The image generation method according to claim 2, wherein the method further comprises:
acquiring, from the image sequence data, a second image frame following the current image frame by the predetermined time interval;
identifying the target object in the second image frame and acquiring second spatial position coordinates of the target object; and
adjusting the predetermined time interval according to the first spatial position coordinates, the second spatial position coordinates, and the motion variation.
9. The image generation method according to claim 1, wherein generating an image based on the target object and the motion variation amount includes:
establishing a display model of the vehicle running environment under a preset coordinate system; and
fusing the motion variation and the spatial position coordinates of the target object, and generating image pixel information of the target object under the predetermined coordinate system based on the display model.
10. An image generating apparatus comprising:
an image acquisition unit for acquiring image sequence data of the vehicle running environment acquired by the imaging device;
a target recognition unit for recognizing a target object from the image sequence data;
a compensation calculation unit for calculating a motion variation in a predetermined time interval according to the collected vehicle motion data;
an image generation unit configured to generate an image based on the target object and the motion variation; and
an image processing unit for performing distortion correction on the generated image and projecting the distortion-corrected image on a windshield of the vehicle in an enhanced display manner.
11. An in-vehicle heads-up display system, comprising:
the image generation apparatus according to claim 10; and
an image display unit for receiving the generated image and projecting the image on a windshield of the vehicle in an enhanced display manner.
12. A computer readable storage medium comprising computer program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-9.
CN202010034379.3A 2020-01-13 2020-01-13 Image generation method, image generation device and vehicle-mounted head-up display system Active CN113112413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010034379.3A CN113112413B (en) 2020-01-13 2020-01-13 Image generation method, image generation device and vehicle-mounted head-up display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010034379.3A CN113112413B (en) 2020-01-13 2020-01-13 Image generation method, image generation device and vehicle-mounted head-up display system

Publications (2)

Publication Number Publication Date
CN113112413A CN113112413A (en) 2021-07-13
CN113112413B (en) 2023-12-01

Family

ID=76709971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010034379.3A Active CN113112413B (en) 2020-01-13 2020-01-13 Image generation method, image generation device and vehicle-mounted head-up display system

Country Status (1)

Country Link
CN (1) CN113112413B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703352A (en) * 2021-07-27 2021-11-26 北京三快在线科技有限公司 Safety early warning method and device based on remote driving
CN115917254A (en) * 2021-07-31 2023-04-04 华为技术有限公司 Display method, device and system
CN115578682B (en) * 2022-12-07 2023-03-21 北京东舟技术股份有限公司 Augmented reality head-up display test method, system and storage medium
CN116563505B (en) * 2023-05-09 2024-04-05 阿波罗智联(北京)科技有限公司 Avatar generation method, apparatus, electronic device, and storage medium
CN116243880B (en) * 2023-05-11 2023-07-25 江苏泽景汽车电子股份有限公司 Image display method, electronic equipment and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106981082A (en) * 2017-03-08 2017-07-25 驭势科技(北京)有限公司 Vehicle-mounted camera scaling method, device and mobile unit
CN107389088A (en) * 2017-05-27 2017-11-24 纵目科技(上海)股份有限公司 Error correcting method, device, medium and the equipment of vehicle-mounted inertial navigation
CN209542965U (en) * 2019-03-12 2019-10-25 苏州车萝卜汽车电子科技有限公司 Image stabilization head up display

Also Published As

Publication number Publication date
CN113112413A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN113112413B (en) Image generation method, image generation device and vehicle-mounted head-up display system
JP7161410B2 (en) System and method for identifying camera pose in scene
Zhu et al. The multivehicle stereo event camera dataset: An event camera dataset for 3D perception
JP7160040B2 (en) Signal processing device, signal processing method, program, moving object, and signal processing system
US11241960B2 (en) Head up display apparatus and display control method thereof
JP2019528501A (en) Camera alignment in a multi-camera system
EP2485203B1 (en) Vehicle-surroundings monitoring device
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
JP6701532B2 (en) Image processing apparatus and image processing method
WO2020012879A1 (en) Head-up display
CN111415387A (en) Camera pose determining method and device, electronic equipment and storage medium
CN108603933A (en) The system and method exported for merging the sensor with different resolution
JP6669182B2 (en) Occupant monitoring device
CN109345591A (en) A kind of vehicle itself attitude detecting method and device
JP5214355B2 (en) Vehicle traveling locus observation system, vehicle traveling locus observation method, and program thereof
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
EP3795952A1 (en) Estimation device, estimation method, and computer program product
CN108322698B (en) System and method based on fusion of multiple cameras and inertial measurement unit
Shan et al. Probabilistic egocentric motion correction of lidar point cloud and projection to camera images for moving platforms
CN113807282A (en) Data processing method and device and readable storage medium
CN110836656B (en) Anti-shake distance measuring method and device for monocular ADAS (adaptive Doppler analysis System) and electronic equipment
JP2021107607A (en) Mobile calibration of smart helmet display
KR20170011881A (en) Radar for vehicle, and vehicle including the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant