WO2020083318A1 - Head-up display system and display method, and automobile - Google Patents
- Publication number
- WO2020083318A1 (PCT/CN2019/112816)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- projection
- image acquisition
- acquisition device
- preset
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/30—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing vision in the non-visible spectrum, e.g. night or infrared vision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/26—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/31—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing stereoscopic vision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/301—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
Definitions
- the present disclosure relates to the technical field of vehicle-mounted equipment, and particularly to a head-up display system, a display method, and an automobile.
- the rearview mirrors on both sides of a car have blind spots.
- drivers have no way to observe the traffic situation in these blind zones through the side rearview mirrors.
- a common remedy is to add a convex mirror to the side mirrors on both sides, or to make part of each side mirror convex.
- even so, the rearview mirror relies mainly on the driver's active observation and lacks active prompt information.
- An embodiment of the present disclosure provides a head-up display system, including: an image acquisition device, a processing device, and an image projection device.
- the image acquisition device is used to capture a real-time image of a preset image acquisition area.
- the processing device is used to recognize, using a preset image recognition rule, the identification object in the real-time image and/or the relative motion parameter information between the identification object and the carrier of the head-up display system, to generate prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and to merge the prompt information and the real-time image into an output image.
- the image projection device is used to project the output image to a designated area of a preset projection surface.
- An embodiment of the present disclosure also provides a head-up display method, which includes: capturing a real-time image of a preset image acquisition area; recognizing, using a preset image recognition rule, the identification object in the real-time image and/or the relative motion parameter information between the identification object and the carrier of the head-up display system; generating prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and merging the prompt information and the real-time image into an output image; and projecting the output image to a designated area of a preset projection surface.
- An embodiment of the present disclosure also provides an automobile, including a body and the head-up display system described above.
- FIG. 1 is a schematic structural diagram of a head-up display system according to an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of components of a head-up display system according to an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure being provided outside the vehicle;
- FIG. 4 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure provided inside a vehicle;
- FIG. 5 is a schematic diagram of a working process of a head-up display system according to an embodiment of the present disclosure
- FIG. 6 is a schematic flowchart of a process according to the first embodiment of the present disclosure.
- FIG. 7 is a schematic diagram of a head-up display system projecting to a side window according to an embodiment of the present disclosure
- FIG. 8 is a schematic diagram of a processing flow in case 2 according to an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of a processing flow of a steering scenario according to an embodiment of the present disclosure.
- FIG. 10 is a schematic flowchart of a head-up display method according to an embodiment of the present disclosure.
- in the above solution, the image acquisition device captures the real-time image of the preset image acquisition area; the processing device identifies, according to preset image recognition rules, the identification object in the real-time image and the relative motion parameter information between the identification object and the carrier of the head-up display system, and merges the identification object and the relative motion parameter information into an output image according to preset information processing rules; the image projection device projects the output image to a designated area of a preset projection surface.
- the head-up display system provided by an embodiment of the present disclosure includes an image acquisition device 11, a processing device 12 and an image projection device 13.
- the head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions of the vehicle body.
- the head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or may be powered by the head-up display system 10 itself.
- Augmented reality (AR) is a combination of real and virtual scenes: virtual data is superimposed on the real environment and displayed in the same picture, providing an interactive mode of presentation.
- the head-up display system 10 can provide driving information to the driver through the AR mode.
- the image acquisition device 11 is used to capture a real-time image of a preset image acquisition area.
- the image acquisition device 11 may be an image or video capture device such as a camera.
- the preset image acquisition area may be an area that covers a blind spot in the rearview mirror on both sides.
- the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112.
- the first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
- the detailed internal structure of the head-up display system 10 may be as shown in FIG. 2, which includes an image acquisition device 11, a processing device 12 (for example, a main control circuit board), and an image projection device 13 .
- the image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example, two cameras.
- the main control circuit board is also provided with a power management circuit for managing the power supply of the entire head-up display system 10, including switching between the external power supply and the optional internal battery, charging and discharging, and power distribution to the various devices in the head-up display system 10.
- the main control circuit board may include a processor for data processing, and circuits and interfaces corresponding to various functions such as a power supply.
- the processing device 12 is used to identify, using preset image recognition rules, the identification object in the real-time image and/or the relative motion parameter information between the identification object and the carrier of the head-up display system 10, to generate prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and to merge the prompt information and the real-time image into an output image.
- the preset image recognition rule may be set according to the image acquisition device 11: when the image acquisition device 11 is a single camera, the real-time image captured by that camera is identified directly; when the image acquisition device 11 comprises two or more cameras, the real-time images captured by two cameras can be processed with methods such as a binocular parallax ranging algorithm. Recognition of the identification object can be achieved by methods such as model training and deep learning.
- the relative motion parameters may include the relative distance and relative motion speed between the identification object and the carrier. The relative motion speed may be determined by dividing the change in relative distance between two time points by the time interval between them.
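As an illustrative sketch (not part of the disclosure), the relative motion speed described above follows directly from two distance measurements; the function name and units below are assumptions:

```python
def relative_speed(d1_m, d2_m, t1_s, t2_s):
    """Closing speed of the identified object between two time points.

    A positive result means the object is approaching the carrier.
    """
    if t2_s <= t1_s:
        raise ValueError("t2_s must be later than t1_s")
    return (d1_m - d2_m) / (t2_s - t1_s)

# Object measured at 10 m, then 7 m one second later: closing at 3 m/s.
print(relative_speed(10.0, 7.0, 0.0, 1.0))
```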
- when two cameras are used, the real-time image of one camera may be pre-selected and merged into the output image according to the preset information processing rule.
- the preset information processing rule may be set according to the identification object and the relative motion parameters, and merges the required prompt information with the captured image, that is, the prompt information is displayed directly on the captured image: for example, the general outline of the identified object can be marked, and when a relative motion parameter exceeds a preset value, prompt information such as the distance or a warning can be displayed.
- in order to strengthen the identification of key target objects and enable targeted warning measures, a single camera can be combined with deep-learning-based object recognition: the scene in the image is segmented into roads, obstacles, and various special objects for feature extraction, and on this basis pattern matching based on machine learning and deep learning recognizes the identification object, focusing on humans, vehicles, and other objects that have a significant impact on driving safety.
- in the single-camera case, auxiliary means such as machine learning, deep learning, or radar can be used to obtain the distance and displacement of the identification object, so that clearer prompt information can be given later.
- the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
- the distance and displacement of obstacles can be judged by the dual-camera ranging principle, that is, binocular parallax ranging. Similar to human binocular vision, when the two cameras image the same scene simultaneously, the position of the same object differs between the two images: a near object shows a large positional difference between the two, while a far object shows a small one. In this way, the proximity and speed of identification objects in relative motion, such as people or cars, can be detected, and prompt information is then generated according to the preset information processing rules, for example a red warning graphic generated through calculation, covering the object distance, moving speed, and a red warning at the limit distance, superimposed on the physical image and supplemented by a sound warning.
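The binocular parallax ranging principle can be sketched as follows, assuming a standard pinhole stereo model with a known focal length in pixels and a known camera baseline; all names and numbers are illustrative, not from the disclosure:

```python
def stereo_depth_m(focal_px, baseline_m, x_left_px, x_right_px):
    """Binocular parallax ranging: depth = focal_length * baseline / disparity.

    Near objects produce a large disparity between the two cameras' images;
    far objects produce a small one, exactly as described in the text.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity

# 700 px focal length, 12 cm baseline, 21 px disparity -> 4 m away.
print(stereo_depth_m(700.0, 0.12, 321.0, 300.0))
```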
- the first image acquisition device 111 may be a visible light image acquisition device
- the second image acquisition device 112 may be an infrared image acquisition device.
- the visible light image acquisition device may be a visible light camera
- the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
- a wide-angle + telephoto configuration may also be used: the visible light camera uses a wide-angle lens and the infrared camera uses a telephoto lens, and the color image captured by the visible light camera plus the black-and-white image captured and parsed from the infrared camera form the basis of image processing; in this way, more differentiated environmental information can be obtained, enhancing the ability of the stereo vision system to cope with night scenes or special weather.
- one more visible light camera can be added to form a three-camera configuration: the two visible light cameras are responsible for binocular parallax ranging, and the infrared camera is specifically responsible for dark-light imaging.
- the system further includes an infrared emitting device 14 for sending an infrared beam to the identified object.
- the processing device 12 is further configured to determine the distance between the identification object and the carrier by a time-of-flight (TOF) ranging method, according to the time at which the infrared emitting device 14 sends the infrared beam and the time at which the second image acquisition device 112 receives the beam reflected by the identified object.
- the processing device 12 acquires obstacle depth information from the time difference between the infrared beam being emitted toward the obstacle by the infrared emitting device 14 and the reflected beam being received by the second image acquisition device 112 (i.e., the infrared camera), so that the three-dimensional information of the obstacle can be determined more quickly and accurately.
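A minimal sketch of the TOF calculation described above: the beam travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. The function name is an illustrative assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Time-of-flight ranging: one-way distance = c * round_trip_time / 2."""
    return C * round_trip_s / 2.0

# A 20 ns round trip corresponds to roughly 3 m.
print(tof_distance_m(20e-9))
```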
- the image projection device 13 is configured to project the output image to a designated area of a preset projection surface.
- the preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides.
- Two head-up display systems 10 may be provided on one car, respectively corresponding to the rear-view mirrors on both sides. The two head-up display systems 10 respectively project the blind spot images on both sides and the prompt information to the rearview mirrors on both sides, so that the driver can directly observe the blind spot in the rear vision and get the prompt information, which greatly enhances the driving safety.
- the image projection device 13 may use a dynamic zoom projection lens and an LED (Light Emitting Diode) light source to illuminate a digital micromirror device (DMD) based on Digital Light Processing (DLP) technology, thereby converting electrical signals into optical signals.
- the head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and assisted by magnetic attraction to the metal part of the door; or, as shown in FIG. 4, it can be mechanically fixed to the upper or lower edge of the inside of the side window.
- reference numeral 31 represents a projected image
- 32 represents a virtual image of the projected image visually formed when the user views the rearview mirror.
- in step 501, the camera captures real-time image information of the close-range environment on the side of the vehicle body and transmits it to the main control circuit board.
- the processor of the main control circuit board can perform step 502 and step 503 simultaneously using parallel processing.
- in step 502, the real-time image undergoes compression, difference, sharpening, and other processing to adapt it to the subsequent projection data transmission, and the process then proceeds to step 507.
- in step 503, the preset target image recognition rules are used to extract key target features from the video information and identify the recognition object, and then step 504 is executed.
- in step 504, the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the recognized object and the vehicle body: with dual cameras, the binocular parallax ranging algorithm can be used; with an infrared transmitter, the TOF ranging algorithm can be used. The movement trajectory and trend can also be estimated. Then step 505 is executed.
- in step 505, according to the recognition and calculation results, a risk assessment with respect to the distance to the vehicle body is performed and the risk level is judged, and then step 506 is executed.
- in step 506, the AR overlay prompt information is generated according to the risk level, including the calculated information that needs to be displayed directly and the associated warning icons retrieved from a pre-stored database according to the calculation result, such as a warning color bar and an evasion icon, and then step 507 is performed.
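The risk assessment could, for example, grade risk from the measured distance and closing speed via time-to-collision; the thresholds and level names below are illustrative assumptions, since the disclosure does not specify numeric values:

```python
def risk_level(distance_m, closing_speed_mps,
               warn_ttc_s=3.0, danger_ttc_s=1.5, limit_distance_m=2.0):
    """Grade risk from distance and closing speed (hypothetical thresholds).

    Uses time-to-collision (TTC) plus a hard distance limit.
    """
    if distance_m <= limit_distance_m:
        return "danger"              # inside the limit distance: always warn
    if closing_speed_mps <= 0:
        return "safe"                # object holding distance or falling back
    ttc = distance_m / closing_speed_mps
    if ttc <= danger_ttc_s:
        return "danger"
    if ttc <= warn_ttc_s:
        return "warning"
    return "safe"

print(risk_level(9.0, 4.0))   # TTC 2.25 s
print(risk_level(1.5, 0.0))   # inside the hard limit
```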
- in step 507, the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, the projection light engine).
- in step 508, the projection signal is pushed to the projection light engine, optically processed, and projected.
- in step 509, the projection result is presented in the projection area of the car's exterior mirror.
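The working process of steps 501 to 509 can be sketched as a pipeline; every stage below is an illustrative stand-in injected as a callable, not the disclosed implementation:

```python
def headup_display_pipeline(frame, capture, enhance, recognize, measure,
                            assess, make_overlay, superimpose, project):
    """One iteration of the steps-501..509 loop with injected stages."""
    image = capture(frame)               # step 501: camera -> main board
    # steps 502 and 503 are independent and may run in parallel
    display_image = enhance(image)       # step 502: compress/sharpen
    objects = recognize(image)           # step 503: key-target recognition
    motion = measure(objects)            # step 504: distance/speed
    risk = assess(motion)                # step 505: risk level
    overlay = make_overlay(risk)         # step 506: AR prompt information
    output = superimpose(display_image, overlay)   # step 507
    return project(output)               # steps 508-509: light engine -> mirror

# Trivial stand-in stages to exercise the pipeline shape.
result = headup_display_pipeline(
    "frame0",
    capture=lambda f: f + ":captured",
    enhance=lambda i: i + ":enhanced",
    recognize=lambda i: ["pedestrian"],
    measure=lambda objs: {"pedestrian": (8.0, 2.5)},
    assess=lambda m: "warning",
    make_overlay=lambda r: f"[{r}]",
    superimpose=lambda img, ov: img + ov,
    project=lambda out: out,
)
print(result)  # frame0:captured:enhanced[warning]
```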
- the head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
- the distance sensor 15 may be an infrared distance sensor, a ranging radar, or the like.
- the infrared distance sensor has limited infrared light power and scattering surface, but it has a faster response speed and can serve as a quick pre-judgment tool.
- the infrared emitting device 14 emits infrared light of greater power and a larger scattering surface toward the obstacle, and the second image acquisition device 112 (i.e., the infrared camera) receives the infrared light reflected by the obstacle to obtain its distance or depth information; combined with the color information obtained by the visible light camera, the three-dimensional information of the obstacle can be judged more quickly and accurately.
- the distance sensor 15 may not be placed on the body of the head-up display system 10, for example, it may be placed at a position closer to the suspicious obstacle on the rear side of the vehicle body, or even a plurality of distance sensors 15 may be provided to improve the accuracy of the distance measurement.
- a projection adjustment device 16 and / or a projection image acquisition device 17 for adjusting the projection image may also be provided.
- the projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12.
- the projection image acquisition device 17 is used to collect the projection image generated by the image projection device 13.
- the processing device 12 is further configured to control the projection adjustment device 16 to adjust the projection orientation of the image projection device 13 according to the projected image, using a preset adjustment rule.
- a pan-tilt head driven by a motor or the like may be used to adjust the projection orientation by adjusting the position of the image projection device 13; alternatively, the position of the entire head-up display system 10 may be adjusted directly by a pan-tilt head or other bracket device to achieve the same effect on the projection orientation.
- the gimbal or other types of brackets can also have a manual adjustment function, which can be adjusted by the user.
- the projection image acquisition device 17 may be a camera or the like, and it may sample the projected image of the image projection device 13.
- the processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual condition of the projected image.
- the preset adjustment rule can be set according to the driver position, the projection position, etc., and the projection orientation is adjusted so that the projection image of the image projection device 13 is projected at a preset position.
- the projection position can be preset on the rearview mirror; if, due to adjustment of the rearview mirror or other causes, the projected image moves beyond the preset projection position, the projection orientation can be adjusted to keep the projected image at the preset projection position.
- the projection image acquisition device 17 may use a wide-angle camera that can simultaneously observe the state of the turn signal; when it detects that the turn signal is flashing, for example three consecutive flashes, the corresponding processing is triggered, while under normal conditions the real-time images need not be processed.
- the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light.
- the processing device 12 is also used to adjust the projection brightness of the image projection device 13 according to the intensity of the ambient light, using a preset brightness adjustment rule.
- the preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light brightness. Different projection brightness can be set according to different ambient light brightness, so that the projected image can be observed under different ambient light brightness.
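One possible form of the preset brightness adjustment rule is a mapping from ambient light intensity to projection brightness on a logarithmic scale (perceived brightness is roughly logarithmic); every numeric value and name below is an illustrative assumption:

```python
import math

def projection_brightness(ambient_lux, min_nits=40.0, max_nits=400.0,
                          dark_lux=10.0, bright_lux=10_000.0):
    """Map ambient light to projector brightness (illustrative numbers).

    Dim environments get low brightness to avoid glare; bright daylight
    gets the maximum so the projected image stays visible.
    """
    if ambient_lux <= dark_lux:
        return min_nits
    if ambient_lux >= bright_lux:
        return max_nits
    # interpolate on a log scale between the dark and bright anchors
    frac = (math.log10(ambient_lux) - math.log10(dark_lux)) / (
        math.log10(bright_lux) - math.log10(dark_lux))
    return min_nits + frac * (max_nits - min_nits)

print(projection_brightness(5.0))       # dark environment
print(projection_brightness(20_000.0))  # bright daylight
```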
- the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
- the wireless transceiver 19 may use Bluetooth, Wi-Fi, or a mobile communication network to transmit the control information.
- the user can control the head-up display system 10 in a remote control mode through an external terminal.
- the external terminal may be a mobile terminal such as a wireless remote controller or a mobile phone.
- the external terminal can activate different functions of the head-up display system 10 by setting different working modes, such as a night vision mode in which the image acquisition device can be switched to infrared imaging.
- the external terminal can send instructions to adjust the projection direction.
- taking a button-type Bluetooth remote control as an example, driving, reversing, parking, night vision, and other modes can be set.
- in night vision mode, the image acquisition device can be switched to the infrared camera, and the system can also switch to night vision mode automatically through light recognition; the night vision mode may in turn contain driving, reversing, parking, and other sub-modes. A microphone or other sound pickup device can also be provided to transmit the remote controller's voice in the car to the speaker of the head-up display system 10.
- the mobile terminal can also obtain real-time images and output images from the head-up display system 10 through wireless transmission.
- a fill light device 20 may be provided in the head-up display system 10 to supplement lighting with a flash or the like when the image acquisition device 11 captures an image.
- the head-up display system 10 may also be provided with a sound-generating device 21, such as a speaker, etc., which emits different prompt sounds according to different levels of prompt information while projecting.
- the head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power supplies can be used to power the head-up display system 10.
- Case 1: when the head-up display system 10 is placed inside the car and projects to the mirrors on both sides, the projected light is refracted by the side windows, so the projection orientation may need to be adjusted or corrected; in addition, changes in external light may require adjusting the brightness of the projected image.
- the specific adjustment process is shown in FIG. 6 and includes steps 601 to 608.
- in step 601, the system learns the behavior of the projection beam both passing and not passing through the window, and determines the position range of the projection area in the image captured by the projection image acquisition device 17 with and without the side window in the light path.
- in step 602, the ambient light sensor 18 detects the intensity of ambient light.
- step 603 it is compared with a preset light intensity threshold to determine whether it is in a dark environment, if it is in a dark environment, step 604 is performed, otherwise, step 605 is performed.
- step 604 the projection direction is adjusted to a preset dark light angle, the projection area of the external rearview mirror is avoided, and the projection picture is directly projected onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at a specific projection angle, the projection image of the image projection device 13 on the side window glass can be observed by the driver.
- step 605 the projection image acquisition device 17 identifies the position of the projection area in the acquired image.
- step 606 the position of the identified projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed, otherwise step 608 is executed.
- step 607 no adjustment and compensation are made to the projection orientation.
- step 608 the projection orientation is adjusted and compensated so that the projection area falls within the position range.
- Case 2 When the side mirrors are adjusted, the head-up display system 10 tracks the projection position in real time and performs real-time adjustment, as shown in FIG. 8, and the specific process includes steps 801 to 804.
- step 801 the projection image acquisition device 17 recognizes the edge of the projected image of the exterior mirror of the vehicle.
- step 802 whether the edge area of the projected image exceeds the side mirror's mirror surface or a predetermined range recognized by the system. If it exceeds, then step 804 is executed; otherwise, step 803 is executed.
- step 803 no projection orientation adjustment is made.
- step 804 the projection orientation is adjusted so that the projected image is projected into the predetermined range of the side mirror, and if the adjustable range is exceeded, the user may be prompted to perform human intervention.
- Case three according to the steering situation of the vehicle, real-time projection is performed, as shown in FIG. 9, including steps 901 to 904.
- step 901 it is determined that the vehicle is turning.
- judging the turning of the vehicle can also be achieved by using the projection image acquisition device 17 to capture the turn signal, for example, the image captured by the projection image acquisition device 17 has a steering The light flashes, if three consecutive times, it is determined that the vehicle is turning.
- step 902 the processing function of the head-up display system 10 on the steering side is started, and the movement trend analysis is performed for the movement state of the target.
- step 903 target features are extracted and fitted and classified, and high-risk features are retrieved for matching.
- step 904 more data analysis results and warning prompts than normal information can be superimposed on the physical image.
- the driver can be prompted with projection information and sound information.
- the image acquisition device 11 collects the environmental image on the outside and rear of the vehicle body and sends it to the processing device 12 for image processing and analysis and calculation, and then sends the environmental real video information and the virtual information calculated based on the analysis of the real image data
- the image projection device 13 is projected onto a partial mirror surface of the exterior mirror of the car.
- the projection interface gives graphics or voice prompts superimposed on the physical image. In this way, the driver can obtain more information based on the projected image and improve driving safety.
- the head-up display method provided by an embodiment of the present disclosure includes steps 1001 to 1003.
- step 1001 a real-time image of a preset image acquisition area is captured.
- a real-time image can be captured by the image acquisition device 11 in the head-up display system 10.
- the head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions on the vehicle body.
- the head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or it may be powered by the battery of the head-up display system 10 itself.
- augmented reality (AR) combines real and virtual scenes: based on images captured by a camera, computer processing superimposes virtual data onto the real environment, and the combined picture is displayed interactively.
- the head-up display system 10 can provide driving information to the driver through the AR mode.
- the image acquisition device 11 may be an image or video capture device such as a camera.
- the preset image acquisition area may be an area covering the blind spots of the rearview mirrors on both sides.
- the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112.
- the first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
- the detailed internal structure of the head-up display system 10 may be as shown in FIG. 2, which includes an image acquisition device 11, a processing device 12 (for example, a main control circuit board), and an image projection device 13 .
- the image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example, two cameras. The main control circuit board is also provided with a power management circuit for managing the power of the entire head-up display system 10, including switching between the external power supply and the optional built-in battery, charging and discharging, and power distribution to each device in the head-up display system 10.
- the main control circuit board may include a processor for data processing, and circuits and interfaces corresponding to various functions such as power supply.
- a preset image recognition rule is used to identify the recognition object in the real-time image and/or the relative motion parameter information of the recognition object and the carrier of the head-up display system 10; according to the recognition object and/or the relative motion parameter information, preset information processing rules are used to generate prompt information, and the prompt information and the real-time image are merged into an output image.
- the step 1002 may be performed by the processing device 12 in the head-up display system 10 as described in FIG. 1.
- the preset image recognition rule may be set according to the image acquisition device 11: when the image acquisition device 11 is a single camera, recognition may be performed on the real-time image captured by that camera; when the image acquisition device 11 comprises two or more cameras, the real-time images captured by two cameras can be recognized through methods such as binocular parallax ranging. Recognition of objects can be achieved by methods such as model training and deep learning.
- the relative motion parameters may include the relative distance and relative speed between the recognized object and the carrier. The relative speed may be determined by dividing the change in relative distance between two time points by the time interval between them.
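The relative-speed rule described above can be sketched as follows (a minimal illustration; the function name and sample encoding are assumptions, not part of the disclosure):

```python
def relative_speed(d1: float, d2: float, t1: float, t2: float) -> float:
    """Estimate relative speed (m/s) from two distance samples.

    d1, d2: relative distance (m) measured at times t1, t2 (s).
    A negative result means the object is approaching the carrier.
    """
    if t2 <= t1:
        raise ValueError("t2 must be later than t1")
    return (d2 - d1) / (t2 - t1)

# Example: object closes from 12 m to 9 m over 0.5 s -> -6.0 m/s (approaching)
print(relative_speed(12.0, 9.0, 0.0, 0.5))
```

In practice the two distances would come from successive ranging results (binocular parallax or TOF), sampled at the camera frame rate.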
- a real-time image of one camera may be pre-selected and combined as an output image according to the preset information processing rule.
- the preset information processing rule may be set according to the recognition object and the relative motion parameters, merging the required prompt information with the captured image, that is, displaying the prompt information directly on the captured image: for example, the general outline of the recognized object can be indicated, and when a relative motion parameter exceeds a preset value, prompt information such as the distance or warning information can be displayed.
- in order to strengthen the identification of key target objects and provide targeted warnings, a single camera can be combined with deep-learning-based object recognition: the scene in the image can be segmented into roads, obstacles, and other special objects for feature extraction, and on this basis pattern matching based on machine learning and deep learning is performed to recognize objects, focusing on people, vehicles, and other objects that significantly affect driving safety.
- for the single-camera case, auxiliary means such as machine learning, deep learning, or radar can be used to obtain the distance and displacement of the recognized object, so that clearer prompt information can be given later.
- the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
- the distance and displacement of obstacles can be judged by the dual-camera ranging principle, that is, binocular parallax ranging. Similar to human binocular vision, when two cameras image the same scene simultaneously, the same object occupies different positions in the two images: a near object shows a large position difference between the two images, while a distant object shows a small difference. In this way, the proximity and speed of recognized objects such as people or cars in relative motion can be detected, and prompt information can then be generated according to the preset information processing rules, for example, a red warning graphic generated through calculation, including the object distance and moving speed, with a red warning at the limit distance, superimposed on the physical image and supplemented by a sound warning.
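The binocular parallax ranging principle reduces to the standard depth formula Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of the object between the two images. A minimal sketch (the numeric values are illustrative assumptions, not from the disclosure):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Binocular parallax ranging: depth Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel offset of the same object between the images.
    Near objects give large disparity; distant objects give small disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 0.25 m baseline, 100 px disparity -> 2.0 m
print(depth_from_disparity(800.0, 0.25, 100.0))
```

The disparity itself would be obtained by matching the same feature in the rectified left and right images.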
- the first image acquisition device 111 may be a visible light image acquisition device
- the second image acquisition device 112 may be an infrared image acquisition device.
- the visible light image acquisition device may be a visible light camera
- the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
- a wide-angle plus telephoto configuration may be used: the visible light camera uses a wide-angle lens and the infrared camera a telephoto lens, and the color image from the visible light camera together with the black-and-white image captured and parsed from the infrared camera forms the basis of image processing; in this way, more differentiated environmental information can be obtained, enhancing the stereo vision system's ability to cope with night scenes or special weather.
- one more visible light camera can be added to form a three-camera configuration.
- the two visible light cameras are responsible for binocular parallax distance measurement, and the infrared camera is specifically responsible for dark light imaging.
- the system further includes an infrared emitting device 14 for sending an infrared beam to the identified object.
- the processing device 12 is further configured to determine the distance between the recognized object and the carrier using the time-of-flight (TOF) ranging method, according to the time at which the infrared emitting device 14 emits the infrared beam and the time at which the second image acquisition device 112 receives the beam produced by the infrared beam being reflected off the recognized object.
- the processing device 12 obtains obstacle depth information from the time difference between the infrared beam being emitted toward the obstacle by the infrared emitting device 14 and the reflected beam being received by the second image acquisition device 112 (i.e., the infrared camera), so that the three-dimensional information of the obstacle can be determined more quickly and accurately.
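Since the beam travels to the object and back, the TOF determination above amounts to distance = c·Δt/2. A minimal sketch (the timing interface is an assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(t_emit: float, t_receive: float) -> float:
    """Time-of-flight ranging: the beam travels out and back,
    so distance = c * (t_receive - t_emit) / 2."""
    dt = t_receive - t_emit
    if dt <= 0:
        raise ValueError("receive time must follow emit time")
    return C * dt / 2.0

# A 20 ns round trip corresponds to roughly 3 m
print(round(tof_distance(0.0, 20e-9), 3))
```

Real TOF sensors measure Δt indirectly (e.g. by phase shift of a modulated beam), but the distance relation is the same.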
- step 1003 the output image is projected to a designated area of a preset projection surface.
- the image projection device 13 in the head-up display system 10 may perform projection.
- the preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides.
- two head-up display systems 10 may be provided on one car, corresponding to the rearview mirrors on the two sides. The two head-up display systems 10 respectively project the blind-spot images and prompt information onto the rearview mirrors on both sides, so that the driver can directly observe the rear-view blind spots and receive the prompt information, greatly enhancing driving safety.
- the image projection device 13 may use a dynamic zoom projection lens and an LED light source to illuminate a DMD based on DLP technology to achieve conversion of electrical signals into optical signals.
- the head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and supplemented by magnetic attachment to the metal part of the door; or, as shown in the corresponding figure, it can be placed on the upper or lower edge of the inside of the window.
- reference numeral 31 represents a projected image
- 32 represents a virtual image of the projected image visually formed when the user views the rearview mirror.
- step 501 the camera captures real-time image information of the close-up environment on the side of the vehicle body and transmits it to the main control circuit board.
- the processor of the main control circuit board and the like can perform step 502 and step 503 simultaneously using a parallel processing method.
- step 502 the real-time image is subjected to compression, difference, sharpening, etc. to adapt the real-time image to subsequent projection data transmission processing, etc., and the process proceeds to step 507.
- step 503 the preset target image recognition rules are used to extract the key target feature of the video information and identify the recognition object, and then step 504 is executed.
- step 504 the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the recognized object and the vehicle body. With dual cameras, the binocular parallax ranging algorithm can be used; with an infrared transmitter, the TOF ranging algorithm can be used; the movement trajectory and trend can also be estimated. Then step 505 is executed.
- step 505 according to the recognition and calculation results, a risk assessment is performed with respect to the distance to the vehicle body and the level is judged, and then step 506 is executed.
- step 506 AR overlay prompt information is given according to the risk level, including the calculated information that needs to be displayed directly and associated warning icons pre-stored in the database and retrieved according to the calculation result, such as a warning color bar and an evasion icon; then step 507 is performed.
- step 507 the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, a projection light machine).
- step 508 the projection signal is pushed to the projection light machine, optically processed and projected.
- step 509 the projection result is presented in the projection area of the exterior mirror of the car.
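The risk assessment of step 505 could, for example, grade risk by time-to-contact; the sketch below is an illustrative assumption (the thresholds and level names are not from the disclosure):

```python
def risk_level(distance_m: float, closing_speed: float) -> str:
    """Toy risk grading for step 505 based on time-to-contact.

    closing_speed > 0 means the object is approaching the vehicle body.
    The 1 s / 3 s thresholds are illustrative assumptions.
    """
    if closing_speed <= 0:
        return "none"            # object holding distance or receding
    ttc = distance_m / closing_speed   # time to contact, seconds
    if ttc < 1.0:
        return "high"            # red warning graphic plus sound alert
    if ttc < 3.0:
        return "medium"          # warning color bar
    return "low"                 # informational overlay only

print(risk_level(6.0, 8.0))   # ttc = 0.75 s -> "high"
```

The returned level would then select which pre-stored warning icons (step 506) are superimposed on the real-time image in step 507.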
- the head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
- the distance sensor 15 may be an infrared distance sensor, a ranging radar, or the like.
- the infrared distance sensor has limited infrared light power and a limited scattering surface, but it has a faster response speed and can be used as a quick pre-judgment tool.
- the infrared emitting device 14 emits infrared light of greater power and a larger scattering surface toward the obstacle, and the second image acquisition device 112 (i.e., the infrared camera) then receives the infrared light reflected by the obstacle to obtain the distance or depth information of the obstacle; combined with the color information obtained by the visible light camera, the three-dimensional information of the obstacle can be judged more quickly and accurately.
- the distance sensor 15 may not be placed on the body of the head-up display system 10, for example, it may be placed at a position closer to the suspicious obstacle on the rear side of the vehicle body, or even a plurality of distance sensors 15 may be provided to improve the accuracy of the ranging.
- a projection adjustment device 16 and / or a projection image acquisition device 17 for adjusting the projection image may also be provided.
- the projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12.
- the projection image acquisition device 17 is used to acquire the projection image generated by the image projection device 13.
- the processing device 12 is further configured to control the projection adjustment device 16 to adjust the projection orientation of the image projection device 13 according to the projected image, using a preset adjustment rule.
- a motor-driven pan-tilt head or the like may be used to adjust the projection orientation by adjusting the position of the image projection device 13; alternatively, the head-up display system 10 as a whole may be adjusted by a pan-tilt head or other bracket device to achieve the same adjustment of projection orientation.
- the gimbal or other types of brackets can also have a manual adjustment function, which can be adjusted by the user.
- the projection image collection device 17 may be a camera or the like, and the projection image collection device 17 may sample the projected image of the image projection device 13.
- the processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual condition of the projected image.
- the preset adjustment rule can be set according to the driver position, the projection position, etc., and the projection orientation is adjusted so that the projection image of the image projection device 13 is projected at a preset position.
- the projection position can be preset in the rearview mirror. Due to the adjustment of the rearview mirror, etc., if the projected image exceeds the preset projection position, the projection orientation can be adjusted to keep the projected image at the preset projection position.
- the projection image acquisition device 17 may use a wide-angle camera, which can simultaneously observe the state of the turn signal and detect that it is flashing, for example three consecutive flashes; such detection usually does not require processing of the real-time images.
- the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light.
- the processing device 12 is also used to adjust the projection brightness of the image projection device 13 according to the intensity of the ambient light, using a preset brightness adjustment rule.
- the preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light brightness. Different projection brightness can be set according to different ambient light brightness, so that the projected image can be observed under different ambient light brightness.
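One possible preset brightness adjustment rule is a simple interpolation between calibration points; the sketch below is an assumed illustration (the lux range and percentages are not specified in the disclosure):

```python
def projection_brightness(ambient_lux: float, min_pct: float = 20.0, max_pct: float = 100.0) -> float:
    """Illustrative preset brightness rule: scale projector output with
    ambient light so the projected image stays visible (bright surroundings
    need a brighter projection). Calibration points are assumptions."""
    DARK, BRIGHT = 10.0, 10_000.0   # assumed calibration points, lux
    if ambient_lux <= DARK:
        return min_pct
    if ambient_lux >= BRIGHT:
        return max_pct
    # linear interpolation between the two calibration points
    frac = (ambient_lux - DARK) / (BRIGHT - DARK)
    return min_pct + frac * (max_pct - min_pct)

print(projection_brightness(5.0))      # dark cabin -> 20.0
print(projection_brightness(20_000))   # full daylight -> 100.0
```

The processing device would feed the ambient light sensor reading into such a rule and push the result to the image projection device.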
- the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
- the wireless transceiver 19 may use Bluetooth, Wi-Fi, or a mobile communication network to transmit the control information.
- the user can control the head-up display system 10 in a remote-control mode through an external terminal.
- the external terminal may be a mobile terminal such as a wireless remote controller or a mobile phone.
- the external terminal can activate different functions of the head-up display system 10 by setting different working modes; for example, in the night vision mode the image acquisition device 17 can be switched to infrared imaging.
- the external terminal can send instructions to adjust the projection direction.
- taking a button-type Bluetooth remote control as an example, driving, reversing, parking, night vision, and other modes can be set.
- in night vision mode the image acquisition device 17 can be switched to the infrared camera, and the system can also switch to night vision mode automatically through light recognition; night vision mode may itself include driving, reversing, parking, and other sub-modes. A microphone or other sound pickup device can also be provided to transmit sound from the remote controller in the car to the speaker of the head-up display system 10.
- the mobile terminal can also obtain real-time images and output images from the head-up display system 10 through wireless transmission.
- a fill light device 20 may be provided in the head-up display system 10 to provide fill light, with a flash or the like, when the image acquisition device 11 captures an image.
- the head-up display system 10 may also be provided with a sound-generating device 21, such as a speaker, etc., which emits different prompt sounds according to different levels of prompt information while projecting.
- the head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power supplies can be used to power the head-up display system 10.
- Case 1: when the head-up display system 10 is placed inside the car and projects onto the mirrors on both sides, the projected light is refracted by the side windows, so the projection orientation may need to be adjusted or corrected; in addition, changes in external light may require adjusting the brightness of the projected image.
- the specific adjustment process is shown in FIG. 6 and includes steps 601 to 608.
- step 601 learning is performed for the cases in which the projection beam does and does not pass through the side window, to determine the position range of the projection area within the image captured by the projection image acquisition device 17 with and without the side window.
- step 602 the ambient light sensor 18 detects the intensity of the ambient light.
- step 603 the detected intensity is compared with a preset light intensity threshold to determine whether the environment is dark; if it is, step 604 is performed, otherwise step 605 is performed.
- step 604 the projection direction is adjusted to a preset dark-light angle that avoids the projection area of the exterior rearview mirror, and the picture is projected directly onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at this specific projection angle, the image projected by the image projection device 13 onto the side window glass can be observed by the driver.
- step 605 the projection image acquisition device 17 identifies the position of the projection area in the acquired image.
- step 606 the position of the identified projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed, otherwise step 608 is executed.
- step 607 no adjustment and compensation are made to the projection orientation.
- step 608 the projection orientation is adjusted and compensated so that the projection area falls within the position range.
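The branching of steps 601 to 608 can be summarized in a small decision function (the threshold value and the rectangular encoding of the learned position range are assumptions, not part of the disclosure):

```python
def case1_adjust(ambient_lux: float, dark_threshold: float,
                 area_pos: tuple, learned_range: tuple) -> str:
    """Decision sketch for steps 601-608.

    area_pos: (x, y) of the projection area found in the captured image.
    learned_range: (x_min, x_max, y_min, y_max) learned in step 601 for the
    with/without-side-window cases.
    """
    if ambient_lux < dark_threshold:                        # steps 602-604
        return "project onto side window at preset dark-light angle"
    x, y = area_pos                                         # step 605
    x_min, x_max, y_min, y_max = learned_range              # step 606
    if x_min <= x <= x_max and y_min <= y <= y_max:
        return "no adjustment"                              # step 607
    return "adjust orientation into learned range"          # step 608
```

In the dark branch the mirror projection area is avoided entirely, matching step 604; otherwise the orientation is only touched when the observed projection area drifts outside the learned range.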
- Case 2: when the side mirrors are adjusted, the head-up display system 10 tracks the projection position in real time and adjusts accordingly, as shown in FIG. 8; the specific process includes steps 801 to 804.
- step 801 the projection image acquisition device 17 recognizes the edge of the projected image of the exterior mirror of the vehicle.
- step 802 it is determined whether the edge of the projected image exceeds the mirror surface of the side mirror or a predetermined range recognized by the system; if it does, step 804 is executed, otherwise step 803 is executed.
- step 803 no projection orientation adjustment is made.
- step 804 the projection orientation is adjusted so that the projected image is projected into the predetermined range of the side mirror, and if the adjustable range is exceeded, the user may be prompted to perform human intervention.
- Case 3: projection is adjusted in real time according to the steering state of the vehicle, as shown in FIG. 9, including steps 901 to 904.
- step 901 it is determined that the vehicle is turning.
- judging that the vehicle is turning can also be achieved by using the projection image acquisition device 17 to capture the turn signal: for example, if the image captured by the projection image acquisition device 17 shows the turn signal flashing three consecutive times, it is determined that the vehicle is turning.
- step 902 the processing function of the head-up display system 10 on the steering side is started, and the movement trend analysis is performed for the movement state of the target.
- step 903 target features are extracted and fitted and classified, and high-risk features are retrieved for matching.
- step 904 more data analysis results and warning prompts than in normal operation can be superimposed on the physical image.
- the driver can be prompted with projection information and sound information.
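The three-consecutive-flashes rule for turn detection in Case 3 can be sketched as follows (the per-frame boolean encoding of "turn signal seen lit" is an assumption):

```python
def vehicle_turning(flash_events: list) -> bool:
    """Turn detection sketch: the projection camera reports, per sampling
    window, whether the turn signal was seen lit. Three consecutive
    flashes mean the vehicle is judged to be turning, per the description."""
    run = 0
    for lit in flash_events:
        run = run + 1 if lit else 0     # count consecutive lit samples
        if run >= 3:
            return True
    return False

print(vehicle_turning([True, False, True, True, True]))  # True
print(vehicle_turning([True, True, False, True]))        # False
```

A positive result would start the steering-side processing of step 902 (movement trend analysis of targets on that side).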
- the image acquisition device 11 collects environmental images of the outside and rear of the vehicle body and sends them to the processing device 12 for image processing, analysis, and calculation; the real environmental video information and the virtual information calculated from analysis of the real image data are then sent to the image projection device 13 and projected onto part of the mirror surface of the car's exterior mirror.
- the projection interface gives graphic or voice prompts superimposed on the physical image. In this way, the driver can obtain more information from the projected image, improving driving safety.
Claims (13)
- 一种平视显示系统,包括:图像采集装置,处理装置和图像投影装置;其中,A head-up display system includes: an image acquisition device, a processing device, and an image projection device; wherein,所述图像采集装置用于捕获预设图像采集区域的实时图像;The image acquisition device is used to capture a real-time image of a preset image acquisition area;所述处理装置用于采用预设图像识别规则,识别所述实时图像中的识别对象和/或所述识别对象与所述平视显示系统的载体的相对运动参数信息,并且用于根据所述识别对象和/或所述相对运动参数信息,采用预设信息处理规则生成提示信息,并将所述提示信息和所述实时图像合并为输出图像;The processing device is used to recognize the identification object in the real-time image and / or the relative motion parameter information of the identification object and the carrier of the head-up display system using preset image recognition rules, and is used to identify The object and / or the relative motion parameter information, using preset information processing rules to generate prompt information, and merging the prompt information and the real-time image into an output image;所述图像投影装置用于向预设投影面的指定区域投影所述输出图像。The image projection device is used to project the output image to a designated area of a preset projection surface.
- 根据权利要求1所述的系统,其中,所述图像采集装置包括第一图像采集装置和第二图像采集装置;The system according to claim 1, wherein the image acquisition device includes a first image acquisition device and a second image acquisition device;所述处理装置用于根据所述第一图像采集装置和所述第二图像采集装置分别捕获的实时图像,采用双摄测距原理,检测所述识别对象与所述载体的距离。The processing device is used to detect the distance between the identification object and the carrier according to the real-time images captured by the first image acquisition device and the second image acquisition device respectively, using the principle of dual-camera ranging.
- 根据权利要求2所述的系统,其中,The system according to claim 2, wherein所述第一图像采集装置为可见光图像采集装置,所述第二图像采集装置为红外图像采集装置。The first image acquisition device is a visible light image acquisition device, and the second image acquisition device is an infrared image acquisition device.
- 根据权利要求3所述的系统,还包括红外发射装置,用于向所述识别对象发送红外波束;The system according to claim 3, further comprising an infrared emitting device for sending an infrared beam to the identified object;所述处理装置还用于根据所述红外发射装置发送所述红外波束的发射时间,和所述第二图像采集装置接收所述红外波束被所述识别对象反射产生的光束的接收时间,采用飞行时间TOF测距法,确定所述识别对象与所述载体的距离。The processing device is further used for transmitting the infrared beam according to the transmission time of the infrared beam and the second image acquisition device receiving the light beam generated by the infrared beam reflected by the identification object The time TOF distance measurement method determines the distance between the identification object and the carrier.
- 根据权利要求1至4任一项所述的系统,还包括投影调整装置和/或投影图像采集装置,其中,The system according to any one of claims 1 to 4, further comprising a projection adjustment device and / or a projection image acquisition device, wherein,所述投影调整装置用于在所述处理装置的控制下调整所述图像投影装置的投影方位;The projection adjustment device is used to adjust the projection orientation of the image projection device under the control of the processing device;所述投影图像采集装置用于捕获所述图像投影装置产生的投影图像;The projection image acquisition device is used to capture the projection image generated by the image projection device;所述处理装置还用于根据所述投影图像,采用预设调整规则控制所述投影调整装置以调整所述图像投影装置的投影方位。The processing device is further configured to control the projection adjustment device to adjust the projection orientation of the image projection device using preset adjustment rules according to the projected image.
- 根据权利要求1至4任一项所述的系统,还包括环境光线传感器,用于检测环境光线的强度;The system according to any one of claims 1 to 4, further comprising an ambient light sensor for detecting the intensity of ambient light;所述处理装置还用于根据所述环境光线的强度,采用预设调整规则,调整所述图像投影装置的投影亮度。The processing device is further used for adjusting the projection brightness of the image projection device according to the intensity of the ambient light and using preset adjustment rules.
- 一种平视显示方法,包括:A head-up display method, including:捕获预设图像采集区域的实时图像;Capture real-time images of preset image collection areas;采用预设图像识别规则,识别所述实时图像中的识别对象和/或所述识别对象与平视显示系统的载体的相对运动参数信息,并且根据所述识别对象和/或所述相对运动参数信息,采用预设信息处理规则生成提示信息,并将所述提示信息和所述实时图像合并为输出图像;以及A preset image recognition rule is used to identify the relative motion parameter information of the recognition object in the real-time image and / or the recognition object and the head-up display system carrier, and according to the recognition object and / or the relative motion parameter information , Using preset information processing rules to generate prompt information, and merging the prompt information and the real-time image into an output image; and向预设投影面的指定区域投影所述输出图像。The output image is projected to the designated area of the preset projection surface.
- The method according to claim 7, wherein the step of using a preset image recognition rule to recognize the recognition object in the real-time image and/or the relative-motion parameter information between the recognition object and the carrier of the head-up display system comprises: detecting the distance between the recognition object and the carrier by applying the dual-camera ranging principle to the real-time images captured respectively by a first image acquisition device and a second image acquisition device.
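The dual-camera ranging principle referenced in this claim is commonly implemented as stereo triangulation: for two horizontally aligned cameras, the distance follows from the pixel disparity between matched image points. A minimal sketch under hypothetical camera parameters (the claim itself specifies no concrete values):

```python
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate object distance via the stereo (dual-camera) ranging relation
    Z = f * B / d, where f is the focal length in pixels, B the camera
    baseline in meters, and d the horizontal pixel disparity of the matched
    feature between the two real-time images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Hypothetical parameters: 1000 px focal length, 12 cm baseline, 20 px disparity
print(stereo_distance(1000.0, 0.12, 20.0))  # 6.0 (meters)
```

A real implementation would first rectify the two images and match features (e.g. by block matching) to obtain the disparity; the formula above is the geometric core of the method.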
- The method according to claim 8, wherein the first image acquisition device captures a visible-light real-time image, and the second image acquisition device captures an infrared real-time image.
- The method according to claim 9, further comprising: transmitting an infrared beam toward the recognition object; and determining the distance between the recognition object and the carrier by a time-of-flight (TOF) ranging method, according to the transmission time of the infrared beam and the time at which the second image acquisition device receives the beam reflected by the recognition object.
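The TOF determination in this claim reduces to half the round-trip time multiplied by the speed of light. A minimal sketch with illustrative timestamps:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(t_emit_s: float, t_receive_s: float) -> float:
    """Distance from the emission and reception timestamps of a reflected
    infrared beam: the beam travels to the object and back, so the one-way
    distance is half the round-trip path."""
    round_trip = t_receive_s - t_emit_s
    if round_trip < 0:
        raise ValueError("reception cannot precede emission")
    return C * round_trip / 2.0

# A 40 ns round trip corresponds to roughly 6 m
print(round(tof_distance(0.0, 40e-9), 3))  # 5.996
```

Because the timescales are nanoseconds per meter, practical TOF sensors measure phase shift of a modulated beam rather than raw timestamps, but the distance relation is the same.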
- The method according to any one of claims 7 to 10, further comprising: adjusting the projection orientation using a preset adjustment rule.
- The method according to any one of claims 7 to 10, further comprising: adjusting the projection brightness according to the intensity of ambient light, using a preset adjustment rule.
- An automobile, comprising a vehicle body and the head-up display system according to any one of claims 1 to 6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811239821.5A CN111086451B (en) | 2018-10-23 | 2018-10-23 | Head-up display system, display method and automobile |
CN201811239821.5 | 2018-10-23 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020083318A1 true WO2020083318A1 (en) | 2020-04-30 |
Family
ID=70331862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/112816 WO2020083318A1 (en) | 2018-10-23 | 2019-10-23 | Head-up display system and display method, and automobile |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111086451B (en) |
WO (1) | WO2020083318A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112057107A (en) * | 2020-09-14 | 2020-12-11 | 无锡祥生医疗科技股份有限公司 | Ultrasonic scanning method, ultrasonic equipment and system |
CN113552905B (en) * | 2021-06-22 | 2024-09-13 | 歌尔光学科技有限公司 | Vehicle-mounted HUD position adjustment method and system |
CN114155617A (en) * | 2021-11-22 | 2022-03-08 | 支付宝(杭州)信息技术有限公司 | Parking payment method and collection equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106817568A (en) * | 2016-12-05 | 2017-06-09 | 网易(杭州)网络有限公司 | A kind of augmented reality display methods and device |
CN106856566A (en) * | 2016-12-16 | 2017-06-16 | 中国商用飞机有限责任公司北京民用飞机技术研究中心 | A kind of information synchronization method and system based on AR equipment |
CN107274725A (en) * | 2017-05-26 | 2017-10-20 | 华中师范大学 | A kind of mobile augmented reality type card identification method based on mirror-reflection |
CN207164368U (en) * | 2017-08-31 | 2018-03-30 | 北京新能源汽车股份有限公司 | Vehicle-mounted augmented reality system |
JP6384856B2 (en) * | 2014-07-10 | 2018-09-05 | Kddi株式会社 | Information device, program, and method for drawing AR object based on predicted camera posture in real time |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103434448B (en) * | 2013-08-07 | 2016-01-27 | 燕山大学 | A kind of elimination car post blind area system and using method thereof |
DE102013219556A1 (en) * | 2013-09-27 | 2015-04-02 | Continental Automotive Gmbh | Method and device for controlling an image generation device of a head-up display |
CN104608695A (en) * | 2014-12-17 | 2015-05-13 | 杭州云乐车辆技术有限公司 | Vehicle-mounted electronic rearview mirror head-up displaying device |
JP6811106B2 (en) * | 2017-01-25 | 2021-01-13 | 矢崎総業株式会社 | Head-up display device and display control method |
CN108515909B (en) * | 2018-04-04 | 2021-04-20 | 京东方科技集团股份有限公司 | Automobile head-up display system and obstacle prompting method thereof |
- 2018
- 2018-10-23: CN application CN201811239821.5A, granted as patent CN111086451B (active)
- 2019
- 2019-10-23: WO application PCT/CN2019/112816, published as WO2020083318A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
CN111086451A (en) | 2020-05-01 |
CN111086451B (en) | 2023-03-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101949358B1 (en) | Apparatus for providing around view and Vehicle including the same | |
US20180352167A1 (en) | Image pickup apparatus, image pickup control method, and program | |
KR101579100B1 (en) | Apparatus for providing around view and Vehicle including the same | |
KR102043060B1 (en) | Autonomous drive apparatus and vehicle including the same | |
WO2020083318A1 (en) | Head-up display system and display method, and automobile | |
JP4807263B2 (en) | Vehicle display device | |
EP1961613B1 (en) | Driving support method and driving support device | |
WO2020061794A1 (en) | Vehicle driver assistance device, vehicle and information processing method | |
CN114228491B (en) | System and method for enhancing virtual reality head-up display with night vision | |
KR20160144829A (en) | Driver assistance apparatus and control method for the same | |
CN103661163A (en) | Mobile object and storage medium | |
KR20170011882A (en) | Radar for vehicle, and vehicle including the same | |
KR101698781B1 (en) | Driver assistance apparatus and Vehicle including the same | |
JPWO2020100664A1 (en) | Image processing equipment, image processing methods, and programs | |
US20160225186A1 (en) | System and method for augmented reality support | |
JP2012099085A (en) | Real-time warning system on windshield glass for vehicle, and operating method thereof | |
KR20170043212A (en) | Apparatus for providing around view and Vehicle | |
WO2019111529A1 (en) | Image processing device and image processing method | |
WO2023284748A1 (en) | Auxiliary driving system and vehicle | |
US10999488B2 (en) | Control device, imaging device, and control method | |
CN206907232U (en) | One kind is based on optics multi-vision visual vehicle rear-end collision prior-warning device | |
KR101816570B1 (en) | Display apparatus for vehicle | |
WO2022004412A1 (en) | Information processing device, information processing method, and program | |
KR20160144643A (en) | Apparatus for prividing around view and vehicle including the same | |
TWI699999B (en) | Vehicle vision auxiliary system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19875292 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19875292 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.10.2021) |
|