WO2020083318A1 - Head-up display system and display method, and automobile - Google Patents

Head-up display system and display method, and automobile Download PDF

Info

Publication number
WO2020083318A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
projection
image acquisition
acquisition device
preset
Prior art date
Application number
PCT/CN2019/112816
Other languages
French (fr)
Chinese (zh)
Inventor
张永亮
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2020083318A1 publication Critical patent/WO2020083318A1/en

Links

Images

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/30 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing vision in the non-visible spectrum, e.g. night or infrared vision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R 1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R 1/26 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/31 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing stereoscopic vision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/20 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R 2300/301 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R 2300/802 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views

Definitions

  • the present disclosure relates to the technical field of vehicle-mounted equipment, and particularly to a head-up display system, a display method, and an automobile.
  • most of the rearview mirrors on both sides of a car have blind spots.
  • drivers have no way of observing the traffic situation in the blind zone through the rearview mirrors on both sides.
  • to expand the field of view, a common method is to add a convex mirror to the side mirrors on both sides or to make part of each side mirror convex.
  • in addition, the rearview mirrors mainly rely on the driver's active observation and lack active prompt information.
  • An embodiment of the present disclosure provides a head-up display system, including: an image acquisition device, a processing device, and an image projection device.
  • the image acquisition device is used to capture a real-time image of a preset image acquisition area.
  • the processing device is used to recognize, using a preset image recognition rule, the identification object in the real-time image and/or the relative motion parameter information between the identification object and the carrier of the head-up display system, to generate prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and to merge the prompt information and the real-time image into an output image.
  • the image projection device is used to project the output image to a designated area of a preset projection surface.
  • An embodiment of the present disclosure also provides a head-up display method, which includes: capturing a real-time image of a preset image collection area; using a preset image recognition rule to identify an identification object in the real-time image and/or relative motion parameter information between the identification object and the carrier of the head-up display system; generating prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and merging the prompt information and the real-time image into an output image; and projecting the output image to a designated area of a preset projection surface.
  • An embodiment of the present disclosure also provides an automobile, including a body and the head-up display system described above.
  • FIG. 1 is a schematic structural diagram of a head-up display system according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of components of a head-up display system according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure being provided outside the vehicle;
  • FIG. 4 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure provided inside a vehicle;
  • FIG. 5 is a schematic diagram of a working process of a head-up display system according to an embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of a process according to the first embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a head-up display system projecting to a side window according to an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of a processing flow in case 2 according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a processing flow of a steering scenario according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic flowchart of a head-up display method according to an embodiment of the present disclosure.
  • the image acquisition device captures a real-time image of the preset image acquisition area; the processing device identifies, according to preset image recognition rules, the identification object in the real-time image and the relative motion parameter information between the identification object and the carrier of the head-up display system, and merges the identification object and the relative motion parameter information into an output image according to a preset information processing rule; the image projection device projects the output image to a designated area of a preset projection surface.
  • the head-up display system provided by an embodiment of the present disclosure includes an image acquisition device 11, a processing device 12 and an image projection device 13.
  • the head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions of the vehicle body.
  • the head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or may be powered by the head-up display system 10 itself.
  • augmented reality (AR) combines real and virtual scenes: based on the pictures captured by a camera, computer processing superimposes virtual data on the real environment and displays both in the same picture, providing an interactive mode.
  • the head-up display system 10 can provide driving information to the driver through the AR mode.
  • the image acquisition device 11 is used to capture a real-time image of a preset image acquisition area.
  • the image acquisition device 11 may be an image or video capture device such as a camera.
  • the preset image acquisition area may be an area that covers a blind spot in the rearview mirror on both sides.
  • the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112.
  • the first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
  • the detailed internal structure of the head-up display system 10 may be as shown in FIG. 2, which includes an image acquisition device 11, a processing device 12 (for example, a main control circuit board), and an image projection device 13 .
  • the image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example, two cameras.
  • the main control circuit board is also provided with a power management circuit for managing the power supply of the entire head-up display system 10, including switching between the external power supply and the optional internal battery, charging and discharging, and power distribution to the devices in the head-up display system 10.
  • the main control circuit board may include a processor for data processing, and circuits and interfaces corresponding to various functions such as a power supply.
  • the processing device 12 is used for identifying, using preset image recognition rules, the identification object in the real-time image and/or the relative motion parameter information between the identification object and the carrier of the head-up display system 10, for generating prompt information from the identification object and/or the relative motion parameter information using preset information processing rules, and for merging the prompt information and the real-time image into an output image.
  • the preset image recognition rule may be set according to the image acquisition device 11. When the image acquisition device 11 is a single camera, the real-time image captured by that camera is identified; when the image acquisition device 11 consists of two or more cameras, the real-time images captured by two cameras can be processed by methods such as the binocular parallax ranging algorithm. Recognition of the identification objects can be achieved by methods such as model training and deep learning.
  • the relative motion parameters may include the relative distance and the relative motion speed between the identification object and the carrier. The relative motion speed may be estimated by dividing the change in relative distance between two time points by the time interval between those points.
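As a minimal illustration of the relative-speed estimate described above (a sketch in Python; the function name and sample values are assumptions, not part of the patent):

```python
# Illustrative only: relative speed estimated from two distance samples,
# as described above. A positive result means the object is closing in.
def relative_speed(dist_t1_m: float, dist_t2_m: float, dt_s: float) -> float:
    if dt_s <= 0:
        raise ValueError("time interval must be positive")
    return (dist_t1_m - dist_t2_m) / dt_s

# Example: the object was 12.0 m away, and 10.5 m away 0.5 s later -> 3.0 m/s.
approach_speed = relative_speed(12.0, 10.5, 0.5)
```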
  • the real-time image of one pre-selected camera may be merged into the output image according to the preset information processing rule.
  • the preset information processing rule may be set according to the identification object and the relative motion parameters, and merges the required prompt information with the captured image, that is, the prompt information is displayed directly on the captured image. For example, the general outline of the identified object may be marked, and when a relative motion parameter exceeds a preset value, prompt information such as the distance or a warning may be displayed.
  • in the single-camera case, in order to strengthen the identification of key target objects and enable targeted warning measures, deep-learning-based object recognition can be integrated: the scene in the image is segmented into roads, obstacles and other special objects for feature extraction, and on this basis pattern matching based on machine learning and deep learning recognizes the identification objects, with the focus on people, vehicles and other objects that have a significant impact on driving safety.
  • auxiliary means such as machine learning, deep learning or radar can be used to obtain the distance and displacement of the recognition object for the single camera situation, so that more clear prompt information can be given later.
  • the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
  • the distance and displacement of obstacles can be judged by the dual-camera ranging principle, that is, the binocular parallax ranging principle. Similar to human binocular vision, when the two cameras image the same scene simultaneously, the same object appears at different positions in the two images: a near object shows a large positional difference between the two images, while a distant object shows a small difference. In this way, the approach and speed of identification objects in relative motion, such as people or vehicles, can be detected, and prompt information can then be generated according to the preset information processing rules, for example a red warning graphic mark, generated through calculation, that includes the object distance, moving speed and limit distance, superimposed on the physical image and supplemented by a sound warning.
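For reference, the standard binocular parallax relation the text relies on can be sketched as follows (illustrative Python; the focal length and baseline are made-up calibration values, not figures from the patent):

```python
# Standard stereo ranging relation: depth Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two cameras, and d the pixel
# disparity of the same object between the two images.
def stereo_depth_m(disparity_px: float, focal_px: float = 1200.0,
                   baseline_m: float = 0.12) -> float:
    if disparity_px <= 0:
        return float("inf")   # no measurable parallax: treat as very far away
    return focal_px * baseline_m / disparity_px

near_obj = stereo_depth_m(60.0)   # large disparity -> close object (~2.4 m)
far_obj = stereo_depth_m(6.0)     # small disparity -> distant object (~24 m)
```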
  • the first image acquisition device 111 may be a visible light image acquisition device
  • the second image acquisition device 112 may be an infrared image acquisition device.
  • the visible light image acquisition device may be a visible light camera
  • the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
  • a wide-angle plus telephoto configuration may be used, with the visible light camera using a wide-angle lens and the infrared camera using a telephoto lens; the color image captured by the visible light camera plus the black-and-white image captured and parsed by the infrared camera form the basis of image processing. In this way, more differentiated environmental information can be obtained, and the ability of the stereo vision system to cope with night scenes or special weather is enhanced.
  • one more visible light camera can be added to form a three-camera configuration.
  • the two visible light cameras are responsible for binocular parallax distance measurement, and the infrared camera is specifically responsible for dark light imaging.
  • the system further includes an infrared emitting device 14 for sending an infrared beam to the identified object.
  • the processing device 12 is further configured to determine the distance between the identification object and the carrier using the time-of-flight (TOF) ranging method, based on the time at which the infrared emitting device 14 emits the infrared beam and the time at which the second image acquisition device 112 receives the beam reflected by the identification object.
  • the processing device 12 acquires obstacle depth information from the time difference between the moment the infrared emitting device 14 transmits the infrared beam toward the obstacle and the moment the second image acquisition device 112 (i.e., the infrared camera) receives the beam reflected by the obstacle; combined with the color information from the visible light camera, the three-dimensional information of the obstacle can be determined more quickly and accurately.
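The underlying TOF relation can be sketched as follows (illustrative Python; the timing values are examples, not measurements from the patent):

```python
# Time-of-flight ranging: the round trip of the infrared beam gives
# distance = c * (receive_time - emit_time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(emit_time_s: float, receive_time_s: float) -> float:
    round_trip_s = receive_time_s - emit_time_s
    if round_trip_s < 0:
        raise ValueError("receive time precedes emit time")
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A reflection received 20 ns after emission corresponds to roughly 3 m.
obstacle_distance = tof_distance_m(0.0, 20e-9)
```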
  • the image projection device 13 is configured to project the output image to a designated area of a preset projection surface.
  • the preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides.
  • Two head-up display systems 10 may be provided on one car, respectively corresponding to the rear-view mirrors on both sides. The two head-up display systems 10 respectively project the blind spot images on both sides and the prompt information to the rearview mirrors on both sides, so that the driver can directly observe the blind spot in the rear vision and get the prompt information, which greatly enhances the driving safety.
  • the image projection device 13 may use a dynamic zoom projection lens and an LED (Light Emitting Diode) light source to illuminate a digital micromirror device (DMD) based on digital light processing (DLP) technology, thereby converting electrical signals into optical signals.
  • the head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and assisted by magnetic attraction to the metal part of the door; or, as shown in FIG. 4, it can be mechanically fixed to the upper or lower edge of the inside of the side window.
  • reference numeral 31 represents a projected image
  • 32 represents a virtual image of the projected image visually formed when the user views the rearview mirror.
  • step 501 the camera captures real-time image information of the close-up environment on the side of the vehicle body and transmits it to the main control circuit board.
  • the processor of the main control circuit board and the like can perform step 502 and step 503 simultaneously using a parallel processing method.
  • step 502 the real-time image is subjected to compression, difference, sharpening and other processing to adapt it to subsequent projection data transmission and processing, and the process proceeds to step 507.
  • step 503 the preset target image recognition rules are used to extract the key target feature of the video information and identify the recognition object, and then step 504 is executed.
  • step 504 the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the identification object and the vehicle body: with dual cameras, the binocular parallax ranging algorithm can be used, and with an infrared emitter, the TOF ranging algorithm can be used; the movement trajectory and trend can also be estimated. Step 505 is then executed.
  • step 505 a risk assessment is performed according to the recognition and calculation results with respect to the distance to the vehicle body, the risk level is judged, and then step 506 is executed.
  • step 506 AR overlay prompt information is given according to the risk level, including the calculated information that needs to be displayed directly and the associated warning icons pre-stored in the database and retrieved according to the calculation result, such as a warning color bar and an evasion icon, and then step 507 is performed.
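One possible reading of the risk grading in steps 505 and 506 is sketched below (illustrative Python; the thresholds, level names and suggested overlays are assumptions, not values given in the patent):

```python
# Hypothetical risk grading from distance and closing speed; the thresholds
# are example values only.
def risk_level(distance_m: float, closing_speed_m_s: float) -> str:
    time_to_contact_s = (distance_m / closing_speed_m_s
                         if closing_speed_m_s > 0 else float("inf"))
    if distance_m < 1.5 or time_to_contact_s < 1.0:
        return "high"    # e.g. red warning mark plus sound alert
    if distance_m < 5.0 or time_to_contact_s < 3.0:
        return "medium"  # e.g. warning color bar around the object
    return "low"         # e.g. plain distance annotation only

level_close = risk_level(1.2, 0.5)   # "high": very close even if approaching slowly
level_far = risk_level(8.0, 1.0)     # "low"
```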
  • step 507 the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, a projection light machine).
  • step 508 the projection signal is pushed to the projection light machine, optically processed and projected.
  • step 509 the projection result is presented in the projection area of the exterior mirror of the car.
  • the head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
  • the distance sensor 15 may be an infrared distance sensor, a ranging radar, or the like.
  • the infrared distance sensor has limited infrared light power and scattering surface, but it responds faster and can be used as a quick pre-judgment tool.
  • the infrared emitting device 14 emits infrared light with greater power and a larger scattering surface toward the obstacle, and the second image acquisition device 112 (i.e., the infrared camera) receives the infrared light reflected by the obstacle to obtain the distance or depth information of the obstacle; combined with the color information obtained by the visible light camera, the three-dimensional information of the obstacle can be judged more quickly and accurately.
  • the distance sensor 15 may not be placed on the body of the head-up display system 10, for example, it may be placed at a position closer to the suspicious obstacle on the rear side of the vehicle body, or even a plurality of distance sensors 15 may be provided to improve the accuracy of the distance measurement.
  • a projection adjustment device 16 and / or a projection image acquisition device 17 for adjusting the projection image may also be provided.
  • the projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12.
  • the projection image collection device 17 is used to collect the projection image generated by the image projection device 13.
  • the processing device 12 is further configured to control the projection adjustment device 16 to adjust the projection orientation of the image projection device 13 according to the projected image, using a preset adjustment rule.
  • a pan-tilt head driven by a motor or the like may be used to adjust the projection orientation by adjusting the position of the image projection device 13; alternatively, the head-up display system 10 itself may be adjusted directly by a pan-tilt head or other bracket device to achieve the same effect on the projection orientation.
  • the gimbal or other types of brackets can also have a manual adjustment function, which can be adjusted by the user.
  • the projection image collection device 17 may be a camera or the like, and the projection image collection device 17 may sample the projected image of the image projection device 13.
  • the processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual condition of the projected image.
  • the preset adjustment rule can be set according to the driver position, the projection position, etc., and the projection orientation is adjusted so that the projection image of the image projection device 13 is projected at a preset position.
  • the projection position can be preset in the rearview mirror. Due to the adjustment of the rearview mirror, etc., if the projected image exceeds the preset projection position, the projection orientation can be adjusted to keep the projected image at the preset projection position.
  • the projection image acquisition device 17 may use a wide-angle camera so that it can also observe the turn signal and detect flashing, for example three consecutive flashes, which triggers the corresponding processing; it usually does not need to process the real-time images.
  • the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light.
  • the processing device 12 is also used to adjust the projection brightness of the image projection device 13 according to the intensity of the ambient light, using a preset brightness adjustment rule.
  • the preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light brightness. Different projection brightness can be set according to different ambient light brightness, so that the projected image can be observed under different ambient light brightness.
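A minimal sketch of such a brightness rule is shown below (illustrative Python; the lux thresholds and percentage outputs are assumed example values):

```python
# Map measured ambient illuminance to a projector brightness setting so the
# projected image stays readable; thresholds are illustrative only.
def projection_brightness_percent(ambient_lux: float) -> int:
    if ambient_lux < 10:      # night or dark garage
        return 30
    if ambient_lux < 1000:    # dusk, overcast sky, interior lighting
        return 60
    return 100                # full daylight needs maximum brightness

brightness = projection_brightness_percent(20000)   # 100 in bright daylight
```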
  • the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
  • the wireless transceiver device 19 may use Bluetooth, Wi-Fi, or a mobile communication network to transmit control information.
  • the user can control the head-up display system 10 remotely through an external terminal.
  • the external terminal may be a mobile terminal such as a wireless remote controller or a mobile phone.
  • the external terminal can activate different functions in the head-up display system 10 by setting different working modes, such as the night vision mode, and the image acquisition device 17 can be switched to infrared imaging in the night vision mode.
  • the external terminal can send instructions to adjust the projection direction.
  • taking a button-type Bluetooth remote control as an example, driving, reversing, parking, night vision and other modes can be set.
  • in night vision mode, the image acquisition device 17 can be switched to the infrared camera, and the system can also switch to night vision mode automatically based on light recognition; the night vision mode may in turn contain driving, reversing, parking and other modes. A microphone or other sound pickup device may also be provided to transmit the voice of the remote controller inside the car to the speaker of the head-up display system 10.
  • the mobile terminal can also obtain real-time images and output images from the head-up display system 10 through wireless transmission.
  • a fill light device 20 may be provided in the head-up display system 10 to provide fill light, for example with a flash, when the image acquisition device 11 captures an image.
  • the head-up display system 10 may also be provided with a sound-generating device 21, such as a speaker, etc., which emits different prompt sounds according to different levels of prompt information while projecting.
  • the head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power supplies can be used to power the head-up display system 10.
  • Case 1: when the head-up display system 10 is placed inside the car and projects onto the mirrors on both sides, the projected light is refracted by the side windows, so the projection orientation may need to be adjusted or corrected. In addition, changes in external light may also require adjusting the brightness of the projected image.
  • the specific adjustment process is shown in FIG. 6 and includes steps 601 to 608.
  • step 601 learning is performed with the projection beam passing through and not passing through the side window, to determine the position range of the projection area in the image captured by the projection image acquisition device 17 in each case.
  • step 602 the ambient light sensor 18 detects the intensity of ambient light.
  • step 603 it is compared with a preset light intensity threshold to determine whether it is in a dark environment, if it is in a dark environment, step 604 is performed, otherwise, step 605 is performed.
  • step 604 the projection direction is adjusted to a preset dark light angle, the projection area of the external rearview mirror is avoided, and the projection picture is directly projected onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at a specific projection angle, the projection image of the image projection device 13 on the side window glass can be observed by the driver.
  • step 605 the projection image acquisition device 17 identifies the position of the projection area in the acquired image.
  • step 606 the position of the identified projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed, otherwise step 608 is executed.
  • step 607 no adjustment and compensation are made to the projection orientation.
  • step 608 the projection orientation is adjusted and compensated so that the projection area falls within the position range.
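Steps 605 to 608 can be pictured with the following sketch (illustrative Python; the learned pixel ranges and the pan/tilt correction interface are assumptions, not details from the patent):

```python
# Compare the detected projection-area centre in the monitoring camera image
# with the position range learned in step 601 and compute a compensation offset.
LEARNED_RANGE_PX = {"x": (420, 520), "y": (300, 380)}

def orientation_correction(center_x: int, center_y: int):
    (x_min, x_max), (y_min, y_max) = LEARNED_RANGE_PX["x"], LEARNED_RANGE_PX["y"]
    dx = (x_min - center_x) if center_x < x_min else (x_max - center_x) if center_x > x_max else 0
    dy = (y_min - center_y) if center_y < y_min else (y_max - center_y) if center_y > y_max else 0
    if dx == 0 and dy == 0:
        return None                       # step 607: no adjustment or compensation
    return {"pan_px": dx, "tilt_px": dy}  # step 608: compensate by this offset

correction = orientation_correction(545, 330)   # {'pan_px': -25, 'tilt_px': 0}
```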
  • Case 2: when the side mirrors are adjusted, the head-up display system 10 tracks the projection position in real time and adjusts it accordingly, as shown in FIG. 8; the specific process includes steps 801 to 804.
  • step 801 the projection image acquisition device 17 recognizes the edge of the projected image of the exterior mirror of the vehicle.
  • step 802 it is determined whether the edge area of the projected image exceeds the side mirror's mirror surface or a predetermined range recognized by the system; if it exceeds, step 804 is executed, otherwise step 803 is executed.
  • step 803 no projection orientation adjustment is made.
  • step 804 the projection orientation is adjusted so that the projected image is projected into the predetermined range of the side mirror, and if the adjustable range is exceeded, the user may be prompted to perform human intervention.
  • Case 3: real-time projection is performed according to the steering situation of the vehicle, as shown in FIG. 9, including steps 901 to 904.
  • step 901 it is determined that the vehicle is turning.
  • turning of the vehicle can also be detected by using the projection image acquisition device 17 to capture the turn signal: for example, if the image captured by the projection image acquisition device 17 shows the turn signal flashing three consecutive times, it is determined that the vehicle is turning.
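The "three consecutive flashes" criterion can be illustrated with a simple counter over per-frame lamp states (illustrative Python; the sampling scheme is an assumption):

```python
# Count off->on transitions of the turn-signal lamp in a sequence of
# per-frame on/off observations; three rising edges mean the vehicle is turning.
def count_flashes(lamp_on_samples) -> int:
    flashes, prev_on = 0, False
    for on in lamp_on_samples:
        if on and not prev_on:
            flashes += 1
        prev_on = on
    return flashes

samples = [False, True, True, False, True, False, False, True, True]
vehicle_is_turning = count_flashes(samples) >= 3   # True
```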
  • step 902 the processing function of the head-up display system 10 on the steering side is started, and the movement trend analysis is performed for the movement state of the target.
  • step 903 target features are extracted and fitted and classified, and high-risk features are retrieved for matching.
  • step 904 more data analysis results and warning prompts than in the normal case can be superimposed on the physical image.
  • the driver can be prompted with projection information and sound information.
  • the image acquisition device 11 collects the environmental image at the outside rear of the vehicle body and sends it to the processing device 12 for image processing, analysis and calculation; the real environmental video information, together with the virtual information calculated from the analysis of the real image data, is then sent to the image projection device 13 and projected onto part of the mirror surface of the car's exterior rearview mirror.
  • the projection interface gives graphics or voice prompts superimposed on the physical image. In this way, the driver can obtain more information based on the projected image and improve driving safety.
  • the head-up display method provided by an embodiment of the present disclosure includes steps 1001 to 1003.
  • step 1001 a real-time image of a preset image acquisition area is captured.
  • a real-time image can be captured by the image acquisition device 11 in the head-up display system 10.
  • the head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions on the vehicle body.
  • the head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or it may be powered by the battery of the head-up display system 10 itself.
  • AR combines real and virtual scenes: based on the pictures captured by a camera, computer processing superimposes virtual data on the real environment and displays both in the same picture, providing an interactive mode.
  • the head-up display system 10 can provide driving information to the driver through the AR mode.
  • the image acquisition device 11 may be an image or video capture device such as a camera.
  • the preset image acquisition area may be an area that covers a blind spot in the rearview mirror on both sides.
  • the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112.
  • the first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
  • the detailed internal structure of the head-up display system 10 may be as shown in FIG. 2, which includes an image acquisition device 11, a processing device 12 (for example, a main control circuit board), and an image projection device 13 .
  • the image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example two cameras; the main control circuit board is also provided with a power management circuit for managing the power supply of the entire head-up display system 10, including switching between the external power supply and the optional built-in battery, charging and discharging, and power distribution to the devices in the head-up display system 10.
  • the main control circuit board may include a processor for data processing, and circuits and interfaces corresponding to various functions such as power supply.
  • step 1002 a preset image recognition rule is used to identify the identification object in the real-time image and/or relative motion parameter information between the identification object and the carrier of the head-up display system 10; prompt information is generated from the identification object and/or the relative motion parameter information using preset information processing rules, and the prompt information and the real-time image are merged into an output image.
  • the step 1002 may be performed by the processing device 12 in the head-up display system 10 as described in FIG. 1.
  • the preset image recognition rule may be set according to the image acquisition device 11. When the image acquisition device 11 is a single camera, the real-time image captured by that camera is identified; when the image acquisition device 11 consists of two or more cameras, the real-time images captured by two cameras can be processed by methods such as binocular parallax ranging. Recognition of the identification objects can be achieved by methods such as model training and deep learning.
  • the relative motion parameters may include the relative distance and the relative motion speed between the identification object and the carrier. The relative motion speed may be estimated by dividing the change in relative distance between two time points by the time interval between those points.
  • the real-time image of one pre-selected camera may be merged into the output image according to the preset information processing rule.
  • the preset information processing rule may be set according to the identification object and the relative motion parameters, and merges the required prompt information with the captured image, that is, the prompt information is displayed directly on the captured image. For example, the general outline of the identified object may be marked, and when a relative motion parameter exceeds a preset value, prompt information such as the distance or a warning may be displayed.
  • in the single-camera case, in order to strengthen the identification of key target objects and enable targeted warning measures, deep-learning-based object recognition can be integrated: the scene in the image is segmented into roads, obstacles and other special objects for feature extraction, and on this basis pattern matching based on machine learning and deep learning recognizes the identification objects, with the focus on people, vehicles and other objects that have a significant impact on driving safety.
  • auxiliary means such as machine learning, deep learning or radar can be used to obtain the distance and displacement of the recognition object for the single camera situation, so that more clear prompt information can be given later.
  • the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
  • the distance and displacement of obstacles can be judged by the dual-camera ranging principle, that is, the binocular parallax ranging principle. Similar to human binocular vision, when the two cameras image the same scene simultaneously, the same object appears at different positions in the two images: a near object shows a large positional difference between the two images, while a distant object shows a small difference. In this way, the approach and speed of identification objects in relative motion, such as people or vehicles, can be detected, and prompt information can then be generated according to the preset information processing rules, for example a red warning graphic mark, generated through calculation, that includes the object distance, moving speed and limit distance, superimposed on the physical image and supplemented by a sound warning.
  • the first image acquisition device 111 may be a visible light image acquisition device
  • the second image acquisition device 112 may be an infrared image acquisition device.
  • the visible light image acquisition device may be a visible light camera
  • the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
  • a wide-angle plus telephoto configuration may be used, with the visible light camera using a wide-angle lens and the infrared camera using a telephoto lens; the color image captured by the visible light camera plus the black-and-white image captured and parsed by the infrared camera form the basis of image processing. In this way, more differentiated environmental information can be obtained, and the ability of the stereo vision system to cope with night scenes or special weather is enhanced.
  • one more visible light camera can be added to form a three-camera configuration.
  • the two visible light cameras are responsible for binocular parallax distance measurement, and the infrared camera is specifically responsible for dark light imaging.
  • the system further includes an infrared emitting device 14 for sending an infrared beam to the identified object.
  • the processing device 12 is further configured to determine the distance between the identification object and the carrier using the TOF ranging method, based on the time at which the infrared emitting device 14 emits the infrared beam and the time at which the second image acquisition device 112 receives the beam reflected by the identification object.
  • the processing device 12 acquires obstacle depth information from the time difference between the moment the infrared emitting device 14 transmits the infrared beam toward the obstacle and the moment the second image acquisition device 112 (i.e., the infrared camera) receives the beam reflected by the obstacle; combined with the color information from the visible light camera, the three-dimensional information of the obstacle can be determined more quickly and accurately.
  • step 1003 the output image is projected to a designated area of a preset projection surface.
  • the image projection device 13 in the head-up display system 10 may perform projection.
  • the preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides.
  • Two head-up display systems 10 may be provided on one car, respectively corresponding to the rear-view mirrors on both sides. The two head-up display systems 10 respectively project the blind spot images on both sides and the prompt information to the rearview mirrors on both sides, so that the driver can directly observe the blind spot in the rear vision and get the prompt information, which greatly enhances the driving safety.
  • the image projection device 13 may use a dynamic zoom projection lens and an LED light source to illuminate a DMD based on DLP technology to achieve conversion of electrical signals into optical signals.
  • the head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and assisted by magnetic attraction to the metal part of the door; or, as shown in FIG. 4, it can be mechanically fixed to the upper or lower edge of the inside of the side window.
  • reference numeral 31 represents a projected image
  • 32 represents a virtual image of the projected image visually formed when the user views the rearview mirror.
  • step 501 the camera captures real-time image information of the close-up environment on the side of the vehicle body and transmits it to the main control circuit board.
  • the processor of the main control circuit board and the like can perform step 502 and step 503 simultaneously using a parallel processing method.
  • step 502 the real-time image is subjected to compression, difference, sharpening and other processing to adapt it to subsequent projection data transmission and processing, and the process proceeds to step 507.
  • step 503 the preset target image recognition rules are used to extract the key target feature of the video information and identify the recognition object, and then step 504 is executed.
  • step 504 the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the identification object and the vehicle body: with dual cameras, the binocular parallax ranging algorithm can be used, and with an infrared emitter, the TOF ranging algorithm can be used; the movement trajectory and trend can also be estimated. Step 505 is then executed.
  • step 505 a risk assessment is performed according to the recognition and calculation results with respect to the distance to the vehicle body, the risk level is judged, and then step 506 is executed.
  • step 506 AR overlay prompt information is given according to the risk level, including the calculated information that needs to be displayed directly and the associated warning icons pre-stored in the database and retrieved according to the calculation result, such as a warning color bar and an evasion icon, and then step 507 is performed.
  • step 507 the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, a projection light machine).
  • step 508 the projection signal is pushed to the projection light machine, optically processed and projected.
  • step 509 the projection result is presented in the projection area of the exterior mirror of the car.
  • the head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
  • the distance sensor 15 may be an infrared distance sensor, a ranging radar, or the like.
  • the infrared distance sensor has limited infrared light power and scattering surface, but it responds faster and can be used as a quick pre-judgment tool.
  • the infrared emitting device 14 emits infrared light with greater power and a larger scattering surface toward the obstacle, and the second image acquisition device 112 (i.e., the infrared camera) receives the infrared light reflected by the obstacle to obtain the distance or depth information of the obstacle; combined with the color information obtained by the visible light camera, the three-dimensional information of the obstacle can be judged more quickly and accurately.
  • the distance sensor 15 may not be placed on the body of the head-up display system 10, for example, it may be placed at a position closer to the suspicious obstacle on the rear side of the vehicle body, or even a plurality of distance sensors 15 may be provided to improve the accuracy of the ranging.
  • a projection adjustment device 16 and / or a projection image acquisition device 17 for adjusting the projection image may also be provided.
  • the projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12.
  • the projection image acquisition device 17 is used to acquire the projection image generated by the image projection device 13.
  • the processing device 12 is further configured to control the projection adjustment device 16 to adjust the projection orientation of the image projection device 13 according to the projected image, using a preset adjustment rule.
  • a pan-tilt head driven by a motor or the like may be used to adjust the projection orientation by adjusting the position of the image projection device 13; alternatively, the head-up display system 10 itself may be adjusted directly by a pan-tilt head or other bracket device to achieve the same effect on the projection orientation.
  • the gimbal or other types of brackets can also have a manual adjustment function, which can be adjusted by the user.
  • the projection image collection device 17 may be a camera or the like, and the projection image collection device 17 may sample the projected image of the image projection device 13.
  • the processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual condition of the projected image.
  • the preset adjustment rule can be set according to the driver position, the projection position, etc., and the projection orientation is adjusted so that the projection image of the image projection device 13 is projected at a preset position.
  • the projection position can be preset in the rearview mirror. Due to the adjustment of the rearview mirror, etc., if the projected image exceeds the preset projection position, the projection orientation can be adjusted to keep the projected image at the preset projection position.
  • the projection image acquisition device 17 may use a wide-angle camera so that it can also observe the turn signal and detect flashing, for example three consecutive flashes, which triggers the corresponding processing; it usually does not need to process the real-time images.
  • the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light.
  • the processing device 12 is also used to adjust the projection brightness of the image projection device 13 according to the intensity of the ambient light, using a preset brightness adjustment rule.
  • the preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light brightness. Different projection brightness can be set according to different ambient light brightness, so that the projected image can be observed under different ambient light brightness.
  • the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
  • the wireless transceiver device 19 may use Bluetooth, Wi-Fi, or a mobile communication network to transmit control information.
  • the user can control the head-up display system 10 remotely through an external terminal.
  • the external terminal may be a mobile terminal such as a wireless remote controller or a mobile phone.
  • the external terminal can activate different functions in the head-up display system 10 by setting different working modes, such as the night vision mode, and the image acquisition device 17 can be switched to infrared imaging in the night vision mode.
  • the external terminal can send instructions to adjust the projection direction.
  • taking a button-type Bluetooth remote control as an example, driving, reversing, parking, night vision and other modes can be set.
  • in night vision mode, the image acquisition device 17 can be switched to the infrared camera, and the system can also switch to night vision mode automatically based on light recognition; the night vision mode may in turn contain driving, reversing, parking and other modes. A microphone or other sound pickup device may also be provided to transmit the voice of the remote controller inside the car to the speaker of the head-up display system 10.
  • the mobile terminal can also obtain real-time images and output images from the head-up display system 10 through wireless transmission.
  • a fill light device 20 may be provided in the head-up display system 10 to provide fill light, for example with a flash, when the image acquisition device 11 captures an image.
  • the head-up display system 10 may also be provided with a sound-generating device 21, such as a speaker, etc., which emits different prompt sounds according to different levels of prompt information while projecting.
  • the head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power supplies can be used to power the head-up display system 10.
  • Case 1: when the head-up display system 10 is placed inside the car and projects onto the mirrors on both sides, the projected light is refracted by the side windows, so the projection orientation may need to be adjusted or corrected. In addition, changes in external light may also require adjusting the brightness of the projected image.
  • the specific adjustment process is shown in FIG. 6 and includes steps 601 to 608.
  • step 601 learning is performed with the projection beam passing through and not passing through the side window, to determine the position range of the projection area in the image captured by the projection image acquisition device 17 in each case.
  • step 602 the ambient light sensor 18 detects the intensity of ambient light.
  • step 603 it is compared with a preset light intensity threshold to determine whether it is in a dark environment, if it is in a dark environment, step 604 is performed, otherwise, step 605 is performed.
  • step 604 the projection direction is adjusted to a preset dark light angle, the projection area of the external rearview mirror is avoided, and the projection picture is directly projected onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at a specific projection angle, the projection image of the image projection device 13 on the side window glass can be observed by the driver.
  • step 605 the projection image acquisition device 17 identifies the position of the projection area in the acquired image.
  • step 606 the position of the identified projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed, otherwise step 608 is executed.
  • step 607 no adjustment and compensation are made to the projection orientation.
  • step 608 the projection orientation is adjusted and compensated so that the projection area falls within the position range.
  • Case 2: when the side mirrors are adjusted, the head-up display system 10 tracks the projection position in real time and adjusts it accordingly, as shown in FIG. 8; the specific process includes steps 801 to 804.
  • step 801 the projection image acquisition device 17 recognizes the edge of the projected image of the exterior mirror of the vehicle.
  • step 802 it is determined whether the edge area of the projected image exceeds the side mirror's mirror surface or a predetermined range recognized by the system; if it exceeds, step 804 is executed, otherwise step 803 is executed.
  • step 803 no projection orientation adjustment is made.
  • step 804 the projection orientation is adjusted so that the projected image is projected into the predetermined range of the side mirror, and if the adjustable range is exceeded, the user may be prompted to perform human intervention.
  • Case 3: real-time projection is performed according to the steering situation of the vehicle, as shown in FIG. 9, including steps 901 to 904.
  • step 901 it is determined that the vehicle is turning.
  • turning of the vehicle can also be detected by using the projection image acquisition device 17 to capture the turn signal: for example, if the image captured by the projection image acquisition device 17 shows the turn signal flashing three consecutive times, it is determined that the vehicle is turning.
  • step 902 the processing function of the head-up display system 10 on the steering side is started, and the movement trend analysis is performed for the movement state of the target.
  • step 903 target features are extracted and fitted and classified, and high-risk features are retrieved for matching.
  • step 904 more data analysis results and warning prompts than normal information can be superimposed on the physical image.
  • the driver can be prompted with projection information and sound information.
  • In summary, the image acquisition device 11 collects images of the environment to the side and rear of the vehicle body and sends them to the processing device 12 for image processing, analysis and calculation. The real-scene video and the virtual information computed from it are then sent to the image projection device 13, which projects them onto a partial mirror surface of the car's exterior rearview mirror.
  • In a typical scenario, when a pedestrian or another obstacle approaches the vehicle body, or its predicted trajectory shows that it will approach or cross the vehicle's path, the projection interface gives graphic or voice prompts superimposed on the real image. In this way, the driver obtains more information from the projected image, improving driving safety.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Disclosed is a head-up display system (10), comprising: an image collection apparatus (11) for capturing a real-time image of a pre-set image collection region; a processing apparatus (12) for identifying an identification object in the real-time image and/or relative motion parameter information of the identification object and a carrier of the head-up display system by using a pre-set image identification rule, for generating, according to the identification object and/or the relative motion parameter information, prompt information by using a pre-set information processing rule, and for combining the prompt information and the real-time image into an output image; and an image projection apparatus (13) for projecting the output image to an assigned region of a pre-set projection surface. Further provided are a head-up display method and an automobile.

Description

Head-up display system, display method and automobile
Technical Field
本公开涉及车载设备技术领域,尤其涉及一种平视显示系统、显示方法和汽车。The present disclosure relates to the technical field of vehicle-mounted equipment, and particularly to a head-up display system, a display method, and an automobile.
Background Art
Most rearview mirrors on both sides of a car have blind zones, and the driver cannot see the traffic situation in those blind zones through the side mirrors. To enlarge the field of view of the side mirrors, a common approach is to add a convex mirror to each side mirror or to make part of each side mirror convex. In addition, the rearview mirrors rely mainly on the driver's active observation and provide no active prompt information.
Summary of the Invention
An embodiment of the present disclosure provides a head-up display system, including an image acquisition device, a processing device and an image projection device. The image acquisition device is configured to capture a real-time image of a preset image acquisition area. The processing device is configured to identify, using a preset image recognition rule, a recognition object in the real-time image and/or relative motion parameter information between the recognition object and the carrier of the head-up display system, to generate prompt information from the recognition object and/or the relative motion parameter information using a preset information processing rule, and to merge the prompt information and the real-time image into an output image. The image projection device is configured to project the output image onto a designated area of a preset projection surface.
An embodiment of the present disclosure also provides a head-up display method, including: capturing a real-time image of a preset image acquisition area; identifying, using a preset image recognition rule, a recognition object in the real-time image and/or relative motion parameter information between the recognition object and the carrier of a head-up display system, generating prompt information from the recognition object and/or the relative motion parameter information using a preset information processing rule, and merging the prompt information and the real-time image into an output image; and projecting the output image onto a designated area of a preset projection surface.
本公开实施例还提供了一种汽车,包括车身和上面所述的平视显示系统。An embodiment of the present disclosure also provides an automobile, including a body and the head-up display system described above.
Brief Description of the Drawings
图1为本公开实施例平视显示系统组成结构示意图;1 is a schematic structural diagram of a head-up display system according to an embodiment of the present disclosure;
图2为根据本公开实施例的平视显示系统的各部件组成的示意图;2 is a schematic diagram of components of a head-up display system according to an embodiment of the present disclosure;
图3为根据本公开实施例的平视显示系统设置在车外部的示意图;3 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure being provided outside the vehicle;
图4为根据本公开实施例的平视显示系统设置在车内部的示意图;4 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure provided inside a vehicle;
图5为根据本公开实施例的平视显示系统的工作流程示意图;5 is a schematic diagram of a working process of a head-up display system according to an embodiment of the present disclosure;
图6为根据本公开实施例的情况一的处理流程示意图;FIG. 6 is a schematic flowchart of a process according to the first embodiment of the present disclosure;
图7为根据本公开实施例的平视显示系统向侧窗投影的示意图;7 is a schematic diagram of a head-up display system projecting to a side window according to an embodiment of the present disclosure;
图8为根据本公开实施例的情况二的处理流程示意图;FIG. 8 is a schematic diagram of a processing flow in case 2 according to an embodiment of the present disclosure;
图9为根据本公开实施例的转向场景的处理流程示意图;9 is a schematic diagram of a processing flow of a steering scenario according to an embodiment of the present disclosure;
图10为根据本公开实施例的平视显示方法的流程示意图。10 is a schematic flowchart of a head-up display method according to an embodiment of the present disclosure.
Detailed Description
In the embodiments of the present disclosure, the image acquisition device captures a real-time image of a preset image acquisition area; the processing device identifies, according to preset image recognition rules, the recognition object in the real-time image and the relative motion parameter information between the recognition object and the carrier of the head-up display system, and merges the recognition object and the relative motion parameter information into an output image according to preset information processing rules; and the image projection device projects the output image onto a designated area of a preset projection surface.
以下将参照附图,结合实施例对本公开的技术方案的实现、功能特点及优点再作进一步详细的说明。本公开实施例提供的平视显示系统,如图1所示,所述平视显示系统10包括图像采集装置11,处理装置12和图像投影装置13。The implementation, functional characteristics, and advantages of the technical solutions of the present disclosure will be described in further detail with reference to the accompanying drawings and embodiments below. As shown in FIG. 1, the head-up display system provided by an embodiment of the present disclosure includes an image acquisition device 11, a processing device 12 and an image projection device 13.
这里,所述平视显示系统10可以是单独的一个整体,也可以是由分布在车身各个位置的不同装置组成。所述平视显示系统10可以安装在汽车等载体上。与其他车载系统类似,所述平视显示系统10 可以由汽车等载体等供电,也可以平视显示系统10自身电池供电。Here, the head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions of the vehicle body. The head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or may be powered by the head-up display system 10 itself.
Augmented reality (AR) combines real and virtual scenes: on top of the picture captured by the camera, virtual data is superimposed on the real environment by means of computing power, and the combined result is displayed in the same picture, providing an interactive mode of presentation. The head-up display system 10 can provide driving information to the driver in this AR manner.
所述图像采集装置11用于捕获预设图像采集区域的实时图像。The image acquisition device 11 is used to capture a real-time image of a preset image acquisition area.
这里,所述图像采集装置11可以是摄像头等图像、视频捕捉设备。所述预设图像采集区域可以是覆盖两侧后视镜时的视觉盲区的区域。Here, the image acquisition device 11 may be an image or video capture device such as a camera. The preset image acquisition area may be an area that covers a blind spot in the rearview mirror on both sides.
在一些实施方式中,所述图像采集装置11可以包括第一图像采集装置111和第二图像采集装置112。第一图像采集装置111和第二图像采集装置112可以分别从两个不同角度捕获预设图像采集区域的实时图像,用于后续图像处理。In some embodiments, the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112. The first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
Taking a head-up display system 10 built as a single unit as an example, its detailed internal structure may be as shown in FIG. 2, including the image acquisition device 11, the processing device 12 (for example, a main control circuit board) and the image projection device 13. The image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example, two cameras. The main control circuit board is also provided with a power management circuit for managing the power supply of the entire head-up display system 10, including switching between the external power supply and an optional built-in battery, charging and discharging, and distributing power to the individual devices of the head-up display system 10. The main control circuit board may include a processor for data processing, as well as the circuits and interfaces required for functions such as power supply.
The processing device 12 is configured to identify, using preset image recognition rules, the recognition object in the real-time image and/or the relative motion parameter information between the recognition object and the carrier of the head-up display system 10, to generate prompt information from the recognition object and/or the relative motion parameter information using preset information processing rules, and to merge the prompt information and the real-time image into an output image.
Here, the preset image recognition rules can be set according to the image acquisition device 11. When the image acquisition device 11 is a single camera, the real-time image captured by that camera is analysed for recognition; when the image acquisition device 11 consists of two or more cameras, the real-time images captured by two cameras can be analysed using, for example, a binocular parallax ranging algorithm. Recognition of the recognition object can be implemented with model training, deep learning and similar methods. The relative motion parameters may include the relative distance and the relative speed between the recognition object and the carrier. The relative speed can be determined by dividing the change in relative distance between two time points by the time interval between them. When the image acquisition device 11 consists of two or more cameras, the real-time image of one camera can be pre-selected, according to the preset information processing rules, to be merged into the output image.
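The relative-speed rule just described (change in relative distance divided by the time interval between two samples) can be illustrated with a minimal sketch; the function name, units and sample values below are illustrative and not part of the disclosure.

```python
def relative_speed_mps(d1_m: float, d2_m: float, t1_s: float, t2_s: float) -> float:
    """Relative speed = change in relative distance divided by the elapsed time.

    A negative value means the recognized object is approaching the carrier.
    """
    if t2_s <= t1_s:
        raise ValueError("the second sample must be later than the first")
    return (d2_m - d1_m) / (t2_s - t1_s)


# Example: the object closes from 12.0 m to 9.0 m within 0.5 s -> -6.0 m/s (approaching).
print(relative_speed_mps(12.0, 9.0, 0.0, 0.5))
```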
The preset information processing rules can be set according to the recognition object and the relative motion parameters, and merge the required prompt information with the captured image, that is, display the prompt information directly on the captured image. For example, when the recognition object is a preset object, its general outline can be indicated; when a relative motion parameter exceeds a preset value, prompt information such as the distance or a warning can be displayed.
In some implementations, to strengthen the recognition of key target objects and provide more targeted warnings, object recognition based on deep learning with a single camera can be incorporated. The scene in the image can be segmented into the road, obstacles and other special objects for feature extraction, and pattern matching based on machine learning and deep learning can then be performed on this basis to recognize the recognition objects, focusing on objects such as people and vehicles that have a major impact on driving safety.
识别出识别对象后,针对单摄像头情况可以采用机器学习、深度学习或雷达等其他辅助手段,获取识别对象的距离和位移情况,从而,可以在后续给出更加明确的提示信息。After the recognition object is recognized, other auxiliary means such as machine learning, deep learning or radar can be used to obtain the distance and displacement of the recognition object for the single camera situation, so that more clear prompt information can be given later.
在一些实施方式中,所述处理装置12可以根据第一图像采集装置111和第二图像采集装置112分别捕获的实时图像,采用双摄测距原理,检测所述识别对象与所述载体的距离。In some embodiments, the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
With two cameras, the distance and displacement of an obstacle can be determined through the dual-camera ranging principle, that is, binocular parallax ranging. Similar to human binocular vision, when the two cameras image the scene synchronously, the same object occupies different positions in the two images: a nearby object shifts considerably between them, while a distant object shifts only slightly. In this way, the proximity and speed of recognition objects in relative motion, such as people or vehicles, can be detected, and prompt information can then be generated according to the preset information processing rules, for example a red warning graphic computed from the measurements, including the object distance, the moving speed and a red alert when the limit distance is reached, superimposed on the real image and accompanied by an audible warning.
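As a rough illustration of the binocular parallax principle described above, the distance of an object can be estimated from the horizontal disparity between its positions in the two synchronized images, given the focal length and the baseline between the two cameras. This is the textbook stereo relation, sketched here with invented calibration values rather than parameters of the disclosed system.

```python
def stereo_distance_m(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    A nearby object shifts more between the two views (large disparity),
    a distant object shifts less (small disparity).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible object")
    return focal_length_px * baseline_m / disparity_px


# Hypothetical values: 700 px focal length, 12 cm baseline, 40 px disparity -> 2.1 m.
print(stereo_distance_m(disparity_px=40.0, focal_length_px=700.0, baseline_m=0.12))
```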
在一些实施方式中,所述第一图像采集装置111可以是可见光图像采集装置,所述第二图像采集装置112可以是红外图像采集装置。In some embodiments, the first image acquisition device 111 may be a visible light image acquisition device, and the second image acquisition device 112 may be an infrared image acquisition device.
例如,可见光图像采集装置可以是可见光摄像头;红外图像采集装置可以是红外摄像头,也可以采用同时支持可见光和红外功能的摄像头,需要时可以切换至红外功能。For example, the visible light image acquisition device may be a visible light camera; the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
In some implementations, to achieve a better image acquisition result, a wide-angle plus telephoto configuration can be used: the visible-light camera uses a wide-angle lens and the infrared camera uses a telephoto lens, and image processing is based on the colour image from the visible-light camera together with the black-and-white image captured and decoded by the infrared camera. In this way, more differentiated environmental information can be obtained, strengthening the stereo vision system's ability to cope with night scenes or special weather.
在两个摄像头基础上,还可以再增加一个可见光摄像头,形成三摄,两个可见光摄像头负责双目视差测距,红外摄像头专门负责暗光摄像。On the basis of two cameras, one more visible light camera can be added to form a three-camera. The two visible light cameras are responsible for binocular parallax distance measurement, and the infrared camera is specifically responsible for dark light imaging.
In some implementations, the system further includes an infrared emitting device 14 for sending an infrared beam towards the recognition object. The processing device 12 is further configured to determine the distance between the recognition object and the carrier using the time-of-flight (TOF) ranging method, based on the time at which the infrared emitting device 14 emits the infrared beam and the time at which the second image acquisition device 112 receives the light reflected back from the recognition object.
For example, the processing device 12 obtains depth information of an obstacle from the time difference between emitting the infrared beam towards the obstacle and the second image acquisition device 112 (i.e., the infrared camera) receiving the infrared light reflected by the obstacle; combined with the image information acquired by the first image acquisition device 111 (i.e., the visible-light camera), the three-dimensional information of the obstacle can be determined more quickly and accurately.
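The TOF ranging step can be summarized in one line: half the round-trip time of the infrared pulse multiplied by the speed of light. The sketch below assumes the emission and reception timestamps are already available and is not the firmware of the disclosed device.

```python
SPEED_OF_LIGHT_MPS = 299_792_458.0


def tof_distance_m(emit_time_s: float, receive_time_s: float) -> float:
    """Distance = c * (round-trip time) / 2."""
    round_trip_s = receive_time_s - emit_time_s
    if round_trip_s <= 0:
        raise ValueError("receive time must be later than emit time")
    return SPEED_OF_LIGHT_MPS * round_trip_s / 2.0


# A 20 ns round trip corresponds to roughly 3 m.
print(tof_distance_m(0.0, 20e-9))
```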
所述图像投影装置13用于向预设投影面的指定区域投影所述输出图像。The image projection device 13 is configured to project the output image to a designated area of a preset projection surface.
这里,所述预设投影面可以是汽车两侧后视镜,即,可以向两侧后视镜的指定区域投影所述输出图像。可以在一辆车上设置两台平视显示系统10,分别对应于两侧后视镜。两台平视显示系统10分别将两侧的盲区影像以及提示信息投影到两侧后视镜,从而司机可以直接观察到后视盲区,并可以得到提示信息,大大加强了驾驶安全性。Here, the preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides. Two head-up display systems 10 may be provided on one car, respectively corresponding to the rear-view mirrors on both sides. The two head-up display systems 10 respectively project the blind spot images on both sides and the prompt information to the rearview mirrors on both sides, so that the driver can directly observe the blind spot in the rear vision and get the prompt information, which greatly enhances the driving safety.
In some implementations, the image projection device 13 may convert electrical signals into optical signals using a dynamically focusable projection lens and a digital micromirror device (DMD) based on digital light processing (DLP) technology illuminated by a light-emitting diode (LED) light source.
The head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and additionally held by magnetic attraction to the metal part of the door; or, as shown in FIG. 4, it can be mechanically fixed to the upper or lower edge of the inside of the side window. In the figures, reference numeral 31 denotes the projected image, and 32 denotes the virtual image of the projected image formed in the driver's view when looking at the rearview mirror.
如图5所示,以两个摄像头为例,解释平视显示系统10的工作过程,包括步骤501至509。As shown in FIG. 5, taking two cameras as an example, the working process of the head-up display system 10 is explained, including steps 501 to 509.
在步骤501,摄像头摄取车身侧边近距环境实时图像信息,并传送至主控电路板,主控电路板的处理器等可以采用并行处理的方法同时执行步骤502和步骤503。In step 501, the camera captures real-time image information of the close-up environment on the side of the vehicle body and transmits it to the main control circuit board. The processor of the main control circuit board and the like can perform step 502 and step 503 simultaneously using a parallel processing method.
在步骤502,对实时图像进行压缩、差值和锐化等处理,使所述实时图像适应后续投影数据传输处理等,并转至步骤507。In step 502, the real-time image is subjected to compression, difference, sharpening, etc. to adapt the real-time image to subsequent projection data transmission processing, etc., and the process proceeds to step 507.
在步骤503,采用预设图像识别规则对视频信息进行重点目标特征提取并对识别对象进行识别,然后执行步骤504。In step 503, the preset target image recognition rules are used to extract the key target feature of the video information and identify the recognition object, and then step 504 is executed.
In step 504, the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the recognition object and the vehicle body. With dual cameras, the binocular parallax ranging algorithm can be used; with an infrared emitter, the TOF ranging algorithm can be used. The movement trajectory and trend can also be estimated. Then step 505 is executed.
在步骤505,根据识别及计算结果,针对与车身的距离进行风险评估并判别等级,然后执行步骤506。In step 505, according to the recognition and calculation results, a risk assessment is performed with respect to the distance to the vehicle body and the level is judged, and then step 506 is executed.
In step 506, AR overlay prompt information is produced according to the risk level, including information computed for direct display as well as associated warning graphics pre-stored in a database and retrieved according to the calculation result, such as warning colour bars and avoidance icons (a sketch of this risk-to-prompt mapping follows step 509 below). Then step 507 is executed.
在步骤507,将提示信息和在步骤502中处理过的实时图像进行叠加,并将叠加后的投影图像转换为图像投影装置13(即,投影光机)适用的投影信号。In step 507, the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, a projection light machine).
在步骤508,将投影信号推送到投影光机,进行光学处理并投射。In step 508, the projection signal is pushed to the projection light machine, optically processed and projected.
在步骤509,在汽车外部后视镜投影区呈现投射结果。In step 509, the projection result is presented in the projection area of the exterior mirror of the car.
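Steps 505 and 506 map the measured distance and speed to a risk level and then to an AR overlay prompt. The following sketch shows one way such a mapping could look; the thresholds, labels and dictionary layout are placeholders, since the disclosure does not specify concrete values.

```python
def assess_risk(distance_m: float, closing_speed_mps: float) -> str:
    """Grade the risk from distance and closing speed (illustrative thresholds)."""
    time_to_contact_s = distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
    if distance_m < 2.0 or time_to_contact_s < 1.5:
        return "high"
    if distance_m < 5.0 or time_to_contact_s < 3.0:
        return "medium"
    return "low"


def build_prompt(distance_m: float, closing_speed_mps: float) -> dict:
    """Compose the AR overlay content for the current risk level."""
    level = assess_risk(distance_m, closing_speed_mps)
    overlay = {
        "distance_m": round(distance_m, 1),
        "speed_mps": round(closing_speed_mps, 1),
        "level": level,
    }
    if level == "high":
        overlay["banner"] = "RED ALERT: object within limit distance"
    elif level == "medium":
        overlay["banner"] = "CAUTION: object approaching"
    return overlay


print(build_prompt(1.8, 2.5))   # high-risk example
print(build_prompt(8.0, 0.5))   # low-risk example
```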
所述平视显示系统10还可以包括距离传感器15,用于测量所述识别对象的相对运动参数信息。The head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
Here, the distance sensor 15 may be an infrared distance sensor, a ranging radar or the like. Taking an infrared distance sensor as an example, its infrared power and scattering area are limited, but it responds quickly and can therefore act as a fast pre-check: once it predicts that there is an obstacle near the rear side of the vehicle body, the infrared emitting device 14 is immediately activated to emit infrared light of higher power and a larger scattering area towards the obstacle, the second image acquisition device 112 (i.e., the infrared camera) then receives the infrared light reflected by the obstacle to obtain its distance or depth information, and, combined with the colour information acquired by the visible-light camera, the three-dimensional information of the obstacle can be determined more quickly and accurately. The distance sensor 15 need not be mounted on the body of the head-up display system 10; for example, it can be placed at the rear side of the vehicle body, closer to potential obstacles, and several distance sensors 15 can even be provided to improve ranging accuracy.
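The fast pre-check described above, a low-power distance sensor that triggers the higher-power infrared emitter and camera only when something is close, can be sketched as follows. The hardware interfaces `ir_emitter.emit()` and `ir_camera.measure_depth()` are hypothetical stand-ins; only the trigger logic reflects the description.

```python
from typing import Optional

NEAR_OBSTACLE_THRESHOLD_M = 3.0  # illustrative trigger distance, not from the disclosure


def maybe_start_ir_ranging(quick_distance_m: float, ir_emitter, ir_camera) -> Optional[float]:
    """Use the cheap proximity reading to decide whether to run full infrared ranging.

    `ir_emitter.emit()` and `ir_camera.measure_depth()` are hypothetical stand-ins
    for the real hardware drivers; only the trigger logic is sketched here.
    """
    if quick_distance_m > NEAR_OBSTACLE_THRESHOLD_M:
        return None  # nothing nearby: skip the higher-power measurement
    ir_emitter.emit()  # flood the scene with higher-power, wider-coverage infrared light
    return ir_camera.measure_depth()  # refined obstacle distance / depth information
```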
在一些实施例中,还可以提供对投影画面进行调整的投影调整装置16和/或投影图像采集装置17。所述投影调整装置16用于在所述处理装置12的控制下调整所述图像投影装置13的投影方位。所述投影图像采集装置17用于采集所述图像投影装置13投影产生的投影图像。所述处理装置12还用于根据所述投影图像,采用预设调整规则控制所述投影调整装置16调整所述图像投影装置13的投影方位。In some embodiments, a projection adjustment device 16 and / or a projection image acquisition device 17 for adjusting the projection image may also be provided. The projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12. The projection image collection device 17 is used to collect the projection image generated by the image projection device 13. The processing device 12 is further configured to control the projection adjustment device 16 to adjust the projection orientation of the image projection device 13 according to the projected image, using a preset adjustment rule.
Here, a motor-driven gimbal or the like can be used to adjust the projection orientation by adjusting the position of the image projection device 13; alternatively, the entire head-up display system 10 can be adjusted directly by a gimbal or another mounting bracket to achieve the same effect. The gimbal or other type of bracket may also support manual adjustment by the user.
所述投影图像采集装置17可以是摄像头等,投影图像采集装置17可以对所述图像投影装置13的投影图像进行采样。处理装置12可以根据投影图像实际状况,控制投影调整装置16调整投影方向。所述预设调整规则可以根据司机位置和投影位置等设置,调整投影方位使图像投影装置13的投影图像投射在预设的位置。可以在后视镜预设投影位置,由于后视镜调整等情况,如果投影图像超出预设投影位置时,可以对投影方位进行调整,使投影图像保持在预设投影位置。The projection image collection device 17 may be a camera or the like, and the projection image collection device 17 may sample the projected image of the image projection device 13. The processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual condition of the projected image. The preset adjustment rule can be set according to the driver position, the projection position, etc., and the projection orientation is adjusted so that the projection image of the image projection device 13 is projected at a preset position. The projection position can be preset in the rearview mirror. Due to the adjustment of the rearview mirror, etc., if the projected image exceeds the preset projection position, the projection orientation can be adjusted to keep the projected image at the preset projection position.
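One possible form of the preset adjustment rule described above is to compare the centre of the detected projection area with the preset target position in the sampled image and convert the pixel offset into pan/tilt corrections for the motor-driven mount. The degrees-per-pixel gain and the coordinate values are assumptions made for illustration.

```python
from typing import Tuple


def projection_correction_deg(detected_center_px: Tuple[float, float],
                              target_center_px: Tuple[float, float],
                              deg_per_px: float = 0.05) -> Tuple[float, float]:
    """Convert the pixel offset of the projection area into pan/tilt correction angles."""
    dx = target_center_px[0] - detected_center_px[0]
    dy = target_center_px[1] - detected_center_px[1]
    return dx * deg_per_px, dy * deg_per_px


# The projection drifted 30 px right and 10 px down from the preset spot:
pan_deg, tilt_deg = projection_correction_deg((670.0, 370.0), (640.0, 360.0))
print(pan_deg, tilt_deg)  # -1.5, -0.5: steer slightly left and up
```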
In some embodiments, the projection image acquisition device 17 may use a wide-angle camera so that it can also observe the turn signal. When the turn signal is found to be flashing, for example three times in a row, the processing device 12 can start processing the real-time images of the blind zone; at other times the real-time images need not be processed.
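The turn-signal trigger mentioned above (start blind-zone processing after, for example, three consecutive flashes) reduces to a small counter over per-frame detections. The frame-level flash detector itself is assumed to exist; only the counting logic is sketched.

```python
def detect_turn_intent(flash_flags, required_flashes: int = 3) -> bool:
    """Return True once `required_flashes` flashes have been observed.

    `flash_flags` is an iterable of booleans, one per analysed frame, where True
    means the turn indicator was detected as lit in that frame. A flash is counted
    on each off->on transition. A real implementation would also reset the count
    if the indicator stops flashing for too long.
    """
    flashes = 0
    previous = False
    for lit in flash_flags:
        if lit and not previous:
            flashes += 1
            if flashes >= required_flashes:
                return True
        previous = lit
    return False


# Three off->on transitions -> blind-zone processing would be started.
print(detect_turn_intent([False, True, False, True, False, True]))  # True
```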
在一些实施例中,平视显示系统10还可以包括环境光线传感器18,用于检测环境光线的强度。所述处理装置12,还用于根据所述环境光线的强度,采用预设亮度调整规则,调整所述图像投影装置13的投影亮度。可以根据投影亮度在不同环境光线亮度下的可视性来设置所述预设亮度调整规则。可以根据不同环境光线亮度设置不同的投影亮度,使投影图像在不同环境光线亮度下都可以被观察到。In some embodiments, the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light. The processing device 12 is also used to adjust the projection brightness of the image projection device 13 according to the intensity of the ambient light, using a preset brightness adjustment rule. The preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light brightness. Different projection brightness can be set according to different ambient light brightness, so that the projected image can be observed under different ambient light brightness.
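A preset brightness adjustment rule of the kind described above can be as simple as a lookup from ambient-light bands to projector brightness. The lux boundaries and brightness percentages below are invented for the example, not values from the disclosure.

```python
def projection_brightness_pct(ambient_lux: float) -> int:
    """Map ambient light intensity to a projector brightness level (illustrative bands)."""
    if ambient_lux < 10:       # night / dark garage
        return 30
    if ambient_lux < 400:      # dusk, heavy overcast
        return 60
    if ambient_lux < 10_000:   # daylight, shade
        return 85
    return 100                 # direct sunlight


for lux in (3, 150, 5_000, 50_000):
    print(lux, "->", projection_brightness_pct(lux), "%")
```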
在一些实施例中,平视显示系统10还可以包括无线收发装置19,用于向所述处理装置12传输从外部终端接收的控制信息。In some embodiments, the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
在一些实施例中,所述无线收发装置19可以采用蓝牙、无线(wifi)、或移动通信网络传输控制信息。用户可以通过外部终端以遥控方式选择平视显示系统10。所述外部终端可以是无线遥控器或手机等移动终端。In some embodiments, the wireless transceiver 19 may use Bluetooth, wireless (wifi), or a mobile communication network to transmit control information. The user can select the head-up display system 10 in a remote control mode through an external terminal. The external terminal may be a mobile terminal such as a wireless remote controller or a mobile phone.
外部终端可以通过设置不同工作模式,启动平视显示系统10中的不同功能,如夜视模式,在夜视模式下图像采集装置17可以切换为红外摄像。此外,可以通过外部终端发送指令调整投影方向等。The external terminal can activate different functions in the head-up display system 10 by setting different working modes, such as the night vision mode, and the image acquisition device 17 can be switched to infrared imaging in the night vision mode. In addition, the external terminal can send instructions to adjust the projection direction.
Taking a push-button Bluetooth remote control as an example, modes such as driving, reversing, parking and night vision can be set. In night vision mode the image acquisition device 17 can switch to infrared imaging, and the system can also switch to night vision mode automatically based on light sensing; driving, reversing, parking and other sub-modes may in turn be available under night vision mode. A microphone or other sound pick-up device can also be provided to relay the voice from the in-car remote control to the loudspeaker of the head-up display system 10.
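The working modes selectable from the push-button remote (driving, reversing, parking, night vision) amount to a small mode dispatcher. The mode names and the night-vision-to-infrared switch come from the description above; the handler structure and returned settings are assumptions.

```python
from enum import Enum, auto


class Mode(Enum):
    DRIVING = auto()
    REVERSING = auto()
    PARKING = auto()
    NIGHT_VISION = auto()


def apply_mode(mode: Mode) -> dict:
    """Return illustrative settings for the selected mode.

    Only the night-vision -> infrared-camera link is taken from the description;
    the dictionary layout is an assumption.
    """
    return {
        "mode": mode.name,
        "infrared_camera": mode is Mode.NIGHT_VISION,  # night vision switches to IR imaging
    }


print(apply_mode(Mode.NIGHT_VISION))  # {'mode': 'NIGHT_VISION', 'infrared_camera': True}
```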
另外,移动终端还可以通过无线传输,从平视显示系统10获取实时图像和输出图像等。In addition, the mobile terminal can also obtain real-time images and output images from the head-up display system 10 through wireless transmission.
In some embodiments, as shown in FIG. 2, a fill-light device 20 may also be provided in the head-up display system 10 to provide supplementary lighting, for example a flash, when the image acquisition device 11 captures images. The head-up display system 10 may also be provided with a sound-generating device 21, such as a loudspeaker, which emits different prompt tones according to the level of the prompt information while projecting. The head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power supplies can be used to power the head-up display system 10.
结合实际使用情况,对实际使用过程中可能出现的投影方位调整的情况说明如下。Combined with the actual use, the following describes the adjustment of the projection orientation that may occur during actual use.
Case 1: When the head-up display system 10 is placed inside the car and projects onto the side mirrors, the projected light is refracted by the side window, so the projection orientation may need to be adjusted or corrected. In addition, changes in external light may require the brightness of the projected image to be adjusted. The specific adjustment process is shown in FIG. 6 and includes steps 601 to 608.
In step 601, the system learns the cases in which the projection beam does and does not pass through the side window, and determines the position range of the projection area in the image captured by the projection image acquisition device 17 for each case.
在步骤602,环境光线传感器18检测环境光线的强度。At step 602, the ambient light sensor 18 detects the intensity of ambient light.
在步骤603,与预设光线强度阈值比较,确定是否处于暗环境,如果处于暗环境,则执行步骤604,否则,执行步骤605。In step 603, it is compared with a preset light intensity threshold to determine whether it is in a dark environment, if it is in a dark environment, step 604 is performed, otherwise, step 605 is performed.
在步骤604,将投射方位调整到预设的暗光角度,避开车外部后视镜投影区,直接将投影画面投影到侧窗玻璃上,避免多向散射。如图7所示,在特定投影角度,图像投影装置13在侧窗玻璃上的投影图像可以被司机观察到。In step 604, the projection direction is adjusted to a preset dark light angle, the projection area of the external rearview mirror is avoided, and the projection picture is directly projected onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at a specific projection angle, the projection image of the image projection device 13 on the side window glass can be observed by the driver.
在步骤605,投影图像采集装置17对投影区域在采集图像中的位置进行识别。In step 605, the projection image acquisition device 17 identifies the position of the projection area in the acquired image.
在步骤606,将识别的投影区域在采集图像中的位置与通过学习得到的位置范围进行比较,如果没有超出所述位置范围,则执行步骤607,否则执行步骤608。In step 606, the position of the identified projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed, otherwise step 608 is executed.
在步骤607,对投影方位不做调整补偿。In step 607, no adjustment and compensation are made to the projection orientation.
在步骤608,对投影方位进行调整补偿,使投影区域落到所述位置范围内。In step 608, the projection orientation is adjusted and compensated so that the projection area falls within the position range.
情况二、侧后视镜调整时,平视显示系统10实时追踪投影位置,进行实时调整,如图8所示,具体过程包括步骤801至804。Case 2: When the side mirrors are adjusted, the head-up display system 10 tracks the projection position in real time and performs real-time adjustment, as shown in FIG. 8, and the specific process includes steps 801 to 804.
在步骤801,投影图像采集装置17对车外部后视镜投影图像边缘进行识别。In step 801, the projection image acquisition device 17 recognizes the edge of the projected image of the exterior mirror of the vehicle.
In step 802, it is determined whether the edge of the projected image exceeds the mirror surface of the side mirror, or a predetermined range recognized by the system (see the sketch after step 804). If it does, step 804 is executed; otherwise, step 803 is executed.
在步骤803,不做投影方位调整。In step 803, no projection orientation adjustment is made.
在步骤804,调整投影方位,使投影图像投射到侧后视镜预定范围内,如果超出可调整范围可以提示用户进行人为干涉。In step 804, the projection orientation is adjusted so that the projected image is projected into the predetermined range of the side mirror, and if the adjustable range is exceeded, the user may be prompted to perform human intervention.
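Step 802's check, whether the edge of the projected image stays inside the recognised mirror surface or predetermined range, reduces to a rectangle-containment test in the sampled image, as sketched below; the box representation and margin parameter are illustrative assumptions.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]  # (left, top, right, bottom) in pixels


def projection_within_mirror(projection_box: Box, mirror_box: Box, margin_px: float = 0.0) -> bool:
    """True if the projected image's bounding box lies inside the mirror region."""
    pl, pt, pr, pb = projection_box
    ml, mt, mr, mb = mirror_box
    return (pl >= ml + margin_px and pt >= mt + margin_px and
            pr <= mr - margin_px and pb <= mb - margin_px)


# The right edge of the projection spills past the mirror region -> adjustment needed (step 804).
print(projection_within_mirror((120, 80, 340, 260), (100, 60, 320, 280)))  # False
```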
情况三、根据车辆转向情况,进行实时投影,如图9所示,包括步骤901至904。Case three, according to the steering situation of the vehicle, real-time projection is performed, as shown in FIG. 9, including steps 901 to 904.
在步骤901,判断车辆转向。At step 901, it is determined that the vehicle is turning.
In some embodiments, besides communicating with the vehicle control system, the turn can also be detected from the turn signal captured by the projection image acquisition device 17; for example, if the images captured by the projection image acquisition device 17 show the turn indicator flashing three consecutive times, the vehicle is determined to be in a turning state.
在步骤902,启动转向侧平视显示系统10的处理功能,针对目标的移动状态进行运动趋势分析。In step 902, the processing function of the head-up display system 10 on the steering side is started, and the movement trend analysis is performed for the movement state of the target.
在步骤903,提取目标特征并拟合归类,调取高风险特征进行匹配。In step 903, target features are extracted and fitted and classified, and high-risk features are retrieved for matching.
在步骤904,可以在实物影像上叠加比常态信息更多的数据分析结果和警示性提示,这里,可以用投影信息和声音信息提示司机。In step 904, more data analysis results and warning prompts than normal information can be superimposed on the physical image. Here, the driver can be prompted with projection information and sound information.
In summary, the image acquisition device 11 collects images of the environment to the side and rear of the vehicle body and sends them to the processing device 12 for image processing, analysis and calculation; the real-scene video and the virtual information computed from the analysis of the real image data are then both sent to the image projection device 13 and projected onto a partial mirror surface of the car's exterior rearview mirror. In a typical application scenario, when a pedestrian or another obstacle approaches the vehicle body, or its predicted trajectory shows that it will approach or cross into the vehicle's path, the projection interface gives graphic or voice prompts superimposed on the real image. In this way, the driver can obtain more information from the projected image, improving driving safety.
本公开实施例提供的平视显示方法,如图10所示,所述方法包括步骤1001至1003。As shown in FIG. 10, the head-up display method provided by an embodiment of the present disclosure includes steps 1001 to 1003.
在步骤1001,捕获预设图像采集区域的实时图像。In step 1001, a real-time image of a preset image acquisition area is captured.
这里,可以如图1所示,由平视显示系统10中的图像采集装置11来捕获实时图像。所述平视显示系统10可以是单独的一个整体,也可以是由分布在车身各个位置的不同装置组成。所述平视显示系统10可以安装在汽车等载体上。与其他车载系统类似,所述平视显示系统10可以由汽车等载体等供电,也可以平视显示系统10自身电池供电。Here, as shown in FIG. 1, a real-time image can be captured by the image acquisition device 11 in the head-up display system 10. The head-up display system 10 may be a single whole, or may be composed of different devices distributed at various positions on the vehicle body. The head-up display system 10 may be installed on a carrier such as an automobile. Similar to other in-vehicle systems, the head-up display system 10 may be powered by a carrier such as an automobile, or it may be powered by the battery of the head-up display system 10 itself.
AR是现实场景和虚拟场景的结合,在摄像头拍摄的画面基础上,通过计算机处理能力,将虚拟的数据叠加在现实环境当中,再利用同一个画面进行显示而带来的一种交互模式。所述平视显示系统10可以通过AR方式向司机提供驾驶信息。AR is a combination of real scenes and virtual scenes. Based on the pictures taken by the camera, through computer processing capabilities, the virtual data is superimposed on the real environment, and then the same picture is used to display an interactive mode. The head-up display system 10 can provide driving information to the driver through the AR mode.
这里,所述图像采集装置11可以是摄像头等图像、视频捕捉设备。所述预设图像采集区域可以是覆盖两侧后视镜时的视觉盲区的区域。Here, the image acquisition device 11 may be an image or video capture device such as a camera. The preset image acquisition area may be an area that covers a blind spot in the rearview mirror on both sides.
在一些实施例中,所述图像采集装置11可以包括第一图像采集装置111和第二图像采集装置112。第一图像采集装置111和第二图像采集装置112可以分别从两个不同角度捕获预设图像采集区域的实时图像,用于后续图像处理。In some embodiments, the image acquisition device 11 may include a first image acquisition device 111 and a second image acquisition device 112. The first image acquisition device 111 and the second image acquisition device 112 may capture real-time images of the preset image acquisition area from two different angles, respectively, for subsequent image processing.
以单独成一个整体的平视显示系统10为例,平视显示系统10内部详细构造可以如图2所示,其中包括图像采集装置11,处理装置12(例如,主控电路板)和图像投影装置13。图像采集装置11包括第一图像采集装置111和第二图像采集装置112,例如,两个摄像头;所述主控电路板上还设置有电源管理电路,用于管理整个平视显示系统10的电源,其中包括管理外接电源和可选内置电池供电的电源切换、充放电以及平视显示系统10中各装置的电源供应分配等。主控电路板可以包括处理器以进行数据处理,以及电源等各功能相应 的电路和接口。Taking the head-up display system 10 as a whole as an example, the detailed internal structure of the head-up display system 10 may be as shown in FIG. 2, which includes an image acquisition device 11, a processing device 12 (for example, a main control circuit board), and an image projection device 13 . The image acquisition device 11 includes a first image acquisition device 111 and a second image acquisition device 112, for example, two cameras; the main control circuit board is also provided with a power management circuit for managing the power of the entire head-up display system 10, This includes management of external power supply and optional built-in battery power supply switching, charging and discharging, and power supply distribution of each device in the head-up display system 10. The main control circuit board may include a processor for data processing, and circuits and interfaces corresponding to various functions such as power supply.
在步骤1002,采用预设图像识别规则,识别所述实时图像中的识别对象和/或所述识别对象与所述平视显示系统10的载体的相对运动参数信息,并且根据所述识别对象和/或所述相对运动参数信息,采用预设信息处理规则生成提示信息,并将所述提示信息和所述实时图像合并为输出图像。In step 1002, a preset image recognition rule is used to identify the recognition object in the real-time image and / or relative motion parameter information of the recognition object and the carrier of the head-up display system 10, and according to the recognition object and / or Or the relative motion parameter information, using preset information processing rules to generate prompt information, and merging the prompt information and the real-time image into an output image.
In some embodiments, step 1002 may be performed by the processing device 12 of the head-up display system 10 as shown in FIG. 1. The preset image recognition rules can be set according to the image acquisition device 11. When the image acquisition device 11 is a single camera, the real-time image captured by that camera is analysed for recognition; when the image acquisition device 11 consists of two or more cameras, the real-time images captured by two cameras can be analysed using, for example, a binocular parallax ranging algorithm. Recognition of the recognition object can be implemented with model training, deep learning and similar methods. The relative motion parameters may include the relative distance and the relative speed between the recognition object and the carrier. The relative speed can be determined by dividing the change in relative distance between two time points by the time interval between them. When the image acquisition device 11 consists of two or more cameras, the real-time image of one camera can be pre-selected, according to the preset information processing rules, to be merged into the output image.
The preset information processing rules can be set according to the recognition object and the relative motion parameters, and merge the required prompt information with the captured image, that is, display the prompt information directly on the captured image. For example, when the recognition object is a preset object, its general outline can be indicated; when a relative motion parameter exceeds a preset value, prompt information such as the distance or a warning can be displayed.
In some implementations, to strengthen the recognition of key target objects and provide more targeted warnings, object recognition based on deep learning with a single camera can be incorporated. The scene in the image can be segmented into the road, obstacles and other special objects for feature extraction, and pattern matching based on machine learning and deep learning can then be performed on this basis to recognize the recognition objects, focusing on objects such as people and vehicles that have a major impact on driving safety.
识别出识别对象后,针对单摄像头情况可以采用机器学习、深度学习或雷达等其他辅助手段,获取识别对象的距离和位移情况,从 而,可以在后续给出更加明确的提示信息。After the recognition object is recognized, other auxiliary means such as machine learning, deep learning or radar can be used to obtain the distance and displacement of the recognition object for the single camera situation, so that more clear prompt information can be given later.
在一些实施例中,所述处理装置12可以根据第一图像采集装置111和第二图像采集装置112分别捕获的实时图像,采用双摄测距原理,检测所述识别对象与所述载体的距离。In some embodiments, the processing device 12 may detect the distance between the identified object and the carrier according to the real-time images captured by the first image acquisition device 111 and the second image acquisition device 112 respectively, using the dual-camera ranging principle .
With two cameras, the distance and displacement of an obstacle can be determined through the dual-camera ranging principle, that is, binocular parallax ranging. Similar to human binocular vision, when the two cameras image the scene synchronously, the same object occupies different positions in the two images: a nearby object shifts considerably between them, while a distant object shifts only slightly. In this way, the proximity and speed of recognition objects in relative motion, such as people or vehicles, can be detected, and prompt information can then be generated according to the preset information processing rules, for example a red warning graphic computed from the measurements, including the object distance, the moving speed and a red alert when the limit distance is reached, superimposed on the real image and accompanied by an audible warning.
在一些实施例中,所述第一图像采集装置111可以是可见光图像采集装置,所述第二图像采集装置112可以是红外图像采集装置。In some embodiments, the first image acquisition device 111 may be a visible light image acquisition device, and the second image acquisition device 112 may be an infrared image acquisition device.
例如,可见光图像采集装置可以是可见光摄像头;红外图像采集装置可以是红外摄像头,也可以采用同时支持可见光和红外功能的摄像头,需要时可以切换至红外功能。For example, the visible light image acquisition device may be a visible light camera; the infrared image acquisition device may be an infrared camera, or a camera that supports both visible light and infrared functions, and may be switched to an infrared function when needed.
In some embodiments, to achieve a better image acquisition result, a wide-angle plus telephoto configuration can be used: the visible-light camera uses a wide-angle lens and the infrared camera uses a telephoto lens, and image processing is based on the colour image from the visible-light camera together with the black-and-white image captured and decoded by the infrared camera. In this way, more differentiated environmental information can be obtained, strengthening the stereo vision system's ability to cope with night scenes or special weather.
在两个摄像头基础上,还可以再增加一个可见光摄像头,形成三摄,两个可见光摄像头负责双目视差测距,红外摄像头专门负责暗光摄像。On the basis of two cameras, one more visible light camera can be added to form a three-camera. The two visible light cameras are responsible for binocular parallax distance measurement, and the infrared camera is specifically responsible for dark light imaging.
In some embodiments, the system further includes an infrared emitting device 14 for sending an infrared beam towards the recognition object. The processing device 12 is further configured to determine the distance between the recognition object and the carrier using the TOF ranging method, based on the time at which the infrared emitting device 14 emits the infrared beam and the time at which the second image acquisition device 112 receives the light reflected back from the recognition object.
For example, the processing device 12 obtains depth information of an obstacle from the time difference between emitting the infrared beam towards the obstacle and the second image acquisition device 112 (i.e., the infrared camera) receiving the infrared light reflected by the obstacle; combined with the image information acquired by the first image acquisition device 111 (i.e., the visible-light camera), the three-dimensional information of the obstacle can be determined more quickly and accurately.
在步骤1003,向预设投影面的指定区域投影所述输出图像。In step 1003, the output image is projected to a designated area of a preset projection surface.
这里,可以如图1所示,由平视显示系统10中的图像投影装置13来进行投影。所述预设投影面可以是汽车两侧后视镜,即,可以向两侧后视镜的指定区域投影所述输出图像。可以在一辆车上设置两台平视显示系统10,分别对应于两侧后视镜。两台平视显示系统10分别将两侧的盲区影像以及提示信息投影到两侧后视镜,从而司机可以直接观察到后视盲区,并可以得到提示信息,大大加强了驾驶安全性。Here, as shown in FIG. 1, the image projection device 13 in the head-up display system 10 may perform projection. The preset projection surface may be rear-view mirrors on both sides of the car, that is, the output image may be projected to a designated area of the rear-view mirrors on both sides. Two head-up display systems 10 may be provided on one car, respectively corresponding to the rear-view mirrors on both sides. The two head-up display systems 10 respectively project the blind spot images on both sides and the prompt information to the rearview mirrors on both sides, so that the driver can directly observe the blind spot in the rear vision and get the prompt information, which greatly enhances the driving safety.
在一些实施例中,图像投影装置13可以用由可动态变焦的投影镜头和采用LED光源照射基于DLP技术的DMD实现电信号转换为光信号。In some embodiments, the image projection device 13 may use a dynamic zoom projection lens and an LED light source to illuminate a DMD based on DLP technology to achieve conversion of electrical signals into optical signals.
The head-up display system 10 can be placed outside the car as shown in FIG. 3, mechanically fixed to the outer edge of the window and additionally held by magnetic attraction to the metal part of the door; or, as shown in FIG. 4, it can be mechanically fixed to the upper or lower edge of the inside of the side window. In the figures, reference numeral 31 denotes the projected image, and 32 denotes the virtual image of the projected image formed in the driver's view when looking at the rearview mirror.
如图5所示,以两个摄像头为例,解释平视显示系统10的工作过程,包括步骤501至509。As shown in FIG. 5, taking two cameras as an example, the working process of the head-up display system 10 is explained, including steps 501 to 509.
在步骤501,摄像头摄取车身侧边近距环境实时图像信息,并传送至主控电路板,主控电路板的处理器等可以采用并行处理的方法同时执行步骤502和步骤503。In step 501, the camera captures real-time image information of the close-up environment on the side of the vehicle body and transmits it to the main control circuit board. The processor of the main control circuit board and the like can perform step 502 and step 503 simultaneously using a parallel processing method.
在步骤502,对实时图像进行压缩、差值和锐化等处理,使所述实时图像适应后续投影数据传输处理等,并转至步骤507。In step 502, the real-time image is subjected to compression, difference, sharpening, etc. to adapt the real-time image to subsequent projection data transmission processing, etc., and the process proceeds to step 507.
在步骤503,采用预设图像识别规则对视频信息进行重点目标特征提取并对识别对象进行识别,然后执行步骤504。In step 503, the preset target image recognition rules are used to extract the key target feature of the video information and identify the recognition object, and then step 504 is executed.
In step 504, the preset image recognition rules are used to determine relative motion parameter information such as the distance and speed between the recognition object and the vehicle body. With dual cameras, the binocular parallax ranging algorithm can be used; with an infrared emitter, the TOF ranging algorithm can be used. The movement trajectory and trend can also be estimated. Then step 505 is executed.
在步骤505,根据识别及计算结果,针对与车身的距离进行风险评估并判别等级,然后执行步骤506。In step 505, according to the recognition and calculation results, a risk assessment is performed with respect to the distance to the vehicle body and the level is judged, and then step 506 is executed.
In step 506, AR overlay prompt information is produced according to the risk level, including information computed for direct display as well as associated warning graphics pre-stored in a database and retrieved according to the calculation result, such as warning colour bars and avoidance icons. Then step 507 is executed.
在步骤507,将提示信息和在步骤502中处理过的实时图像进行叠加,并将叠加后的投影图像转换为图像投影装置13(即,投影光机)适用的投影信号。In step 507, the prompt information and the real-time image processed in step 502 are superimposed, and the superimposed projection image is converted into a projection signal suitable for the image projection device 13 (that is, a projection light machine).
在步骤508,将投影信号推送到投影光机,进行光学处理并投射。In step 508, the projection signal is pushed to the projection light machine, optically processed and projected.
在步骤509,在汽车外部后视镜投影区呈现投射结果。In step 509, the projection result is presented in the projection area of the exterior mirror of the car.
所述平视显示系统10还可以包括距离传感器15,用于测量所述识别对象的相对运动参数信息。The head-up display system 10 may further include a distance sensor 15 for measuring relative motion parameter information of the recognized object.
Here, the distance sensor 15 may be an infrared distance sensor, a ranging radar or the like. Taking an infrared distance sensor as an example, its infrared power and scattering area are limited, but it responds quickly and can therefore act as a fast pre-check: once it predicts that there is an obstacle near the rear side of the vehicle body, the infrared emitting device 14 is immediately activated to emit infrared light of higher power and a larger scattering area towards the obstacle, the second image acquisition device 112 (i.e., the infrared camera) then receives the infrared light reflected by the obstacle to obtain its distance or depth information, and, combined with the colour information acquired by the visible-light camera, the three-dimensional information of the obstacle can be determined more quickly and accurately. The distance sensor 15 need not be mounted on the body of the head-up display system 10; for example, it can be placed at the rear side of the vehicle body, closer to potential obstacles, and several distance sensors 15 can even be provided to improve ranging accuracy.
In some embodiments, a projection adjustment device 16 and/or a projection image acquisition device 17 for adjusting the projected picture may also be provided. The projection adjustment device 16 is used to adjust the projection orientation of the image projection device 13 under the control of the processing device 12. The projection image acquisition device 17 is used to capture the projected image produced by the image projection device 13. The processing device 12 is further configured to control, according to the projected image and a preset adjustment rule, the projection adjustment device 16 to adjust the projection orientation of the image projection device 13.
Here, a motor-driven gimbal or the like may be used, so that the projection orientation is adjusted by adjusting the position of the image projection device 13; alternatively, the entire head-up display system 10 may be adjusted directly by a gimbal or other mounting bracket to achieve the same effect. The gimbal or other type of bracket may also provide a manual adjustment function, so that it can be adjusted by the user.
The projection image acquisition device 17 may be a camera or the like and can sample the projected image of the image projection device 13. The processing device 12 can control the projection adjustment device 16 to adjust the projection direction according to the actual state of the projected image. The preset adjustment rule may be set according to the driver position, the projection position, and so on, so that the projection orientation is adjusted to keep the projected image of the image projection device 13 at a preset position. A projection position can be preset on the rearview mirror; if the projected image moves outside the preset projection position, for example because the rearview mirror has been adjusted, the projection orientation can be corrected to keep the projected image at the preset projection position.
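A minimal sketch of such a feedback correction is given below: the center of the detected projection area in the monitoring camera's image is compared with the preset target position and the orientation is nudged until they agree. The pixel tolerance and angular step size are hypothetical values, not parameters from this disclosure.

```python
# Sketch of the orientation feedback loop: compare detected vs. preset projection centre
# and return a small pan/tilt correction. Tolerance and step size are hypothetical.

TOLERANCE_PX = 10
STEP_DEG = 0.5

def correction(projected_center, preset_center):
    """Return (pan, tilt) adjustments in degrees, or (0.0, 0.0) if within tolerance."""
    dx = preset_center[0] - projected_center[0]
    dy = preset_center[1] - projected_center[1]
    pan = 0.0 if abs(dx) <= TOLERANCE_PX else STEP_DEG * (1 if dx > 0 else -1)
    tilt = 0.0 if abs(dy) <= TOLERANCE_PX else STEP_DEG * (1 if dy > 0 else -1)
    return pan, tilt

print(correction(projected_center=(352, 198), preset_center=(320, 200)))  # (-0.5, 0.0)
```

Applied repeatedly on each captured frame, this simple proportional step keeps the projected image at the preset position after the mirror is moved.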
In some embodiments, the projection image acquisition device 17 may be a wide-angle camera that can simultaneously observe the turn signal. When the turn signal is found to be flashing, for example three consecutive flashes, the processing device 12 can start processing the real-time image of the blind zone; at other times the real-time image need not be processed.
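The gating idea above can be sketched as a simple rising-edge counter over per-frame turn-signal observations; the per-frame boolean input and the flash count of three are assumptions for illustration only.

```python
# Sketch of the turn-signal gate: count rising edges of the detected lamp state and
# enable blind-zone processing after three flashes. The detection input is assumed
# to be a boolean per captured frame from the wide-angle camera.

class TurnSignalGate:
    def __init__(self, flashes_required=3):
        self.flashes_required = flashes_required
        self.count = 0
        self.prev_on = False

    def update(self, signal_on: bool) -> bool:
        """Feed one per-frame observation; return True once processing should start."""
        if signal_on and not self.prev_on:    # rising edge = one flash
            self.count += 1
        self.prev_on = signal_on
        return self.count >= self.flashes_required

gate = TurnSignalGate()
frames = [False, True, False, True, False, True]   # three flashes
print([gate.update(s) for s in frames])            # last entry becomes True
```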
In some embodiments, the head-up display system 10 may further include an ambient light sensor 18 for detecting the intensity of ambient light. The processing device 12 is further configured to adjust the projection brightness of the image projection device 13 according to the ambient light intensity, using a preset brightness adjustment rule. The preset brightness adjustment rule may be set according to the visibility of the projection brightness under different ambient light levels: different projection brightness values are used for different ambient light levels, so that the projected image remains observable in all of them.
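One way such a preset brightness adjustment rule could be expressed is a lookup table from measured ambient light to a projector brightness setting, as sketched below; the lux breakpoints and brightness percentages are hypothetical examples.

```python
# Sketch of a preset brightness adjustment rule: map ambient lux to projector brightness.
# Breakpoints and percentages are hypothetical examples.

BRIGHTNESS_TABLE = [     # (max ambient lux, projector brightness %)
    (10,     20),        # night
    (400,    45),        # dusk / covered parking
    (5_000,  70),        # overcast daylight
    (50_000, 100),       # direct sunlight
]

def projection_brightness(ambient_lux: float) -> int:
    for max_lux, brightness in BRIGHTNESS_TABLE:
        if ambient_lux <= max_lux:
            return brightness
    return 100

print(projection_brightness(3))       # 20
print(projection_brightness(12_000))  # 100
```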
In some embodiments, the head-up display system 10 may further include a wireless transceiver device 19 for transmitting control information received from an external terminal to the processing device 12.
In some embodiments, the wireless transceiver device 19 may transmit the control information via Bluetooth, Wi-Fi, or a mobile communication network. The user can operate the head-up display system 10 remotely through the external terminal, which may be a wireless remote controller or a mobile terminal such as a mobile phone.
The external terminal can activate different functions of the head-up display system 10 by setting different working modes, such as a night vision mode in which the image acquisition device 17 can switch to infrared imaging. In addition, the external terminal can send instructions to adjust the projection direction and the like.
Taking a button-type Bluetooth remote controller as an example, driving, reversing, parking, night vision, and other modes can be set. In night vision mode, the image acquisition device 17 can switch to infrared imaging, and the system can also switch to night vision mode automatically based on light sensing; the night vision mode may in turn contain driving, reversing, parking, and other sub-modes. A microphone or other sound pickup device may also be provided to relay the voice from the in-vehicle remote controller to the speaker of the head-up display system 10.
In addition, the mobile terminal can also obtain real-time images, output images, and the like from the head-up display system 10 via wireless transmission.
In some embodiments, as shown in FIG. 2, a fill light device 20 may also be provided in the head-up display system 10 to supplement the light, for example with a flash, when the image acquisition device 11 captures images. The head-up display system 10 may also be provided with a sound device 21, such as a speaker, which emits different prompt tones according to the level of the prompt information while the projection is displayed. The head-up display system 10 in FIG. 2 further includes an optional battery 22 and an external power supply interface 23, so that different power sources can be used to supply the head-up display system 10.
In the light of actual use, situations in which the projection orientation may need to be adjusted during operation are described below.
Case 1: When the head-up display system 10 is placed inside the vehicle and projects toward the exterior mirrors on either side, the projected light is refracted by the side window, and the projection orientation may need to be adjusted or corrected. In addition, changes in external light may require adjustment of the brightness of the projected image. The specific adjustment process is shown in FIG. 6 and includes steps 601 to 608; a simplified sketch of this decision logic is given after step 608 below.
In step 601, the system learns the cases in which the projection beam does or does not pass through the side window, and determines the position range of the projection area within the image captured by the projection image acquisition device 17 with and without a side window in the light path.
In step 602, the ambient light sensor 18 detects the intensity of the ambient light.
In step 603, the intensity is compared with a preset light intensity threshold to determine whether the environment is dark. If it is dark, step 604 is executed; otherwise, step 605 is executed.
In step 604, the projection orientation is adjusted to a preset dark-light angle that avoids the projection area of the vehicle's exterior rearview mirror, and the picture is projected directly onto the side window glass to avoid multi-directional scattering. As shown in FIG. 7, at a specific projection angle the image projected by the image projection device 13 onto the side window glass can be observed by the driver.
In step 605, the projection image acquisition device 17 identifies the position of the projection area in the captured image.
In step 606, the identified position of the projection area in the captured image is compared with the position range obtained through learning. If it does not exceed the position range, step 607 is executed; otherwise, step 608 is executed.
In step 607, no adjustment or compensation is applied to the projection orientation.
In step 608, the projection orientation is adjusted and compensated so that the projection area falls within the position range.
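The sketch referenced above condenses steps 602 to 608 into a single decision function. The lux threshold and the learned position ranges are hypothetical; in the system they would come from the ambient light sensor 18 and the learning pass of step 601.

```python
# Simplified sketch of the FIG. 6 decision logic (steps 602-608).
# Threshold and learned ranges are hypothetical example values.

DARK_LUX_THRESHOLD = 15
LEARNED_RANGE = {            # projection-area bounding box in the monitoring camera image
    "with_window":    (300, 180, 420, 260),   # (x_min, y_min, x_max, y_max)
    "without_window": (310, 170, 430, 250),
}

def decide(ambient_lux, projected_box, window_present=True):
    if ambient_lux < DARK_LUX_THRESHOLD:
        return "switch to preset dark-light angle (project onto side window glass)"
    key = "with_window" if window_present else "without_window"
    x0, y0, x1, y1 = LEARNED_RANGE[key]
    bx0, by0, bx1, by1 = projected_box
    inside = bx0 >= x0 and by0 >= y0 and bx1 <= x1 and by1 <= y1
    return "no compensation needed" if inside else "compensate orientation into learned range"

print(decide(ambient_lux=5,   projected_box=(320, 190, 400, 240)))
print(decide(ambient_lux=800, projected_box=(290, 190, 400, 240)))
```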
Case 2: When the side rearview mirror is adjusted, the head-up display system 10 tracks the projection position in real time and adjusts it accordingly, as shown in FIG. 8. The specific process includes steps 801 to 804; an illustrative sketch of the edge check in steps 801 and 802 is given after step 804 below.
In step 801, the projection image acquisition device 17 identifies the edges of the image projected onto the vehicle's exterior rearview mirror.
In step 802, it is determined whether the edge region of the projected image exceeds the mirror surface of the side mirror or a predetermined range recognized by the system. If it does, step 804 is executed; otherwise, step 803 is executed.
In step 803, no projection orientation adjustment is made.
In step 804, the projection orientation is adjusted so that the projected image falls within the predetermined range of the side rearview mirror; if the required correction exceeds the adjustable range, the user may be prompted to intervene manually.
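The sketch below shows one way the edge identification and range check of steps 801 and 802 could be performed: the bright projected patch is located in the monitoring camera frame by thresholding, and its bounding box is compared with the allowed mirror region. OpenCV is used only as an illustrative tool, and the brightness threshold and mirror coordinates are guesses for demonstration.

```python
# Sketch of steps 801-802: locate the projected patch and check it against the mirror region.
import cv2
import numpy as np

def find_projection_box(frame_bgr, brightness_threshold=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x, y, x + w, y + h)

def exceeds_mirror(projection_box, mirror_box):
    """True if any edge of the projected image leaves the allowed mirror region."""
    px0, py0, px1, py1 = projection_box
    mx0, my0, mx1, my1 = mirror_box
    return px0 < mx0 or py0 < my0 or px1 > mx1 or py1 > my1

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[200:300, 350:500] = 255                      # synthetic bright projection patch
box = find_projection_box(frame)
print(box, exceeds_mirror(box, mirror_box=(340, 180, 520, 320)))   # within range -> False
```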
Case 3: Real-time projection is performed according to the steering state of the vehicle, as shown in FIG. 9, and includes steps 901 to 904.
In step 901, it is determined whether the vehicle is turning.
In some embodiments, in addition to communicating with the vehicle control system, vehicle turning can also be detected by using the projection image acquisition device 17 to capture the turn signal: for example, if the turn signal appears flashing in the images captured by the projection image acquisition device 17 for three consecutive flashes, it is determined that the vehicle has entered a turning state.
In step 902, the processing function of the head-up display system 10 on the turning side is activated, and a motion trend analysis is performed on the moving state of the target; a sketch of such a trend analysis is given after step 904 below.
In step 903, target features are extracted, fitted, and classified, and high-risk features are retrieved for matching.
In step 904, more analysis results and warning prompts than in the normal state can be superimposed on the live image; here, the driver can be alerted with both projected information and sound information.
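The sketch referenced at step 902 illustrates one simple form of motion-trend analysis: a straight line is fitted to recent object positions relative to the vehicle and extrapolated a few seconds ahead to judge whether the object is approaching. The safety distance, time horizon, and track values are hypothetical examples.

```python
# Sketch of a step 902 motion-trend check: linear extrapolation of recent object positions.
import numpy as np

SAFETY_DISTANCE_M = 1.5
HORIZON_S = 3.0

def approaching(track_xy, timestamps):
    """track_xy: list of (x, y) object positions in metres relative to the vehicle."""
    t = np.asarray(timestamps, dtype=float)
    xy = np.asarray(track_xy, dtype=float)
    vx = np.polyfit(t, xy[:, 0], 1)[0]        # linear velocity estimates
    vy = np.polyfit(t, xy[:, 1], 1)[0]
    future = xy[-1] + np.array([vx, vy]) * HORIZON_S
    return float(np.hypot(*future)) < SAFETY_DISTANCE_M

track = [(4.0, 3.0), (3.5, 2.6), (3.0, 2.2)]   # object closing in on the vehicle
print(approaching(track, timestamps=[0.0, 0.5, 1.0]))   # True -> escalate the prompt
```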
In summary, the image acquisition device 11 captures the environment image of the outer rear side of the vehicle body and sends it to the processing device 12 for image processing, analysis, and calculation; the live environment video information and the virtual information computed from the live image data are then both delivered to the image projection device 13 for projection onto a partial mirror surface of the vehicle's exterior rearview mirror. In a typical application scenario, when a pedestrian or other obstacle approaches the vehicle body, or its predicted trajectory indicates that it is approaching or about to cross into the vehicle's path, the projection interface gives graphic, text, or voice prompts superimposed on the live image. In this way, the driver can obtain more information from the projected image, improving driving safety.
The above are only preferred embodiments of the present disclosure and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within the scope of protection of the present disclosure.

Claims (13)

  1. A head-up display system, comprising an image acquisition device, a processing device, and an image projection device; wherein,
    the image acquisition device is configured to capture a real-time image of a preset image acquisition area;
    the processing device is configured to recognize, using preset image recognition rules, a recognition object in the real-time image and/or relative motion parameter information of the recognition object with respect to a carrier of the head-up display system, to generate prompt information from the recognition object and/or the relative motion parameter information using preset information processing rules, and to merge the prompt information and the real-time image into an output image; and
    the image projection device is configured to project the output image onto a designated area of a preset projection surface.
  2. The system according to claim 1, wherein the image acquisition device comprises a first image acquisition device and a second image acquisition device; and
    the processing device is configured to detect the distance between the recognition object and the carrier according to the real-time images respectively captured by the first image acquisition device and the second image acquisition device, using a dual-camera ranging principle.
  3. The system according to claim 2, wherein
    the first image acquisition device is a visible light image acquisition device, and the second image acquisition device is an infrared image acquisition device.
  4. The system according to claim 3, further comprising an infrared emitting device configured to send an infrared beam toward the recognition object; wherein
    the processing device is further configured to determine the distance between the recognition object and the carrier by a time-of-flight (TOF) ranging method, according to the transmission time at which the infrared emitting device sends the infrared beam and the reception time at which the second image acquisition device receives the beam produced by the infrared beam being reflected by the recognition object.
  5. The system according to any one of claims 1 to 4, further comprising a projection adjustment device and/or a projection image acquisition device, wherein,
    the projection adjustment device is configured to adjust the projection orientation of the image projection device under the control of the processing device;
    the projection image acquisition device is configured to capture the projected image produced by the image projection device; and
    the processing device is further configured to control, according to the projected image and using a preset adjustment rule, the projection adjustment device to adjust the projection orientation of the image projection device.
  6. The system according to any one of claims 1 to 4, further comprising an ambient light sensor configured to detect the intensity of ambient light; wherein
    the processing device is further configured to adjust the projection brightness of the image projection device according to the intensity of the ambient light, using a preset adjustment rule.
  7. A head-up display method, comprising:
    capturing a real-time image of a preset image acquisition area;
    recognizing, using preset image recognition rules, a recognition object in the real-time image and/or relative motion parameter information of the recognition object with respect to a carrier of a head-up display system, generating prompt information from the recognition object and/or the relative motion parameter information using preset information processing rules, and merging the prompt information and the real-time image into an output image; and
    projecting the output image onto a designated area of a preset projection surface.
  8. The method according to claim 7, wherein recognizing, using the preset image recognition rules, the recognition object in the real-time image and/or the relative motion parameter information of the recognition object with respect to the carrier of the head-up display system comprises:
    detecting the distance between the recognition object and the carrier according to real-time images respectively captured by a first image acquisition device and a second image acquisition device, using a dual-camera ranging principle.
  9. The method according to claim 8, wherein the first image acquisition device captures a visible light real-time image, and the second image acquisition device captures an infrared real-time image.
  10. The method according to claim 9, further comprising:
    sending an infrared beam toward the recognition object; and
    determining the distance between the recognition object and the carrier by a TOF ranging method, according to the transmission time of the infrared beam and the reception time at which the second image acquisition device receives the beam produced by the infrared beam being reflected by the recognition object.
  11. The method according to any one of claims 7 to 10, further comprising:
    adjusting the projection orientation using a preset adjustment rule.
  12. The method according to any one of claims 7 to 10, further comprising:
    adjusting the projection brightness according to the intensity of ambient light, using a preset adjustment rule.
  13. An automobile, comprising a vehicle body and the head-up display system according to any one of claims 1 to 6.
PCT/CN2019/112816 2018-10-23 2019-10-23 Head-up display system and display method, and automobile WO2020083318A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811239821.5A CN111086451B (en) 2018-10-23 2018-10-23 Head-up display system, display method and automobile
CN201811239821.5 2018-10-23

Publications (1)

Publication Number Publication Date
WO2020083318A1 true WO2020083318A1 (en) 2020-04-30

Family

ID=70331862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112816 WO2020083318A1 (en) 2018-10-23 2019-10-23 Head-up display system and display method, and automobile

Country Status (2)

Country Link
CN (1) CN111086451B (en)
WO (1) WO2020083318A1 (en)

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN112057107A (en) * 2020-09-14 2020-12-11 无锡祥生医疗科技股份有限公司 Ultrasonic scanning method, ultrasonic equipment and system
CN113552905B (en) * 2021-06-22 2024-09-13 歌尔光学科技有限公司 Vehicle-mounted HUD position adjustment method and system
CN114155617A (en) * 2021-11-22 2022-03-08 支付宝(杭州)信息技术有限公司 Parking payment method and collection equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106817568A (en) * 2016-12-05 2017-06-09 网易(杭州)网络有限公司 A kind of augmented reality display methods and device
CN106856566A (en) * 2016-12-16 2017-06-16 中国商用飞机有限责任公司北京民用飞机技术研究中心 A kind of information synchronization method and system based on AR equipment
CN107274725A (en) * 2017-05-26 2017-10-20 华中师范大学 A kind of mobile augmented reality type card identification method based on mirror-reflection
CN207164368U (en) * 2017-08-31 2018-03-30 北京新能源汽车股份有限公司 Vehicle-mounted augmented reality system
JP6384856B2 (en) * 2014-07-10 2018-09-05 Kddi株式会社 Information device, program, and method for drawing AR object based on predicted camera posture in real time

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN103434448B (en) * 2013-08-07 2016-01-27 燕山大学 A kind of elimination car post blind area system and using method thereof
DE102013219556A1 (en) * 2013-09-27 2015-04-02 Continental Automotive Gmbh Method and device for controlling an image generation device of a head-up display
CN104608695A (en) * 2014-12-17 2015-05-13 杭州云乐车辆技术有限公司 Vehicle-mounted electronic rearview mirror head-up displaying device
JP6811106B2 (en) * 2017-01-25 2021-01-13 矢崎総業株式会社 Head-up display device and display control method
CN108515909B (en) * 2018-04-04 2021-04-20 京东方科技集团股份有限公司 Automobile head-up display system and obstacle prompting method thereof

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP6384856B2 (en) * 2014-07-10 2018-09-05 Kddi株式会社 Information device, program, and method for drawing AR object based on predicted camera posture in real time
CN106817568A (en) * 2016-12-05 2017-06-09 网易(杭州)网络有限公司 A kind of augmented reality display methods and device
CN106856566A (en) * 2016-12-16 2017-06-16 中国商用飞机有限责任公司北京民用飞机技术研究中心 A kind of information synchronization method and system based on AR equipment
CN107274725A (en) * 2017-05-26 2017-10-20 华中师范大学 A kind of mobile augmented reality type card identification method based on mirror-reflection
CN207164368U (en) * 2017-08-31 2018-03-30 北京新能源汽车股份有限公司 Vehicle-mounted augmented reality system

Also Published As

Publication number Publication date
CN111086451A (en) 2020-05-01
CN111086451B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
KR101949358B1 (en) Apparatus for providing around view and Vehicle including the same
US20180352167A1 (en) Image pickup apparatus, image pickup control method, and program
KR101579100B1 (en) Apparatus for providing around view and Vehicle including the same
KR102043060B1 (en) Autonomous drive apparatus and vehicle including the same
WO2020083318A1 (en) Head-up display system and display method, and automobile
JP4807263B2 (en) Vehicle display device
EP1961613B1 (en) Driving support method and driving support device
WO2020061794A1 (en) Vehicle driver assistance device, vehicle and information processing method
CN114228491B (en) System and method for enhancing virtual reality head-up display with night vision
KR20160144829A (en) Driver assistance apparatus and control method for the same
CN103661163A (en) Mobile object and storage medium
KR20170011882A (en) Radar for vehicle, and vehicle including the same
KR101698781B1 (en) Driver assistance apparatus and Vehicle including the same
JPWO2020100664A1 (en) Image processing equipment, image processing methods, and programs
US20160225186A1 (en) System and method for augmented reality support
JP2012099085A (en) Real-time warning system on windshield glass for vehicle, and operating method thereof
KR20170043212A (en) Apparatus for providing around view and Vehicle
WO2019111529A1 (en) Image processing device and image processing method
WO2023284748A1 (en) Auxiliary driving system and vehicle
US10999488B2 (en) Control device, imaging device, and control method
CN206907232U (en) One kind is based on optics multi-vision visual vehicle rear-end collision prior-warning device
KR101816570B1 (en) Display apparatus for vehicle
WO2022004412A1 (en) Information processing device, information processing method, and program
KR20160144643A (en) Apparatus for prividing around view and vehicle including the same
TWI699999B (en) Vehicle vision auxiliary system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19875292

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19875292

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19875292

Country of ref document: EP

Kind code of ref document: A1