WO2022241638A1 - Projection method and apparatus, vehicle and AR-HUD - Google Patents

Projection method and apparatus, vehicle and AR-HUD

Info

Publication number
WO2022241638A1
WO2022241638A1 (PCT/CN2021/094344)
Authority
WO
WIPO (PCT)
Prior art keywords
calibration object
calibration
imaging
projection
hud
Prior art date
Application number
PCT/CN2021/094344
Other languages
English (en)
French (fr)
Inventor
姜欣言
张宇腾
于海
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202180001479.9A (CN114258319A)
Priority to EP21940095.9A (EP4339938A1)
Priority to PCT/CN2021/094344 (WO2022241638A1)
Publication of WO2022241638A1
Priority to US18/511,141 (US20240087491A1)

Classifications

    • G09G3/001: Control arrangements or circuits for visual indicators other than cathode-ray tubes, using specific devices, e.g. projection systems
    • G09G3/003: ... to produce spatial visual effects
    • B60K35/23
    • G02B27/01: Head-up displays
    • G02B27/0179: Display position adjusting means not related to the information to be displayed
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G06F3/147: Digital output to display device using display panels
    • G06T19/006: Mixed reality
    • G06T7/20: Analysis of motion
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30201: Face
    • G09G2320/0693: Calibration of display systems
    • G09G2340/0464: Positioning
    • G09G2380/10: Automotive applications

Definitions

  • This application relates to the field of smart cars, in particular to a projection method and device, a vehicle and an AR-HUD.
  • A head-up display (HUD) is a display device that projects images into the driver's forward field of view. It relies mainly on the principle of optical reflection to project important driving-related information, in the form of two-dimensional images, onto the windshield of the car at roughly the height of the driver's eyes. When the driver looks forward through the windshield, the two-dimensional image projected by the HUD appears on a virtual image plane in front of the windshield. Compared with traditional instruments and the central control screen, the driver does not need to look down to observe the image projected by the HUD, which avoids switching back and forth between the image and the road surface, shortens reaction time in emergencies, and improves driving safety.
  • An AR-HUD is an augmented reality (Augmented Reality, AR) head-up display.
  • A change in the position of the human eye requires that the display picture of the HUD be adjusted accordingly, to ensure that the HUD picture observed by the human eye always remains fused with the real road information.
  • In view of this, the present application provides a projection method and device, a vehicle, and an AR-HUD, which can keep the projected image aligned with the real world at all times and improve the projection display effect.
  • The projection method may be executed by a projection device or by certain components of the projection device, where the projection device is a device with a projection function, for example an AR-HUD, a HUD, or another device with a projection function.
  • The components may be processing chips, processing circuits, processors, and the like.
  • The first aspect of the present application provides a projection method, including: acquiring image information and position information of a calibration object; projecting the calibration object according to the image information and position information of the calibration object and an imaging model; and, when the coincidence degree between the calibration object and the projection surface of the calibration object is lower than a first threshold, adjusting the parameters of the imaging model.
  • This method acquires the image information and position information of a real calibration object, projects and displays the calibration object according to that information and an imaging model, and adjusts the parameters of the imaging model according to the coincidence degree between the calibration object and the projection surface of the calibration object, so that the calibration object and its projection surface overlap as much as possible, achieving an alignment effect and improving the user's immersive experience.
  • This method can be applied to an AR-HUD, a HUD, or other devices with a projection function, so as to calibrate the device and improve the projection display effect.
  • Adjusting the parameters of the imaging model includes adjusting one or more of the imaging model's field of view angle and imaging surface position.
  • Specifically, a two-dimensional image corresponding to the calibration object can be generated on the imaging surface of the imaging model, and when projecting, the imaging surface of the imaging model can be used as the complete projection image for projection display.
  • The imaging model can take the form of an imaging frustum, an imaging cylinder, an imaging cube, or the like. The field of view angle parameter of the imaging model determines the area of the imaging surface and the size of the two-dimensional image of the calibration object relative to the imaging surface, and the imaging surface position parameter of the imaging model determines the position of the two-dimensional image of the calibration object relative to the imaging surface.
  • Therefore, when the coincidence degree between the calibration object and the projection surface of the calibration object is lower than the preset first threshold, the field of view angle or the imaging surface position of the imaging model can be adjusted correspondingly according to the area offset, position offset, or size offset.
  • Adjusting the parameters of the imaging model specifically includes: when the area difference between the calibration object and the projection surface of the calibration object is greater than a second threshold, adjusting the field of view angle of the imaging model.
  • Specifically, the area of the imaging surface can be adjusted by adjusting the field of view of the imaging model.
  • For example, when the area of the projection surface of the calibration object is larger than the area of the calibration object, the field of view of the imaging model can be enlarged; the imaging surface then enlarges proportionally, and the proportion of the generated two-dimensional image of the calibration object within the imaging surface shrinks proportionally, so that the projection surface of the calibration object displayed by projection also shrinks relative to the calibration object, until the area difference between the projection surface of the calibration object and the calibration object is smaller than the preset second threshold.
  • Similarly, when the area of the projection surface of the calibration object is smaller than that of the calibration object, the field of view of the imaging model can be reduced; the imaging surface then shrinks proportionally, and the generated two-dimensional image of the calibration object occupies a proportionally larger share of the imaging surface, so that the projection surface of the calibration object displayed by projection is enlarged relative to the calibration object, until the area difference between the projection surface of the calibration object and the calibration object is smaller than the preset second threshold.
  • Adjusting the parameters of the imaging model also specifically includes: when the offset between the calibration object and the projection surface of the calibration object is greater than a third threshold, adjusting the two-dimensional position of the imaging surface of the imaging model.
  • The two-dimensional position of the imaging surface of the imaging model refers to its up-down and left-right position; adjusting it correspondingly changes the relative position of the generated two-dimensional image of the calibration object within the imaging surface, so that the offset between the calibration object and the projection surface of the calibration object becomes smaller than the preset third threshold.
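  • As a rough illustration of the adjustment logic described above, the following sketch compares the projected display of the calibration object with the real calibration object and nudges the two adjustable parameters; the thresholds, step sizes, and measurement helpers are illustrative assumptions and are not specified in this application.

```python
# Minimal sketch of the parameter-adjustment logic (illustrative only; the
# step sizes and sign conventions are assumptions, not taken from the text).

def adjust_imaging_model(model, measured, second_threshold=0.05, third_threshold=5.0):
    """Adjust the field of view and the 2D imaging-surface position.

    model:    object with 'fov' (degrees) and 'plane_xy' ([x, y] offset) attributes
    measured: dict with 'area_ratio' (projection area / calibration-object area)
              and 'offset_xy' ([dx, dy] pixel offset of projection vs. object)
    """
    # Field of view: projection larger than the object -> enlarge the FOV so the
    # projected image shrinks; projection smaller -> reduce the FOV so it grows.
    area_error = measured["area_ratio"] - 1.0
    if abs(area_error) > second_threshold:
        model.fov += 1.0 if area_error > 0 else -1.0

    # Imaging-surface position: shift the plane against the measured offset so
    # the projected image moves back onto the calibration object.
    dx, dy = measured["offset_xy"]
    if (dx ** 2 + dy ** 2) ** 0.5 > third_threshold:
        model.plane_xy[0] -= 0.5 * dx
        model.plane_xy[1] -= 0.5 * dy
    return model
```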
  • Optionally, the degree of coincidence between the calibration object and the projection surface of the calibration object is determined by the pixel offset between the calibration object and the projection surface of the calibration object; the pixel offset is determined from an image, collected by a camera, that contains both the calibration object and the projection surface of the calibration object.
  • Specifically, a camera can be placed at the position of the user's eyes to simulate human-eye observation. The camera photographs the calibration object and the projection surface of the calibration object to generate one or more images, and the pixel offset between the calibration object and the projection surface of the calibration object is determined from these images, which in turn determines the coincidence degree between the two.
  • This improves the accuracy of measuring the coincidence degree between the calibration object and the projection surface of the calibration object, presents it intuitively in the form of data, and avoids errors introduced by human-eye observation.
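  • A minimal sketch of how such a pixel offset and coincidence degree might be computed from the camera image, assuming the calibration object and its projected display have already been segmented into binary masks (the segmentation step and the overlap-based percentage are assumptions for illustration, not definitions from this application):

```python
import numpy as np

def pixel_offset_and_coincidence(obj_mask: np.ndarray, proj_mask: np.ndarray):
    """Centroid pixel offset and a coincidence percentage from two binary masks.

    obj_mask, proj_mask: boolean arrays of the same shape marking the calibration
    object and its projected display in the image captured by the camera.
    """
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    offset = centroid(proj_mask) - centroid(obj_mask)      # (dx, dy) in pixels

    # Coincidence expressed as overlap area / union area, as a percentage.
    inter = np.logical_and(obj_mask, proj_mask).sum()
    union = np.logical_or(obj_mask, proj_mask).sum()
    coincidence = 100.0 * inter / union if union else 0.0
    return offset, coincidence
```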
  • Optionally, the imaging model is trained on a training set including multiple training samples, where each training sample includes human eye position information parameters, image information and position information parameters of a calibration object, and a coincidence degree parameter of the calibration object and the projection surface of the calibration object.
  • Specifically, a neural network or deep learning method can be used to train the imaging model on a training set composed of multiple training samples.
  • The human eye position information parameters and the image information and position information parameters of the calibration object are used as input, and the coincidence degree parameter of the calibration object and the projection surface of the calibration object is used as output, together forming one training sample.
  • Training in this way improves the coincidence degree between the calibration object and the projection surface of the calibration object, gives the imaging model a wider range of applicability, and allows it to be continuously optimized through deep learning to meet the experience of different users.
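  • This application does not specify a network architecture; purely as an illustration, the sample structure described above could be fed into a small regression network such as the PyTorch sketch below, where the feature layout and dimensions are assumptions.

```python
import torch
import torch.nn as nn

# One training sample (assumed layout): eye position (3 values), calibration-object
# position (3 values), simple image descriptors (4 values) -> coincidence degree (1).
class ImagingModelNet(nn.Module):
    def __init__(self, in_dim: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),          # predicted coincidence degree
        )

    def forward(self, x):
        return self.net(x)

def train(model, samples, targets, epochs: int = 200, lr: float = 1e-3):
    """samples: (N, in_dim) float tensor; targets: (N, 1) coincidence values."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(samples), targets)
        loss.backward()
        opt.step()
    return model
```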
  • Optionally, upon obtaining the user's calibration request, a prompt message indicating that calibration has started is sent to the user.
  • The user's eye position is then obtained, and the parameters of the imaging model are calibrated according to the user's eye position.
  • After the calibration is completed, a prompt message indicating that calibration is complete is sent to the user.
  • Specifically, this method can calibrate the parameters of the imaging model automatically, according to the position of the user's eyes, without the user noticing; it can also guide the user to initiate calibration through human-computer interaction, using voice prompts and display prompts.
  • After the calibration of the parameters of the imaging model is completed, a prompt message indicating completion is sent to the user, improving the user experience.
  • the parameters of the calibrated imaging model are adjusted according to the user's adjustment instruction.
  • this method can calibrate the parameters of the imaging model according to the position of the user's human eyes, so that the coincidence degree between the calibration object and the projection surface of the calibration object reaches the preset threshold.
  • On that basis, the parameters of the imaging model can be further adjusted according to the user's subjective experience, customizing the projection display to meet the user's needs.
  • a second aspect of the present application provides a projection device, comprising:
  • An acquisition module configured to acquire image information and position information of calibration objects
  • a projection module configured to project the calibration object according to the image information and position information of the calibration object, and an imaging model
  • An adjustment module configured to adjust the parameters of the imaging model when the overlap between the calibration object and the projection plane of the calibration object is less than a first threshold.
  • When adjusting the parameters of the imaging model, the adjustment module is specifically configured to: adjust one or more of the imaging model's field of view angle and imaging surface position.
  • The adjustment module is specifically configured to: adjust the field of view angle of the imaging model when the area difference between the calibration object and the projection surface of the calibration object is greater than the second threshold.
  • The adjustment module is also specifically configured to: adjust the two-dimensional position of the imaging surface of the imaging model when the offset between the calibration object and the projection surface of the calibration object is greater than the third threshold.
  • Optionally, the degree of coincidence between the calibration object and the projection surface of the calibration object is determined by the pixel offset between the calibration object and the projection surface of the calibration object; the pixel offset is determined from images, collected by a camera, that contain the calibration object and the projection surface of the calibration object.
  • Optionally, the imaging model is trained on a training set including a plurality of training samples, where the training samples include human eye position information parameters, image information and position information parameters of calibration objects, and the coincidence degree parameter of the calibration object and the projection surface of the calibration object.
  • a prompt module configured to send a prompt message of calibration start to the user when obtaining the calibration requirement of the user
  • the adjustment module is also used for calibrating the parameters of the imaging model according to the obtained position of the user's eyes;
  • the prompt module is also used to send a prompt message of calibration completion to the user after the calibration is completed.
  • the prompt module is also used to prompt the user to determine whether the calibration object coincides with the projection surface of the calibration object through human eyes;
  • the adjustment module is also used to adjust the parameters of the calibrated imaging model according to the user's adjustment instruction when the calibration object does not coincide with the projection plane of the calibration object.
  • the third aspect of the present application provides a system, including:
  • the system further includes: a storage device, configured to store the imaging model and a training set of the imaging model; and a communication device, configured to realize communication and interaction between the storage device and the cloud.
  • the system is a vehicle.
  • A fourth aspect of the present application provides a computing device, including a processor and a memory on which program instructions are stored; when the program instructions are executed by the processor, the processor executes the projection method in the various technical solutions provided by the first aspect and the optional implementations described above.
  • the computing device is one of AR-HUD and HUD.
  • the computing device is a car.
  • the computing device is one of an in-vehicle machine (head unit) and an on-board computer.
  • A fifth aspect of the present application provides a computer-readable storage medium on which program code is stored; when the program code is executed by a computer or a processor, the computer or processor executes the projection method in the various technical solutions provided by the first aspect and the optional implementations described above.
  • A sixth aspect of the present application provides a computer program product. When the program code contained in the computer program product is executed by a computer or a processor, the computer or processor executes the projection method in the various technical solutions provided by the first aspect and the optional implementations described above.
  • The first threshold, the second threshold, and the third threshold mentioned above are not mutually exclusive and can be used in combination. Each threshold can be a decimal or a relative ratio, such as a percentage.
  • In the critical state, when a measured value exactly equals a threshold, it can either be considered that the threshold condition is satisfied and the corresponding follow-up operation is performed, or that the threshold condition is not satisfied and the corresponding follow-up operation is not performed.
  • The projection method and device, vehicle, and AR-HUD provided by this application acquire the image information and position information of a calibration object, project and display the calibration object according to an imaging model, and adjust the parameters of the imaging model to improve the coincidence between the calibration object and the projection surface of the calibration object, thereby improving the projection display effect.
  • The imaging model can generate a two-dimensional image of the calibration object on its imaging surface based on the acquired eye position information of the user and the image information and position information of the calibration object, and project and display it through the projection device; the coincidence degree between the calibration object and the projection surface of the calibration object can be used to evaluate the accuracy and stability of the imaging model.
  • The imaging model can also be trained by means of a neural network or deep learning, so that its accuracy and stability are continuously optimized and it adapts to the eye-position changes of different users. With the rapid development of 5G technology and smart cars, the imaging model can further be optimized and trained through interaction with the cloud, making it suitable for the projection devices of different vehicles: one or more parameters of the imaging model can be adjusted automatically according to the hardware parameters of different vehicle projection devices, meeting the customized needs of different users.
  • Figure 1 is a schematic diagram of the imaging of the existing AR-HUD in use
  • FIG. 2 is a schematic diagram of an application scenario of the projection method provided by the embodiment of the present application.
  • FIG. 3 is a schematic diagram of another application scenario of the projection method provided by the embodiment of the present application.
  • FIG. 4 is a flowchart of a projection method provided by an embodiment of the present application.
  • Fig. 5 is a flow chart of a calibration method provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of the system architecture of the AR-HUD provided by the embodiment of the present application.
  • FIG. 7 is a flowchart of an AR-HUD projection method provided by an embodiment of the present application.
  • FIG. 8A is a schematic diagram of the imaging frustum provided by the embodiment of the present application.
  • FIG. 8B is a schematic diagram of space conversion from the imaging frustum to the AR-HUD provided by the embodiment of the present application.
  • FIG. 9A is a schematic diagram of a head-up view of a virtual human eye and an imaging frustum in a virtual coordinate system provided by an embodiment of the present application;
  • FIG. 9B is a schematic top view of the composition of the human eye and the virtual image plane of the AR-HUD in the real coordinate system provided by the embodiment of the present application;
  • FIG. 10A is a schematic diagram of the vertical offset between the target frame and the calibration plate displayed on the virtual image plane of the AR-HUD provided by the embodiment of the present application;
  • FIG. 10B is a schematic diagram of the horizontal offset between the target frame and the calibration plate displayed on the virtual image plane of the AR-HUD provided by the embodiment of the present application;
  • FIG. 11 is a structural diagram of a projection device provided by an embodiment of the present application.
  • FIG. 12A is a schematic diagram of a human-computer interaction interface according to an embodiment of the present application.
  • FIG. 12B is a schematic diagram of another human-computer interaction interface according to the embodiment of the present application.
  • FIG. 13 is a structural diagram of a computing device according to an embodiment of the present application.
  • The technical solutions provided by the embodiments of the present application include a projection method, a device, a vehicle, and an AR-HUD. Since these technical solutions solve problems based on the same or similar principles, some repeated descriptions may be omitted in the introduction of the following specific embodiments; the specific embodiments should be understood as referring to one another and as combinable with one another.
  • the head-up display device is usually installed in the car cockpit.
  • The projected display information enters the user's eyes after being reflected by the front windshield and is presented in front of the vehicle, so that the displayed information is fused with the real-world environment to form an augmented reality display effect.
  • In a related approach, a camera coordinate system and a human eye coordinate system are constructed and the correspondence between them is determined; the augmented reality display image is then determined according to the image information captured by the vehicle camera and the correspondence between the camera coordinate system and the human eye coordinate system, and projection display is performed according to the mapping relationship between the augmented reality display image and the HUD image.
  • However, this approach requires calibrating the conversion relationship between the human eye coordinate system and the camera coordinate system in real time, which involves a large amount of calculation and makes the task highly complex.
  • In view of this, the embodiments of the present application provide a projection method and device, a vehicle, and an AR-HUD, which can adjust the projection display effect in real time according to changes in the position of the user's eyes, so that the projected AR display image is always aligned with the real world, improving the projection display effect.
  • the user is usually a driver.
  • the user may also be a co-pilot passenger or a rear passenger, etc.
  • For the driver, the HUD device at the driver's position can be adjusted according to the position of the driver's eyes, so that the AR display image seen by the driver is aligned with the real world ahead; the AR display image can be navigation information, vehicle speed information, or other road prompt information.
  • Similarly, the HUD device at the front passenger seat can be adjusted according to the eye position of the front passenger, so that the AR display image seen by that passenger is also aligned with the world ahead.
  • Figures 2 and 3 show schematic diagrams of application scenarios of the projection method provided by the embodiments of the present application. Referring to Figures 2 and 3, the application scenario of this embodiment relates to a vehicle; the vehicle 1 has a collection device 10, a projection device 20, and a display device 30.
  • the acquisition device 10 may include an acquisition device outside the vehicle and an acquisition device inside the vehicle.
  • The acquisition device outside the vehicle may be a lidar, a vehicle-mounted camera, or another device or combination of devices with image-acquisition or optical-scanning functions. It may be installed inside or outside the vehicle, for example on the roof of the vehicle, at the front of the vehicle, or on the side of the cockpit rear-view mirror facing the outside of the vehicle, and it is mainly used to detect and collect image information and position information of the environment in front of the vehicle.
  • The environment in front of the vehicle can include relevant information such as vehicles ahead, obstacles, and road signs. In specific implementations, the locations of these devices can be set according to requirements.
  • The acquisition device inside the vehicle may be, for example, a vehicle-mounted camera or a human-eye detector. It can be installed on the A-pillar or B-pillar of the vehicle cockpit, on the side of the cockpit rear-view mirror facing the user, on the steering wheel, near the center console, or above a display screen behind a seat, and is mainly used to detect and collect the eye position information of the driver or a passenger in the vehicle cockpit. There may be one collection device in the vehicle or multiple collection devices; the application does not limit their location or quantity.
  • The projection device 20 can be a HUD, an AR-HUD, or another device with a projection function, and can be installed above the center console of the vehicle cockpit or inside the center console. It usually includes a projector, a reflector, a projection mirror, an adjustment motor, and a control unit. The control unit is an electronic device; specifically, it may be a conventional chip processor such as a central processing unit (CPU) or a microcontroller unit (MCU), or it may be terminal hardware such as a mobile phone or a tablet.
  • The control unit communicates with the acquisition device 10 and the display device 30, respectively.
  • An imaging model can be preset in the control unit, or a preset imaging model can be acquired from other components of the vehicle. The parameters of the imaging model are correlated with the human eye position information collected by the acquisition device inside the vehicle, and these parameters can be calibrated according to the eye position information; a projection image is then generated according to the environmental information collected by the acquisition device outside the vehicle and output by the projector.
  • the projected image may include an augmented reality display image generated according to environmental information, and may also include images such as vehicle speed and navigation.
  • The display device 30 can be the front windshield of the vehicle or an independent transparent display screen, and is used to reflect the image light emitted by the projection device into the user's eyes, so that when the driver looks out of the vehicle through the display device 30 he or she can see a virtual image with a depth-of-field effect that overlaps with the real-world environment, presenting an augmented reality display effect to the user.
  • The collection device 10, the projection device 20, and other devices can exchange data through wired communication or wireless communication (such as Bluetooth or Wi-Fi). For example, after the collection device 10 collects image information, it can transmit this image information to the projection device 20.
  • the projection device 20 may send a control signal to the collection device 10 through Bluetooth communication, and adjust the collection parameters of the collection device 10, such as the shooting angle.
  • the data processing can be completed in the projection device 20 , can also be completed in the collection device 10 , and can also be completed in other processing devices, such as in-vehicle machines, in-vehicle computers and other equipment.
  • In this way, the vehicle can realize an augmented reality display effect based on real-world environmental information and can adjust the generated projection image according to the user's eye position information, so that the projected augmented reality display image always overlaps with the real-world environment, improving the user's immersive viewing experience.
  • Fig. 4 shows a flow chart of a projection method provided by an embodiment of the present application.
  • the projection method can be executed by a projection device or some components in the projection device, for example, AR-HUD, HUD, car, processor, etc.
  • With it, functions such as calibration and projection display of the above-mentioned projection device, or of some components in the projection device, can be realized; the application process can take place when the vehicle is stationary after starting, or while the vehicle is driving.
  • the projection method includes:
  • S401: Acquire image information and position information of a calibration object.
  • The calibration object can be a static object outside the vehicle, such as a stationary vehicle, a tree, a traffic sign, or a calibration plate with a regular geometric shape, or a dynamic object outside the vehicle, such as a moving vehicle or a walking pedestrian.
  • the processor can obtain the image information and position information of the calibration object collected by the acquisition device through the interface circuit, wherein the image information can be the image collected by the camera, or the point cloud data collected by the laser radar or other forms of information.
  • The image information may also include information such as resolution, dimensions, size, and color; the position information can be coordinate data, direction information, or information in other forms.
  • the processor may be a processor of a projection device, or a processor of a vehicle-mounted processing device such as a vehicle machine or a vehicle-mounted computer.
  • S402: Project the calibration object according to the image information and position information of the calibration object and the imaging model.
  • the processor can generate a calibration image corresponding to the calibration object in the imaging model, and project and output it through the interface circuit.
  • The imaging model can be constructed based on parameters such as the position of the human eye, the position of the HUD, the field of view (FOV) of the HUD, the projection surface (virtual image surface) of the HUD, the display resolution of the HUD, and the look-down angle from the human eye to the HUD. The constructed imaging model includes parameters such as the origin, the field of view, the near plane (imaging plane), and the far plane.
  • the imaging model can be in the form of an imaging cone, an imaging cylinder, or an imaging cube.
  • The origin can be determined according to the position of the human eye;
  • the field of view of the imaging frustum can be determined according to the field of view of the HUD;
  • the near plane can be used as the imaging plane during imaging;
  • the far plane can be determined according to the farthest viewing distance of the human eye.
  • The processor can generate a two-dimensional image corresponding to the calibration object on the imaging surface of the imaging model according to the acquired image information and position information of the calibration object, and when performing projection, the imaging surface of the imaging model can be used as the complete projection image for projection display.
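  • As an illustration of this construction, the sketch below builds a simple imaging frustum from the eye position and the HUD field of view and perspective-projects a 3D calibration-object point onto the near plane. The symmetric-frustum geometry and the parameter names are assumptions made for illustration, not definitions from this application.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ImagingFrustum:
    origin: np.ndarray          # human eye position (3,), apex of the frustum
    fov_deg: float              # vertical field of view taken from the HUD FOV
    aspect: float               # imaging-plane width / height
    near: float                 # distance from the origin to the near (imaging) plane
    far: float                  # farthest viewing distance of the human eye
    plane_shift: np.ndarray = field(default_factory=lambda: np.zeros(2))  # 2D imaging-plane offset

    def project_to_near_plane(self, point: np.ndarray) -> np.ndarray:
        """Map a 3D point (frustum frame, looking along +z) to normalized
        2D coordinates on the near plane, roughly in [-1, 1] per axis."""
        p = np.asarray(point, dtype=float) - self.origin
        half_h = np.tan(np.radians(self.fov_deg) / 2.0)     # half extent per unit depth
        half_w = half_h * self.aspect
        x = p[0] / (p[2] * half_w) - self.plane_shift[0]
        y = p[1] / (p[2] * half_h) - self.plane_shift[1]
        return np.array([x, y])
```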
  • S403: When the coincidence degree between the calibration object and the projection surface of the calibration object is lower than a first threshold, adjust the parameters of the imaging model.
  • The degree of coincidence between the calibration object and the projection surface of the calibration object can be determined through observation by the user's own eyes.
  • In that case, the first threshold may not be a specific value but the user's subjective experience, for example whether the two appear to coincide, and subsequent adjustments are made based on the user's feedback.
  • Alternatively, the degree of coincidence between the calibration object and the projection surface of the calibration object can be determined from information obtained by the acquisition device, for example according to the pixel offset between the calibration object and the projection surface of the calibration object. A camera can be placed at a position simulating the user's eyes to capture one or more images containing both the calibration object and the projection surface of the calibration object, and the pixel offset between them can then be determined according to the resolution of the images.
  • From the pixel offset, the coincidence degree between the calibration object and the projection surface of the calibration object can be calculated.
  • The calculated coincidence degree can be expressed as a percentage; in that case the first threshold is also a percentage value, and whether to adjust the parameters of the imaging model is determined by comparing the coincidence degree with the first threshold. It should be understood that the coincidence degree may also be a decimal or take other forms, which is not limited in this application.
  • the processor of the projection device may adjust the parameters of the imaging model to improve the coincidence degree of the calibration object and the projection plane of the calibration object .
  • the adjustable parameters of the imaging model include one or more parameters in the field of view and the position of the imaging surface.
  • The field of view parameter determines the size of the imaging surface of the imaging model and the scale of the two-dimensional image of the calibration object relative to the imaging surface, and the imaging surface position parameter of the imaging model determines the position of the two-dimensional image of the calibration object relative to the imaging surface.
  • Optionally, step S403 includes: when the area difference between the calibration object and the projection surface of the calibration object is greater than the second threshold, adjusting the field of view angle of the imaging model; and when the offset between the calibration object and the projection surface of the calibration object is greater than the third threshold, adjusting the two-dimensional position of the imaging surface of the imaging model.
  • the above-mentioned first threshold, second threshold and third threshold can be preset and adjusted according to user needs or industry standards.
  • When the area difference between the calibration object and the projection surface of the calibration object is greater than the preset second threshold, the area of the imaging surface can be adjusted by adjusting the field of view of the imaging model.
  • For example, when the area of the projection surface of the calibration object is larger than the area of the calibration object, the field of view of the imaging model can be enlarged; the imaging surface then enlarges proportionally and the proportion of the generated two-dimensional image of the calibration object within the imaging surface shrinks proportionally, so the projection surface of the calibration object displayed by projection also shrinks relative to the calibration object, until the area difference between the projection surface of the calibration object and the calibration object is smaller than the preset second threshold.
  • Similarly, when the area of the projection surface of the calibration object is smaller than the area of the calibration object, the field of view of the imaging model can be reduced; the imaging surface then shrinks proportionally and the generated two-dimensional image of the calibration object occupies a proportionally larger share of the imaging surface, so the projection surface of the calibration object displayed by projection is enlarged relative to the calibration object, until the area difference between the projection surface of the calibration object and the calibration object is smaller than the preset second threshold.
  • When the offset between the calibration object and the projection surface of the calibration object is greater than the preset third threshold, the two-dimensional position of the imaging surface of the imaging model can be adjusted. The two-dimensional position refers to the up-down and left-right position of the imaging surface on the two-dimensional plane of the imaging model, and adjusting it correspondingly changes the relative position of the generated two-dimensional image of the calibration object within the imaging surface.
  • For example, when the two-dimensional position of the imaging surface of the imaging model is moved upward, the position of the two-dimensional image of the calibration object on the imaging surface moves downward correspondingly; when the two-dimensional position of the imaging surface is moved to the left, the position of the two-dimensional image of the calibration object on the imaging surface moves to the right correspondingly. By adjusting the two-dimensional position of the imaging surface of the imaging model, the offset between the calibration object and the projection surface of the calibration object can be made smaller than the preset third threshold.
  • the above area difference, coincidence degree, etc. are some exemplary comparison parameters, which can be used in combination or replaced by each other, and can also be replaced by other similar comparison parameters, for example, size difference.
  • The main purpose is to determine the image difference between the calibration object collected by the current acquisition device and the projected calibration object, in order to adjust the imaging parameters or the imaging model.
  • the imaging model constructed in this embodiment can also be realized by a neural network model or a deep learning model.
  • the imaging model may be trained using a training set composed of multiple training samples.
  • a training sample can be composed of human eye position information parameters, image information and position information parameters of the calibration object as input, and coincidence degree parameters of the calibration object and the projection surface of the calibration object as output.
  • With a set coincidence degree threshold as the target (label), multiple training samples are introduced and the imaging model is trained over multiple iterations until the result approaches the target, yielding the corresponding imaging model.
  • In this way, the coincidence degree between the calibration object and the projection surface of the calibration object can meet the requirements, and as the imaging model is used it can be continuously optimized through deep learning, so that its projection effect keeps improving and its range of applicability widens, meeting the experience of different users.
  • the projection method provided in this embodiment can automatically realize the parameter calibration of the imaging model according to the position of the user's human eyes, thereby realizing the adjustment of the projection display effect.
  • this projection method can not only be applied to the projection of the driver's position, but also can be applied to the projection of the co-pilot passenger's position or the rear passenger's position, such as the projection of audio-visual entertainment content.
  • the projection method of the present application can also guide the user to realize the calibration of the projection display.
  • For example, a calibration request or a prompt message indicating the start of calibration can be sent to the user; the user's eye position is then obtained through the in-vehicle camera or the human-eye detector, the parameters of the imaging model are calibrated according to the user's eye position, and a notification of calibration completion is sent to the user when the calibration is completed.
  • The calibration process can guide the user through the vehicle's human-machine interface (Human Machine Interface, HMI), and can also be carried out with the help of the driver monitoring system (Driver Monitor System, DMS).
  • The prompt messages can be voice prompts, graphic and text prompts on the vehicle's central control screen, and so on, so that the user experiences the calibration process intuitively.
  • For example, the calibration function of the projection device can be turned on automatically, and the prompt message "The vehicle has activated the calibration of the projection device, please maintain a correct sitting posture" is displayed on the central control screen shown in Figure 12A. Then, by acquiring the position of the user's eyes, the parameters of the imaging model of the projection device are calibrated, and after the calibration is completed, the prompt message "The vehicle has completed the calibration of the projection device" is displayed on the central control screen shown in Figure 12B.
  • the user can also adjust the parameters of the imaging model on the central control screen of the vehicle according to personal subjective experience.
  • the calibration process can also be realized through voice interaction, the vehicle can send voice prompts to the user through the sound system, and obtain the user's voice feedback through the microphone, so as to realize the calibration process.
  • The projection method provided by the embodiments of the present application can realize the calibration and projection display functions of the above-mentioned projection device; the application process can take place when the vehicle is stationary after starting, or while the vehicle is driving.
  • Fig. 5 shows a flow chart of a calibration method provided by the embodiment of the present application.
  • This calibration method can be implemented when the vehicle is stationary and starts, and specifically involves the construction process and adjustment process of the imaging model.
  • the model can automatically calibrate parameters for different users' eye positions, so that the projected image is always integrated with the real-world environment information.
  • the projection device may be an AR-HUD
  • the imaging model may be an imaging frustum
  • the user may be the driver of the vehicle.
  • the verification of the calibration method may use the driver's human eyes as a verification method.
  • the calibration method shown in Figure 5 includes:
  • S501 Construct a virtual imaging frustum with the driver's eye as the origin;
  • the AR-HUD in the car or other fixed points in the car can be selected as the origin to construct the real coordinate system and the virtual coordinate system, and the corresponding relationship between the virtual coordinate system and the real coordinate system can be determined.
  • the real coordinate system is the coordinate system of the real three-dimensional space, which is used to determine the real position of the human eye, the virtual image surface of AR-HUD and the calibration object in the real world
  • the virtual coordinate system is the coordinate system of the virtual three-dimensional space. It is used to determine the virtual position of the human eye in the real world, the virtual image surface of the AR-HUD and the calibration object, so as to facilitate the drawing of the 3D AR effect.
  • the human eyes are generally not selected as the origin for constructing the real coordinate system and the virtual coordinate system.
  • Information such as the detected human eye, the calibration object, and the installation position and projection angle of the AR-HUD is introduced into the real coordinate system, from which the position of the human eye, the position of the virtual image surface of the AR-HUD, and the position of the calibration object can each be obtained.
  • Each position can specifically be a set of three-dimensional coordinates in the real coordinate system.
  • the virtual image surface of AR-HUD is the virtual image plane that human eyes can see through the windshield of the car. Through the observation of human eyes, the two-dimensional image displayed on the virtual image surface can be mapped to the three-dimensional real world.
  • the selected calibration object can be an object with a regular geometric shape.
  • It can be a quadrilateral calibration board.
  • the calibration image generated based on the calibration board can be specifically a quadrilateral virtual frame.
  • Then the position of the human eye in the virtual coordinate system is obtained.
  • With the position of the human eye as the origin, the imaging frustum is constructed according to a set viewing angle, such that the calibration object lies within the range of the imaging frustum.
  • The constructed imaging frustum may specifically be a head-up (level-view) imaging frustum, that is, the origin of the imaging frustum lies on a horizontal line with the center points of its near plane and far plane.
  • The imaging frustum can also be a look-down imaging frustum, that is, the origin of the imaging frustum is higher than the center points of the near plane and the far plane, so that the origin connects to the near plane and the far plane at a downward viewing angle, and these planes together form the viewing frustum.
  • An appropriate viewing angle can be selected to construct the imaging frustum with the position of the human eye in the virtual coordinate system as the origin, so that all objects within the range of the frustum can be drawn with augmented reality (AR) effects, such as complex effects for lane lines and traffic signs.
  • S502 Generate a calibration image of the calibration object on the imaging surface of the imaging viewing frustum according to the position of the calibration object located outside the vehicle in the imaging viewing frustum;
  • Specifically, the calibration object in the real coordinate system is converted to the virtual coordinate system, giving the position of the calibration object in the virtual coordinate system, where the calibration object lies within the imaging frustum constructed in the virtual coordinate system.
  • According to the position of the calibration object in the imaging frustum and the origin of the imaging frustum, and based on the forward-mapping imaging principle of the frustum, a near plane located between the calibration object and the origin of the imaging frustum is selected as the imaging plane; frustum (perspective) mapping is then performed onto the imaging plane according to the distance relationship between the calibration object and the imaging plane, generating a calibration image of the calibration object, where the calibration image is a two-dimensional image.
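  • A rough sketch of this step, assuming the real-to-virtual conversion is a known rigid transform between the two coordinate systems anchored at the same fixed in-car origin (the transform and helper names below are illustrative assumptions):

```python
import numpy as np

def real_to_virtual(p_real: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert a 3D point from the real coordinate system to the virtual one,
    where R (3x3) and t (3,) encode the assumed correspondence between them."""
    return R @ p_real + t

def map_to_imaging_plane(p_virtual: np.ndarray, eye_origin: np.ndarray, near: float) -> np.ndarray:
    """Perspective-map a virtual-space point onto the near (imaging) plane.

    The frustum looks along +z from eye_origin; the imaging plane sits at
    depth 'near' in front of the origin. Returns (x, y) on that plane."""
    d = p_virtual - eye_origin
    scale = near / d[2]                 # similar-triangles perspective mapping
    return np.array([d[0] * scale, d[1] * scale])
```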
  • S503 Project the imaging plane containing the calibration image onto a virtual image plane of an augmented reality head-up display AR-HUD for display;
  • When the imaging surface of the imaging frustum is used as the input image of the AR-HUD and is projected onto the virtual image surface of the AR-HUD for display, the calibration image is displayed at the position on the virtual image surface corresponding to its position on the imaging surface, so that the generated calibration image is projected onto the calibration object in the real world and is mapped into the three-dimensional world through the observation angle of the human eye, achieving the augmented display.
  • It should be noted that when the imaging surface of the imaging frustum is input into the AR-HUD as the input image, the AR-HUD crops the received input image according to the limits of the image it can display, and shows a picture of suitable size on its virtual image plane.
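  • For illustration only, this cropping step can be thought of as a simple center crop of the rendered imaging-surface image down to the HUD's displayable resolution; the resolution values and the choice of a centered crop are assumptions.

```python
import numpy as np

def crop_to_hud(input_image: np.ndarray, hud_w: int = 1280, hud_h: int = 480) -> np.ndarray:
    """Center-crop the rendered imaging-surface image to the HUD's displayable
    resolution (hud_w x hud_h are illustrative values, not taken from the text)."""
    h, w = input_image.shape[:2]
    x0 = max((w - hud_w) // 2, 0)
    y0 = max((h - hud_h) // 2, 0)
    return input_image[y0:y0 + hud_h, x0:x0 + hud_w]
```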
  • the alignment effect of the calibration image and the calibration object on the virtual image plane of the AR-HUD is directly verified by human eyes, and the alignment effect may specifically include scale alignment and position alignment.
  • If the scales are not aligned, this embodiment can adjust the imaging frustum to change the scale of the imaging surface. Since the relative distance between the imaging surface and the origin of the imaging frustum does not change, the scale of the calibration image generated on the imaging surface itself does not change, but its scale relative to the imaging surface does.
  • When the scale-adjusted imaging surface is re-input into the AR-HUD as the input image and projected onto the virtual image surface of the AR-HUD for display, the scale of the calibration image on the virtual image surface of the AR-HUD changes accordingly.
  • If the positions are not aligned, this embodiment can adjust the two-dimensional position of the imaging surface of the imaging frustum in the virtual coordinate system. Since the position of the target object in the virtual coordinate system does not change, when the two-dimensional position of the imaging surface changes, the relative position of the generated calibration image on the imaging surface adapts accordingly; when the adjusted imaging surface is re-input into the AR-HUD as the input image and projected onto the virtual image surface of the AR-HUD for display, the relative position of the calibration image on the virtual image plane also changes accordingly.
  • After calibration, the adjusted imaging frustum retains its correspondence with the position of the human eye: under this correspondence, when the position of the human eye changes, the origin of the imaging frustum changes as well, and the position of the imaging surface of the imaging frustum is adjusted accordingly based on the two-dimensional offset described above.
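  • A minimal sketch of maintaining that correspondence, reusing the frustum object from the earlier sketch: when the eye-detection module reports a new eye position, the frustum origin is moved while the two-dimensional imaging-surface offset found during calibration is kept (the function and variable names are illustrative assumptions):

```python
import numpy as np

def update_frustum_for_eye(frustum, new_eye_pos, calibrated_plane_shift):
    """Follow the detected eye position while keeping the two-dimensional
    imaging-surface offset determined during calibration."""
    frustum.origin = np.asarray(new_eye_pos, dtype=float)
    frustum.plane_shift = np.asarray(calibrated_plane_shift, dtype=float)
    return frustum

# Per frame (illustrative): move the origin to the newly detected eye position,
# then re-project detected road objects through the updated frustum.
# frustum = update_frustum_for_eye(frustum, eye_detector_output, calibrated_shift)
# uv = frustum.project_to_near_plane(detected_object_position)
```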
  • The adjusted imaging frustum can also be used to generate, in real time, calibration images of real-world objects detected while driving and to display them on the virtual image plane of the AR-HUD in real time, thereby enhancing the driver's acquisition of road information and achieving an immersive experience.
  • the embodiment of the present application also provides an AR-HUD projection method.
  • The goal of this method is to make the AR effect projected by the AR-HUD, as observed by the human eye, align with the real world.
  • To achieve this goal, this embodiment uses the human eye as the direct verification means: by constructing a virtual imaging model corresponding to the real human-eye imaging model, the AR-HUD display picture is calibrated for scale alignment and position alignment with the real world.
  • This embodiment also obtains the position of the human eye in real time through the eye detection module, so that the AR-HUD display picture adapts in real time to changes in eye position, ensuring that the display picture remains aligned with the real world and guaranteeing the AR-HUD's display effect and immersive experience.
  • the system architecture of this embodiment includes a road detection module 601, an AR module 602, a HUD module 603, and a human eye detection module 604; wherein, the HUD Module 603 specifically includes an alignment module 6031 and a display module 6032;
  • The road detection module 601 may be an out-of-vehicle acquisition device as shown in FIG. 2, such as a lidar, a vehicle-mounted camera, or one or more other devices with image acquisition or optical scanning functions, and may be arranged on the roof or front of the vehicle or on the outward-facing side of the cockpit rear-view mirror. It is mainly used to detect and collect image information and position information of the environment in front of the vehicle, which may include the vehicle ahead, obstacles, road signs and other relevant information.
  • The human eye detection module 604 may be an in-vehicle acquisition device as shown in FIG. 2, such as a vehicle-mounted camera or an eye detector, and may be arranged on the A-pillar or B-pillar of the cockpit or on the user-facing side of the cockpit rear-view mirror. It is mainly used to detect and collect the eye position information of the driver or a passenger in the cockpit.
  • The AR module 602 and the HUD module 603 can be integrated in the projection device 20 and realized as a complete AR-HUD terminal product.
  • During driving, the road detection module 601 obtains environmental information on the road, such as the three-dimensional coordinates of pedestrians and lanes and the positions of lane lines. The detected environmental information is passed to the AR module 602, which constructs a three-dimensional virtual coordinate system, draws the three-dimensional AR effect at the position corresponding to the environmental information, and maps the three-dimensional AR effect into a two-dimensional image.
  • After the two-dimensional mapping is completed, combined with the eye position detected in real time by the eye detection module 604, the alignment module 6031 in the HUD module 603 completes the scale alignment and position alignment between the two-dimensional image and the environmental information; finally, the aligned two-dimensional image is input to the display module 6032 for projection display. Within the effective projection display range of the AR-HUD, the projected two-dimensional image then remains fully aligned with the environmental information on the road, no matter how the eye position changes (a pipeline sketch follows).
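The data flow between the four modules can be summarised with a small pipeline sketch. All class and function names below are illustrative placeholders, not identifiers from the patent; the AR drawing and alignment steps are reduced to stubs.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentInfo:          # output of the road detection module 601
    objects_xyz: list           # 3D coordinates of pedestrians, lanes, lane lines, ...

@dataclass
class EyePosition:              # output of the human eye detection module 604
    x: float
    y: float
    z: float

def ar_module_draw(env: EnvironmentInfo):
    """AR module 602: draw 3D AR effects at the detected positions in the
    virtual coordinate system and map them to a 2D image (stubbed here)."""
    return {"image_2d": env.objects_xyz}

def hud_align(image_2d, eye: EyePosition):
    """Alignment module 6031: scale- and position-align the 2D image with the
    environment according to the current eye position (stubbed here)."""
    return image_2d

def hud_display(aligned_image):
    """Display module 6032: project the aligned image onto the virtual image plane."""
    print("projecting", aligned_image)

# One frame of the driving-time loop described above
env = EnvironmentInfo(objects_xyz=[(3.0, 0.0, 25.0)])
eye = EyePosition(x=0.0, y=1.2, z=0.0)
hud_display(hud_align(ar_module_draw(env), eye))
```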
  • Based on the system architecture shown in FIG. 6 and the flowchart shown in FIG. 7, the AR-HUD projection method provided in this embodiment is introduced in detail. The alignment between the AR-HUD and the real world achieved by this method runs through the entire driving process, and before driving starts, the alignment calibration between the AR-HUD and the real world can be carried out in advance.
  • the alignment calibration process specifically includes:
  • S701 Construct a real coordinate system and a virtual coordinate system with a certain point in space as the origin;
  • a certain point in the vehicle can be used as the origin, and a real coordinate system and a virtual coordinate system can be constructed at the same time.
  • the origins of the real coordinate system and the virtual coordinate system are the same and have a corresponding relationship.
  • a certain point in the car may be a camera in the car, or may be an AR-HUD in the car.
  • The real coordinate system is used to determine the three-dimensional coordinates of environmental information in the real world, and its unit may be meters; the unit of the virtual coordinate system may be pixels, where 1 meter in the real coordinate system corresponds proportionally to 1 unit in the virtual coordinate system (see the conversion sketch below).
  • According to the three-dimensional coordinates of the acquired environmental information in the real coordinate system and the correspondence between the real and virtual coordinate systems, the three-dimensional AR effect corresponding to the environmental information can be drawn in the virtual coordinate system and mapped into a two-dimensional image. The alignment calibration process in this embodiment is the alignment calibration of this two-dimensional image with the environmental information.
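Because the two coordinate systems share an origin and differ only by a fixed proportional relationship, converting a detected position is a pure scaling. The scale value used below (meters per pixel) is an assumed example, not a value from the patent.

```python
def real_to_virtual(point_m, meters_per_pixel):
    """Convert a 3D point given in meters (real coordinate system) into the virtual
    coordinate system, whose unit is pixels. Both systems share the same origin,
    so the conversion is a pure scaling."""
    return tuple(c / meters_per_pixel for c in point_m)

# Example: 1 pixel corresponds to an assumed 0.01 m
print(real_to_virtual((12.0, -1.5, 30.0), meters_per_pixel=0.01))  # (1200.0, -150.0, 3000.0)
```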
  • S702 Set a calibration plate at the position of the virtual image plane of the AR-HUD;
  • From the driver's eyes detected by the eye detection module, the position of the human eye in the real coordinate system can be obtained, and from the installation position and projection angle of the AR-HUD, the position of the AR-HUD's virtual image plane in the real coordinate system can be obtained.
  • The virtual image plane of the AR-HUD is the virtual-image display plane of the AR-HUD as observed through the driver's eyes; it is generally located 7-10 meters ahead of the driver's eyes towards the front of the vehicle. By observing the two-dimensional image on the virtual image plane, the driver maps that image onto the real world, achieving a three-dimensional display effect.
  • A calibration plate is set on the virtual image plane of the AR-HUD and serves as the calibration reference object in the alignment calibration process of this embodiment.
  • the calibration plate can specifically be a substrate with a regular geometric shape.
  • S703 Generate a target frame on the imaging surface of the virtual coordinate system, and project it onto the virtual image surface of the AR-HUD for display;
  • According to the eye position and the position of the AR-HUD virtual image plane in the real coordinate system obtained in step S702, and the correspondence between the real and virtual coordinate systems, a corresponding virtual human eye is determined in the virtual coordinate system.
  • Since the real coordinate system and the virtual coordinate system share the same origin, the position of the virtual human eye in the virtual coordinate system corresponds to the eye position in the real coordinate system, and the position of the imaging plane in the virtual coordinate system corresponds to the position of the AR-HUD virtual image plane in the real coordinate system; the imaging plane and the virtual image plane share the same correspondence as the real and virtual coordinate systems.
  • As shown in FIG. 8A, taking the virtual human eye as the origin and setting a field of view (FOV), a cone-shaped perspective projection model is constructed in the virtual coordinate system. The perspective projection model is specifically an imaging frustum, which realizes AR rendering of real-world environmental information and the two-dimensional mapping of the AR effect.
  • The virtual human eye is the origin of the imaging frustum, and the field of view determines the viewing range of the frustum.
  • By choosing a near plane of the imaging frustum as the imaging plane, this embodiment can select, according to the position of the AR-HUD virtual image plane in the real coordinate system, the near plane at the corresponding position of the frustum in the virtual coordinate system as the imaging plane, so that the position of the imaging plane in the virtual coordinate system corresponds to the position of the AR-HUD virtual image plane in the real coordinate system (a frustum sketch follows).
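A minimal description of such a frustum, consistent with the steps above, might look like the following sketch. The field names and numeric defaults (FOV, virtual-image-plane distance, far-plane value) are assumptions used only to make the example concrete.

```python
import math
from dataclasses import dataclass

@dataclass
class ImagingFrustum:
    origin: tuple        # virtual eye position in the virtual coordinate system
    fov_deg: float       # field of view, determines the frustum's viewing range
    near: float          # imaging-plane distance (matches the AR-HUD virtual image plane)
    far: float           # far-plane distance

    def imaging_plane_half_height(self):
        """Half-height of the imaging plane; the plane's scale is set purely by the
        FOV and the near distance."""
        return self.near * math.tan(math.radians(self.fov_deg) / 2.0)

# Example: virtual eye at the origin, 20-degree FOV, imaging plane placed where the
# AR-HUD virtual image plane sits (assumed 8 units), far plane effectively "infinite".
frustum = ImagingFrustum(origin=(0.0, 0.0, 0.0), fov_deg=20.0, near=8.0, far=1e6)
print(frustum.imaging_plane_half_height())
```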
  • As shown in FIG. 8A, the imaging frustum also has a far plane at an effectively infinite distance. According to the imaging principle of the frustum, any AR effect drawn within the field of view (FOV) of the frustum and located between the imaging plane and the far plane is mapped onto the imaging plane proportionally to its distance, in the form of cone mapping; that is, a two-dimensional image of the AR effect is generated on the imaging plane.
  • the imaging surface mapped with the two-dimensional image is sent to the AR-HUD as an input image.
  • the imaging surface has a corresponding projection relationship with the virtual image surface of the AR-HUD.
  • According to this projection relationship, the two-dimensional image on the imaging plane can be projected and displayed on the virtual image plane of the AR-HUD.
  • The drawing process in the imaging frustum and the projection of the two-dimensional image specifically apply a matrix transformation to the three-dimensional coordinates of the AR effect in the virtual coordinate system, converting them into coordinates in the real coordinate system.
  • The matrix transformation is:
  • S = P * V * O
  • where O is the three-dimensional coordinate of the AR effect drawn in the virtual coordinate system, V is the observation matrix of the virtual human eye in the virtual coordinate system, P is the mapping matrix of the imaging plane of the imaging frustum, and S is the coordinate on the virtual image plane of the HUD in the real coordinate system.
  • By mapping the AR effect drawn in the virtual coordinate system onto the imaging plane of the frustum as a two-dimensional image, and using that imaging plane as the input image of the AR-HUD, the effect is projected and displayed on the virtual image plane of the AR-HUD (see the sketch below).
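The transform chain S = P * V * O can be sketched with standard homogeneous-coordinate matrices. The publication does not spell out the entries of V and P, so the translation-only view matrix and the minimal perspective mapping below are assumptions, chosen to illustrate the pipeline rather than to reproduce the patented matrices.

```python
import numpy as np

def view_matrix(eye):
    """Observation matrix V of the virtual eye: here simply a translation that moves
    the eye to the origin (the eye is assumed to look along +Z, with no rotation)."""
    v = np.eye(4)
    v[:3, 3] = -np.asarray(eye)
    return v

def projection_matrix(near):
    """Mapping matrix P of the imaging plane: a minimal perspective mapping onto the
    near plane at z = near (the homogeneous divide is performed by the caller)."""
    p = np.zeros((4, 4))
    p[0, 0] = near
    p[1, 1] = near
    p[2, 2] = 1.0
    p[3, 2] = 1.0        # w' = z, so x/w = near * x / z after the divide
    return p

def map_ar_effect(point_o, eye, near):
    """S = P * V * O, followed by the homogeneous divide."""
    o = np.append(np.asarray(point_o, dtype=float), 1.0)
    s = projection_matrix(near) @ view_matrix(eye) @ o
    return s[:2] / s[3]   # 2D position on the imaging plane / HUD virtual image plane

print(map_ar_effect((2.0, -0.5, 40.0), eye=(0.0, 0.0, 0.0), near=8.0))  # [ 0.4 -0.1]
```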
  • In this embodiment, according to the calibration plate on the virtual image plane of the AR-HUD, a corresponding target frame can be generated on the imaging plane of the imaging frustum; the target frame has the same geometric shape as the calibration plate. The imaging plane is then projected and displayed on the virtual image plane of the AR-HUD as the input image.
  • the alignment and calibration process in this embodiment is specifically a process of aligning the target frame displayed on the virtual image plane of the AR-HUD with the calibration plate.
  • S704 Observe whether the scale of the target frame is aligned with that of the calibration plate;
  • Whether the scales are aligned can specifically mean whether the size of the target frame on the virtual image plane of the AR-HUD matches the size of the calibration plate. If aligned, go to step S706; if not, go to step S705.
  • S705 Scale alignment adjustment;
  • When the scale of the target frame is not aligned with that of the calibration plate, the target frame generated on the imaging plane is, after projection, not aligned in scale with the calibration plate on the virtual image plane of the AR-HUD. Because the imaging plane serves as the AR-HUD's input image, the AR-HUD crops the input imaging-plane image according to its display pixels and displays a crop matching those pixels. With the unit of the virtual coordinate system, the imaging frustum and the AR-HUD's display pixels all fixed, a scale misalignment requires the scale of the frustum's imaging plane to be adjusted proportionally, which proportionally adjusts the scale of the image cropped by the AR-HUD and, in turn, the relative size of the target frame within the cropped image, until it is aligned with the scale of the calibration plate;
  • In this embodiment, adjusting the scale of the imaging plane of the frustum can be achieved by adjusting the field of view of the frustum.
  • Specifically, when the target frame is larger than the calibration plate, the field of view of the frustum can be enlarged to scale up the imaging plane proportionally, which proportionally enlarges the imaging plane input to the AR-HUD.
  • Likewise, when the target frame is smaller than the calibration plate, the field of view of the frustum can be reduced to scale down the imaging plane proportionally, which proportionally shrinks the imaging plane input to the AR-HUD.
  • Thus, with the size of the generated target frame and the position of the imaging plane unchanged, the scale of the target frame displayed on the virtual image plane of the AR-HUD can be adjusted by adjusting the field of view of the frustum until it is scale-aligned with the calibration plate, i.e. until the imaging plane of the frustum and the virtual image plane of the AR-HUD are scale-aligned (see the sketch below).
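A compact way to express this adjustment is shown below. In the procedure above the size comparison is made by eye; here it is represented by an observed ratio, and the inverse-proportionality between displayed frame size and tan(FOV/2) follows from the frustum geometry sketched earlier. Treat this as an illustrative rule, not the patented tuning procedure.

```python
import math

def adjust_fov_for_scale(fov_deg, observed_ratio):
    """observed_ratio = (apparent size of target frame) / (size of calibration plate),
    as judged on the AR-HUD virtual image plane. Because the displayed frame size is
    inversely proportional to tan(FOV/2), scaling tan(FOV/2) by the observed ratio
    brings the two scales into alignment."""
    new_half_tan = math.tan(math.radians(fov_deg) / 2.0) * observed_ratio
    return 2.0 * math.degrees(math.atan(new_half_tan))

# Example: the target frame looks 1.4x too large -> enlarge the FOV
print(adjust_fov_for_scale(20.0, observed_ratio=1.4))  # ~27.7 degrees
```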
  • S706 Observe whether the position of the target frame is aligned with that of the calibration plate;
  • Although the adjustment in step S705 brings the imaging plane of the frustum in the virtual coordinate system into scale alignment with the AR-HUD virtual image plane in the real coordinate system, the positions of the target frame displayed on the virtual image plane and of the calibration plate may still be offset. There are usually two causes of this offset.
  • The first is that, in the imaging frustum constructed in the virtual coordinate system, the virtual human eye corresponds to the center points of the near and far planes, as shown in FIG. 9A, whereas in the real coordinate system the AR-HUD virtual image plane is usually located below the eye position, i.e. the center of the virtual image plane is lower than the eye, as shown in FIG. 9B. Therefore, when the imaging plane is projected as the input image onto the virtual image plane of the AR-HUD, the actually displayed two-dimensional image sits lower than the environmental information in the real world, so the displayed target frame is lower than the calibration plate.
  • The second is that the eye position is not fixed during observation, whereas for an installed AR-HUD the position of the virtual image plane is fixed; when the eye moves, the relative position between the eye and the center of the AR-HUD virtual image plane shifts accordingly, so the displayed target frame and the calibration plate cannot always stay aligned.
  • Whether the positions are aligned can specifically mean whether the position of the target frame on the virtual image plane of the AR-HUD matches the position of the calibration plate. If aligned, go to step S708; if not, go to step S707.
  • S707 Position alignment adjustment;
  • When the position of the target frame is not aligned with that of the calibration plate, the target frame generated on the imaging plane is, after projection, not aligned in position with the calibration plate on the AR-HUD virtual image plane. Because the imaging plane serves as the AR-HUD's input image, the HUD crops the input imaging-plane image according to its display pixels and displays a crop matching those pixels.
  • With the unit of the virtual coordinate system, the imaging frustum, the AR-HUD's display pixels and the crop position all fixed, a position misalignment requires adjusting the position of the frustum's imaging plane within the plane it belongs to, which adjusts the position of the imaging plane fed to the AR-HUD and, in turn, the relative position of the target frame in the cropped image, until it is aligned with the position of the calibration plate;
  • In this embodiment, the relative position of the target frame on the imaging plane can be adjusted by adjusting the two-dimensional offset of the imaging plane in the virtual coordinate system. It should be noted that adjusting this two-dimensional offset essentially means adjusting the horizontal or vertical position of the imaging plane within the plane it belongs to.
  • Specifically, as shown in FIG. 10A, when the displayed target frame is vertically lower than the calibration plate, the position of the imaging plane of the frustum in the virtual coordinate system can be moved vertically downward, which moves the relative position of the target frame on the imaging plane vertically upward, so that the target frame sits higher in the image cropped by the AR-HUD than before and, after adjustment, is vertically aligned with the calibration plate. Likewise, as shown in FIG. 10B, when the target frame is horizontally to the right of the calibration plate, the imaging plane can be moved horizontally to the right, which moves the target frame's relative position to the left, aligning it horizontally with the calibration plate.
  • Thus, with the size of the generated target frame and the scale of the imaging plane unchanged, the position of the target frame displayed on the AR-HUD virtual image plane can be adjusted by adjusting the position of the frustum's imaging plane until it is position-aligned with the calibration plate, i.e. until the imaging plane of the frustum and the virtual image plane of the AR-HUD are position-aligned.
  • According to a compensation principle, the horizontal offset and vertical offset (X offset, Y offset) of the imaging plane of the frustum can be calculated from the center of the AR-HUD virtual image plane and the eye position, where the unit of the virtual coordinate system is pixels, the unit of the real coordinate system is meters with 1 pixel = m meters, (X hud, Y hud) are the horizontal and vertical coordinates of the center of the AR-HUD virtual image plane in the real coordinate system, and (X eye, Y eye) are the horizontal and vertical coordinates of the human eye in the real coordinate system. From this calculation, the horizontal offset X offset and the vertical offset Y offset by which the imaging plane of the frustum must be adjusted in the virtual coordinate system are obtained.
  • Using X offset and Y offset, the imaging plane of the frustum is adjusted in two dimensions in units of pixels, so that the target frame displayed on the virtual image plane of the AR-HUD is aligned with the position of the calibration plate (an illustrative sketch follows).
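The exact offset formulas are published as embedded formula images and are not reproduced here; the sketch below is only one plausible reading of the surrounding text (compensating the eye-to-virtual-image-plane displacement and converting meters to pixels via 1 pixel = m meters) and should be treated as an assumption rather than the patented formula.

```python
def imaging_plane_offset(hud_center_m, eye_m, meters_per_pixel):
    """Plausible compensation-style offsets (in pixels) for the imaging plane, derived
    from the displacement between the AR-HUD virtual image plane centre (X_hud, Y_hud)
    and the eye position (X_eye, Y_eye), both in meters.
    NOTE: assumed form; the publication gives the exact formulas as images."""
    x_hud, y_hud = hud_center_m
    x_eye, y_eye = eye_m
    x_offset = (x_hud - x_eye) / meters_per_pixel
    y_offset = (y_hud - y_eye) / meters_per_pixel
    return x_offset, y_offset

# Example: virtual image plane centre 0.35 m below and 0.05 m left of the eye
print(imaging_plane_offset(hud_center_m=(-0.05, -0.35), eye_m=(0.0, 0.0),
                           meters_per_pixel=0.01))  # (-5.0, -35.0) pixels
```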
  • S708 Move the calibration plate to behind the virtual image plane of the AR-HUD;
  • After the scale alignment and position alignment of steps S704-S706 are completed, the effect of both alignments can be verified by moving the position of the calibration plate in the real coordinate system.
  • By moving the calibration plate to behind the virtual image plane of the AR-HUD, i.e. farther away from the human eye, one observes whether the target frame displayed on the virtual image plane remains aligned with the calibration plate.
  • S709 Observe whether the target frame and the calibration plate are completely aligned;
  • When the calibration plate is moved farther from the human eye, its position in the virtual coordinate system still lies between the imaging plane and the far plane of the frustum; according to the imaging principle, the scale of the target frame generated on the imaging plane shrinks proportionally as the calibration plate moves farther away (see the sketch below). If the regenerated target frame displayed on the AR-HUD virtual image plane is completely aligned with the distant calibration plate, proceed to step S710; otherwise return to step S704 and repeat the scale and position alignment steps.
  • S710 Move the calibration plate to in front of the virtual image plane of the AR-HUD, i.e. closer to the human eye, and observe whether the virtual image plane can display the target frame corresponding to the calibration plate and whether the two are completely aligned.
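The proportional shrinking with distance follows directly from the cone mapping; a two-line check (with assumed numbers) is shown below.

```python
def frame_size_on_plane(object_size, object_distance, near):
    """Size of the target frame generated on the imaging plane for a calibration plate
    of a given physical size at a given distance (perspective scaling)."""
    return object_size * near / object_distance

near = 8.0
print(frame_size_on_plane(1.0, 12.0, near))  # plate just behind the virtual image plane
print(frame_size_on_plane(1.0, 24.0, near))  # moved twice as far -> frame half the size
```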
  • S711 The virtual image surface of AR-HUD can display the target frame
  • In the constructed frustum, the imaging plane was selected according to the position of the AR-HUD virtual image plane in the real coordinate system. Therefore, when the calibration plate is moved in front of the AR-HUD virtual image plane, its corresponding position in the virtual coordinate system also moves in front of the imaging plane; according to the imaging principle of the frustum, a calibration plate located in front of the imaging plane can no longer be mapped onto that imaging plane.
  • S712 Near-distance display adjustment;
  • Based on the imaging principle of the frustum, this embodiment adjusts the position of the imaging plane within the frustum according to the calibration plate's corresponding position in the virtual coordinate system; that is, a near plane lying between the calibration plate's position in the virtual coordinate system and the frustum origin is reselected as the new imaging plane, and the target frame corresponding to the calibration plate is regenerated on this new imaging plane according to the imaging principle (see the sketch below).
  • In this embodiment, changing the relative distance between the imaging plane and the origin does not change the scale of the imaging plane, which is determined solely by the field of view of the frustum; adjusting the distance of the imaging plane from the frustum origin instead selectively determines which environmental information within the frustum's viewing range is mapped in two dimensions, and thereby changes the number of two-dimensional images the imaging plane can generate.
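A sketch of the near-distance adjustment: if the calibration plate's converted position falls in front of the current imaging plane, a nearer plane between the plate and the frustum origin is chosen instead. The margin factor used to place the new plane is an assumption.

```python
def reselect_imaging_plane(current_near, plate_distance, margin=0.9):
    """If the calibration plate (at plate_distance from the frustum origin) lies in
    front of the current imaging plane, it cannot be mapped onto that plane, so a new
    near plane between the plate and the origin is selected. The imaging plane's
    scale is unaffected, since it depends only on the FOV."""
    if plate_distance > current_near:
        return current_near                  # plate still behind the imaging plane
    return plate_distance * margin           # move the imaging plane in front of the plate

print(reselect_imaging_plane(current_near=8.0, plate_distance=5.0))   # 4.5
print(reselect_imaging_plane(current_near=8.0, plate_distance=12.0))  # 8.0
```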
  • S713 Observe whether the target frame and the calibration plate are completely aligned;
  • The calibration effect of this method is verified by observing whether the regenerated target frame displayed on the AR-HUD virtual image plane is completely aligned with the calibration plate moved to the near distance. If completely aligned, go to step S714; if not, return to step S704 and repeat the scale and position alignment steps.
  • S714 Complete the alignment between the AR-HUD and the real world;
  • By changing the position of the calibration plate in the real coordinate system and aligning the correspondingly regenerated target frame with the calibration plate as displayed on the AR-HUD virtual image plane, the alignment calibration between the imaging plane of the frustum constructed from the eye position and the virtual image plane of the AR-HUD is achieved. Once this calibration is completed, when the driver's eye position changes, or when a different driver drives, the constructed imaging frustum is adjusted accordingly, ensuring that the display on the AR-HUD virtual image plane observed by the human eye always remains fully aligned with the real world, improving the driver's viewing experience and achieving a better driving-assistance effect (see the sketch below).
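Putting the pieces together, the real-time adaptation to the eye position can be sketched as a small update step. The update rule shown (frustum origin follows the eye; the 2D offset is recomputed from the latest eye position) is a simplified reading, the offsets reuse the assumed compensation form above, and the function name is a placeholder.

```python
def update_frustum_for_eye(new_eye, hud_center, meters_per_pixel):
    """When the detected eye position changes, the frustum origin follows the eye and
    the imaging plane's 2D offset is recomputed from the new eye-to-virtual-image-plane
    displacement, keeping the HUD display aligned with the real world (simplified)."""
    frustum_origin = new_eye
    x_off = (hud_center[0] - new_eye[0]) / meters_per_pixel
    y_off = (hud_center[1] - new_eye[1]) / meters_per_pixel
    return frustum_origin, (x_off, y_off)

# Example: the driver leans slightly; the HUD virtual image plane centre stays fixed.
print(update_frustum_for_eye(new_eye=(0.02, 1.16), hud_center=(0.0, 0.85),
                             meters_per_pixel=0.01))
```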
  • the embodiment of the present application provides a projection device, which can be used to implement the projection method, calibration method, AR-HUD projection method and display method in the above embodiments, as shown in Figure 11 , the projection device 1100 has an acquisition module 1101 , a projection module 1102 , and an adjustment module 1103 .
  • the acquiring module 1101 is configured to execute step S401 in the projection method and examples therein.
  • the projection module 1102 is configured to execute any step of S402 in the above projection method, S501-S503 in the above-mentioned calibration method, S701-S703 in the above-mentioned AR-HUD projection method, and any optional example thereof.
  • the adjustment module 1103 is configured to execute any step of S403 in the above projection method, S504 in the above calibration method, S704-S714 in the above AR-HUD projection method, and any optional example thereof.
  • In some embodiments, the projection device 1100 may also have a prompt module 1104, which implements the parts of the above projection method, calibration method and AR-HUD projection method that involve human-computer interaction, guiding the user through the calibration or adjustment processes by sending prompt messages. For example, the prompt module 1104 may prompt the user to judge with the human eye whether the calibration object and its projection surface coincide; it may also, when the user's calibration requirement is obtained, send the user a prompt message that calibration has started and a message that calibration is complete.
  • It should be understood that the projection device in the embodiments of the present application can be implemented in software, for example by computer programs or instructions having the above functions; the corresponding computer programs or instructions can be stored in a memory inside the terminal, and the processor reads them from the memory to realize the above functions.
  • the projection device in this embodiment of the present application can also be implemented by hardware.
  • For example, the acquisition module 1101 can be implemented by an acquisition device on the vehicle, such as a vehicle-mounted camera or lidar, or it can be implemented by the interface circuit between a processor and the vehicle-mounted camera or lidar.
  • the prompting module 1104 can be implemented by a central control screen, audio, microphone and other devices on the vehicle.
  • the projection module 1102 may be implemented by a HUD or AR-HUD on the vehicle, or the projection module 1102 may also be implemented by a processor of the HUD or AR-HUD, or the projection module may also be implemented by a terminal such as a mobile phone or a tablet.
  • the adjustment module 1103 may be realized by a processor of the HUD or AR-HUD, or the adjustment module 1103 may also be realized by a processor of a vehicle-mounted processing device such as a vehicle machine or a vehicle-mounted computer.
  • the projection device in the embodiment of the present application may also be implemented by a combination of a processor and a software module.
  • the embodiment of the present application also provides a vehicle with the above-mentioned projection device.
  • the vehicle may be a family car or a truck, or a special vehicle such as an ambulance, fire truck, police car or engineering emergency vehicle.
  • The vehicle can use local storage to store the imaging model and the related training set in the above embodiments. When the above projection method or calibration method needs to be carried out, the imaging model can then be loaded faster, enabling quick calibration or adjustment of the projection display according to the position of the user's eyes, with the advantages of low latency and a good experience.
  • In addition, the vehicle can interact with the cloud and download an imaging model stored in the cloud to local storage, so as to calibrate or adjust the projection display according to the position of the user's eyes; this approach has the advantages of abundant data, timely model updates and higher accuracy (a loading sketch follows).
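A minimal sketch of the local-first / cloud-fallback loading strategy described above; the file path, URL and caching policy are illustrative placeholders only.

```python
import os
from urllib.request import urlopen

def load_imaging_model(local_path="imaging_model.bin",
                       cloud_url="https://example.com/imaging_model.bin"):
    """Prefer the locally stored imaging model (low latency, good experience);
    otherwise download the cloud copy, which may be newer, and cache it locally."""
    if os.path.exists(local_path):
        with open(local_path, "rb") as f:
            return f.read()
    with urlopen(cloud_url) as resp:       # cloud-interaction path
        data = resp.read()
    with open(local_path, "wb") as f:      # cache for the next low-latency load
        f.write(data)
    return data

# Usage (sketch): model_bytes = load_imaging_model()
```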
  • FIG. 13 is a schematic structural diagram of a computing device 1500 provided by an embodiment of the present application.
  • the computing device can be used as a projection device to execute the optional embodiments of the above-mentioned projection method, calibration method, or AR-HUD projection method, and the computing device can be a terminal, or a chip or chip system inside the terminal.
  • the computing device 1500 includes: a processor 1510 , a memory 1520 , a communication interface 1530 , and a bus 1540 .
  • the communication interface 1530 in the computing device 1500 shown in FIG. 13 may be used to communicate with other devices, and may specifically include one or more transceiver circuits or interface circuits.
  • the processor 1510 may be connected to the memory 1520 .
  • The memory 1520 can be used to store program code and data. The memory 1520 may therefore be a storage unit inside the processor 1510, an external storage unit independent of the processor 1510, or a component comprising both a storage unit inside the processor 1510 and an external storage unit independent of it.
  • computing device 1500 may further include a bus 1540 .
  • the memory 1520 and the communication interface 1530 may be connected to the processor 1510 through the bus 1540 .
  • the bus 1540 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus or the like.
  • the bus 1540 can be divided into address bus, data bus, control bus and so on. For ease of representation, only one line is used in FIG. 13 , but it does not mean that there is only one bus or one type of bus.
  • the processor 1510 may be a central processing unit (central processing unit, CPU).
  • The processor can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the processor 1510 uses one or more integrated circuits for executing related programs, so as to implement the technical solutions provided by the embodiments of the present application.
  • the memory 1520 may include read-only memory and random-access memory, and provides instructions and data to the processor 1510 .
  • a portion of processor 1510 may also include non-volatile random access memory.
  • processor 1510 may also store device type information.
  • When the computing device 1500 is running, the processor 1510 executes the computer-executable instructions in the memory 1520 to perform any operation step of the above projection method, calibration method or AR-HUD projection method and any optional embodiment thereof.
  • It should be understood that the computing device 1500 may correspond to the respective body executing the methods according to the embodiments of the present application, and that the above and other operations and/or functions of the modules in the computing device 1500 are intended to realize the corresponding flows of the methods of these embodiments; for brevity, they are not repeated here.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into units is merely a division by logical function; in actual implementation there may be other ways of division, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions described above are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
  • the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored.
  • When the program is executed by a processor, it is used to carry out at least one of the methods described in the above embodiments.
  • the computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more leads, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • Where a remote computer is involved, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Abstract

The present application is applicable to the field of intelligent vehicles and specifically provides a projection method and apparatus, a vehicle and an AR-HUD. The projection method includes: acquiring image information and position information of a calibration object; projecting the calibration object according to the image information and position information of the calibration object and an imaging model; and adjusting parameters of the imaging model when the degree of coincidence between the calibration object and the projection surface of the calibration object is less than a first threshold. The present application enables the projected and displayed image to be aligned with the real world, improving the projection display effect.

Description

一种投影方法及装置、车辆及AR-HUD 技术领域
本申请涉及智能汽车领域,特别涉及一种投影方法及装置、车辆及AR-HUD。
背景技术
抬头显示器(Head Up Display,HUD)是一种将图像投影显示到驾驶员前方视野中的显示装置,其主要是利用光学反射的原理,将重要的相关资讯以二维图像的方式投影显示在汽车的挡风玻璃上面,高度大致与驾驶员的眼睛成水平,驾驶员透过挡风玻璃往前方看的时候,可看到HUD投影的二维图像显示在挡风玻璃前方的一虚像面上。相比传统仪表和中控屏幕,驾驶员在观察HUD投影显示的图像时,无需低头,避免了在图像和路面之间来回切换,减小了危机反应时间,提高了驾驶安全性。近年来提出的增强现实(Augmented Reality,AR)抬头显示器(AR-HUD),可以将HUD投影显示的AR效果与真实路面信息融合起来,增强驾驶员对路面信息的获取,实现AR导航、AR预警等功能。
要实现AR-HUD的道路导航、预警等功能,需要将传感器获得的三维感知数据送入虚拟三维空间进行增强现实效果绘制,绘制完成后映射到HUD显示的二维虚像面,最后通过人眼再映射回三维空间。在此过程中,必须确保“人眼-HUD的显示画面-真实物体”保持三点一线,保证通过人眼观察到的HUD的显示画面与真实物体的尺寸、位置一致,使得如图1所示中人眼观察到HUD的显示画面中的虚拟图像恰好能够与对应的真实物体融合,以实现AR效果与显示场景的匹配融合。并且,对于同一驾驶员在驾驶过程中的不同位置,或者对于不同驾驶员来说,人眼的位置的改变要求HUD的显示画面必须进行相应的调整,以保证人眼观察到的HUD的显示画面始终与真实路面信息融合。
因此,在驾驶员的不同坐姿、或者不同驾驶员的前提下,如何保证HUD的显示画面始终与现实世界融合,成为提高AR-HUD的显示效果的重点研究方向。
发明内容
有鉴于此,本申请提供了一种投影方法及装置、车辆及AR-HUD,可使投影显示的图像始终与现实世界对齐,提高投影显示效果。
应理解,本申请所提供的方案中,投影方法可以由投影装置或该投影装置中的部分器件执行,其中,投影装置具有投影功能,例如,AR-HUD、HUD或其他具有投影功能的装置。投影装置中的部分器件可以是处理芯片、处理电路、处理器等。
本申请的第一方面提供一种投影方法,包括:获取标定物的图像信息和位置信息,根据该标定物的图像信息和位置信息、以及成像模型,投影该标定物。在该标定物与该标定物的投影面的重合度小于第一阈值时,调整该成像模型的参数。
由上,本方法通过获取现实中的标定物的图像信息和位置信息,并根据该标定物的图像信息和位置信息、以及成像模型,对该标定物进行投影显示,并根据该标定物及该标定物的投影面的重合度,调整成像模型的参数,以使标定物和标定物的投影面尽可能的重合,达到对齐效果,提高用户的沉浸式体验。本方法可应用于AR-HUD、HUD或其他具有投影功能的装置,以实现对装置的校准、标定,提高投影显示效果。
在第一方面的一种可能的实现方式中,该调整该成像模型的参数包括:调整该成像模型的视场角和成像面位置中的一个或多个参数。
由上,根据获取的标定物的图像信息和位置信息,可在成像模型的成像面生成对应该标定物的二维图像,并且在进行投影时,将该成像模型的成像面作为完整的投影图像进行投影显示。示例的,该成像模型可以为成像视锥体、成像圆柱体或成像立方体等形式,其中该成像模型的视场角参数可决定成像面的面积大小以及标定物的二维图像相对于该成像面的比例大小,该成像模型的成像面位置参数可决定标定物的二维图像相对于该成像面的位置,因此,当标定物和标定物的投影面的重合度低于预设的第一阈值时,可根据面积偏移或位置偏移或尺寸偏移,对应调整成像模型的视场角或成像面位置。
在第一方面的一种可能的实现方式中,该在该标定物与该标定物的投影面的重合度小于第一阈值时,调整该成像模型的参数具体包括:在该标定物与该标定物的投影面的面积差大于第二阈值时,调整该成像模型的视场角。
由上,当标定物与标定物的投影面的面积差大于预设的第二阈值时,此时可通过调整成像模型的视场角来调整成像面的面积,当标定物的投影面的面积大于标定物的面积时,可放大成像模型的视场角,成像面则会等比例放大,生成的标定物的二维图像在成像面中的比例则会等比例缩小,此时投影显示的标定物的投影面相对于标定物的面积也会等比例缩小,以使标定物的投影面与标定物的面积差小于预设的第二阈值;同理,当标定物的投影面的面积小于标定物的面积时,可缩小成像模型的视场角,成像面则会等比例缩小,生成的标定物的二维图像在成像面中的比例则会等比例放大,此时投影显示的标定物的投影面相对于标定物的面积也会等比例放大,以使标定物的投影面与标定物的面积差小于预设的第二阈值。
在第一方面的一种可能的实现方式中,该在该标定物与该标定物的投影面的重合度小于第一阈值时,调整该成像模型的参数具体包括:在该标定物与该标定物的投影面的偏移量大于第三阈值时,调整该成像模型的成像面的二维位置。
由上,当标定物与标定物的投影面的偏移量大于预设的第三阈值时,由于标定物的位置是固定的,此时可通过调整成像模型的成像面的二维位置,该二维位置具体是指成像面的上下位置和左右位置,以对应调整生成的标定物的二维图像在成像面中的相对位置,从而使得标定物与标定物的投影面的偏移量小于预设的第三阈值。
在第一方面的一种可能的实现方式中,该标定物与该标定物的投影面的重合度是通过该标定物与该标定物的投影面的像素偏移确定的;该像素偏移是通过摄像头采集的包含该标定物与该标定物的投影面的图像确定的。
由上,本方法在实现标定物与标定物的投影面的校准或标定时,可在用户的人眼位置设置一摄像头,以模拟人眼观察的效果,通过摄像头对标定物与标定物的投影面 进行拍摄,生成一张或多张图像,并根据生成的图像确定标定物与标定物的投影面的像素偏移量,从而确定标定物与标定物的投影面的重合度,采用摄像头拍摄的方式,可提高检测标定物与标定物的投影面的重合度时的精确度,并以数据的形式直观展现,避免用户人眼观察带来的误差。
在第一方面的一种可能的实现方式中,该成像模型是根据包括多个训练样本的训练集训练的,其中该训练样本包括人眼位置信息参数、标定物的图像信息和位置信息参数、以及该标定物与该标定物的投影面的重合度参数。
由上,为提高成像模型的精确度,可采用神经网络或深度学习的方式,采用多个训练样本组成的训练集对该成像模型进行训练,其中,可以以人眼位置信息参数、标定物的图像信息和位置信息参数为主作为输入,以标定物与该标定物的投影面的重合度参数作为输出,组成一个训练样本,通过多次训练,以提高标定物与该标定物的投影面的重合度,使得该成像模型的适用范围更广,且具有深度学习、优化的特性,以满足不同用户的使用体验。
在第一方面的一种可能的实现方式中,还包括:
获取用户的校准需求,向用户发送校准开始的提示消息。获取用户的人眼位置,根据该用户的人眼位置对该成像模型的参数进行校准。在校准完成后,向用户发送校准完成的提示消息。
由上,本方法可以在用户无感知的情况下,根据用户的人眼位置,自动校准成像模型的参数,还可以通过人机交互的方式,引导用户提出校准需求,并在语音提示、显示提示等方式下,实现成像模型的参数的校准,并在校准完成后,向用户发送校准完成的提示消息,以提升用户的使用体验。
在第一方面的一种可能的实现方式中,还包括:
通过人眼确定该标定物与该标定物的投影面是否重合;
在该标定物与该标定物的投影面未重合时,根据用户的调整指令,对校准完成的该成像模型的参数进行调整。
由上,本方法可根据用户的人眼位置,校准成像模型的参数,以使标定物与该标定物的投影面的重合度达到预设的阈值,同时,当用户对当前标定物与该标定物的投影面的重合度不满意时,还可根据主观体验,对成像模型的参数进行调整,实现投影显示的客制化,以达到用户的目标需求。
本申请的第二方面提供一种投影装置,包括:
获取模块,用于获取标定物的图像信息和位置信息;
投影模块,用于根据该标定物的图像信息和位置信息、以及成像模型,投影该标定物;
调整模块,用于在该标定物与该标定物的投影面的重合度小于第一阈值时,调整该成像模型的参数。
在第二方面的一种可能的实现方式中,该调整模块用于调整该成像模型的参数时,具体用于:
调整该成像模型的视场角和成像面位置中的一个或多个参数。
在第二方面的一种可能的实现方式中,该调整模块具体用于:
在该标定物与该标定物的投影面的面积差大于第二阈值时,调整该成像模型的视场角。
在第二方面的一种可能的实现方式中,该调整模块具体用于:
在该标定物与该标定物的投影面的偏移量大于第三阈值时,调整该成像模型的成像面的二维位置。
在第二方面的一种可能的实现方式中,该标定物与该标定物的投影面的重合度是通过该标定物与该标定物的投影面的像素偏移确定;该像素偏移是通过摄像头采集的包含该标定物与该标定物的投影面的图像确定。
在第二方面的一种可能的实现方式中,该成像模型是根据包括多个训练样本的训练集训练,其中该训练样本包括人眼位置信息参数、标定物的图像信息和位置信息参数、以及该标定物与该标定物的投影面的重合度参数。
在第二方面的一种可能的实现方式中,还包括:
提示模块,用于在获取用户的校准需求时,向用户发送校准开始的提示消息;
该调整模块还用于根据获取的用户的人眼位置,对该成像模型的参数进行校准;
该提示模块还用于在校准完成后,向用户发送校准完成的提示消息。
在第二方面的一种可能的实现方式中,
该提示模块还用于提示用户通过人眼确定该标定物与该标定物的投影面是否重合;
该调整模块还用于在该标定物与该标定物的投影面未重合时,根据用户的调整指令,对校准完成的该成像模型的参数进行调整。
为达到上述目的,本申请的第三方面提供一种系统,包括:
如第二方面及上述各种可选的实现方式提供的多种技术方案中的投影装置,以及车机。
在一种可能的实现方式中,该系统还包括:存储装置,用于存储成像模型及成像模型的训练集;以及通信装置,用于实现该存储装置与云端的通信交互。
在一种可能的实现方式中,该系统为车辆。
本申请的第四方面提供一种计算设备,包括:处理器,以及存储器,其上存储有程序指令,该程序指令当被该处理器执行时使得该处理器执行如第一方面及上述各种可选的实现方式提供的多种技术方案中的投影方法。
在一种可能的实现方式中,该计算设备为AR-HUD、HUD中的一个。
在一种可能的实现方式中,该计算设备为车。
在一种可能的实现方式中,该计算设备为车机、车载电脑中的一个。
本申请的第五方面提供一种计算机可读存储介质,该计算机可读存储介质上存储有程序代码,该程序代码当被计算机或处理器执行时使得该计算机或处理器执行如第一方面及上述各种可选的实现方式提供的多种技术方案中的投影方法。
本申请的第六方面提供一种计算机程序产品,该计算机程序产品包含的程序代码当被计算机或处理器执行时使得该计算机或处理器执行如第一方面及上述各种可选的实现方式提供的多种技术方案中的投影方法。
应理解,上述多种技术方案中还提供了投影调整相关联的多种阈值,包括:第一阈值,第二阈值,第三阈值。应理解,这些阈值相互之间并不互斥,可以组合使用。可以是小数,也可以是相对比例,比如百分比。对于这些门限中的任一种门限,当投影面积或重合度或面积差或偏移量等于上述一个预设阈值时,可以认为是临界状态。对于临界状态,既可以认为满足门限判断条件,执行相应地后续操作,也可以认为不满足门限判断条件,不执行相应地后续操作。
综上,本申请提供的投影方法及装置、车辆及AR-HUD,通过获取标定物的图像信息和位置信息,根据成像模型,对标定物进行投影显示,并通过调整成像模型的参数,提高标定物和标定物的投影面的重合度,以提高投影显示的效果。在本申请中,成像模型可根据获取的用户的人眼位置信息,标定物的图像信息和位置信息,在成像模型的成像面上生成该标定物的二维图像,并通过投影装置进行投影显示,其中标定物和标定物的投影面的重合度可用于评价该成像模型的准确性和稳定性。在本申请的一些实施例中,该成像模型还可采用神经网络或深度学习的方式进行训练,以使得该成像模型的准确性和稳定性不断得到优化,使得该成像模型适用于不同用户的人眼位置的变化。并且随着5G技术和智能汽车的快速发展,该成像模型还可以通过云端交互的方式,进行优化训练,以适用于不同的车机投影装置,并根据不同的车机投影装置的硬件参数,自动调整成像模型的一个或多个参数,以满足不同用户的客制化需求。
附图说明
图1为现有的AR-HUD在使用场景的成像示意图;
图2为本申请实施例提供的投影方法的一种应用场景的示意图;
图3为本申请实施例提供的投影方法的另一种应用场景的示意图;
图4为本申请实施例提供的一种投影方法的流程图;
图5为本申请实施例提供的一种标定方法的流程图;
图6为本申请实施例提供的AR-HUD的系统架构示意图;
图7为本申请实施例提供的一种AR-HUD的投影方法的流程图;
图8A为本申请实施例提供的成像视锥体的示意图;
图8B为本申请实施例提供的成像视锥体到AR-HUD的空间转换示意图;
图9A为本申请实施例提供的虚拟坐标系下的虚拟人眼与成像视锥体的平视示意图;
图9B为本申请实施例提供的现实坐标系下的人眼与AR-HUD的虚像面组成的俯视示意图;
图10A为本申请实施例提供的AR-HUD的虚像面显示的目标框与标定板的垂直偏移示意图;
图10B为本申请实施例提供的AR-HUD的虚像面显示的目标框与标定板的水平偏移示意图;
图11为本申请实施例提供的一种投影装置的架构图;
图12A为本申请实施例的一种人机交互界面的示意图;
图12B为本申请实施例的另一种人机交互界面的示意图;
图13为本申请实施例的一种计算设备的架构图。
应理解,上述结构示意图中,各框图的尺寸和形态仅供参考,不应构成对本发明实施例的排他性的解读。结构示意图所呈现的各框图间的相对位置和包含关系,仅为示意性地表示各框图间的结构关联,而非限制本发明实施例的物理连接方式。
具体实施方式
下面结合附图并举实施例,对本申请提供的技术方案作进一步说明。应理解,本申请实施例中提供的系统结构和业务场景主要是为了说明本申请的技术方案的可能的实施方式,不应被解读为对本申请的技术方案的唯一限定。本领域普通技术人员可知,随着系统结构的演进和新业务场景的出现,本申请提供的技术方案对类似技术问题同样适用。
应理解,本申请实施例提供的内存管理方案,包括投影方法、装置、车辆及AR-HUD。由于这些技术方案解决问题的原理相同或相似,在如下具体实施例的介绍中,某些重复之处可能不再赘述,但应视为这些具体实施例之间已有相互引用,可以相互结合。
抬头显示设备通常安装于汽车座舱内,通过向汽车的前挡风玻璃投影,投影的显示信息经过前挡风玻璃反射后进入用户的眼睛,在车辆前方呈现,使得显示信息与现实世界的环境相融合,形成增强现实的显示效果。例如,通过建立摄像头坐标系、人眼坐标系,确定所述摄像头坐标系和人眼坐标系的对应关系,根据车载摄像头拍摄的图像信息、以及该摄像头坐标系和人眼坐标系的对应关系,确定增强现实显示图像,然后根据增强现实显示图像与HUD图像的映射关系进行投影显示。然而该实现方式在驾驶过程中,需要实时标定人眼坐标系与摄像头坐标系之间的转换关系,计算量较大,任务的复杂度较高。
为了实现更好的投影显示效果,本申请实施例提供了一种投影方法及装置、车辆及AR-HUD,可实现根据用户人眼的位置变化,实时调整投影显示效果,使投影显示的AR显示图像始终与现实世界对齐,提高投影显示效果。其中,用户通常是驾驶员。用户也可以是副驾乘客或后排乘客等,例如,在车辆座舱内安装有多台HUD设备,不同的HUD设备针对的用户不同。在调整过程中,针对主驾驶位的驾驶员的HUD设备,可根据驾驶员的人眼位置,调整该主驾驶位的HUD设备,使得驾驶员看到的AR显示图像能够与前方的现实世界对齐,该AR显示图像可以为导航信息、车速信息,还可以为道路上的其他提示信息。针对副驾驶位的乘客的HUD设备,可根据副驾乘客的人眼位置,调整该副驾驶位的HUD设备,使得乘客看到的AR显示图像也能够与前方世界对齐。
图2-图3示出了本申请实施例提供的投影方法的一种应用场景的示意图,参照如图2-图3,本实施例的应用场景具体涉及一种车辆,该车辆1具有采集装置10、投影装置20、显示装置30。
采集装置10可以包括车外采集装置和车内采集装置,其中车外采集装置具体可以采用激光雷达、车载摄像头或其他具有图像采集或光学扫描功能的一个设备或多个组合设备,可以设置在车辆1的顶部、头部或车辆座舱的后视镜的朝向车外的一侧,可以安装在车辆的内部,也可以安装在车辆的外部。其主要用于对车辆前方的环境进行图像信息和位置信息进行检测和采集,车辆前方的环境可以包括前方车辆、障碍物、道路指示等相关信息;车内采集装置具体可以采用车载摄像头、人眼检测仪等设备,车内采集装置在具体实现过程中,可以按照需求设置按照位置,例如,可以设置在车辆座舱的A柱、B柱或车辆座舱的后视镜的朝向用户的一侧,还可以设置在方向盘、中控台附近区域,还可以设置在座椅后方显示屏上方等位置。其主要用于对车辆座舱的驾驶员或乘客的人眼位置信息进行检测和采集。车内采集装置可以是一台,也可以是多台,本申请对其位置和数量不做限定。
投影装置20可以为HUD、AR-HUD或其他具有投影功能的设备,可以安装于车辆座舱的中控台上方或中控台内部,其通常包括投影仪、反射镜、投影镜、调节电机及控制单元,所述控制单元为电子设备,具体可以为中央处理器(CPU)、微处理器(MCU)等常规的芯片处理器,也可以为手机、平板等终端硬件。该控制单元分别与所述采集装置10和显示装置30通信连接,该控制单元内可以预设有成像模型或通过获取车辆其他器件内预设的成像模型,该成像模型的参数与车内采集装置采集的人眼位置信息具有关联关系,能够根据人眼位置信息进行参数校准,然后根据车外采集装置采集的环境信息,生成投影图像,并在投影仪输出。如图3所示,投影的图像中可以包括根据环境信息生成的增强现实显示图像,还可以包括车速、导航等图像。
显示装置30可以为车辆的前挡风玻璃或独立显示的透明屏幕,用于反射所述投影装置发出的图像光线后进入到用户的眼中,使驾驶员透过该显示装置30望向车外时,能够看到具有景深效果的虚拟图像,并与现实世界的环境产生重合,向用户呈现增强现实的显示效果。
其中,采集装置10、投影装置20以及其他装置可以分别通过有线通信或无线通信(如蓝牙、wifi)等方式进行数据的通信,例如,采集装置10在采集到图像信息后,可以通过蓝牙通信将该图像信息传输给投影装置20。再例如,投影装置20可以通过蓝牙通信,将控制信令发送给采集装置10,并调整采集装置10的采集参数,如拍摄角度等。应理解的是,数据的处理可以在投影装置20中完成,也可以在采集装置10中完成,还可以在其他处理设备中完成,例如车机、车载电脑等设备。
通过上述结构,车辆能够实现基于现实世界的环境信息的增强现实显示效果,并且能够根据用户的人眼位置信息调整生成的投影图像,以使投影显示的增强现实显示图像始终与现实世界的环境信息重合,提高用户的沉浸式观看体验。
图4示出了本申请实施例提供的一种投影方法的流程图,该投影方法可以由投影装置或投影装置中的部分器件来执行,例如,AR-HUD、HUD、车、处理器等,具体 可以实现上述投影装置或投影装置中的部分器件的校准、标定以及投影显示等功能,其应用过程可以为车辆静止启动的状态下,也可以为车辆的行驶过程中。如图4所示,该投影方法包括:
S401:获取标定物的图像信息和位置信息;
其中,标定物具体可以是位于车外的静态物体,例如静止的车辆、树木、交通标识、或者是一具有几何形状的标定板,还可以是位于车外的动态物体,例如行驶的车辆、走动的行人等。处理器可以通过接口电路,获得采集装置所采集到的该标定物的图像信息和位置信息,其中,图像信息可以是摄像头采集的图像、或者激光雷达采集的点云数据或其他形式的信息,该图像信息中还包括分辨率、大小、尺寸、颜色等信息;位置信息可以是坐标数据、方向信息或其他形式的信息。该处理器可以是投影装置的处理器,也可以是车机或车载电脑等车载处理装置的处理器。
S402:根据所述标定物的图像信息和位置信息、以及成像模型,投影所述标定物;
根据步骤S401获取的标定物的图像信息和位置信息,处理器可以在成像模型中生成与该标定物对应的标定图像,并通过接口电路进行投影输出。该成像模型可以根据人眼位置,HUD的位置、HUD的视场角(Field of view,FOV)、HUD的投影面(虚像面)、HUD的显示分辨率,人眼到HUD的下视角等参数构建,构建的成像模型中包括原点、视场角、近平面(成像面)、远平面等参数,示例的,该成像模型可以为成像视锥体、成像圆柱体或成像立方体等形式。例如,当成像模型为成像视锥体时,原点可以根据所述人眼位置确定,视场角可以根据HUD的视场角确定,用于决定该成像视锥体的视场范围,近平面作为成像时的成像面,远平面可以根据人眼最远观看距离确定。该处理器根据获取的标定物的图像信息和位置信息,可在成像模型的成像面生成对应该标定物的二维图像,并且在进行投影时,将该成像模型的成像面作为完整的投影图像进行投影显示。
S403:在所述标定物与所述标定物的投影面的重合度小于第一阈值时,调整所述成像模型的参数。
在一些实施例中,标定物与标定物的投影面的重合度可以通过用户的人眼进行观察确定,此时该第一阈值可能不再是一个具体的数值,而是用户的主观体验,例如是否重合。并根据用户的反馈进行后续的调整。在另一些实施例中,该标定物与标定物的投影面的重合度可以是通过采集装置获得的信息来确定的,例如,根据标定物与标定物的投影面的像素偏移确定的,例如,在模拟用户的人眼位置处设置一摄像头,通过该摄像头对包含该标定物与标定物的投影面的图像进行采集,通过拍摄得到的一张或多张图像,根据图像的分辨率,可确定标定物与标定物的投影面的像素偏移,根据该像素偏移可计算得到标定物与标定物的投影面的重合度,计算得到的该重合度具体可以是一个具有百分比的数值,此时,该第一阈值也是一个具体的百分比数值,通过对比该重合度和该第一阈值,以确定是否要对成像模型的参数进行调整。应理解,重合度也可以是小数或其他形式,本申请对此不做限定。
当所述标定物与标定物的投影面的重合度低于预设的第一阈值时,投影装置的处理器可以可通过调整成像模型的参数,改善标定物与标定物的投影面的重合度。其中,该成像模型可供调整的参数包括视场角和成像面位置中的一个或多个参数,例如,视 场角参数可决定成像模型的成像面的面积大小以及标定物的二维图像相对于该成像面的比例大小,该成像模型的成像面位置参数可决定标定物的二维图像相对于该成像面的位置,因此,当标定物和标定物的投影面的重合度低于预设的第一阈值时,可根据面积偏移或位置偏移,对应调整成像模型的视场角或成像面位置。具体的,该步骤S403的实现方式包括:
在所述标定物与所述标定物的投影面的面积差大于第二阈值时,调整所述成像模型的视场角;
在所述标定物与所述标定物的投影面的偏移量大于第三阈值时,调整所述成像模型的成像面的二维位置。
本实施例中,上述的第一阈值、第二阈值和第三阈值都可以根据用户需求或行业标准进行预设和调整,当标定物与标定物的投影面的面积差大于预设的第二阈值时,此时可通过调整成像模型的视场角来调整成像面的面积,当标定物的投影面的面积大于标定物的面积时,可放大成像模型的视场角,成像面则会等比例放大,生成的标定物的二维图像在成像面中的比例则会等比例缩小,此时投影显示的标定物的投影面相对于标定物的面积也会等比例缩小,以使标定物的投影面与标定物的面积差小于预设的第二阈值;同理,当标定物的投影面的面积小于标定物的面积时,可缩小成像模型的视场角,成像面则会等比例缩小,生成的标定物的二维图像在成像面中的比例则会等比例放大,此时投影显示的标定物的投影面相对于标定物的面积也会等比例放大,以使标定物的投影面与标定物的面积差小于预设的第二阈值。当标定物与标定物的投影面的偏移量大于预设的第三阈值时,由于标定物的位置是固定的,此时可通过调整成像模型的成像面的二维位置,该二维位置具体是指成像面在该成像模型的二维平面上的上下位置和左右位置,以对应调整生成的标定物的二维图像在成像面中的相对位置,例如,当将成像模型的成像面的二维位置向上移动时,标定物的二维图像在成像面的位置会相应的向下移动,同理当将成像模型的成像面的二维位置向左移动时,标定物的二维图像在成像面的位置会相应的向右移动,通过调整成像模型的成像面的二维位置,从而使得标定物与标定物的投影面的偏移量小于预设的第三阈值。
应理解,上面的面积差、重合度等都是可以一些示例性的比较参数,可以结合使用,或互相替代,也可以使用其他类似的比较参数来替代,例如,尺寸差。主要目的是为了确定当前采集设备采集到的标定物与投影出来的标定物的图像差异大小。以便于调整成像参数或成像模型。
另外,为了提高处理效率,本实施例中构建的成像模型还可以通过神经网络模型或深度学习模型来实现。具体地,可以采用多个训练样本组成的训练集对该成像模型进行训练。其中,可以以人眼位置信息参数、标定物的图像信息和位置信息参数为主作为输入,以标定物与所述标定物的投影面的重合度参数作为输出,组成一个训练样本。以某个设定的重合度阈值作为目标(label),通过引入多个训练样本,对该成像模型多次训练,以获得与目标接近的结果,并获得相应的成像模型。根据训练得到的成像模型,在进行标定物的投影时,可使标定物与所述标定物的投影面的重合度达到需求,并且随着该成像模型的使用,其具有不断地深度学习及优化的特性,能够使得该成像模型的投影效果越来越好,且适用范围更广,以满足不同用户的使用体验。
本实施例提供的投影方法可根据用户的人眼位置自动实现成像模型的参数校准,从而实现投影显示效果的调整。随着智能驾驶技术的发展,该投影方法不仅可以适用于驾驶员位置的投影,还可以适用于副驾乘客位置或后排乘客位置的投影,例如对影音娱乐内容的投影。在一些扩展实施例中,本申请的投影方法还可以通过引导用户实现投影显示的校准,例如在用户具有校准需求时,可以向用户发送校准请求或校准开始的提示消息,并通过车内的摄像头或人眼检测仪获取用户的人眼位置,根据用户的人眼位置,对成像模型的参数进行校准,并在校准完成时,向用户发送校准完成的提示消息。该校准过程可以通过车辆的人机交互界面(Human Machine Interface,HMI)引导用户完成,还可以通过驾驶员监测系统(Driver Monitor System,DMS)引导用户完成,所述提示消息可以为语音提示、车辆的中控屏幕上图文提示等,以使用户能够直观的体验到该校准过程。同时,在该校准过程在,用户还可以根据个人的主观体验,发送调整指令,对成像模型的参数进行调整,以满足用户的客制化需求。如图12A-图12B所示的一种人机交互界面的示意图中,当通过车辆的人机交互界面实现该校准过程时,可以通过车辆的中控屏幕向用户实现图文提示,以提示并引导用户完成对投影装置的校准及调整过程。例如,当检测到用户上车时,可自动开启投影装置的校准功能,在图12A所示的中控屏幕上显示“车辆已激活投影装置的校准,请保持正确的坐姿”的提示消息,然后通过获取用户的人眼位置,对投影装置的成像模型的参数进行校准,并在校准完成后,在图12B所示的中控屏幕上显示“车辆已完成投影装置的校准”的提示消息。在一些变形实施例中,在该校准过程中,用户还可以根据个人的主观体验在该车辆的中控屏幕上对成像模型的参数进行调整。在另一些变形实施例中,该校准过程还可以通过语音交互实现,车辆可以通过音响系统向用户发送语音提示,并通过麦克风获取用户的语音反馈,从而实现该校准过程。
如上所述,本申请实施例提供的投影方法可以实现上述投影装置的校准、标定以及投影显示等功能,其应用过程可以为车辆静止启动的状态下,也可以为车辆的行驶过程中。例如,图5示出了本申请实施例提供的一种标定方法的流程图,该标定方法可以在车辆静止启动的状态下实现,具体涉及了成像模型的构建过程及调整过程,调整完成的成像模型能够针对不同用户的人眼位置自动校准参数,使得投影显示的图像始终与现实世界的环境信息相融合。本实施例中,投影装置可以为AR-HUD,成像模型可以为成像视锥体,用户可以为车辆的驾驶员,该标定方法的验证可以采用驾驶员的人眼作为验证方式。图5所示的该标定方法包括:
S501:构建以驾驶员人眼为原点的虚拟的成像视锥体;
示例性的,可以选用车内的AR-HUD或者车内其他的位置固定的点作为原点构建现实坐标系和虚拟坐标系,确定所述虚拟坐标系与所述现实坐标系的对应关系。其中,现实坐标系是现实三维空间的坐标系,用于对现实世界中的人眼、AR-HUD的虚像面和标定物等进行现实位置的确定,虚拟坐标系是虚拟三维空间的坐标系,用于对现实世界中的人眼、AR-HUD的虚像面和标定物等进行虚拟位置的确定,以便于进行三维AR效果的绘制。
本实施例中,由于驾驶员人眼的位置会不断变化,因此一般情况下不选用人眼作 为构建现实坐标系和虚拟坐标系的原点。
本实施例中,根据构建的现实坐标系,将检测到的人眼、标定物以及AR-HUD的安装位置、投影角度等信息,引入到该现实坐标系中,即可分别获取该现实坐标系下的人眼的位置、AR-HUD的虚像面的位置和标定物的位置,该位置具体可以为该现实坐标系下的三维坐标。其中,AR-HUD的虚像面是人眼透过汽车的挡风玻璃所能看到的虚像平面,通过人眼的观察,可将该虚像面显示的二维图像映射到三维的现实世界。为便于标定,选用标定物时,需要选择所述人眼和所述AR-HUD的虚像面构成的观察范围内,其中,选择的标定物可以为一具有规则几何形状的物体,示例性的,可以为一四边形的标定板,基于该标定板生成的标定图像可以具体为一四边形的虚拟框,将该虚拟框投影到AR-HUD的虚像面进行显示时,通过人眼观察该虚拟框与标定板是否完全重合,以验证该虚拟框与标定板在AR-HUD的虚像面是否对齐显示。
根据所述现实坐标系下的人眼的位置,以及所述虚拟坐标系与所述现实坐标系的对应关系,获取所述虚拟坐标系下的人眼的位置;以所述虚拟坐标系下的人眼的位置为原点,并根据设定的视场角,构建所述成像视锥体,所述标定物位于所述成像视锥体的视锥范围内。本实施例中,构建的成像视锥体具体可以为一平视的成像视锥体,即成像视锥体的原点与该成像视锥体的近平面、远平面的中心点在一条水平线上;该成像视锥体还可以为一俯视的成像视锥体,即成像视锥体的原点高于该成像视锥体的近平面、远平面的中心点,使原点以一俯视角与近平面、远平面构成视锥体。
通过构建与现实坐标系原点相同的虚拟坐标系,可实现虚拟空间与现实空间的对应,在进行标定图像的生成时,仅需要将现实坐标系下的标定物和人眼的位置对应转换到该虚拟坐标系下即可,并且由于虚拟坐标系与现实坐标系的原点相同,转换计算的过程会相对简单。根据虚拟坐标系下的人眼的位置,可选择一合适的视场角,构建以该虚拟坐标系下的人眼的位置为原点的成像视锥体,由此可将该成像视锥体的视锥范围内的所有物体进行增强现实AR效果的绘制,例如车道线、交通标识等复杂效果的绘制。
S502:根据位于车外的标定物在所述成像视锥体中的位置,在所述成像视锥体的成像面生成所述标定物的标定图像;
根据构建的虚拟坐标系与现实坐标系的对应关系,将现实坐标系下的标定物转换到该虚拟坐标系下,并获取该虚拟坐标系下的该标定物的位置,该标定物位于该虚拟坐标系下的成像视锥体的视锥范围内,根据该标定物在成像视锥体的位置和成像视锥体的原点,基于成像视锥体的向前映射图像的成像原理,选取该标定物和成像视锥体的原点之间的一个近平面作为成像面,并根据该标定物与成像面的距离关系,在该成像面进行锥形映射,生成该标定物的标定图像,其中该标定图像为一二维图像。
S503:将包含所述标定图像的所述成像面投影到增强现实抬头显示器AR-HUD的虚像面进行显示;
当该成像视锥体的成像面作为AR-HUD的输入图像,投影到AR-HUD的虚像面显示时,该标定图像也会根据其在成像面的位置,在AR-HUD的虚像面的对应位置进行显示,由此使得生成的标定图像投影到现实世界的标定物上,并通过人眼的观察视角,映射到三维世界中,实现增强显示。
在一些实施例中,将成像视锥体的成像面作为输入图像输入到AR-HUD中时,AR-HUD会根据其所能显示画面的限制,对接收到的输入图像进行裁剪,裁剪出合适大小的画面在其虚像面进行显示。
S504:调整所述成像视锥体的参数,使所述人眼观察到的位于所述虚像面的标定图像与所述标定物对齐。
本实施例中,直接使用人眼对标定图像与标定物在AR-HUD的虚像面的对齐效果进行验证,该对齐效果具体可以包括尺度对齐以及位置对齐。
当通过人眼观察,该标定图像与标定物在AR-HUD的虚像面的尺度未对齐时,由于成像面是所述成像视锥体的一个近平面,本实施例可通过调整该成像视锥体的视场角,以调整该成像面的尺度,由于成像面与成像视锥体的原点的相对距离并未改变,因此成像面生成的标定图像的尺度大小不会发生改变,但其相对于成像面的比例大小发生改变。尺度得到调整后的成像面,作为输入图像重新输入到AR-HUD中,并投影到AR-HUD的虚像面进行显示时,标定图像在AR-HUD的虚像面的尺度会对应改变。因此,根据人眼观察到AR-HUD的虚像面的显示效果,适应性的调整成像视锥体的视场角参数,即可使所述标定图像与所述标定物在所述AR-HUD的虚像面的尺度对齐显示。
同理,当通过人眼观察,该标定图像与标定物在AR-HUD的虚像面的位置未对齐时,由于成像面是所述成像视锥体的一个近平面,本实施例可通过调整该成像视锥体的成像面在虚拟坐标系下所属的二维平面的位置,由于虚拟坐标系下的目标物的位置并未改变,当成像面的二维位置发生变化时,因此成像面生成的标定图像在成像面的相对位置会适应改变,位置得到调整后的成像面,作为输入图像重新输入到AR-HUD中,并投影到AR-HUD的虚像面进行显示时,标定图像在AR-HUD的虚像面的相对位置也会对应改变。因此,根据人眼观察到AR-HUD的虚像面的显示效果,适应性的调整所述成像视锥体的成像面在所述虚拟坐标系的二维偏移量,即可使所述标定图像与所述标定物在所述AR-HUD的虚像面的位置对齐显示。
在本申请的一些实施例中,由于构建的成像视锥体中,人眼的位置影响成像视锥体的原点的初始位置,因此,根据对齐显示的标定图像和标定物,可得到调整后的所述成像视锥体与人眼的位置的对应关系,该对应关系中,人眼的位置发生变化时,成像视锥体的原点也会发生变化,该成像视锥体的成像面的位置会根据上述的二维偏移量对应调整。由此实现驾驶员的人眼发生变化,或者出现不同的驾驶员时,成像视锥体的参数会对应发生变化,从而保证人眼观察到的AR-HUD的虚像面显示的标定图像始终与现实世界对齐,降低投影显示效果的抖动,防止眩晕。同时,调整后的成像视锥体还可用于对驾驶过程中检测到的现实世界的物体进行标定图像的实时生成,并在AR-HUD的虚像面进行实时显示,由此增强驾驶员对路面信息的获取,实现沉浸式的体验。
本申请实施例还提供了一种AR-HUD的投影方法,该方法的目标是使得人眼观察到的AR-HUD投影显示的AR效果能够与现实世界对齐,为达到该目标,本实施例采用人眼作为直接验证方式,通过构建与现实中的人眼成像模型对应的虚拟成像模型, 对AR-HUD的显示画面与现实世界进行尺度对齐和位置对齐的标定。本实施例还通过人眼检测模块实时获取人眼的位置信息,能够实现AR-HUD的显示画面针对人眼的位置变化的实时适配功能,从而保证AR-HUD的显示画面始终与现实世界的对齐,保证AR-HUD的显示效果和沉浸式体验。
如图6所示,首先对本实施例所述投影方法的系统架构进行介绍,本实施例的系统架构包括道路检测模块601、AR模块602、HUD模块603和人眼检测模块604;其中,该HUD模块603中具体还包括对齐模块6031和显示模块6032;
其中,所述道路检测模块601可以为图2所示的车外采集装置,例如激光雷达、车载摄像头或其他具有图像采集或光学扫描功能的一个设备或多个组合设备,可以设置在车辆的顶部、头部或车辆座舱的后视镜的朝向车外的一侧,其主要用于对车辆前方的环境进行图像信息和位置信息进行检测和采集,车辆前方的环境可以包括前方车辆、障碍物、道路指示等相关信息;所述人眼检测模块604可以为图2所示的车内采集装置,例如车载摄像头、人眼检测仪等设备,可以设置在车辆座舱的A柱、B柱或车辆座舱的后视镜的朝向用户的一侧,其主要用于对车辆座舱的驾驶员或乘客的人眼位置信息进行检测和采集;所述AR模块602和HUD模块603可以集成在图2所示的投影装置20中,以一个完整的AR-HUD终端产品实现。
在驾驶过程中,通过道路检测模块601获得道路中的环境信息,例如行人及车道三维坐标、车道线位置等;将检测得到的环境信息传入到AR模块602,在该AR模块602中构建三维的虚拟坐标系,并在环境信息的对应位置实现三维AR效果的绘制,并将该三维的AR效果映射成二维图像;完成二维图像的映射后,结合人眼检测模块604实时检测的人眼的位置,通过HUD模块603中的对齐模块6031完成二维图像与环境信息之间的尺度对齐与位置对齐;最后将对齐后的二维图像输入到显示模块6032上进行投影显示。此时在AR-HUD的有效投影显示范围内,无论人眼的位置如何变化,始终可以观察到AR-HUD投影显示的二维图像与道路中的环境信息完全对齐。
基于图6所示的系统架构,参照如图7所示的流程图,对本实施例提供的AR-HUD的投影方法进行详细介绍,根据该方法所实现的AR-HUD与现实世界的对齐效果将会贯穿于整个驾驶过程中,而在驾驶开始前,可以提前实现AR-HUD与现实世界的对齐标定,该对齐标定过程具体包括:
S701:以空间中某一点为原点构建现实坐标系和虚拟坐标系;
本实施例可以以车内某一点为原点,同时构建现实坐标系和虚拟坐标系,该现实坐标系和虚拟坐标系的原点相同且具有对应关系。具体的,该车内某一点可以为车内的摄像头,或者可以为车内的AR-HUD。其中,该现实坐标系用于确定现实世界中的环境信息的三维坐标,其单位可以为米,该虚拟坐标系的单位可以为像素,其中现实坐标系下的1米与虚拟坐标系下的1个单位具有等比例的对应关系。根据获取到的环境信息在现实坐标系中的三维坐标,以及现实坐标系与虚拟坐标系的对应关系,可在虚拟坐标系中进行对应该环境信息的三维的AR效果的绘制,并将该三维的AR效果映射成二维图像,本实施例的对齐标定过程,即是二维图像与环境信息的对齐标定过程。
S702:在AR-HUD的虚像面所在位置设置一标定板;
根据人眼检测模块检测到的驾驶员的人眼,可获取现实坐标系下的人眼的位置,根据AR-HUD的安装位置及投影角度,可获取现实坐标系下的AR-HUD的虚像面的位置,其中该AR-HUD的虚像面为通过驾驶员的人眼观察到的AR-HUD的虚像显示平面,一般AR-HUD的虚像面位于驾驶员的人眼朝向车辆前方的7-10米处,通过驾驶员的人眼观察该虚像面上的二维图像,可将二维图像映射到现实世界中,实现三维的显示效果;
通过在AR-HUD的虚像面设置一标定板,该标定板在本实施例的对齐标定过程中作为标定参照物,本实施例中,该标定板具体可以为一具有规则几何形状的基板。
S703:在虚拟坐标系的成像面生成目标框,并投影到AR-HUD的虚像面进行显示;
根据步骤S702中,现实坐标系下的人眼的位置、AR-HUD的虚像面的位置,以及现实坐标系和虚拟坐标系的对应关系,在虚拟坐标系中确定对应的虚拟人眼,由于现实坐标系和虚拟坐标系具有相同原点,该虚拟人眼在虚拟坐标系下的位置对应现实坐标系下的人眼的位置,成像面在虚拟坐标系下的位置对应现实坐标系下的AR-HUD的虚像面位置,并且该成像面与虚像面具有现实坐标系和虚拟坐标系的对应关系相同的对应关系。
如图8A所示,以该虚拟人眼为原点,并设定一视场角(Field of view,FOV),在虚拟坐标系中构建锥形的透视投影模型,该透视投影模型具体为一成像视锥体,以实现对现实世界的环境信息的AR效果绘制,以及对该AR效果的二维映射。其中该虚拟人眼为成像视锥体的原点,该视场角决定该成像视锥体的视锥范围。通过选择该成像视锥体的一个近平面作为成像面,本实施例可以根据AR-HUD的虚像面在现实坐标系中的位置,选择成像视锥体在该虚拟坐标系下的对应位置的近平面作为成像面,以使成像面在虚拟坐标系下的位置与AR-HUD的虚像面在现实坐标系下的位置对应相同。
如图8A所示,在该成像视锥体的无限远的距离处还具有一远平面,根据成像视锥体的成像原理,位于该成像视锥体的视场角(Field of view,FOV)范围内,且位于成像面与远平面之间的绘制的AR效果,均会根据其距离远近,以锥形映射的方式等比例映射在该成像面中,即在成像面生成AR效果的二维图像。如图8B所示,将该映射有二维图像的成像面作为输入图像发送到AR-HUD,该成像面与AR-HUD的虚像面具有对应的投影关系,根据该投影关系,可将成像面上的二维图像进行投影显示在AR-HUD的虚像面。其中成像视锥体中的绘制过程以及二维图像的投影过程,具体是将虚拟坐标系下的AR效果的三维坐标进行矩阵变换,转换到现实坐标系下的坐标,该矩阵变换的公式为,
S=P*V*O
其中,O为虚拟坐标系下绘制的AR效果的三维坐标,V为虚拟坐标系下的虚拟人眼的观察矩阵,P为成像视锥体的成像面的映射矩阵,S为现实坐标系下的HUD的虚像面的坐标。通过将虚拟坐标系下绘制的AR效果以二维图像的形式映射到成像视锥体的成像面,并将该成像面作为AR-HUD的输入图像,在AR-HUD的虚像面进行投影显示。
本实施例中,可根据AR-HUD的虚像面的标定板,在该成像视锥体的成像面生成对应的目标框,该目标框具有与标定板的相同的几何形状,然后将该成像面作为输入图像,在AR-HUD的虚像面进行投影显示,本实施例中的对齐标定过程,具体是将该AR-HUD的虚像面显示的目标框与标定板对齐的过程。
S704:观察目标框与标定板的尺度是否对齐;
本实施例中,尺度是否对齐具体可以为目标框在AR-HUD的虚像面的尺寸大小与标定板的尺寸大小是否对齐,若对齐,则进入步骤S706,若未对齐,则进入步骤S705。
S705:尺度对齐调整;
当目标框与标定板的尺度未对齐时,表示在成像面生成的目标框经过投影后,与AR-HUD的虚像面的标定板的尺度未对齐,由于成像面作为AR-HUD的输入图像时,AR-HUD会根据其显示像素,对输入图像进行裁剪,即对输入的成像面图像进行裁剪,裁剪出与显示像素匹配的尺度进行显示。在虚拟坐标系的单位、成像视锥体、以及AR-HUD的显示像素均已确定的前提下,当目标框与标定板的尺度未对齐时,需要等比例调整成像视锥体的成像面的尺度大小,以实现等比例调整AR-HUD裁剪的图像的尺度大小,进而实现等比例调整目标框在裁剪出的图像中的相对大小,使其与标定板的尺度对齐;
本实施例中,调整成像视锥体的成像面的尺度大小,可通过调整成像视锥体的视场角实现,具体的,当目标框的尺度比标定板大时,此时可通过增大成像视锥体的视场角,以等比例放大成像面的尺度,进而实现输入到AR-HUD的成像面的等比例放大。同理,当目标框的尺度比标定板小时,此时可通过减小成像视锥体的视场角,以等比例缩小成像面的尺度,进而实现输入到AR-HUD的成像面的等比例缩小。由此,在生成的目标框的大小不变、成像面的位置不变的情况下,AR-HUD的虚像面显示的目标框的尺度调整,可通过调整成像视锥体的视场角的大小实现,以完成与标定板的尺度对齐,即完成成像视锥体的成像面与AR-HUD的虚像面的尺度对齐。
S706:观察目标框与标定板的位置是否对齐;
通过步骤S705的调整后,尽管使得虚拟坐标系下的成像视锥体的成像面与现实坐标系下的AR-HUD的虚像面实现了尺度对齐,然而AR-HUD的虚像面显示的目标框和标定板的位置仍然存在偏移,造成该偏移的原因通常有两种,其一是在虚拟坐标系下构建的成像视锥体中,虚拟人眼对应的是近平面和远平面的中心点,如图9A所示;而在现实坐标系下,AR-HUD的虚像面位置通常位于人眼的位置的下方,即虚像面的中心点低于人眼,如图9B所示。因此,当成像面作为输入图像投影到AR-HUD的虚像面进行显示时,实际显示的二维图像会低于现实世界中的环境信息,导致显示的目标框的位置低于标定板的位置。其二是,在人眼观察过程中,人眼的位置并非固定的,而对于安装完成的AR-HUD,其虚像面的位置是固定的,因此当人眼的位置发生移动时,人眼与AR-HUD的虚像面的中心点的相对位置会对应的发生偏移,导致显示的目标框与标定板的位置无法始终对齐。
本实施例中,位置是否对齐具体可以为目标框在AR-HUD的虚像面的位置与标定板的位置是否对齐,若对齐,则进入步骤S708,若未对齐,则进入步骤S707。
S707:位置对齐调整;
当目标框与标定板的位置未对齐时,表示在成像面生成的目标框经过投影后,与AR-HUD的虚像面的标定板的位置未对齐,由于成像面作为AR-HUD的输入图像时,HUD会根据其显示像素,对输入图像进行裁剪,即对输入的成像面图像进行裁剪,裁剪出与显示像素匹配的尺度进行显示。在虚拟坐标系的单位、成像视锥体、以及AR-HUD的显示像素、裁剪位置均已确定的前提下,当目标框与标定板的位置未对齐时,需要调整成像视锥体的成像面在其所属平面的位置,以调整输入到AR-HUD成像面的位置,进而调整目标框在裁剪出的图像中的相对位置,使其与标定板的位置对齐;
本实施例中,可通过调整成像视锥体的成像面在虚拟坐标系下的二维偏移量,以调整目标框在该成像面的相对位置。需要说明的是,调整成像面在虚拟坐标系下的二维偏移量,实质上是调整成像面在其所属平面的水平位置或垂直位置。
具体的,如图10A所示,当显示的目标框的垂直位置比标定板的垂直位置低时,此时可通过向下垂直移动成像视锥体的成像面在虚拟坐标系下的位置,以向上垂直移动目标框与成像面的相对位置,使得目标框在AR-HUD裁剪出的图像中的相对位置高于原来的位置,从而使得调整后显示的目标框与标定板的垂直位置对齐。同理,如图10B所示,当目标框的水平位置比标定板的水平位置相对靠右时,此时可通过向右水平移动成像视锥体的成像面在虚拟坐标系下的位置,以向左水平移动目标框与成像面的相对位置,使得目标框在AR-HUD裁剪出的图像中的相对位置偏左于原来的位置,从而使得调整后显示的目标框与标定板的水平位置对齐。由此,在生成的目标框的大小不变、成像面的尺度不变的情况下,AR-HUD的虚像面显示的目标框的位置调整,可通过调整成像视锥体的成像面的位置实现,以完成与标定板的位置对齐,即完成成像视锥体的成像面与AR-HUD的虚像面的位置对齐。
其中,可根据补偿原理,对该成像视锥体的成像面的水平偏移量和垂直偏移量(X offset,Y offset)进行下述计算,
Figure PCTCN2021094344-appb-000001
Figure PCTCN2021094344-appb-000002
其中,虚拟坐标系的单位为像素,现实坐标系的单位为米,则1像素=m米,(X hud,Y hud)为现实坐标系下的AR-HUD的虚像面的中心点的横纵坐标,(X eye,Y eye)为现实坐标系下的人眼的横纵坐标。根据上述计算公式,可计算得到成像视锥体的成像面在虚拟坐标系下所需调整的水平偏移量X offset和垂直偏移量Y offset,根据该水平偏移量X offset和垂直偏移量Y offset,以像素为单位对成像视锥体的成像面进行二维方向上的调整,使得AR-HUD的虚像面显示的目标框与标定板的位置对齐。
S708:将标定板移动至AR-HUD的虚像面的后方;
完成步骤S704-S706的尺度对齐和位置对齐后,可以通过移动标定板在现实坐标系下的位置,对尺度对齐和位置对齐的效果进行验证。通过将标定板移动至AR-HUD的虚像面的后方,即将标定板移动至距离人眼更远的距离,以观察虚像面显示的目标框与标定板是否对齐。
S709:观察目标框与标定板是否完全对齐;
将标定板移动至距离人眼更远的距离时,该标定板在虚拟坐标系下的位置仍然处 于成像视锥体的成像面和远平面之间,此时根据成像原理,在成像面生成的目标框的尺度会随着标定板的距离的拉远而等比例缩小,通过观察重新生成的目标框在AR-HUD的虚像面的显示效果与移动至远距离处的标定板是否完全对齐,以验证本方法的对标标定效果。若完全对齐,则进入步骤S710,若未完全对齐,则进入步骤S704,重新进行尺度对齐和位置对齐的调整步骤。
S710:将标定板移动至AR-HUD的虚像面的前方;
通过将标定板移动至AR-HUD的虚像面的前方,即将标定板移动至距离人眼更近的距离,以观察虚像面是否可显示该标定板对应的目标框,以及该目标框与标定板是否完全对齐。
S711:AR-HUD的虚像面可显示目标框;
由于构建的成像视锥体中,选取成像面时,是根据现实坐标系下的AR-HUD的虚像面的位置选取的,因此当将标定板移动至AR-HUD的虚像面的前方时,标定板在虚拟坐标系下的对应位置,也相对移动到了成像面的前方,根据成像视锥体的成像原理,此时位于成像面前方的标定板无法映射到该成像面上。
S712:近距离显示调整;
基于成像视锥体的成像原理,本实施例根据标定板在虚拟坐标系下的对应位置,调整成像面在成像视锥体中的位置,即重新选择该成像视锥体中的位于该标定板在虚拟坐标系下的对应位置与成像视锥体的原点之间的近平面,作为新的成像面,并根据成像原理,在该新的成像面中重新生成对应该标定板的目标框。本实施例中,成像面与原点的相对距离的变化,不会改变成像面的尺度,该成像面的尺度仅由成像视锥体的视场角决定,而通过调整成像面相对于成像视锥体的原点的距离,以对该成像视锥体的视锥范围内的环境信息进行选择性的二维映射,由此改变成像面可生成的二维图像的数量。
S713:观察目标框与标定板是否完全对齐;
通过观察重新生成的目标框在AR-HUD的虚像面的显示效果与移动至近距离处的标定板是否完全对齐,以验证本方法的对标标定效果。若完全对齐,则进入步骤S714,若未完全对齐,则进入步骤S704,重新进行尺度对齐和位置对齐的调整步骤。
S714:完成AR-HUD与现实世界的对齐;
通过改变标定板在现实坐标系下的位置,并将改变位置的标定板对应生成的目标框与标定板在AR-HUD的虚像面处的显示效果进行对齐,由此实现基于人眼的位置构建的成像视锥体的成像面与AR-HUD的虚像面的对齐标定,完成该对齐标定后,当驾驶员的人眼位置发生变化后,或者不同的驾驶员进行驾驶时,构建的成像视锥体均会对应调整,以保证人眼观察到的AR-HUD的虚像面的显示效果始终与现实世界完全对齐,提高驾驶员的观察体验,达到更好的辅助驾驶的效果。
如图11所示,本申请实施例提供了一种投影装置,该投影装置可以用于实现上述实施例中的投影方法、标定方法、AR-HUD的投影方法与显示方法,如图11所示,该投影装置1100具有获取模块1101、投影模块1102、调整模块1103。
获取模块1101用于执行上述投影方法中的S401步骤以及其中的示例。投影模块 1102用于执行上述投影方法中的S402、上述标定方法中的S501~S503、上述AR-HUD的投影方法中的S701~S703中任一步骤以及其中任一可选的示例。调整模块1103用于执行上述投影方法中的S403、上述标定方法中的S504、上述AR-HUD的投影方法中的S704~S714中任一步骤以及其中任一可选的示例。具体可参见方法实施例中的详细描述,此处不做赘述。
在一些实施例中,该投影装置1100还可以具有提示模块1104,该提示模块1104可以实现上述投影方法、标定方法、AR-HUD的投影方法中涉及人机交互的部分,通过向用户发送提示消息,引导用户参与完成上述投影方法、标定方法、AR-HUD的投影方法中的校准过程或调整过程,例如,可以通过该提示模块1104提示用户通过人眼确定所述标定物与所述标定物的投影面是否重合;还可以在获取用户的校准需求时,通过该提示模块1104向用户发送校准开始的提示消息,以及校准完成的提示消息。
应理解的是,本申请实施例中的投影装置可以由软件实现,例如可以由具有上述功能计算机程序或指令来实现,相应计算机程序或指令可以存储在终端内部的存储器中,通过处理器读取该存储器内部的相应计算机程序或指令来实现上述功能。或者,本申请实施例的投影装置还可以由硬件来实现,例如,该获取模块1101可以由车辆上的采集装置实现,例如车载摄像头或激光雷达等,或者,该获取模块1101也可以由处理器与车辆上的车载摄像头或激光雷达之间的接口电路来实现。该提示模块1104可以由车辆上的中控屏幕或音响、麦克风等装置来实现。该投影模块1102可以由车辆上的HUD或AR-HUD实现,或者该投影模块1102也可以由HUD或AR-HUD的处理器实现,又或者该投影模块还可以由手机或平板等终端实现。该调整模块1103可以由HUD或AR-HUD的处理器实现,或者该调整模块1103也可以由车机或车载电脑等车载处理装置的处理器实现。或者,本申请实施例中的投影装置还可以由处理器和软件模块的结合实现。
应理解,本申请实施例中的装置或模块的处理细节可以参考图4、图5、图7所示的实施例及相关扩展实施例的相关表述,本申请实施例将不再重复赘述。
另外,本申请实施例还提供了具有上述投影装置的车辆,该车辆可以是家用轿车或载货汽车等,还可以是特种车辆例如救护车、消防车、警车或工程抢险车等。该车辆可以采用本地存储的方式,存储上述实施例中的成像模型及相关训练集,当需要实现上述投影方法、标定方法时,可以更快的载入成像模型,实现快速根据用户人眼位置的投影显示校准或调整,具有低延时、体验好的优势。除此之外,该车辆还可以采用与云端交互的方式,通过从云端下载的方式,将云端存储的成像模型下载到本地,以实现根据用户人眼位置的投影显示校准或调整,采用云端交互具有数据量丰富、模型更新及时,精确度更高的优势。
图13是本申请实施例提供的一种计算设备1500的结构性示意性图。该计算设备可以作为投影装置,执行上述投影方法、标定方法或AR-HUD的投影方法中的各可选实施例,该计算设备可以是终端,也可以是终端内部的芯片或芯片系统。如图13所示,该计算设备1500包括:处理器1510、存储器1520、通信接口1530、总线1540。
应理解,图13所示的计算设备1500中的通信接口1530可以用于与其他设备之间进行通信,具体可以包括一个或多个收发电路或接口电路。
其中,该处理器1510可以与存储器1520连接。该存储器1520可以用于存储该程序代码和数据。因此,该存储器1520可以是处理器1510内部的存储单元,也可以是与处理器1510独立的外部存储单元,还可以是包括处理器1510内部的存储单元和与处理器1510独立的外部存储单元的部件。
可选的,计算设备1500还可以包括总线1540。其中,存储器1520、通信接口1530可以通过总线1540与处理器1510连接。总线1540可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。所述总线1540可以分为地址总线、数据总线、控制总线等。为便于表示,图13中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。
应理解，在本申请实施例中，该处理器1510可以采用中央处理单元(central processing unit，CPU)。该处理器还可以是其它通用处理器、数字信号处理器(digital signal processor，DSP)、专用集成电路(application specific integrated circuit，ASIC)、现场可编程门阵列(field programmable gate array，FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。或者该处理器1510采用一个或多个集成电路，用于执行相关程序，以实现本申请实施例所提供的技术方案。
该存储器1520可以包括只读存储器和随机存取存储器,并向处理器1510提供指令和数据。处理器1510的一部分还可以包括非易失性随机存取存储器。例如,处理器1510还可以存储设备类型的信息。
在计算设备1500运行时，所述处理器1510执行所述存储器1520中的计算机执行指令，以执行上述投影方法、标定方法或AR-HUD的投影方法的任一操作步骤以及其中任一可选的实施例。
应理解,根据本申请实施例的计算设备1500可以对应于执行根据本申请各实施例的方法中的相应主体,并且计算设备1500中的各个模块的上述和其它操作和/或功能分别为了实现本实施例各方法的相应流程,为了简洁,在此不再赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请实施例还提供了一种计算机可读存储介质，其上存储有计算机程序，该程序被处理器执行时用于执行上述投影方法、标定方法或AR-HUD的投影方法，该方法包括上述各个实施例所描述的方案中的至少之一。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是,但不限于,电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括、但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码，所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如"C"语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络，包括局域网（LAN）或广域网（WAN），连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。
需要说明的是,本申请所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。通常在附图中描述和示出的本申请实施例的组件可以以各种不同的配置来布置和设计。因此,上述对在附图中提供的本申请的实施例的详细描述并非旨在限制要求保护的本申请的范围,而是仅仅表示本申请的选定实施例。基于本申请的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
说明书和权利要求书中的词语“第一、第二、第三等”或模块A、模块B、模块C等类似用语,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
在上述的描述中,所涉及的表示步骤的标号,如S110、S120……等,并不表示一定会按此步骤执行,还可以包括中间的步骤或者由其他的步骤代替,在允许的情况下可以互换前后步骤的顺序,或同时执行。
说明书和权利要求书中使用的术语“包括”不应解释为限制于其后列出的内容;它不排除其它的元件或步骤。因此,其应当诠释为指定所提到的所述特征、整体、步骤或部件的存在,但并不排除存在或添加一个或更多其它特征、整体、步骤或部件及其组群。因此,表述“包括装置A和B的设备”不应局限为仅由部件A和B组成的设备。
本说明书中提到的“一个实施例”或“实施例”意味着与该实施例结合描述的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在本说明书各处出现的用语“在一个实施例中”或“在实施例中”并不一定都指同一实施例,但可以指同一实施例。此外,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
注意,上述仅为本申请的较佳实施例及所运用的技术原理。本领域技术人员会理解,本发明不限于这里所述的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本发明的保护范围。因此,虽然通过以上实施例对本申请进行了较为详细的说明,但是本发明不仅仅限于以上实施例,在不脱离本发明的构思的情况下,还可以包括更多其他等效实施例,均属于本发明的保护范畴。

Claims (19)

  1. 一种投影方法,其特征在于,包括:
    获取标定物的图像信息和位置信息;
    根据所述标定物的图像信息和位置信息、以及成像模型,投影所述标定物;
    在所述标定物与所述标定物的投影面的重合度小于第一阈值时,调整所述成像模型的参数。
  2. 根据权利要求1所述的方法,其特征在于,所述调整所述成像模型的参数包括:
    调整所述成像模型的视场角和成像面位置中的一个或多个参数。
  3. 根据权利要求2所述的方法,其特征在于,所述在所述标定物与所述标定物的投影面的重合度小于第一阈值时,调整所述成像模型的参数具体包括:
    在所述标定物与所述标定物的投影面的面积差大于第二阈值时,调整所述成像模型的视场角。
  4. 根据权利要求2所述的方法,其特征在于,所述在所述标定物与所述标定物的投影面的重合度小于第一阈值时,调整所述成像模型的参数具体包括:
    在所述标定物与所述标定物的投影面的偏移量大于第三阈值时,调整所述成像模型的成像面的二维位置。
  5. 根据权利要求1所述的方法,其特征在于,
    所述标定物与所述标定物的投影面的重合度是通过所述标定物与所述标定物的投影面的像素偏移确定的;所述像素偏移是通过摄像头采集的包含所述标定物与所述标定物的投影面的图像确定的。
  6. 根据权利要求1所述的方法,其特征在于,
    所述成像模型是根据包括多个训练样本的训练集训练的,其中所述训练样本包括人眼位置信息参数、标定物的图像信息和位置信息参数、以及所述标定物与所述标定物的投影面的重合度参数。
  7. 根据权利要求1所述的方法,其特征在于,还包括:
    获取用户的校准需求,向用户发送校准开始的提示消息;
    获取用户的人眼位置,根据所述用户的人眼位置对所述成像模型的参数进行校准;
    在校准完成后,向用户发送校准完成的提示消息。
  8. 根据权利要求7所述的方法,其特征在于,还包括:
    通过人眼确定所述标定物与所述标定物的投影面是否重合;
    在所述标定物与所述标定物的投影面未重合时,根据用户的调整指令,对校准完成的所述成像模型的参数进行调整。
  9. 一种投影装置,其特征在于,包括:
    获取模块,用于获取标定物的图像信息和位置信息;
    投影模块,用于根据所述标定物的图像信息和位置信息、以及成像模型,投影所述标定物;
    调整模块,用于在所述标定物与所述标定物的投影面的重合度小于第一阈值时,调整所述成像模型的参数。
  10. 根据权利要求9所述的装置,其特征在于,所述调整模块用于调整所述成像模型的参数时,具体用于:
    调整所述成像模型的视场角和成像面位置中的一个或多个参数。
  11. 根据权利要求10所述的装置,其特征在于,所述调整模块具体用于:
    在所述标定物与所述标定物的投影面的面积差大于第二阈值时,调整所述成像模型的视场角。
  12. 根据权利要求10所述的装置,其特征在于,所述调整模块具体用于:
    在所述标定物与所述标定物的投影面的偏移量大于第三阈值时,调整所述成像模型的成像面的二维位置。
  13. 根据权利要求9所述的装置,其特征在于,
    所述标定物与所述标定物的投影面的重合度是通过所述标定物与所述标定物的投影面的像素偏移确定的;所述像素偏移是通过摄像头采集的包含所述标定物与所述标定物的投影面的图像确定的。
  14. 根据权利要求9所述的装置,其特征在于,
    所述成像模型是根据包括多个训练样本的训练集训练的,其中所述训练样本包括人眼位置信息参数、标定物的图像信息和位置信息参数、以及所述标定物与所述标定物的投影面的重合度参数。
  15. 根据权利要求9所述的装置,其特征在于,还包括:
    提示模块,用于在获取用户的校准需求时,向用户发送校准开始的提示消息;
    所述调整模块还用于根据获取的用户的人眼位置,对所述成像模型的参数进行校准;
    所述提示模块还用于在校准完成后,向用户发送校准完成的提示消息。
  16. 根据权利要求15所述的装置,其特征在于,
    所述提示模块还用于提示用户通过人眼确定所述标定物与所述标定物的投影面是否重合;
    所述调整模块还用于在所述标定物与所述标定物的投影面未重合时,根据用户的调整指令,对校准完成的所述成像模型的参数进行调整。
  17. 一种计算设备,其特征在于,包括:
    处理器,以及
    存储器,其上存储有程序指令,所述程序指令当被所述处理器执行时使得所述处理器执行权利要求1至8任意一项所述的投影方法。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读介质存储有程序代码,所述程序代码当被计算机或处理器执行时使得所述计算机或所述处理器执行权利要求1至8任意一项所述的投影方法。
  19. 一种计算机程序产品,其特征在于,所述计算机程序产品包含的程序代码,被计算机或处理器执行时使得所述计算机或所述处理器执行权利要求1至8任意一项所述的投影方法。
PCT/CN2021/094344 2021-05-18 2021-05-18 一种投影方法及装置、车辆及ar-hud WO2022241638A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202180001479.9A CN114258319A (zh) 2021-05-18 2021-05-18 一种投影方法及装置、车辆及ar-hud
EP21940095.9A EP4339938A1 (en) 2021-05-18 2021-05-18 Projection method and apparatus, and vehicle and ar-hud
PCT/CN2021/094344 WO2022241638A1 (zh) 2021-05-18 2021-05-18 一种投影方法及装置、车辆及ar-hud
US18/511,141 US20240087491A1 (en) 2021-05-18 2023-11-16 Projection Method and Apparatus, Vehicle, and AR-HUD

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/094344 WO2022241638A1 (zh) 2021-05-18 2021-05-18 一种投影方法及装置、车辆及ar-hud

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/511,141 Continuation US20240087491A1 (en) 2021-05-18 2023-11-16 Projection Method and Apparatus, Vehicle, and AR-HUD

Publications (1)

Publication Number Publication Date
WO2022241638A1 true WO2022241638A1 (zh) 2022-11-24

Family

ID=80796581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094344 WO2022241638A1 (zh) 2021-05-18 2021-05-18 一种投影方法及装置、车辆及ar-hud

Country Status (4)

Country Link
US (1) US20240087491A1 (zh)
EP (1) EP4339938A1 (zh)
CN (1) CN114258319A (zh)
WO (1) WO2022241638A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578682A (zh) * 2022-12-07 2023-01-06 北京东舟技术股份有限公司 增强现实抬头显示测试方法、系统以及存储介质
US11953697B1 (en) 2023-05-05 2024-04-09 Ford Global Technologies, Llc Position tracking sensor in a head up display

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821723B (zh) * 2022-04-27 2023-04-18 江苏泽景汽车电子股份有限公司 一种投影像面调节方法、装置、设备及存储介质
GB2612663B (en) * 2022-05-17 2023-12-20 Envisics Ltd Head-up display calibration
CN116055694B (zh) * 2022-09-02 2023-09-01 深圳市极米软件科技有限公司 一种投影图像控制方法、装置、设备及存储介质
CN116974417B (zh) * 2023-07-25 2024-03-29 江苏泽景汽车电子股份有限公司 显示控制方法及装置、电子设备、存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169679A1 (en) * 2011-12-30 2013-07-04 Automotive Research & Test Center Vehicle image display system and correction method thereof
CN109873997A (zh) * 2019-04-03 2019-06-11 贵安新区新特电动汽车工业有限公司 投影画面校正方法及装置
CN109917920A (zh) * 2019-03-14 2019-06-21 百度在线网络技术(北京)有限公司 车载投射处理方法、装置、车载设备及存储介质
CN111107332A (zh) * 2019-12-30 2020-05-05 华人运通(上海)云计算科技有限公司 一种hud投影图像显示方法和装置
CN111754442A (zh) * 2020-07-07 2020-10-09 惠州市德赛西威汽车电子股份有限公司 一种hud图像校正方法、装置及系统
CN112344963A (zh) * 2020-11-05 2021-02-09 南京讯天游科技有限公司 一种基于增强现实抬头显示设备的测试方法及系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242866B (zh) * 2020-01-13 2023-06-16 重庆邮电大学 观测者动态眼位条件下ar-hud虚像畸变校正的神经网络插值方法


Also Published As

Publication number Publication date
EP4339938A1 (en) 2024-03-20
CN114258319A (zh) 2022-03-29
US20240087491A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
WO2022241638A1 (zh) 一种投影方法及装置、车辆及ar-hud
WO2021197189A1 (zh) 基于增强现实的信息显示方法、系统、装置及投影设备
US20230226445A1 (en) Reality vs virtual reality racing
US11715238B2 (en) Image projection method, apparatus, device and storage medium
JP5999032B2 (ja) 車載表示装置およびプログラム
WO2019037489A1 (zh) 地图显示方法、装置、存储介质及终端
WO2022134364A1 (zh) 车辆的控制方法、装置、系统、设备及存储介质
JP7339386B2 (ja) 視線追跡方法、視線追跡装置、端末デバイス、コンピュータ可読記憶媒体及びコンピュータプログラム
WO2021197190A1 (zh) 基于增强现实的信息显示方法、系统、装置及投影设备
WO2022266829A1 (zh) 一种显示方法及装置、设备及车辆
WO2023071834A1 (zh) 用于显示设备的对齐方法及对齐装置、车载显示系统
WO2023272453A1 (zh) 视线校准方法及装置、设备、计算机可读存储介质、系统、车辆
CN112242009A (zh) 显示效果融合方法、系统、存储介质及主控单元
CN108039084A (zh) 基于虚拟现实的汽车视野评价方法及系统
WO2023138537A1 (zh) 一种图像处理方法、装置、终端设备及存储介质
WO2019243392A1 (en) Heads up display (hud) content control system and methodologies
CN115525152A (zh) 图像处理方法及系统、装置、电子设备和存储介质
Feld et al. Dfki cabin simulator: A test platform for visual in-cabin monitoring functions
CN111833443A (zh) 自主机器应用中的地标位置重建
JP6258000B2 (ja) 画像表示システム、画像表示方法及びプログラム
WO2017024458A1 (en) System, method and apparatus for vehicle and computer readable medium
TWI799000B (zh) 資訊顯示方法及其處理裝置與顯示系統
US11741671B2 (en) Three-dimensional scene recreation using depth fusion
WO2024031709A1 (zh) 一种显示方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21940095

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021940095

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021940095

Country of ref document: EP

Effective date: 20231213