WO2021227969A1 - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
WO2021227969A1
WO2021227969A1 (PCT/CN2021/092269, CN2021092269W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
distortion
feature
user
Prior art date
Application number
PCT/CN2021/092269
Other languages
English (en)
French (fr)
Inventor
宋碧薇
刘欣
闫云飞
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP21803436.1A (published as EP4141621A4)
Publication of WO2021227969A1
Priority to US17/986,344 (published as US20230077753A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30268Vehicle interior

Definitions

  • the embodiments of the present application relate to the field of computer vision technology, and in particular to a data processing method and equipment.
  • Augmented reality head-up display (AR-HUD) technology uses an optical projection system to project driving assistance information (digits, pictures, animations, etc.) onto the front windshield of the car to form a virtual image.
  • The driver can then observe the corresponding driving assistance information in the display area of the windshield glass.
  • the virtual image projected on the windshield by the optical projection system tends to be distorted.
  • In a typical calibration procedure, a human eye simulation device is installed near the driver's head.
  • The human eye simulation device simulates the position of the driver's eyes and photographs the calibration image projected by the AR-HUD display system in the display area. The distortion amount of the calibration image is calculated from the positions of the reference points on the calibration image, and image correction is performed based on this distortion amount.
  • Because the position of the human eye simulation device is fixed, the reference points in the captured calibration image are essentially fixed.
  • However, the position of the driver often changes, for example when the driver is replaced or the seat is adjusted. When the driver's position changes, the position of the driver's eyes differs from the position of the human eye simulation device used in the preparation stage, so the corrected projected virtual image seen by the driver may still be distorted, and the visual effect of the projection seen by the driver is poor.
  • The embodiments of the application provide a data processing method and equipment, which can be applied to a human-computer interaction system, such as a human-computer interaction system in a car, to improve the user experience.
  • the first aspect of the embodiments of the present application provides a data processing method.
  • In the method, the first device receives first location information sent by a second device.
  • The first location information includes the location information of a first feature in a preset coordinate system, and the first feature represents characteristic information of the user collected by the second device.
  • the first device obtains the first predistortion model according to the first position information sent by the second device.
  • the first device corrects the projected image according to the first pre-distortion model, and the projected image is an image projected by the first device.
  • In the embodiments of the present application, the first device obtains the first pre-distortion model according to the first position information, which contains the user's characteristic information, sent by the second device, so that the first device can obtain the first pre-distortion model according to the user's characteristic information in real time and correct the projected virtual image with the model, which improves the user experience.
  • In a possible implementation, after receiving the first location information, the first device obtains second location information according to the first location information, where the second location information is, among multiple pieces of preset location information, the location information whose distance from the first location information in the preset coordinate system is less than a preset threshold, and the preset location information is preset by the first device. After obtaining the second location information, the first device obtains the first pre-distortion model corresponding to the second location information.
  • In the embodiments of the present application, the first device obtains the corresponding first pre-distortion model according to the preset position information, which saves the resources consumed by online calculation of the first pre-distortion model and improves the execution efficiency of the human-computer interaction system in the use phase.
  • In a possible implementation, before the first device receives the first location information sent by the second device, the first device receives at least two pieces of first image information sent by a third device.
  • The first image information represents information about the image projected by the first device, collected by the third device at different positions in the preset coordinate system.
  • the first device obtains standard image information, which represents a projected image without distortion.
  • The first device compares the at least two pieces of first image information with the standard image information to obtain at least two preset distortion variables, where a preset distortion variable represents the distortion of the first image information relative to the standard image information.
  • The first device then calculates according to the at least two preset distortion variables to obtain at least two first pre-distortion models, and the at least two first pre-distortion models have a one-to-one correspondence with the first image information.
  • In the embodiments of the present application, the information of at least two projected images collected by the third device at different positions is compared with the standard image to obtain the corresponding preset distortion variables, and at least two first pre-distortion models are then obtained from these variables.
  • These pre-distortion models can calibrate the projected images viewed by the user at different positions during later use, thereby improving the user experience.
  • the first device receives gaze information sent by the second device, where the gaze information represents information of a user's gaze reference point, and the reference point is calibrated in the image projected by the first device.
  • the first device determines a first field of view range according to the gaze information, and the first field of view range represents a field of view range that the user can observe.
  • The first device determines a first distortion amount according to the gaze information and the first position information.
  • The first distortion amount represents the distortion of the calibrated human eye image relative to the standard image, where the calibrated human eye image is the image that the projection of the first device presents in the user's eyes,
  • and the standard image is the projected image without distortion.
  • the first device obtains the first pre-distortion model according to the determined first field of view range and the first distortion amount.
  • the gaze information of the user is collected in real time, and the projection image is calibrated in real time according to the gaze information, so that the user can watch the complete projection image in different positions, which improves the user experience.
  • the characteristic information includes the user's eye information.
  • the feasibility of the technical solution is improved.
  • In a possible implementation, the first device uses one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a field-programmable gate array (FPGA) to perform image processing according to the first pre-distortion model to correct the projected image.
  • the feasibility of the solution is improved.
  • In a possible implementation, the first device performs light modulation with one or more of liquid crystal on silicon (LCOS), digital light processing (DLP), and a liquid crystal display (LCD) according to the first pre-distortion model to correct the projected image.
  • the feasibility of the solution is improved.
  • the second aspect of the embodiments of the present application provides a data processing method.
  • In the method, the second device obtains first position information.
  • The first position information includes the position information of a first feature in a preset coordinate system.
  • The first feature represents the user's feature information obtained by the second device.
  • The first position information is used by the first device to correct the projected image, and the projected image is an image projected by the first device.
  • the second device sends the first location information to the first device.
  • In the embodiments of the present application, the second device sends the first location information, which includes the characteristic information of the user, to the first device, so that the first device can correct the image projected by the first device according to the first location information in real time, which improves the user experience.
  • the second device collects second image information, the second image information includes user characteristic information, and the second device performs calculations based on the second image information to obtain the first position information .
  • the second device obtains the first position information by collecting image information including the characteristic information of the user and performing calculations, which improves the feasibility of the solution.
  • In a possible implementation, when the second device performs calculations based on the second image information, the second device calculates through a feature recognition algorithm to obtain the feature position information of the feature information in the second image information. The second device then performs calculations based on the feature position information to obtain the first location information.
  • the second device performs calculation according to the feature recognition algorithm to obtain the feature location information, and then obtains the first location information according to the feature location information, which improves the feasibility of the solution.
  • In a possible implementation, before the second device calculates the first location information from the feature location information, the second device also collects depth information, which indicates the straight-line distance from the feature information to the second device.
  • the second device calculates the characteristic position information and the depth information to obtain the first position information.
  • the second device obtains the first position information by calculating the collected depth information and characteristic position information, which improves the accuracy of calculating the first position information.
  • the characteristic information includes the user's eye information.
  • the feasibility of the technical solution is improved.
  • In a possible implementation, after the second device obtains the first position information, the second device also obtains the gaze information of the user. The gaze information represents the information of the user's gaze reference point, and the reference point is calibrated in the image projected by the first device. The gaze information is used to determine the first distortion variable, the first distortion variable is used to determine the first pre-distortion model, and the first pre-distortion model is used to correct the image projected by the first device.
  • After the second device obtains the first location information and the gaze information, the second device sends the first location information and the gaze information to the first device.
  • the gaze information of the user is collected in real time, and the projection image is calibrated in real time according to the gaze information, so that the user can watch the complete projection image in different positions, which improves the user experience.
  • the third aspect of the embodiments of the present application provides a display device.
  • Display equipment includes:
  • a receiving unit configured to receive first location information sent by a second device, the first location information includes location information of the first feature in a preset coordinate system, and the first feature represents feature information of the user;
  • a processing unit configured to obtain a first pre-distortion model according to the first position information
  • the correction unit is used to correct the projected image according to the first pre-distortion model, and the projected image is the image projected by the first device.
  • the display device further includes:
  • an acquiring unit, configured to acquire second location information according to the first location information, where the second location information is, among multiple pieces of preset location information, the location information whose distance from the first location information in the preset coordinate system is less than a preset threshold, and the preset location information is preset by the first device;
  • the acquiring unit is also used to acquire the first pre-distortion model corresponding to the second position information.
  • the receiving unit is further configured to receive at least two pieces of first image information sent by the third device, and the at least two pieces of first image information indicate the position of the third device in the preset coordinate system. Information about the images projected by the first device collected at different locations;
  • the acquiring unit is also used to acquire standard image information, and the standard image information represents a projected image without distortion;
  • the processing unit is further configured to compare at least two pieces of first image information with standard image information respectively to obtain at least two preset distortion variables, and the preset distortion variables represent the distortion variables of the first image information relative to the standard image information;
  • the processing unit is further configured to calculate separately according to the at least two preset distortion variables to obtain at least two first pre-distortion models, and the at least two first pre-distortion models correspond to the first image information in a one-to-one manner.
  • the receiving unit is further configured to receive gaze information sent by the second device, where the gaze information represents information about the user's gaze reference point, and the reference point is calibrated in the image projected by the first device;
  • the display equipment also includes:
  • the determining unit is configured to determine a first field of view range according to the gaze information, where the first field of view range represents the range of the field of view observed by the user;
  • the determining unit is also used to determine the first distortion amount according to the gaze information and the first position information.
  • the first distortion amount represents the distortion of the calibrated human eye image relative to the standard image, the calibrated human eye image represents the image that the projection of the first device presents in the user's eyes, and the standard image is the projected image without distortion;
  • the processing unit is further configured to obtain the first pre-distortion model according to the first field of view range and the first distortion amount.
  • the characteristic information of the user includes the human eye information of the user.
  • the correction unit is specifically configured to use one or more of the central processing unit CPU, graphics processing unit GPU and field programmable logic gate array FPGA according to the first pre-distortion model. Perform image processing to correct the projected image.
  • the correction unit is specifically configured to perform light modulation by one or more of liquid crystal on silicon LCOS, digital light processing technology DLP, and liquid crystal display LCD according to the first predistortion model. To correct the projected image.
  • the fourth aspect of the present application provides a feature collection device.
  • Feature collection equipment includes:
  • an acquiring unit, configured to acquire first location information, where the first location information includes the location information of the first feature in a preset coordinate system, the first feature represents the feature information of the user, the first location information is used by the first device to correct the projected image, and the projected image is the image projected by the first device;
  • the sending unit is configured to send the first location information to the first device.
  • the feature collection device further includes:
  • An acquisition unit configured to acquire second image information, where the second image information includes characteristic information of the user
  • the processing unit is configured to calculate according to the second image information to obtain the first position information.
  • the processing unit is specifically configured to obtain the feature position information of the feature information in the second image information through a feature recognition algorithm calculation;
  • the processing unit is specifically configured to obtain the first position information through calculation of the characteristic position information.
  • the collection unit is also used to collect depth information, and the depth information represents the linear distance from the feature information to the second device;
  • the processing unit is further configured to obtain the first position information by calculating the characteristic position information, including:
  • the processing unit is further configured to obtain the first position information through calculation of the feature position information and the depth information.
  • the characteristic information includes the user's eye information.
  • the acquiring unit is further configured to acquire the gaze information of the user,
  • where the gaze information indicates the information of the user's gaze reference point, the reference point is calibrated in the image projected by the first device, the gaze information is used to determine the first distortion variable, the first distortion variable is used to determine the first pre-distortion model, and the first pre-distortion model is used to correct the projected image;
  • the sending unit is also used to send the first location information and the gaze information to the first device.
  • the fifth aspect of the embodiments of the present application provides a human-computer interaction system.
  • the human-computer interaction system includes:
  • the display device is used to execute the method of the first aspect in the embodiments of the present application.
  • the feature collection device is used to execute the method of the second aspect in the embodiment of the present application.
  • a sixth aspect of the embodiments of the present application provides a display device.
  • the display device includes:
  • the processor is connected to the memory and the input and output equipment;
  • the processor executes the method described in the implementation manner of the first aspect of the present application.
  • a seventh aspect of the embodiments of the present application provides a feature collection device.
  • the feature collection equipment includes:
  • the processor is connected to the memory and the input and output equipment;
  • the processor executes the method described in the implementation manners of the second aspect of the present application.
  • the eighth aspect of the embodiments of the present application provides a computer storage medium.
  • The computer storage medium stores instructions, and when the instructions are executed on a computer, the computer executes the methods described in the implementation manners of the foregoing aspects.
  • the ninth aspect of the embodiments of the present application provides a computer program product.
  • the computer program product When the computer program product is executed on a computer, the computer executes the method described in the implementation manners of the first aspect and/or the second aspect of the present application.
  • In the embodiments of the present application, the first device obtains the first pre-distortion model according to the position information of the user's feature information in the preset coordinate system, so that the first device can adjust the pre-distortion model in real time and then use the first pre-distortion model to correct the image projected by the first device, which improves the quality of the projected image seen by the user.
  • FIG. 1 is a schematic diagram of the human-computer interaction system provided by this application.
  • FIG. 2 is another schematic diagram of the human-computer interaction system provided by this application.
  • FIG. 3 is a schematic flow chart of the data processing method provided by this application.
  • FIG. 4 is a schematic diagram of another flow of the data processing method provided by this application.
  • FIG. 5 is a schematic diagram of a scene of the data processing method provided by this application.
  • FIG. 6 is a schematic diagram of another scene of the data processing method provided by this application.
  • FIG. 7 is a schematic diagram of another scene of the data processing method provided by this application.
  • FIG. 8 is a schematic structural diagram of the display device provided by this application.
  • FIG. 9 is a schematic diagram of another structure of the display device provided by this application.
  • FIG. 10 is a schematic structural diagram of the feature collection device provided by this application.
  • FIG. 11 is another schematic diagram of the structure of the feature collection device provided by this application.
  • FIG. 12 is a schematic diagram of another structure of the display device provided by this application.
  • FIG. 13 is a schematic diagram of another structure of the feature collection device provided by this application.
  • The embodiments of the present application provide a data processing method and equipment for obtaining a first pre-distortion model according to the position information of the user's characteristic information in a preset coordinate system in a driving system, so that the first device can adjust the pre-distortion model in real time according to the characteristic information of the user and correct the image projected by the first device through the first pre-distortion model, which improves the quality of the projected image seen by the user and thereby enhances the user experience.
  • Figure 1 is a schematic diagram of the human-computer interaction system provided by this application.
  • the embodiment of the present application provides a human-computer interaction system.
  • the human-computer interaction system includes a display device, a feature collection device, and the front windshield of a car.
  • the feature collection device and the display device can be connected through a wired or wireless connection.
  • The wired connection can be made through a data cable, such as a COM-interface data cable, a USB-interface data cable, a Type-C-interface data cable, or a Micro-USB-interface data cable. It is understood that the wired connection may also be made in other manners, such as via an optical fiber, which is not specifically limited here.
  • When the feature collection device and the display device are connected wirelessly, they can be connected via Wi-Fi, Bluetooth, infrared, or other wireless connection methods. It is understandable that the wireless connection can also be made in other ways, such as third generation (3G), fourth generation (4G), or fifth generation (5G) mobile communications, and the details are not limited here.
  • the display device may be a head-up display (HUD), or an augmented reality-head-up display (AR-HUD), or a display device with a projection imaging function.
  • the details are not limited here.
  • The feature collection device may be a camera, for example a separate camera or a camera with processing functions, such as a human eye tracking device, which is not specifically limited here.
  • The display device further includes a calculation processing unit for processing information, such as image information, sent by other devices. The calculation processing unit can be integrated with the display device or independent of the display device; the specific processing equipment is not limited here.
  • the display device is used to project the image that needs to be displayed on the front windshield of the car.
  • the display device may also include an optical system that is used to project the image that needs to be displayed.
  • the feature collection device is used to obtain the feature information of the user, and transmit the feature information to the calculation processing unit.
  • the calculation processing unit performs related calculations and feeds back the calculation results to the display device.
  • For example, the feature information may be human eye information.
  • the display device then adjusts the projection system to adapt to the user's viewing, so that the user can view the complete projection virtual image in different positions.
  • FIG. 2 is another schematic diagram of the human-computer interaction system provided by the present application.
  • the embodiment of the application also provides a human-computer interaction system.
  • the human-computer interaction system includes a display device, a photographing device, and the front windshield of a car.
  • the photographing device and the display device can be connected through a wired or wireless connection.
  • the connection mode of the photographing device and the display device is similar to the connection mode of the feature collection device and the display device in the human-computer interaction system shown in FIG. 1, and the details are not repeated here.
  • the display device may be a head-up display (HUD), or an augmented reality-head-up display (AR-HUD), or a display device with a projection imaging function.
  • the details are not limited here.
  • The shooting device may be a camera, for example a separate camera or a camera with processing functions, such as a human eye simulation device, which is not specifically limited here.
  • The display device further includes a calculation processing unit for processing information, such as image information, sent by other devices. The calculation processing unit can be integrated with the display device or independent of the display device; the specific processing equipment is not limited here.
  • the shooting device is used to simulate the visual angle of the human eye to shoot the projected image in a specific field of view space.
  • the specific field of view space is a space in the vehicle where the projected virtual image can be observed partially or completely.
  • The scene shown in Figure 2 is one achievable form of the human-computer interaction system: the shooting device photographs the projected virtual image from the simulated viewing angles, and the photographed images are transmitted to the calculation processing unit, which performs the related calculations and feeds the results back to the display device. Before the human-computer interaction system is put into use, the display device sets up different pre-distortion models based on the information collected by the shooting devices at different locations. In the use stage, the corresponding pre-distortion model is obtained according to the user's viewing position to adjust the projected virtual image, so that the user can view the complete projected virtual image at different positions.
  • Eye box range: in AR-HUD display technology, when the driver's eyes are within the eye box range, the complete virtual image projected by the AR-HUD can be seen. When the driver's eyes are outside the designed eye box range, the driver can see only part of the projected virtual image, or none of it at all.
  • In the embodiments of the present application, the AR-HUD display system can correct the projected virtual image through a preset pre-distortion model, or it can obtain the gaze information of the human eye through the eye tracking device, obtain the pre-distortion model from the gaze information and the characteristic information of the human eye, and then correct the projected virtual image through that pre-distortion model.
  • FIG. 3 is a schematic flowchart of a data processing method according to an embodiment of this application.
  • In the following description, the AR-HUD display system represents the first device, the human eye tracking device represents the second device, and the human eye simulation device represents the third device as an example.
  • step 301 the human eye simulation device sends at least two pieces of first image information to the AR-HUD display system.
  • Before the human-computer interaction system is put into use, the human-computer interaction system is pre-set or trained. In the pre-setting or training stage, the human eye simulation device collects information about the image projected by the AR-HUD at different positions in the preset coordinate system, that is, it collects the first image information. After collecting at least two pieces of first image information, the human eye simulation device sends the at least two pieces of first image information to the AR-HUD display system.
  • Specifically, the AR-HUD display system first determines its available field of view range, divides the available field of view range into several small areas, and records the position information of the center points of these small areas in the preset coordinate system. The position information of these center points in the preset coordinate system represents the preset position information, which is preset by the AR-HUD display system.
  • For example, when the preset coordinate system is the camera coordinate system, the AR-HUD display system records the position coordinates of the center points of the small areas in the camera coordinate system; when the preset coordinate system is the world coordinate system with the AR-HUD display system as the origin, the AR-HUD display system records the position coordinates of the center points of the small areas in the world coordinate system.
  • The human eye simulation equipment is installed or placed at the spatial point corresponding to each piece of position information to collect the projected virtual image. It should be noted that there may be many ways of collecting, for example by taking photographs or by recording video; the specific method is not limited here.
  • the human eye simulation device is placed at the spatial point corresponding to (12, 31, 22) in the camera coordinate system, and the projection virtual image projected by the AR-HUD display system is photographed.
  • It should be noted that, before the human eye simulation device collects the projected virtual image, the projected virtual image can also be calibrated by the AR-HUD display system, for example in a checkerboard format or with a dot matrix diagram; the specifics are not limited here.
  • Calibrating the projected virtual image allows the corresponding distortion to be calculated later based on the calibrated points, which improves the accuracy of the distortion calculation compared with calculating from an uncalibrated image.
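  • As an illustration of such a calibration pattern, the following sketch renders a dot-matrix image of the kind an AR-HUD display system could project; the image size, grid density, and dot radius are arbitrary assumptions for illustration, not values taken from this application.

```python
import numpy as np
import cv2

def make_dot_matrix(width=1280, height=720, rows=9, cols=16, radius=4):
    """Render a white dot-matrix calibration pattern on a black background."""
    image = np.zeros((height, width, 3), dtype=np.uint8)
    xs = np.linspace(4 * radius, width - 4 * radius, cols)
    ys = np.linspace(4 * radius, height - 4 * radius, rows)
    for y in ys:
        for x in xs:
            cv2.circle(image, (int(round(x)), int(round(y))), radius, (255, 255, 255), -1)
    return image

if __name__ == "__main__":
    cv2.imwrite("calibration_pattern.png", make_dot_matrix())
```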
  • step 302 the AR-HUD display system obtains standard image information.
  • After the AR-HUD display system receives the at least two pieces of first image information sent by the human eye simulation device, the AR-HUD display system obtains standard image information locally, and the standard image information represents a projected image without distortion.
  • It should be noted that the obtained standard image is a calibrated standard image; the calibration can be performed in a checkerboard format or with a dot matrix diagram, and the specific method is not limited here.
  • the calibration method of the standard image may be the same as the calibration method of the received at least two pieces of first image information.
  • step 303 the AR-HUD display system compares the at least two pieces of first image information with the standard image, respectively, to obtain at least two preset distortion variables.
  • After acquiring the standard image, the AR-HUD display system compares the received at least two pieces of first image information with the standard image to obtain at least two preset distortion variables, where a preset distortion variable represents the distortion of the first image information relative to the standard image information.
  • For example, the AR-HUD display system calculates the conversion formula between the calibration points of the standard image and the calibration points in the first image information. For instance, if the standard image is calibrated with a 100*100 dot matrix and an 80*80 dot matrix is detected in the first image information, the preset distortion variable is obtained by calculating the transformation from the 80*80 dot matrix to the 100*100 dot matrix.
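  • One concrete way to realize such a transformation formula is to fit a projective transform (homography) between corresponding calibration points; the sketch below assumes the point correspondences have already been detected and ordered, and the function name estimate_preset_distortion is illustrative rather than part of this application.

```python
import numpy as np
import cv2

def estimate_preset_distortion(detected_points, reference_points):
    """
    Fit a 3x3 homography mapping calibration points detected in the captured
    first image information onto the corresponding points of the undistorted
    standard image; the matrix serves as a simple preset distortion variable.

    detected_points, reference_points: (N, 2) arrays of matching point coordinates.
    """
    detected = np.asarray(detected_points, dtype=np.float32)
    reference = np.asarray(reference_points, dtype=np.float32)
    homography, _ = cv2.findHomography(detected, reference, method=cv2.RANSAC)
    return homography
```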
  • step 304 the AR-HUD display system calculates respectively according to at least two preset distortion variables to obtain at least two first pre-distortion models.
  • After the AR-HUD display system obtains the at least two preset distortion variables, it performs calculations according to the at least two preset distortion variables to obtain at least two first pre-distortion models, and the at least two first pre-distortion models have a one-to-one correspondence with the first image information.
  • the AR-HUD display system can calculate the standard image and preset distortion variables to obtain the transformation mathematical model corresponding to the standard image, that is, the transformation mathematical model is the first pre-distortion model.
  • The AR-HUD display system can adjust the standard image according to the transformation mathematical model and project the adjusted image, so that the user can see the complete standard image when viewing the projected image from the position given in the first image information corresponding to that transformation mathematical model.
  • Alternatively, the AR-HUD display system can calculate with the preset distortion variable and the projection parameters of the AR-HUD display system to obtain modified projection parameters; that is, the modified projection parameters constitute the first pre-distortion model.
  • the AR-HUD display system can project the standard image according to the modified projection parameters.
  • The standard image changes according to the changes in the projection parameters. Because the projection parameters are obtained from the preset distortion variables, the user can see the complete standard image when viewing the projected image from the position given in the first image information corresponding to those projection parameters.
  • It should be noted that a correspondence between each of the multiple first pre-distortion models and the position information in the corresponding first image information can be established, and the correspondence is stored locally in the AR-HUD display system.
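  • A minimal sketch of such a locally stored correspondence follows, assuming each preset position is a point (x, y, z) in the preset coordinate system and each model is, for example, a 3x3 matrix; the structure and names are illustrative only.

```python
# Local store mapping preset positions to their first pre-distortion models.
predistortion_models = {}

def register_model(preset_position, model):
    """preset_position: (x, y, z) in the preset coordinate system; model: e.g. a 3x3 matrix."""
    key = tuple(round(coordinate, 3) for coordinate in preset_position)
    predistortion_models[key] = model

def lookup_model(preset_position):
    """Return the stored model for a preset position, or None if not registered."""
    key = tuple(round(coordinate, 3) for coordinate in preset_position)
    return predistortion_models.get(key)
```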
  • step 305 the eye tracking device collects second image information.
  • the eye tracking device collects second image information, and the second image information includes the user's characteristic information.
  • the characteristic information of the user includes human eye information.
  • the eye tracking device takes a photo or video of the user to collect the user's second image information, and the second image information includes the user's eye information.
  • When the eye tracking device collects by means of video recording, the user's image information is determined after collection by extracting frames from the video.
  • the feature information may also include more information, such as face information, nose information, mouth information, etc., which are not specifically limited here.
  • step 306 the eye tracking device calculates through a feature recognition algorithm to obtain feature location information of the feature information in the second image information.
  • After collecting the second image information, the eye tracking device calculates through the feature recognition algorithm to obtain the feature location information of the feature information in the second image information.
  • The feature location information indicates the location of the feature information within the second image information.
  • For example, the eye tracking device recognizes the position information of the user's eye information in the second image information through an eye recognition algorithm, and then obtains the position of the human eye in the image coordinate system. As shown in FIG. 7, the image coordinate system represents a two-dimensional coordinate system with the center of the image as the origin of the coordinates.
  • For example, the position information of the user's eye information in the second image information is recognized by the Hough circle detection method.
  • the position information of the user's eye information in the second image information is identified by means of a convolutional neural network, which is not specifically limited here.
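  • As a rough sketch of the Hough-circle approach mentioned above, the following uses OpenCV's HoughCircles on a grayscale eye-region image; the blur kernel, thresholds, and radius limits are placeholder values that would need tuning.

```python
import cv2

def locate_pupil(eye_image_gray):
    """
    Return (u, v, radius) of the strongest circle found in the eye region,
    taken here as a rough pupil position estimate, or None if nothing is found.
    """
    blurred = cv2.medianBlur(eye_image_gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=80, param2=20, minRadius=5, maxRadius=60)
    if circles is None:
        return None
    u, v, r = circles[0, 0]  # strongest detection first
    return float(u), float(v), float(r)
```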
  • step 307 the eye tracking device collects depth information.
  • the eye tracking device is also used to collect depth information, which indicates the linear distance from the user's characteristic information to the eye tracking device.
  • the eye tracking device obtains the straight-line distance from the user's eye information to the eye tracking device through the ranging function.
  • the eye tracking device obtains the straight-line distance from the user's eye information to the eye tracking device through infrared distance measurement.
  • It is understandable that the depth information can also be obtained in other ways, such as ultrasonic distance measurement; the specifics are not limited here.
  • step 308 the eye tracking device obtains the first position information by calculating the feature position information and the depth information.
  • the eye tracking device calculates the feature location information and the depth information to obtain the first location information, which represents the location of the user's feature information in the preset coordinate system information.
  • the eye tracking device obtains the first position information through calculation of characteristic position information and depth information and internal parameters of the eye tracking device. For example, it can be calculated by the following formula:
  • s · [u, v, 1]^T = [[f_u, 0, c_u], [0, f_v, c_v], [0, 0, 1]] · [x_c, y_c, z_c]^T, with z_c determined from the depth information d, so that x_c = (u − c_u) · z_c / f_u and y_c = (v − c_v) · z_c / f_v.
  • Here x_c, y_c and z_c represent the values corresponding to the X, Y and Z axes in the position information of the user's feature information in the camera coordinate system; d represents the depth information; s represents the scaling factor in the internal parameters of the eye tracking device; f_u and f_v represent the focal lengths in the horizontal and vertical directions in the internal parameters of the eye tracking device; u and v represent the values corresponding to the X and Y axes of the feature position information in the image coordinate system; and c_u and c_v represent the X-axis and Y-axis values corresponding to the origin of the image coordinate system.
  • In this case, the first position information is equal to the position information of the user's feature information in the camera coordinate system.
  • When the first position information indicates the position information of the user's feature information in the world coordinate system, the eye tracking device calculates the first position information according to the position information of the user's feature information in the camera coordinate system.
  • the eye tracking device may calculate the first position information in the following manner:
  • ⁇ , ⁇ and ⁇ are the rotation parameters ( ⁇ , ⁇ , ⁇ ), t x , t y and t z are the translation parameters of the three axes (t x , t y , t z ), and x w is the user’s characteristics
  • the value of the X axis in the location information of the information in the world coordinate system, y w is the value of the Y axis in the location information of the user's feature information in the world coordinate system, and z w is the location of the user's feature information in the world coordinate system
  • the value of the Z axis in the information, z c represents the value corresponding to the Z axis in the location information of the user's feature information in the camera coordinate system, and x c represents the X axis correspondence in the location information of the user's feature information in the camera coordinate system
  • the value of y c represents the value corresponding to the Y axis in the position information of the user'
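  • The two conversions above can be sketched with numpy as follows: back-projecting the image-plane feature position and depth into camera coordinates with the intrinsics, and then inverting the rotation-and-translation relation to recover world coordinates. The Z-Y-X Euler order used to build the rotation matrix is an assumption; the application does not fix a particular convention.

```python
import numpy as np

def pixel_depth_to_camera(u, v, d, f_u, f_v, c_u, c_v):
    """Back-project the image point (u, v) with depth d into camera coordinates."""
    x_c = (u - c_u) * d / f_u
    y_c = (v - c_v) * d / f_v
    return np.array([x_c, y_c, d])

def rotation_matrix(alpha, beta, gamma):
    """Rotation matrix from the rotation parameters (assumed Z-Y-X order, radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return rz @ ry @ rx

def camera_to_world(p_camera, alpha, beta, gamma, t_x, t_y, t_z):
    """Invert p_camera = R @ p_world + t to recover the world-coordinate position."""
    r = rotation_matrix(alpha, beta, gamma)
    t = np.array([t_x, t_y, t_z])
    return r.T @ (p_camera - t)
```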
  • step 309 the eye tracking device sends the first position information to the AR-HUD display system.
  • After the eye tracking device obtains the first location information, the eye tracking device sends the first location information to the AR-HUD display system.
  • step 310 the AR-HUD display system obtains second location information according to the first location information.
  • After the AR-HUD display system receives the first position information sent by the eye tracking device, the AR-HUD display system obtains the second position information according to the first position information. The second position information is the preset position information whose distance from the first position information in the preset coordinate system is less than a preset threshold.
  • Specifically, the AR-HUD display system performs calculations separately for the first position information and each piece of preset position information among the plurality of preset position information, and obtains the preset position information with the smallest distance from the first position information in the preset coordinate system. For example, it can be calculated by the following formula:
  • j = argmin_i √((x_i − x_w)² + (y_i − y_w)² + (z_i − z_w)²), where j represents the index number of the preset location information with the smallest distance from the first location information; x_i, y_i and z_i represent the values of the X, Y and Z axes in the i-th piece of preset location information; and x_w, y_w and z_w are the values of the X, Y and Z axes in the position information of the user's feature information in the world coordinate system.
  • It is understandable that the distance between the preset location information and the first location information can also be calculated by other formulas. For example, when the first location information is the location information of the user's feature information in the camera coordinate system, the values (x_c, y_c, z_c) of that position information in the camera coordinate system replace (x_w, y_w, z_w) in the above formula; the specific calculation formula is not limited here.
  • The distance between each piece of preset location information and the first location information is obtained by the above method, and the preset location information whose distance from the first location information is less than the preset threshold is selected as the second location information. Preferably, the preset position information with the smallest distance from the first position information is selected as the second position information.
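  • A numpy sketch of this nearest-preset-position search follows; preset_positions is assumed to be an (N, 3) array of the recorded center-point coordinates, and the threshold handling is an illustrative choice.

```python
import numpy as np

def nearest_preset_position(preset_positions, first_position, threshold):
    """
    Return (index, position) of the preset position closest to the first position
    information, or None if even the closest one is not within the threshold.
    """
    presets = np.asarray(preset_positions, dtype=float)  # shape (N, 3)
    target = np.asarray(first_position, dtype=float)     # shape (3,)
    distances = np.linalg.norm(presets - target, axis=1)
    j = int(np.argmin(distances))
    if distances[j] >= threshold:
        return None
    return j, presets[j]
```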
  • step 311 the AR-HUD display system obtains the first pre-distortion model corresponding to the second position information.
  • After obtaining the second location information, the AR-HUD display system searches locally for the first pre-distortion model corresponding to the second location information.
  • step 312 the AR-HUD display system corrects the projected image according to the first pre-distortion model.
  • After the AR-HUD display system obtains the first pre-distortion model, the AR-HUD display system corrects the image projected by the first device according to the first pre-distortion model.
  • For example, when the first pre-distortion model is a transformation mathematical model, the AR-HUD display system adjusts the standard image according to the transformation mathematical model and projects the adjusted image, so that the user's eyes can see the complete standard image when viewing the projected image from the preset position corresponding to the transformation mathematical model.
  • Specifically, the AR-HUD display system can process the standard image through one or more of a CPU, GPU, and FPGA according to the transformation mathematical model to obtain the adjusted image, so that the user's eyes can see the complete standard image when viewing the adjusted projected image from the preset position corresponding to the transformation mathematical model. It is understandable that the standard image can also be processed in other ways to achieve the purpose of adjusting the image, which is not specifically limited here.
  • Alternatively, when the first pre-distortion model is a modified projection parameter, the AR-HUD display system projects the standard image according to the modified projection parameters. Since the projection parameters are modified, the standard image changes according to the change of the projection parameters; and since the projection parameters are obtained according to the preset distortion variables, the user's eyes can see the complete standard image when viewing the projected image from the preset position corresponding to the projection parameters.
  • Specifically, the AR-HUD display system can perform light modulation with one or more of liquid crystal on silicon (LCOS), digital light processing (DLP), and a liquid crystal display (LCD) according to the modified projection parameters, so that the user's eyes can see the complete standard image when viewing the light-modulated projected image from the preset position corresponding to the projection parameters.
  • It is understandable that light modulation can also be performed in other ways to achieve the purpose of adjusting the projected image, which is not specifically limited here.
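  • When the first pre-distortion model is represented as a homography such as the one fitted in the preparation stage, the image-processing branch of this correction can be sketched as a single perspective warp applied before projection; this is one possible realization under that assumption, not the only form covered by the application.

```python
import cv2

def predistort_frame(standard_image, predistortion_homography):
    """
    Warp the undistorted standard image with the pre-distortion model so that,
    once the optical path adds its distortion, the virtual image seen from the
    matching preset position appears undistorted.
    """
    height, width = standard_image.shape[:2]
    return cv2.warpPerspective(standard_image, predistortion_homography, (width, height))
```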
  • It should be noted that steps 301 to 304 belong to the preparation phase before the human-computer interaction system is put into use. Therefore, in actual application, that is, during the use phase of the human-computer interaction system, only steps 305 to 312 may be performed; there is no limitation here.
  • In the embodiments of the present application, the AR-HUD display system uses the user's characteristic information collected by the eye tracking device to determine the first pre-distortion model, and then corrects the projected image according to the first pre-distortion model, so that the user can view the complete projected image at different positions, which improves the user's visual experience.
  • FIG. 4 is a schematic diagram of another flow of the data processing method according to the embodiment of this application.
  • In the following description, the AR-HUD display system represents the first device, and the eye tracking device represents the second device as an example.
  • step 401 the eye tracking device collects second image information.
  • step 402 the eye tracking device calculates through a feature recognition algorithm to obtain feature location information of the feature information in the second image information.
  • step 403 the eye tracking device collects depth information.
  • step 404 the eye tracking device obtains the first position information by calculating the feature position information and the depth information.
  • steps 401 to 404 in this embodiment are similar to steps 305 to 308 in the foregoing embodiment shown in FIG. 3, and details are not described herein again.
  • step 405 the eye tracking device obtains the gaze information of the user.
  • the eye tracking device is also used to obtain the gaze information of the user, the gaze information represents the information of the user's gaze reference point, and the reference point is calibrated in the image projected by the first device.
  • For example, when the user enters the car, the user chooses whether to activate the calibration mode, which is used to calibrate the currently projected virtual image. If the user activates the calibration mode, the AR-HUD display system projects image information with reference point calibration, such as an image calibrated by the dot matrix calibration method or an image calibrated by the checkerboard calibration method.
  • The reference points represent the points in the dot matrix diagram or the points on the checkerboard; they are not specifically limited here.
  • It is understandable that the calibration mode can also be activated automatically; for example, when it is detected that the current user has entered the car, the calibration mode is automatically activated. The specific timing or manner of activating the calibration mode is not limited here.
  • After the AR-HUD display system projects the image information with reference point calibration, it prompts the user, by sending instructions, to gaze at the points in the image information.
  • The eye tracking device collects the user's eye information while the user is gazing, and thereby obtains the gaze information.
  • For example, the AR-HUD display system issues a system voice prompting the user to enter the calibration mode, projects the calibrated image information onto the front windshield, and the system voice also instructs the user to gaze at the calibrated reference points in the image information one by one.
  • When the user's eyes gaze at a reference point for more than a preset time period, for example for more than 3 seconds, the AR-HUD display system determines that the user is gazing at that reference point and obtains the corresponding human eye information. It is understandable that the preset time period of 3 seconds is only an example; in actual applications, different values can be set for different scenarios, and the specifics are not limited here. It should be noted that the instruction information may be a system voice or information on the projected image used to instruct the user to gaze at the reference point, which is not specifically limited here.
  • For example, when the user gazes at a reference point according to the prompt information, the human eye tracking device emits infrared rays that form a glint on the pupil of the human eye. Because the angle from the eye tracking device to the pupil differs, the glint forms at different positions on the pupil, and the position of the glint relative to the center of the pupil can be used to calculate the direction of the line of sight of the human eye.
  • The position of the human eye in the preset coordinate system and the direction of the line of sight together determine the coordinates of the reference point actually observed by the human eye in the projected virtual image.
  • the human eye tracking device may also collect the coordinates of the reference point observed by the human eye in other ways, which is not specifically limited here.
  • The eye tracking device collects data as the human eye gazes at each reference point. Because some reference points are beyond the user's observable field of view at the current position, the eye tracking device cannot collect the coordinates of those reference points.
  • After the user has gazed at each observable reference point, the coordinate points collected by the eye tracking device form the human eye calibration image information, that is, the calibrated image information that the user can observe at the current position, which constitutes the gaze information.
  • In step 406, the eye-tracking device sends the first position information and the gaze information to the AR-HUD display system.
  • After obtaining the first position information and the gaze information, the eye-tracking device sends the first position information and the gaze information to the AR-HUD display system.
  • In step 407, the AR-HUD display system determines the first field-of-view range according to the gaze information.
  • After receiving the gaze information, the AR-HUD display system determines the first field-of-view range from it; the first field-of-view range represents the field of view that the user can observe at the current position.
  • Specifically, the AR-HUD display system determines the first field-of-view range from the human-eye calibration image information contained in the gaze information.
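  • The disclosure does not prescribe how the first field-of-view range is represented. One simple possibility, shown only as an assumption-laden sketch, is to take the extent of the reference points the user actually observed:

```python
import numpy as np

def first_field_of_view(observed_points):
    """observed_points: dict {reference-point id: (x, y)} actually seen by the
    eye; points outside the user's view are simply absent. Returns the
    axis-aligned extent (xmin, ymin, xmax, ymax) as one possible
    representation of the first field-of-view range."""
    pts = np.array(list(observed_points.values()), dtype=float)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return float(xmin), float(ymin), float(xmax), float(ymax)
```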
  • In step 408, the AR-HUD display system determines the first distortion amount according to the gaze information and the first position information.
  • After determining the first field-of-view range, the AR-HUD display system determines the first distortion amount according to the first position information and the gaze information. The first distortion amount represents the distortion of the human-eye calibration image relative to the standard image, and the standard image is a projected image in which no distortion has occurred.
  • In one possible implementation, the AR-HUD display system uses the position of the user's eye in the preset coordinate system (contained in the first position information) together with the coordinates of each reference point in the human-eye calibration image information (contained in the gaze information) to obtain the coordinates of the human-eye calibration image relative to the first position information, and then, through coordinate conversion, obtains the position of the human-eye calibration image in the preset coordinate system. The first distortion amount is then obtained by comparing the position of the human-eye calibration image in the preset coordinate system with the position of the standard image in the preset coordinate system.
  • It should be understood that, in practice, the first distortion amount may also be determined in other ways, for example from the position, in the preset coordinate system, of a given reference point in the human-eye calibration image and the position, in the preset coordinate system, of the corresponding reference point in a standard image calibrated with the same calibration method; this is not specifically limited here.
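  • As an illustrative sketch only (the real optical distortion is generally non-linear, so higher-order polynomial or spline fits may be preferred in practice), the first distortion amount can be expressed as per-reference-point displacements, and a least-squares transform can serve as a stand-in for the "transformation mathematical model" mentioned below; all function names and data formats here are assumptions, not part of the disclosure.

```python
import numpy as np

def first_distortion_amount(observed, standard):
    """Per-point displacement of the human-eye calibration image relative to
    the standard image. Both arguments are dicts {point id: (x, y)} already
    expressed in the same (preset) coordinate system."""
    return {i: np.asarray(observed[i], float) - np.asarray(standard[i], float)
            for i in observed if i in standard}

def fit_affine(standard, observed):
    """Least-squares 2x3 affine transform taking standard points to the
    observed (distorted) points; needs at least three correspondences."""
    ids = [i for i in observed if i in standard]
    src = np.array([standard[i] for i in ids], float)
    dst = np.array([observed[i] for i in ids], float)
    A = np.hstack([src, np.ones((len(ids), 1))])      # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # shape (3, 2)
    return M.T                                        # (2, 3) affine
```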
  • In step 409, the AR-HUD display system obtains the first pre-distortion model according to the first field-of-view range and the first distortion amount.
  • After obtaining the first distortion amount, the AR-HUD display system derives the first pre-distortion model from the first field-of-view range and the first distortion amount.
  • In one possible implementation, the AR-HUD display system determines the size of the projected virtual image from the field of view that the user's eyes can see at the current position, computes a transformation mathematical model for the standard image from the first distortion amount and the standard image, and then determines the first pre-distortion model from that transformation model and the size of the projected virtual image.
  • In another possible implementation, the AR-HUD display system determines the size of the projected virtual image from the field of view that the user's eyes can see at the current position, computes modified projection parameters from the first distortion amount and the projection parameters of the AR-HUD display system, and then determines the first pre-distortion model from the modified projection parameters and the size of the projected virtual image.
  • It should be understood that, in practice, the first pre-distortion model may also be determined in other ways, which is not specifically limited here.
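  • A minimal sketch of one possible pre-distortion model is a per-pixel lookup map: each display pixel is given the content of the standard image at the position where that pixel would be perceived after the optical distortion, so the distortion cancels out for the viewer. The affine transform stands in for whatever distortion model was fitted, and the image size would in practice be limited by the first field-of-view range; everything here is an assumption for illustration.

```python
import numpy as np

def build_predistortion_maps(affine_2x3, width, height):
    """Build float32 lookup maps of shape (height, width). The pre-distorted
    image takes the value of the standard image at (map_x[y, x], map_y[y, x]),
    where the affine models the distortion from standard to observed points."""
    xs, ys = np.meshgrid(np.arange(width, dtype=np.float32),
                         np.arange(height, dtype=np.float32))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1)        # (H, W, 3)
    warped = pts @ np.asarray(affine_2x3, np.float32).T        # (H, W, 2)
    return warped[..., 0], warped[..., 1]
```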
  • In step 410, the AR-HUD display system corrects the projected image according to the first pre-distortion model.
  • Step 410 in this embodiment is similar to step 312 in the foregoing embodiment shown in FIG. 3, and details are not described herein again.
  • In this embodiment, the AR-HUD display system determines the first pre-distortion model by collecting the gaze information of the user's eyes and then corrects the projected image according to the first pre-distortion model, so that the projected image can be calibrated for the user in real time, which improves the user experience.
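  • To make the correction step concrete, here is a hedged sketch of the plain CPU image-processing variant, using OpenCV's remap as one convenient way to apply the lookup maps from the previous sketch; the GPU, FPGA, or LCOS/DLP/LCD light-modulation variants mentioned elsewhere in this description would achieve the same effect by other means.

```python
import cv2
import numpy as np

def correct_projection_image(standard_image, map_x, map_y):
    """Pre-distort the standard image with the lookup maps so that the
    optically distorted projection appears undistorted to the user."""
    return cv2.remap(standard_image,
                     np.asarray(map_x, np.float32),
                     np.asarray(map_y, np.float32),
                     cv2.INTER_LINEAR)

# Illustrative use (all names assumed):
#   M = fit_affine(standard_points, observed_points)
#   map_x, map_y = build_predistortion_maps(M, width, height)
#   pre_distorted = correct_projection_image(standard_image, map_x, map_y)
```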
  • FIG. 8 is a schematic structural diagram of an embodiment of the display device provided by this application.
  • a display device including:
  • the receiving unit 801 is configured to receive first location information sent by a second device, where the first location information includes location information of the first feature in a preset coordinate system, and the first feature represents feature information of the user;
  • the processing unit 802 is configured to obtain a first predistortion model according to the first position information
  • the correction unit 803 is configured to correct the projected image according to the first predistortion model, and the projected image is an image projected by the first device.
  • In this embodiment, the operations performed by the units of the display device are similar to those described for the AR-HUD display system in the embodiments shown in FIG. 2 and FIG. 3, and are not repeated here.
  • FIG. 9 is a schematic structural diagram of another embodiment of the display device provided by this application.
  • a display device including:
  • the receiving unit 901 is configured to receive first location information sent by the second device, where the first location information includes location information of the first feature in a preset coordinate system, and the first feature represents feature information of the user;
  • the processing unit 902 is configured to obtain a first predistortion model according to the first position information
  • the correction unit 903 is configured to correct the projected image according to the first pre-distortion model, and the projected image is an image projected by the first device.
  • the display device further includes:
  • the obtaining unit 904 is configured to obtain second position information according to the first position information, where the second position information is, among a plurality of pieces of preset position information, the position information whose distance from the first position information in the preset coordinate system is less than a preset threshold, and the preset position information is preset by the first device;
  • the acquiring unit 904 is further configured to acquire the first pre-distortion model corresponding to the second position information.
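  • A minimal sketch of this selection of the second position information, matching the nearest-preset-position idea described above (the threshold value and units are assumptions for illustration only):

```python
import numpy as np

def select_second_position(first_pos, preset_positions, threshold=0.10):
    """Return (index, position) of the preset position closest to the measured
    eye position `first_pos` in the preset coordinate system, provided the
    distance is below `threshold`; otherwise (None, None)."""
    preset = np.asarray(preset_positions, float)            # shape (N, 3)
    d = np.linalg.norm(preset - np.asarray(first_pos, float), axis=1)
    j = int(np.argmin(d))
    return (j, preset[j]) if d[j] < threshold else (None, None)

# The returned index would then be used to look up the stored first
# pre-distortion model corresponding to that preset position.
```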
  • the receiving unit 901 is further configured to receive at least two pieces of first image information sent by the third device, where the at least two pieces of first image information represent information about the image projected by the first device, collected by the third device at different positions in the preset coordinate system;
  • the acquiring unit 904 is further configured to acquire standard image information, where the standard image information represents a projected image in which no distortion has occurred;
  • the processing unit 902 is further configured to compare at least two pieces of first image information with standard image information respectively to obtain at least two preset distortion variables, and the preset distortion variables represent the distortion variables of the first image information relative to the standard image information;
  • the processing unit 902 is further configured to calculate separately according to the at least two preset distortion variables to obtain at least two first pre-distortion models, and the at least two first pre-distortion models correspond to the first image information in a one-to-one correspondence.
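  • As a hedged sketch of this offline preparation, one pre-distortion model can be fitted per preset position from the reference points found in the first image captured there; a least-squares affine is used below only as a placeholder for whatever distortion model is actually chosen, and all data formats are assumptions.

```python
import numpy as np

def build_preset_models(captured, standard):
    """captured: {preset-position index: {point id: (x, y)}} reference points
    found in the first image collected at that position.
    standard:   {point id: (x, y)} of the undistorted standard pattern.
    Returns one model (here a 2x3 affine) per position, in one-to-one
    correspondence with the first image information."""
    models = {}
    for pos, pts in captured.items():
        ids = [i for i in pts if i in standard]             # shared points
        src = np.array([standard[i] for i in ids], float)
        dst = np.array([pts[i] for i in ids], float)
        A = np.hstack([src, np.ones((len(ids), 1))])        # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)         # shape (3, 2)
        models[pos] = M.T                                   # preset model
    return models
```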
  • the receiving unit 901 is further configured to receive gaze information sent by the second device, where the gaze information represents information about the user's gaze reference point, and the reference point is calibrated in the image projected by the first device;
  • the display device further includes:
  • the determining unit 905 is configured to determine a first field of view range according to the gaze information, where the first field of view range represents the range of the field of view observed by the user;
  • the determining unit 905 is further configured to determine the first distortion amount according to the gaze information and the first position information, where the first distortion amount represents the distortion of the human-eye calibration image relative to the standard image, the human-eye calibration image represents the image that the projection image of the first device presents in the user's eye, and the standard image is a projected image in which no distortion has occurred;
  • the processing unit 902 is further configured to obtain a first pre-distortion model according to the first field of view range and the first distortion amount.
  • the characteristic information of the user includes eye information of the user.
  • the correction unit 903 is specifically configured to perform image processing to correct the projection image by one or more of the central processing unit CPU, the graphics processing unit GPU, and the field programmable logic gate array FPGA according to the first pre-distortion model.
  • the correction unit 903 is specifically configured to perform light modulation to correct the projected image by one or more of liquid crystal on silicon LCOS, digital light processing technology DLP, and liquid crystal display LCD according to the first predistortion model.
  • In this embodiment, the operations performed by the units of the display device are similar to those described for the AR-HUD display system in the embodiments shown in FIG. 2 and FIG. 3, and are not repeated here.
  • FIG. 10 is a schematic structural diagram of an embodiment of the feature collection device provided by this application.
  • a feature collection device including:
  • the acquiring unit 1001 is configured to acquire first position information, where the first position information includes position information of the first feature in a preset coordinate system, the first feature represents feature information of the user, the first position information is used by the first device to correct a projected image, and the projected image is an image projected by the first device;
  • the sending unit 1002 is configured to send first location information to the first device.
  • In this embodiment, the operations performed by the units of the feature collection device are similar to those described for the eye-tracking device in the embodiments shown in FIG. 2 and FIG. 3, and are not repeated here.
  • FIG. 11 is a schematic structural diagram of another embodiment of the feature collection device provided by this application.
  • the acquiring unit 1101 is configured to acquire first position information, where the first position information includes position information of the first feature in a preset coordinate system, the first feature represents feature information of the user, the first position information is used by the first device to correct a projected image, and the projected image is an image projected by the first device;
  • the sending unit 1102 is configured to send first location information to the first device.
  • the feature collection device further includes:
  • the collection unit 1103 is configured to collect second image information, where the second image information includes characteristic information of the user;
  • the processing unit 1104 is configured to calculate according to the second image information to obtain the first position information.
  • the processing unit 1104 is specifically configured to obtain the feature position information of the feature information in the second image information through a feature recognition algorithm calculation;
  • the processing unit 1104 is specifically configured to obtain the first position information through calculation of the characteristic position information.
  • the collection unit 1103 is also configured to collect depth information, where the depth information represents the linear distance from the feature information to the second device;
  • the processing unit 1104 being further configured to obtain the first position information through calculation of the feature position information includes:
  • the processing unit 1104 is further configured to obtain the first position information through calculation of the feature position information and the depth information.
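  • The calculation described above can be sketched with the standard pinhole back-projection (the formulas for x_c, y_c and z_c given later in this description), followed by an optional change of coordinate system; the extrinsics convention below (R, t describing the pose of the feature collection device in the world frame whose origin is the first device) and the function names are assumptions for illustration only.

```python
import numpy as np

def pixel_depth_to_camera(u, v, depth, fu, fv, cu, cv, scale=1.0):
    """Back-project the feature's pixel position (u, v) plus its measured
    depth into the camera coordinate system of the feature collection device
    (pinhole model: fu, fv focal lengths; cu, cv principal point)."""
    z = depth * scale                 # scale factor of the device intrinsics
    x = z * (u - cu) / fu
    y = z * (v - cv) / fv
    return np.array([x, y, z])

def camera_to_world(p_cam, R, t):
    """Express the same point in the world coordinate system of the first
    device, assuming R (3x3) and t (3,) give the camera pose in that frame."""
    return np.asarray(R, float) @ np.asarray(p_cam, float) + np.asarray(t, float)
```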
  • the feature information includes the user's eye information.
  • the acquiring unit 1101 is further configured to acquire gaze information of the user, where the gaze information represents information about the user gazing at reference points, the reference points are calibrated in the image projected by the first device, the gaze information is used to determine the first distortion amount, the first distortion amount is used to determine the first pre-distortion model, and the first pre-distortion model is used to correct the projected image;
  • the sending unit 1102 is further configured to send first location information and gaze information to the first device.
  • In this embodiment, the operations performed by the units of the feature collection device are similar to those described for the eye-tracking device in the embodiments shown in FIG. 2 and FIG. 3, and are not repeated here.
  • It should be noted that, in practice, the functional units in the various embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • For example, the acquisition unit of the feature collection device may be a camera and the determining unit may be a processor, or the acquisition unit and the determining unit in the display device may both correspond to one processor, with the processor implementing the functions described for the acquisition unit and the determining unit.
  • FIG. 12 is a schematic structural diagram of another embodiment of the display device provided by this application.
  • the display device includes a processor 1201, a memory 1202, a bus 1205, and an interface 1204.
  • the processor 1201 is connected to the memory 1202 and an interface 1204.
  • the bus 1205 is connected to the processor 1201, the memory 1202, and the interface 1204, respectively.
  • the interface 1204 is used to receive or send data.
  • the processor 1201 is a single-core or multi-core central processing unit, or a specific integrated circuit, or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the memory 1202 may be a random access memory (Random Access Memory, RAM), or a non-volatile memory (non-volatile memory), such as at least one hard disk memory.
  • the memory 1202 is used to store computer execution instructions. Specifically, the program 1203 may be included in the computer-executable instructions.
  • the processor 1201 can execute the operations performed by the AR-HUD display system in the foregoing embodiments shown in FIG. 2 and FIG. 3, and details are not described herein again.
  • FIG. 13 is a schematic structural diagram of another embodiment of the feature collection device provided by this application.
  • the feature collection equipment includes a processor 1301, a memory 1302, a bus 1305, and an interface 1304.
  • the processor 1301 is connected to the memory 1302 and an interface 1304.
  • the bus 1305 is connected to the processor 1301, the memory 1302, and the interface 1304, respectively, and the interface 1304 is used to receive or send data.
  • the processor 1301 is a single-core or multi-core central processing unit, or a specific integrated circuit, or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the memory 1302 may be a random access memory (Random Access Memory, RAM) or a non-volatile memory (non-volatile memory), such as at least one hard disk memory.
  • the memory 1302 is used to store computer execution instructions. Specifically, the program 1303 may be included in the computer-executable instructions.
  • the processor 1301 can execute the operations performed by the eye tracking device in the embodiments shown in FIG. 2 and FIG. 3, and details are not described herein again.
  • The processors mentioned in the above embodiments of this application may be a central processing unit (CPU), or may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • The number of processors in the above embodiments of the present application may be one or more, and may be adjusted according to the actual application scenario; this is merely an exemplary description and is not limiting.
  • The number of memories in the embodiments of the present application may be one or more, and may be adjusted according to the actual application scenario; this is merely an exemplary description and is not limiting.
  • It should be noted that, when the display device or the feature collection device includes a processor (or processing unit) and a storage unit, the processor in this application may be integrated with the storage unit, or the processor and the storage unit may be connected through an interface; this can be adjusted according to the actual application scenario and is not limited.
  • the embodiment of the present application also provides a computer program or a computer program product including a computer program.
  • When the computer program is executed on a computer, it enables the computer to implement the method flow of the AR-HUD display system or of the eye-tracking device in any of the foregoing method embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a computer, it implements the method flow related to the AR-HUD display system or the eye-tracking device in any of the foregoing method embodiments.
  • The embodiments shown in FIG. 2 to FIG. 3 may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When software is used for implementation, they may be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or in a wireless manner (for example, infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the disclosed system, device, and method can be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Geometry (AREA)

Abstract

A data processing method for use in a human-machine interaction system. The method includes: a first device receives first position information sent by a second device, where the first position information includes position information of a first feature in a preset coordinate system, and the first feature represents feature information of a user; the first device obtains a first pre-distortion model according to the first position information; and the first device corrects a projected image according to the first pre-distortion model, the projected image being an image projected by the first device. In the method, the first device obtains the first pre-distortion model according to the first position information, sent by the second device, which includes the feature information of the user, so that the first device can correct the projected virtual image in real time using a first pre-distortion model obtained from the user's feature information, improving the user experience.

Description

一种数据处理方法及其设备
本申请要求于2020年5月15日提交中国国家知识产权局、申请号为202010415230.X、发明名称为“一种数据处理方法及其设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及计算机视觉技术领域,具体涉及一种数据处理方法及其设备。
背景技术
增强现实抬头显示器(Augmented Reality-Head Up Display,AR-HUD)利用光学投影系统将辅助驾驶信息(数字、图片、动画等)投影到汽车前挡风玻璃上以形成一个虚像,驾驶员通过挡风玻璃的显示区域可以观察到对应的辅助驾驶信息。
由于挡风玻璃的规格曲率是不同的,光学投影系统投影在挡风玻璃上的虚像往往会发生畸变。为了消除这种畸变,在AR-HUD正式使用之前的准备阶段,会在驾驶员的头部附近设置一个人眼模拟设备,该人眼模拟设备可以模拟驾驶员的眼睛的位置在驾驶区域拍摄经过AR-HUD显示系统投影出来的标定图像,通过标定图像上基准点的位置,计算该标定图像的畸变量,并通过畸变量来进行图像校正。
由于人眼模拟设备基本上是固定的,因此拍摄到的标定图像中的基准点基本也是固定的。在AR-HUD正式使用时,驾驶员的位置经常会发生变化,例如驾驶员的更换,座椅的调节等等,当驾驶员的位置发生变化时,此时驾驶员的人眼位置和人眼模拟设备在准备阶段时的位置不同,因此驾驶员看到的经过校正后的投影虚像可能还会存在畸变,因此导致驾驶员看到的投影虚像效果不佳。
发明内容
本申请实施例提供了一种数据处理方法及其设备,可以应用于人机交互系统,例如车内的人机交互系统,本申请实施例提供的方法用于实时校正产生畸变的投影虚像,提升了用户体验。
本申请实施例第一方面提供了一种数据处理方法。
在人机交互系统中使用的过程中,第一设备会接收到第二设备发送的第一位置信息,该第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示第二设备采集到的用户的特征信息。
第一设备根据第二设备发送的第一位置信息得到第一预畸变模型。第一设备根据第一预畸变模型校正投影图像,该投影图像为第一设备投影的图像。
本申请实施例中,第一设备根据第二设备发送的包括用户的特征信息的第一位置信息得到第一预畸变模型,使得第一设备可以实时的根据用户的特征信息得到的第一预畸变模型校正投影虚像,提升了用户的体验感。
可选地,在一种可能的实现方式中,第一设备在接收到第一位置信息之后,根据第一位置信息获取第二位置信息,该第二位置信息为多个预设值位置信息中与第一位置信息在预设坐标系中的距离小于预设阈值的位置信息,该预设置位置信息是第一设备预先设置的。第一 设备获取到了第二位置信息之后,再获取第二位置信息对应的第一预畸变模型。
本申请实施例中,第一设备根据预设置的位置信息得到对应的第一预畸变模型,节省了需要在线计算第一预畸变模型所消耗的资源,提升了人机交互系统在使用阶段的执行效率。
可选地,在一种可能的实现方式中,第一设备在接收第二设备发送的第一位置信息之前,第一设备接收了第三设备发送的至少两个第一图像信息,该至少两个第一图像信息表示第三设备在预设坐标系中的不同位置采集到的第一设备投影的图像的信息。
第一设备获取标准图像信息,该标准图像信息表示未产生畸变的投影图像。第一设备将至少两个第一图像信息分别与标准图像信息进行比较,分别得到至少两个预设置畸变量,该预设置畸变量表示第一图像信息相对于标准图像信息的畸变量。
第一设备得到的至少两个预设置畸变量分别进行计算,以得到至少两个第一预畸变模型,该至少两个第一预畸变模型与第一图像信息是一一对应的。
本申请实施例中,通过第三设备在不同位置采集到的至少两个投影图像的信息和标准图像进行计算,得到对应的预设置畸变量,再通过对应的预设置畸变量得到至少两个第一预畸变模型,在后期使用时,可以对用户在不同的位置观看到的投影图像进行校准,进而提升了用户的体验。
可选地,在一种可能的实现方式中,第一设备接收第二设备发送的注视信息,该注视信息表示用户注视基准点的信息,该基准点标定在第一设备投影的图像中。第一设备根据注视信息确定第一视场范围,第一视场范围表示用户可以观察到的视场范围。
第一设备根据注视信息和第一位置信息确定第一畸变量,该第一畸变量表示人眼标定图像相对标准图像的畸变量,人眼标定图像表示第一设备的投影图像在用户的人眼中呈现的图像,标准图像为未产生畸变的投影图像。
第一设备根据确定的第一视场范围和第一畸变量得到第一预畸变模型。
本申请实施例中,根据实时采集用户的注视信息,并根据注视信息实时的校准投影图像,使得用户在不同的位置都可以观看到完整的投影图像,提升了用户的体验。
可选地,在一种可能的实现方式中,特征信息包括用户的人眼信息。
本申请实施例中,当特征信息包括了人眼信息时,提升了技术方案的可实现性。
可选地,在一种可能的实现方式中,第一设备根据第一预畸变模型校正投影图像的具体过程中,第一设备根据第一预畸变模型,通过中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)以及现场可编程逻辑门阵列(field programmable gate array,FPGA)中的一种或者多种进行图像处理,以校正投影图像。
本申请实施例中,当第一设备通过CPU、GPU及FPGA中的一种或多种进行图像处理,进而校正投影图像,提升了方案的可实现性。
可选地,在一种可能的实现方式中,第一设备根据第一预畸变模型校正投影图像的具体过程中,第一设备根据第一预畸变模型,通过硅基液晶(liquid crystal on silicon,LCOS)、数字光处理技术(digital light processing,DLP)及液晶显示器(liquid crystal display,LCD)中的一种或者多种进行光调制,以校正投影图像。
本申请实施例中,当第一设备通过LCOS、DLP以及LCD中的一种或者多种进行光调制,进而校正投影图像,提升了方案的可实现性。
本申请实施例第二方面提供了一种数据处理方法。
在人机交互系统使用的过程中,第二设备获取第一位置信息,该第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示第二设备获取到的用户的特征信息,第一位 置信息用于第一设备校正投影图像,投影图像为第一设备投影的图像。第二设备向第一设备发送第一位置信息。
本申请实施例中,第二设备向第一设备发送包括用户的特征信息的第一位置信息,以便于第一设备可以实时的根据该第一位置信息对第一设备投影的图像进行校正,提升了用户的体验感。
可选地,在一种可能的实现方式中,第二设备采集第二图像信息,该第二图像信息包括用户的特征信息,第二设备根据第二图像信息进行计算,以得到第一位置信息。
本申请实施例中,第二设备通过采集包括了用户的特征信息的图像信息并进行计算,以得到第一位置信息,提升了方案的可实现性。
可选地,在一种可能的实现方式中,第二设备根据第一图像信息进行计算的过程中,第二设备通过特征识别算法进行计算,以得到特征信息在第二图像信息中的特征位置信息。第二设备通过特征位置信息再进行计算,以得到第一位置信息。
本申请实施例中,第二设备根据特征识别算法进行计算以得到特征位置信息,再根据特征位置信息得到第一位置信息,提升了方案的可实现性。
可选地,在一种可能的实现方式中,在第二设备通过特征位置信息进行计算以得到第一位置信息之前,第二设备还采集了深度信息,该深度信息表示特征信息到第二设备的直线距离。第二设备通过特征位置信息进行计算以得到第一位置信息的一种实现方式中,第二设备通过特征位置信息和深度信息进行计算,以得到第一位置信息。
本申请实施例中,第二设备通过采集到的深度信息和特征位置信息计算以得到第一位置信息,提升了计算第一位置信息的精确度。
可选地,在一种可能的实现方式中,特征信息包括用户的人眼信息。
本申请实施例中,当特征信息包括了人眼信息时,提升了技术方案的可实现性。
可选地,在一种可能的实现方式中,第二设备在获取了第一位置信息之后,第二设备还会获取用户的注视信息,该注视信息表示用户注视基准点的信息,该基准点标定在第一设备投影的图像中,注视信息是用于确定第一畸变量的,第一畸变量是用于确定第一预畸变模型的,而第一预畸变模型是用于校正第一设备投影的投影图像的。
第二设备在获取到了第一位置信息和注视信息之后,第二设备向第一设备发送第一位置信息和注视信息。
本申请实施例中,根据实时采集用户的注视信息,并根据注视信息实时的校准投影图像,使得用户在不同的位置都可以观看到完整的投影图像,提升了用户的体验。
本申请实施例第三方面提供了一种显示设备。
显示设备包括:
接收单元,用于接收第二设备发送的第一位置信息,第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示用户的特征信息;
处理单元,用于根据第一位置信息得到第一预畸变模型;
校正单元,用于根据第一预畸变模型校正投影图像,投影图像为第一设备投影的图像。
可选地,在一种可能的实现方式中,显示设备还包括:
获取单元,用于根据第一位置信息获取第二位置信息,第二位置信息为多个预设置位置信息中与第一位置信息在预设坐标系中的距离小于预设阈值的位置信息,预设置位置信息为第一设备预先设置的;
获取单元还用于获取第二位置信息对应的第一预畸变模型。
可选地,在一种可能的实现方式中,接收单元还用于接收第三设备发送的至少两个第一图像信息,至少两个第一图像信息表示第三设备在预设坐标系中的不同位置采集到的第一设备投影的图像的信息;
获取单元还用于获取标准图像信息,标准图像信息表示未产生畸变的投影图像;
处理单元还用于将至少两个第一图像信息分别与标准图像信息进行比较,以得到至少两个预设置畸变量,预设置畸变量表示第一图像信息相对于标准图像信息的畸变量;
处理单元还用于根据至少两个预设置畸变量分别计算以得到至少两个第一预畸变模型,至少两个第一预畸变模型与第一图像信息一一对应。
可选地,在一种可能的实现方式中,接收单元还用于接收第二设备发送的注视信息,注视信息表示用户注视基准点的信息,基准点标定在第一设备投影的图像中;
显示设备还包括:
确定单元,用于根据注视信息确定第一视场范围,第一视场范围表示用户观察到的视场范围;
确定单元还用于根据注视信息和第一位置信息确定第一畸变量,第一畸变量表示人眼标定图像相对标准图像的畸变量,人眼标定图像表示第一设备的投影图像在用户的人眼中呈现的图像,标准图像为未产生畸变的投影图像;
处理单元还用于根据第一视场范围和第一畸变量得到第一预畸变模型。
可选地,在一种可能的实现方式中,其特征在于,用户的特征信息包括用户的人眼信息。
可选地,在一种可能的实现方式中,校正单元具体用于根据第一预畸变模型,通过中央处理器CPU、图形处理器GPU及现场可编程逻辑门阵列FPGA中的一种或多种进行图像处理以校正投影图像。
可选地,在一种可能的实现方式中,校正单元具体用于根据第一预畸变模型,通过硅基液晶LCOS、数字光处理技术DLP及液晶显示器LCD中的一种或多种进行光调制以校正投影图像。
本申请第四方面提供了一种特征采集设备。
特征采集设备包括:
获取单元,用于获取第一位置信息,第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示用户的特征信息,第一位置信息用于第一设备校正投影图像,投影图像为第一设备投影的图像;
发送单元,用于向第一设备发送第一位置信息。
可选地,在一种可能的实现方式中,特征采集设备还包括:
采集单元,用于采集第二图像信息,第二图像信息包括用户的特征信息;
处理单元,用于根据第二图像信息计算以获取第一位置信息。
可选地,在一种可能的实现方式中,处理单元具体用于通过特征识别算法计算以得到特征信息在第二图像信息中的特征位置信息;
处理单元具体用于通过特征位置信息计算以得到第一位置信息。
可选地,在一种可能的实现方式中,采集单元还用于采集深度信息,深度信息表示特征信息到第二设备的直线距离;
处理单元还用于通过特征位置信息计算以得到第一位置信息包括:
处理单元还用于通过特征位置信息和深度信息计算以得到第一位置信息。
可选地,在一种可能的实现方式中,特征信息包括用户的人眼信息。
可选地,在一种可能的实现方式中,获取单元还用于获取用户的注视信息,注视信息表示用户注视基准点的信息,基准点标定在第一设备投影的图像中,注视信息用于确定第一畸变量,第一畸变量用于确定第一预畸变模型,第一预畸变模型用于校正投影图像;
发送单元还用于向第一设备发送第一位置信息和注视信息。
本申请实施例第五方面提供了一种人机交互系统。
人机交互系统包括:
显示设备,用于执行如本申请实施例中第一方面的方法。
特征采集设备,用于执行如本申请实施例中第二方面的方法。
本申请实施例第六方面提供了一种显示设备。
该显示设备包括:
处理器、存储器、输入输出设备;
处理器与存储器、输入输出设备相连;
处理器执行如本申请第一方面实施方式所述的方法。
本申请实施例第七方面提供了一种特征采集设备。
该特征采集设备包括:
处理器、存储器、输入输出设备;
处理器与存储器、输入输出设备相连;
处理器执行如本申请第一方面实施方式所述的方法。
本申请实施例第八方面提供了一种计算机存储介质,所述计算机存储介质中存储有指令,所述指令在所述计算机上执行时,使得计算机执行如本申请第一方面和/或第二方面实施方式所述的方法。
本申请实施例第九方面提供了一种计算机程序产品,所述计算机程序产品在计算机上执行时,使得所述计算机执行如本申请第一方面和/或第二方面实施方式所述的方法。
从以上技术方案可以看出,本申请实施例具有以下优点:
本申请实施例中,第一设备根据用户的特征信息在预设坐标系中的位置信息得到第一预畸变模型,使得第一设备可以实时的调整预畸变模型,进而使用第一预畸变模型校正了第一设备投影的图像,提升了用户看到的投影图像的质量。
附图说明
图1为本申请提供的人机交互系统的一个示意图;
图2为本申请提供的人机交互系统的另一示意图;
图3为本申请提供的数据处理方法的一个流程示意图;
图4为本申请提供的数据处理方法的另一流程示意图;
图5为本申请提供的数据处理方法的一个场景示意图;
图6为本申请提供的数据处理方法的另一场景示意图;
图7为本申请提供的数据处理方法的另一场景示意图;
图8为本申请提供的显示设备的一个结构示意图;
图9为本申请提供的显示设备的另一结构示意图;
图10为本申请提供的特征采集设备的一个结构示意图;
图11为本申请提供的特征采集设备的另一结构示意图;
图12为本申请提供的显示设备的另一结构示意图;
图13为本申请提供的特征采集设备的另一结构示意图。
具体实施方式
本申请实施例提供了一种数据处理方法及其设备,用于在驾驶系统中,根据用户的特征信息在预设坐标系中的位置信息,得到第一预畸变模型,使得第一设备可以实时的根据用户的特征信息调整预畸变模型,进而通过第一预畸变模型校正了第一设备投影的图像,提升了用户看到的投影图像的质量,从而提升用户的体验感。
请参阅图1,为本申请提供的人机交互系统的一个示意图。
本申请实施例提供了一种人机交互系统,该人机交互系统包括显示设备,特征采集设备和汽车的前挡风玻璃,特征采集设备和显示设备可以通过有线连接,也可以通过无线连接,具体此处不做限定。如果特征采集设备和显示设备通过有线连接,可以通过数据线连接的方式进行有线连接,例如通过COM接口的数据线、USB接口的数据线、Type-C接口的数据线、Micro-USB接口的数据线等方式进行有线连接,可以理解的是,还可以通过其他方式进行有线连接,例如通过光纤进行有线连接,具体此处不做限定。如果特征采集设备和显示设备通过无线连接,可以通过Wi-Fi无线连接、蓝牙连接、红外连接等无线连接方式连接,可以理解的是,还可以通过其他方式进行无线连接,例如通过第三代接入技术(third generation,3G)、第四代接入技术(fourth generation,4G)、第五代接入技术(fifth generation,5G)等进行无线连接,具体此处不做限定。
具体的,该显示设备可以是抬头显示系统(head up display,HUD),还可以是增强现实抬头显示系统(augmented reality-head up display,AR-HUD),或者带有投影成像功能的显示设备,具体此处不做限定。
具体的,特征采集设备可以是相机,还可以是单独的摄像头,或者是带有处理功能的摄像机,例如人眼追踪设备,具体此处不做限定。
可选的,显示设备还包括计算处理单元,该计算处理单元用于处理其他设备发送的信息,例如图像信息等,该计算处理单元可以集成与该显示设备中,也可以独立于显示设备之外的处理设备,具体此处不做限定。
在该人机交互系统中,显示设备用于将需要显示的图像投影在汽车的前挡风玻璃上,具体的,该显示设备还可以包括光学系统,该光学系统用于将需要显示的图像投射在汽车的前挡风玻璃上。特征采集设备用于获取用户的特征信息,并将该特征信息传输给计算处理单元,计算处理单元进行相关的计算,并将计算结果反馈给显示设备,具体的,该特征信息可以是人眼信息。显示设备再通过调整投影系统来适配用户的观看,使得用户在不同的位置都能观看到完整的投影虚像。
本申请实施例中,根据不同的实施方式,还可以包括更多的使用场景,如图2所示,为本申请提供的人机交互系统的另一示意图。
本申请实施例还提供了一种人机交互系统,该人机交互系统包括显示设备,拍摄设备和汽车的前挡风玻璃,拍摄设备和显示设备可以通过有线连接,也可以通过无线连接,具体此处不做限定。其中,拍摄设备和显示设备的连接方式与图1所示的人机交互系统中的特征采集设备与显示设备的连接方式类似,具体此处不再赘述。
具体的,该显示设备可以是抬头显示系统(head up display,HUD),还可以是增强现实抬头显示系统(augmented reality-head up display,AR-HUD),或者带有投影成像功能的显示设备,具体此处不做限定。
具体的,拍摄设备可以是相机,还可以是单独的摄像头,或者是带有处理功能的摄像机,例如人眼模拟设备,具体此处不做限定。
可选的,显示设备还包括计算处理单元,该计算处理单元用于处理其他设备发送的信息,例如图像信息等,该计算处理单元可以集成与该显示设备中,也可以独立于显示设备之外的处理设备,具体此处不做限定。
拍摄设备用于在特定的视场空间内模拟人眼的视觉角度对投影的图像进行拍摄,该特定的视场空间为车内可部分观察或者可全部观察到投影虚像的空间。
如图2所示的场景为人机交互系统的一种可实现方式中,在人机交互系统投入使用之前的准备阶段的场景,该场景下通过拍摄设备在特定的视场空间内的各个不同的角度对投影的虚像进行拍摄,再将拍摄的图像传输给计算处理单元,计算处理单元进行相关的计算,再将计算结果反馈给显示设备,显示设备从而根据不同的拍摄设备在不同的位置的信息来设置不同的预畸变模型,在人机交互系统投入使用的阶段,再根据用户在不同观看位置的情况,获取对应的预畸变模型调整投影虚像,进而让用户可以在不同的位置都能观看到完整的投影虚像。
为了方便理解本申请实施例,下面对本申请实施例中使用到的名词做一定的解释:
眼盒范围:在AR-HUD显示技术中,当驾驶员的眼睛处于眼盒范围内时,可以看到AR-HUD投影的完整的投影虚像。当驾驶员的眼睛超出设计的眼盒范围时,则会导致驾驶员只能看见部分投影虚像或者完全看不见投影虚像。
下面结合图1和图2所示的人机交互系统,对本申请实施例中的数据处理方法进行描述。
本申请实施例中,在人眼追踪装置获取到人眼特征信息并发送给AR-HUD显示系统之后,AR-HUD显示系统可以通过预先设置的预畸变模型来进行投影虚像的校正,也可以通过人眼追踪装置获取人眼的注视信息,并通过人眼的注视信息和人眼的特征信息得到预畸变模型,进而通过预畸变模型对投影虚像进行校正。下面对两种不同的实施方式分别进行描述。
一、通过预先设置的预畸变模型进行投影虚像的校正。
请参阅图3,为本申请实施例数据处理方法的一个流程示意图。
本实施例中,以AR-HUD显示系统表示第一设备,人眼追踪装置表示第二设备,人眼模拟设备表示第三设备为例进行说明。
在步骤301中,人眼模拟设备向AR-HUD显示系统发送至少两个第一图像信息。
在人机交互系统投入使用之前,会对人机交互系统进行预先的设置或者训练。在预先设置或者训练的阶段,人眼模拟设备会在预设坐标系中的不同位置采集由AR-HUD投影的图像的信息,即采集第一图像信息。在采集到至少两个第一图像信息之后,人眼模拟设备向AR-HUD显示系统发送该至少两个第一图像信息。
具体的,AR-HUD显示系统会先确定AR-HUD显示系统的可用视场范围,并且将该可用视场范围分为若干个小区域,并记录若干个小区域的中心点在预设坐标系中的位置信息,该若干个小区域的中心点在预设坐标系中的位置信息即表示预设置位置信息,该预设置位置信息是由AR-HUD显示系统预先设置的。
在一种可能的实现方式中,如图5所示,当预设坐标系为以人眼追踪装置为原点的相机坐标系时,则AR-HUD显示系统记录若干个小区域的中心点在相机坐标系下的位置坐标。
在一种可能的实现方式中,如图6所示,当预设坐标系为以AR-HUD显示系统为原点的世界坐标系时,则AR-HUD显示系统记录若干个小区域的中心点在世界坐标系下的位置坐标。
在AR-HUD系统记录了若干个小区域的中心点在预设坐标系中的位置信息之后,将人眼模 拟设备安装或者放置在各个位置信息对应的空间点对投影虚像进行采集。需要说明的是,该采集的方式可以有很多种,例如通过拍摄的方式,或者通过摄影的方式,具体此处不做限定。例如,将人眼模拟设备放置在相机坐标系下的(12,31,22)对应的空间点,对AR-HUD显示系统投影的投影虚像进行拍摄。
在一种可能的实现方式中,在通过人眼模拟设备对投影虚像进行采集之前,还可以通过AR-HUD显示系统对投影虚像进行标定,例如通过棋盘格式进行标定,或者通过点阵图的方式进行标定,具体此处不做限定。
对投影虚像进行标定可以让图像在后期计算对应的畸变量时,可以通过标定的点进行计算,相对于对未标定的图像进行计算,可以提升计算畸变量的精准度。
在步骤302中,AR-HUD显示系统获取标准图像信息。
AR-HUD显示系统接收人眼模拟设备发送的至少两个第一图像信息之后,AR-HUD显示系统从本地获取标准图像信息,该标准图像信息表示未产生畸变的投影图像。
可选地,在一种可能的实现方式中,获取到的标准图像是经过标定的标准图像,具体的标定的方式可以通过棋盘格式进行标定或者点阵图的方式进行标定,具体此处不做限定,优选地,该标准图像的标定方式可以和接收到的至少两个第一图像信息的标定方式相同。
在步骤303中,AR-HUD显示系统将至少两个第一图像信息分别与标准图像进行比较,以得到至少两个预设置畸变量。
AR-HUD显示系统在获取了标准图像之后,将接收到的至少两个第一图像信息分别与标准图像进行比较,从而得到至少两个预设置畸变量,该预设置畸变量表示第一图像信息相对于标准图像信息的畸变量。
具体的,在一种可能的实现方式当中,当第一图像信息和标准图像分别进行了标定后,AR-HUD显示系统通过计算标准图像的标定点和第一图像信息中的标点的变换公式,例如该标准图像标定了一个100*100的点阵,第一图像信息中有一个80*80的点阵,则通过计算80*80的点阵变换到100*100的点阵的变换公式,得到预设置畸变量。
在实际应用过程中,由于第一图像信息相对于标准图像信息的畸变情况比较复杂,因此可以设计相对应的计算方式,具体的计算方式此处不做限定。
在步骤304中,AR-HUD显示系统根据至少两个预设置畸变量分别计算以得到至少两个第一预畸变模型。
在AR-HUD显示系统得到至少两个预设置畸变量之后,AR-HUD显示系统根据至少两个预设置畸变量分别进行计算,以得到至少两个第一预畸变模型,且该至少两个预畸变模型与第一图像信息一一对应。
具体的,在一种可能的实现方式中,AR-HUD显示系统可以通过标准图像和预设置畸变量进行计算,得到标准图像对应的变换数学模型,即该变换数学模型为第一预畸变模型。在实际应用过程中,AR-HUD显示系统可以根据该变换数学模型调整标准图像并投影该调整后的图像,使得用户在该变换数学模型对应的第一图像信息中的位置信息观看投影图像时,可以看到完整的标准图像。
具体的,在一种可能的实现方式中,AR-HUD显示系统还可以通过预设置畸变量和AR-HUD显示系统的投影参数进行计算,得到修改的投影参数,即该修改的投影参数为第一预畸变模型。在实际应用过程中,AR-HUD显示系统可以根据修改后的投影参数进行投影标准图像,由于投影参数被修改,则标准图像根据投影参数的变化而变化,由于该投影参数是根据预设置畸变量得到的,因此用户在该投影参数对应的第一图像信息中的位置信息观看投影图像时, 可以看到完整的标准图像。
具体的,在得到多个第一预畸变模型之后,可以建立多个第一预畸变模型中每个第一预畸变模型和对应的第一图像信息中位置信息的对应关系,并将该对应关系存储在AR-HUD显示系统本地。
在步骤305中,人眼追踪装置采集第二图像信息。
在人机交互系统投入使用阶段,当用户进入车内时,人眼追踪装置采集第二图像信息,该第二图像信息包括了用户的特征信息。
具体的,在一种可能的实现方式中,用户的特征信息包括人眼信息。当用户进入车内时,人眼追踪装置对用户进行拍照或者录像,以采集到用户的第二图像信息,该第二图像信息包括用户的人眼信息。当人眼追踪装置通过录像的方式采集时,则在采集之后,通过录像中的画面帧提取,来确定用户的图像信息。
可以理解的是,该特征信息还可以包括更多的信息,例如脸部信息,鼻子信息,嘴部信息等等,具体此处不做限定。
在步骤306中,人眼追踪装置通过特征识别算法计算以得到特征信息在第二图像信息中的特征位置信息。
在人眼追踪装置在采集到第二图像信息之后,人眼模拟设备通过特征识别算法计算,以得到特征信息在第二图像信息中的特征位置信息,该特征位置信息表示特征信息在第二图像信息中的位置信息。
具体的,在一种可能的实现方式中,人眼追踪装置通过人眼识别算法识别用户的人眼信息在第二图像信息中的位置信息,进而获得人眼在图像坐标系中的位置,如图7所示,该图像坐标系表示以图像中心为坐标原点的二维坐标系。例如通过霍夫曼圆检测法识别用户的人眼信息在第二图像信息中的位置信息。或者通过卷积神经网络的方式识别用户的人眼信息在第二图像信息中的位置信息,具体此处不做限定。
在步骤307中,人眼追踪装置采集深度信息。
人眼追踪装置还用于采集深度信息,该深度信息表示用户的特征信息到人眼追踪装置的直线距离。
具体的,在一种可能的实现方式中,人眼追踪装置通过测距功能获得用户的人眼信息到人眼追踪装置的直线距离。例如,人眼追踪装置通过红外测距的方式,获得用户的人眼信息到人眼追踪装置的直线距离,可以理解的是,还可以通过其他方式获得该深度信息,例如通过超声波测距的方式,具体此处不做限定。
在步骤308中,人眼追踪装置通过特征位置信息和深度信息计算以得到第一位置信息。
人眼追踪装置在采集到深度信息之后,人眼追踪装置通过特征位置信息和深度信息进行计算,以得到第一位置信息,该第一位置信息表示用户的特征信息在预设坐标系中的位置信息。
具体的,在一种可能的实现方式中,当预设坐标系为相机坐标系时,人眼追踪装置通过特征位置信息和深度信息以及人眼追踪装置的内参计算得到第一位置信息。例如,可以通过如下公式计算得到:
z c=ds
x c=Z(u-C u)/f u
y c=Z(v-C v)/f v
其中,z c表示用户的特征信息在相机坐标系中的位置信息中的Z轴对应的值,x c表示用 户的特征信息在相机坐标系中的位置信息中的X轴对应的值,y c表示用户的特征信息在相机坐标系中的位置信息中的Y轴对应的值,d表示深度信息,s表示人眼追踪装置的内参中的缩放因子,f u表示人眼追踪装置的内参中的水平方向的焦距,f v表示人眼追踪装置的内参中垂直方向的焦距,u表示特征位置信息中图像坐标系中的X轴对应的值,v表示特征位置信息中图像坐标系中Y轴对应的值,C u和C v表示图像坐标系中的原点坐标对应的X轴和Y轴的值。
需要说明的是,当预设坐标系为相机坐标系时,第一位置信息等于用户的特征信息在相机坐标系中的位置信息。
可以理解的是,在实际应用过程中,还可以通过其他公式得到用户的特征信息在相机坐标系中的位置信息中,具体此处不做限定。
当预设坐标系是以AR-HUD显示系统为原点的世界坐标系时,第一位置信息表示用户的特征信息在世界坐标系中的位置信息,则人眼追踪装置根据用户的特征信息在相机坐标系中的位置信息计算得到第一位置信息。
具体的,在一种可能的实现方式中,人眼追踪装置可以通过如下方式计算得到第一位置信息:
Figure PCTCN2021092269-appb-000001
R=R Z*R Y*R X,T=(t x,t y,t z) T
Figure PCTCN2021092269-appb-000002
其中,ω、δ和θ为旋转参数(ω,δ,θ),t x、t y和t z为三个轴的平移参数(t x,t y,t z),x w为用户的特征信息在世界坐标系中的位置信息中X轴的值,y w为用户的特征信息在世界坐标系中的位置信息中Y轴的值,z w为用户的特征信息在世界坐标系中的位置信息中Z轴的值,z c表示用户的特征信息在相机坐标系中的位置信息中的Z轴对应的值,x c表示用户的特征信息在相机坐标系中的位置信息中的X轴对应的值,y c表示用户的特征信息在相机坐标系中的位置信息中的Y轴对应的值。
可以理解的是,还可以通过其他公式计算出用户的特征信息在世界坐标系中的位置信息,具体此处不做限定。
在步骤309中,人眼追踪装置向AR-HUD显示系统发送第一位置信息。
人眼追踪装置在得到第一位置信息之后,人眼追踪装置将第一位置信息发送给AR-HUD显示系统。
在步骤310中,AR-HUD显示系统根据第一位置信息获取第二位置信息。
AR-HUD在接收到人眼追踪装置发送的AR-HUD显示系统之后,AR-HUD显示系统根据第一位置信息获取第二位置信息,该第二位置信息表示多个预设置位置信息中与第一位置信息在预设坐标系中的距离小于预设阈值的位置信息。
具体的,在一种可能的实现方式中,AR-HUD显示系统根据第一位置信息和多个预设置位置信息中每个预设置位置信息分别进行计算,得到与第一位置信息在预设坐标系中的距离最小的预设置位置信息。例如,可以通过如下公式进行计算:
Figure PCTCN2021092269-appb-000003
其中,j表示预设置位置信息与第一位置信息的距离最小的值对应的索引号,x i表示预设置位置信息中X轴的值,y i表示预设置位置信息中Y轴的值,z i表示预设置位置信息中Z轴的值,x w为用户的特征信息在世界坐标系中的位置信息中X轴的值,y w为用户的特征信息在世界坐标系中的位置信息中Y轴的值,z w为用户的特征信息在世界坐标系中的位置信息中Z轴的值。
可以理解的是,还可以通过其他公式计算得到预设置位置信息与第一位置信息的距离,例如,当第一位置信息为用户的特征信息在相机坐标系中的位置信息时,则使用对应的相机坐标系中的位置信息的取值(x c,y c,z c)代替上述公式中(x w,y w,z w),具体的计算公式此处不做限定。
通过上述方法得出每个预设置位置信息与第一位置信息的距离,再从中选出与第一位置信息距离小于预设范围的预设置位置信息作为第二位置信息,优选地,可以从中选出与第一位置信息距离最小的预设置位置信息作为第二位置信息。
在步骤311中,AR-HUD显示系统获取第二位置信息对应的第一预畸变模型。
AR-HUD显示系统在获得了第二位置信息之后,从本地中查找第二位置信息对应的第一预畸变模型。
在步骤312中,AR-HUD显示系统根据第一预畸变模型校正投影图像。
AR-HUD显示系统在获得了第一预畸变模型之后,AR-HUD显示系统根据第一预畸变模型校正第一设备投影的图像。
具体的,在一种可能的实现方式当中,当第一预畸变模型表示标准图像的变换数学模型时,则AR-HUD显示系统根据该变换数学模型调整标准图像并投影该调整后的图像,使得用户的人眼在该变换数学模型对应的预设置位置信息观看投影图像时,可以看到完整的标准图像。
具体的,在一种可能的实现方式中,AR-HUD显示系统可以根据变换数学模型,通过CPU、GPU及FPGA中的一种或者多种来处理标准图像,以得到调整后的图像,使得用户的人眼在该变换数学模型对应的预设置位置信息观看调整后的投影图像时,可以看到完整的标准图像。可以理解的是,还可以通过其他方式来处理标准图像以达到调整图像的目的,具体此处不做限定。
具体的,在一种可能的实现方式中,当第一预畸变模型表示修改的投影参数时,则AR-HUD显示系统根据修改后的投影参数进行投影标准图像,由于投影参数被修改,则标准图像根据投影参数的变化而变化,由于该投影参数是根据预设置畸变量得到的,因此用户的人眼在该投影参数对应的预设置位置信息观看投影图像时,可以看到完整的标准图像。
具体的,在一种可能的实现方式中,AR-HUD显示系统可以根据修改的投影参数,对硅基液晶LCOS、数字光处理技术DLP以及液晶显示器LCD中的一种或者多种进行光调制,这样可以让用户的人眼在该投影参数对应的预设置位置信息观看经过光调制的投影图像时,可以看到完整的标准图像。可以理解的是,还可以通过其他方式来进行光调制以达到调整投影图像的目的,具体此处不做限定。
本实施例中,步骤301至步骤304为人机交互系统在投入使用之前准备阶段的步骤,因此在实际应用过程中,即在人机交互系统使用阶段时,可以只执行步骤305至步骤312,具体此处不做限定。
本实施例中,AR-HUD显示系统通过人眼追踪装置采集到的用户的特征信息,进而确定第一预畸变模型,再根据第一预畸变模型校正投影图像,可以使得用户在不同的位置都可以观看到完整的投影图像,提升了用户的视觉体验。
二、通过人眼追踪装置实时获取人眼的注视信息进而对投影虚像进行校正。
请参阅图4,为本申请实施例数据处理方法另一流程示意图。
本实施例中,以AR-HUD显示系统表示第一设备,人眼追踪装置表示第二设备为例进行说明。
在步骤401中,人眼追踪装置采集第二图像信息。
在步骤402中,人眼追踪装置通过特征识别算法计算以得到特征信息在第二图像信息中的特征位置信息。
在步骤403中,人眼追踪装置采集深度信息。
在步骤404中,人眼追踪装置通过特征位置信息和深度信息计算以得到第一位置信息。
本实施例中步骤401至404所执行的方法步骤与前述图3所示实施例中的步骤305至308类似,具体此处不再赘述。
在步骤405中,人眼追踪装置获取用户的注视信息。
人眼追踪装置还用于获取用户的注视信息,该注视信息表示用户注视基准点的信息,基准点标定在第一设备投影的图像中。
具体的,在一种可能的实现方式中,当用户进入车内时,由用户来选择是否启动校准模式,该校准模式用于对当前的投影虚像进行校准。若用户启动了校准模式,则由AR-HUD系统投影出带有基准点标定的图像信息,例如点阵图标定法标定的图像或者棋盘标定法棋盘标定法标定的图像,该基准点就表示点阵图中的点或者棋盘中的点,具体此处不做限定。可以理解的是,该校准模式还可以通过自动启动来实现,例如,当检测到当前用户进入车内时,自动启动校准模式,具体启动校准模式的时机或者方式此处不做限定。
AR-HUD显示系统在投影了带有基准点标定的图像信息之后,就通过给用户发送指示信息来提示用户注视图像信息中的点,人眼追踪装置采集用户的人眼在注视时的人眼信息,得到注视信息。例如,AR-HUD显示系统发出系统语音提示用户进入校准模式,并将经过标定的图像信息投影到前挡风玻璃上。系统语音还指示用户逐个注视图像信息中的标定的基准点,当用户的人眼注视基准点的时间超过预置的时间段,例如人眼注视基准点的时间超过3秒时,则AR-HUD显示系统确定用户注视了该基准点,并获取对应的人眼信息。可以理解的是,这里的预置的时间段为3秒仅仅是示例,在实际应用过程中,可以根据场景的不同而设置不同的值,具体此处不做限定。需要说明的是,该指示信息可以是系统语音,或者投影图像上用来指示用户观看基准点的信息,具体此处不做限定。
具体的,在一种可能的实现方式中,当用户根据提示信息注视基准点时,人眼追踪装置通过发射红外线的方式,在人眼的瞳孔处形成一个亮斑,该亮斑由人眼追踪装置到人眼瞳孔的角度的不同而在瞳孔的不同位置形成亮斑,再通过亮斑相对于瞳孔中心点的位置,就能计算出人眼的视线方向,人眼追踪装置再根据人眼在预设坐标系中的位置,和人眼的视线方向确定人眼在投影虚像中实际观察到的基准点的坐标。
可以理解的是,在实际应用过程中,人眼追踪装置还可以通过其他方式采集人眼观察到的基准点的坐标,具体此处不做限定。
在人眼追踪装置采集人眼注视每个基准点的过程时,因为有的基准点已经超过了用户在当前位置可观察到的视场范围,所以人眼追踪装置则采集不到该基准点下的人眼观察基准点的坐标。当用户注视了每个可观察到的基准点之后,人眼追踪装置采集到的坐标点可以组成一个人眼标定图像信息,该人眼标定图像信息即是用户在当前位置可以观察到的标定的图像信息,即注视信息。
在步骤406中,人眼追踪装置向AR-HUD显示系统发送第一位置信息和注视信息。
人眼追踪装置在得到第一位置信息和注视信息之后,人眼追踪装置将第一位置信息和注视信息发送给AR-HUD显示系统。
在步骤407中,AR-HUD显示系统根据注视信息确定第一视场范围。
AR-HUD显示系统在接收到了注视信息之后,根据注视信息确定第一视场范围,该第一视场范围表示用户在当前位置可以观察到的视场范围。
具体的,AR-HUD显示系统根据注视信息中的人眼标定图像信息确定第一视场范围。
在步骤408中,AR-HUD显示系统根据注视信息和第一位置信息确定第一畸变量。
AR-HUD显示系统在确定了第一视场范围后,根据第一位置信息和注视信息确定第一畸变量,第一畸变量表示人眼标定图像相对标准图像的畸变量,标准图像为未产生畸变的投影图像。
具体的,在一种可能的实现方式中,AR-HUD显示系统根据第一位置信息中用户的人眼信息在预设坐标系中的位置信息,和注视信息中的人眼标定图像信息中的每个基准点的坐标,得到人眼标定图像相对于第一位置信息的坐标信息,再通过坐标转换,得到人眼标定图像在预设坐标系中的位置信息。再通过计算人眼标定图像在预设坐标系中的位置信息和标准图像在预设坐标系中的位置信息得到第一畸变量。
可以理解的是,在实际应用过程中,还可以通过其他方式确定第一畸变量,例如通过人眼标定图像中的某一个基准点在预设坐标系中的位置信息和用相同标定法标定的标准图像中对应的基准点在预设坐标系中的位置信息,得到第一畸变量,具体此处不做限定。
在步骤409中,AR-HUD显示系统根据第一视场范围和第一畸变量得到第一预畸变模型。
在AR-HUD显示系统得到第一畸变量之后,AR-HUD显示系统根据第一视场范围和第一畸变量得到第一预畸变模型。
具体的,在一种可能的实现方式中,AR-HUD显示系统根据用户的人眼在当前位置中可以看到的视场范围来确定投影虚像可以投射的范围大小,再根据第一畸变量和标准图像进行计算,得到标准图像对应的变换数学模型,再根据标准图像对应的变换数学模型,和投影虚像可以投射的范围大小来确定第一预畸变模型。
具体的,在一种可能的实现方式中,AR-HUD显示系统根据用户的人眼在当前位置中可以看到的视场范围来确定投影虚像可以投射的范围大小,再根据第一畸变量和AR-HUD显示系统的投影参数进行计算,得到修改的投影参数,再根据修改的投影参数,和投影虚像可以投射的范围大小来确定第一预畸变模型。
可以理解的是,在实际应用过程中,还可以根据其他方式确定第一预畸变模型,具体此处不做限定。
在步骤410中,AR-HUD显示系统根据第一预畸变模型校正投影图像。
本实施例中的步骤410与前述图3所示实施例中的步骤312类似,具体此处不再赘述。
本实施例中,AR-HUD显示系统通过采集人眼的注视信息进而确定第一预畸变模型,从而根据第一预畸变模型校正投影图像,使得用户可以实时的对投影图像进行校准,提升了用户的体验。
上面对本申请实施例中的信息处理方法进行了描述,下面对本申请实施例中的设备进行描述,请参阅图8,为本申请提供的显示设备的一个实施例的结构示意图。
一种显示设备,包括:
接收单元801,用于接收第二设备发送的第一位置信息,第一位置信息包括第一特征在 预设坐标系中的位置信息,第一特征表示用户的特征信息;
处理单元802,用于根据第一位置信息得到第一预畸变模型;
校正单元803,用于根据第一预畸变模型校正投影图像,投影图像为第一设备投影的图像。
本实施例中,显示设备各单元所执行的操作与前述图2和图3所示实施例中AR-HUD显示系统描述的类似,此处不再赘述。
请参阅图9,为本申请提供的显示设备的另一实施例的结构示意图。
一种显示设备,包括:
接收单元901,用于接收第二设备发送的第一位置信息,第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示用户的特征信息;
处理单元902,用于根据第一位置信息得到第一预畸变模型;
校正单元903,用于根据第一预畸变模型校正投影图像,投影图像为第一设备投影的图像。
可选地,显示设备还包括:
获取单元904,用于根据第一位置信息获取第二位置信息,第二位置信息为多个预设置位置信息中与第一位置信息在预设坐标系中的距离小于预设阈值的位置信息,预设置位置信息为第一设备预先设置的;
获取单元904还用于获取第二位置信息对应的第一预畸变模型。
可选地,接收单元901还用于接收第三设备发送的至少两个第一图像信息,至少两个第一图像信息表示第三设备在预设坐标系中的不同位置采集到的第一设备投影的图像的信息;
获取单元904还用于获取标准图像信息,标准图像信息表示未产生畸变的投影图像;
处理单元902还用于将至少两个第一图像信息分别与标准图像信息进行比较,以得到至少两个预设置畸变量,预设置畸变量表示第一图像信息相对于标准图像信息的畸变量;
处理单元902还用于根据至少两个预设置畸变量分别计算以得到至少两个第一预畸变模型,至少两个第一预畸变模型与第一图像信息一一对应。
可选地,接收单元901还用于接收第二设备发送的注视信息,注视信息表示用户注视基准点的信息,基准点标定在第一设备投影的图像中;
显示设备还包括:
确定单元905,用于根据注视信息确定第一视场范围,第一视场范围表示用户观察到的视场范围;
确定单元905还用于根据注视信息和第一位置信息确定第一畸变量,第一畸变量表示人眼标定图像相对标准图像的畸变量,人眼标定图像表示第一设备的投影图像在用户的人眼中呈现的图像,标准图像为未产生畸变的投影图像;
处理单元902还用于根据第一视场范围和第一畸变量得到第一预畸变模型。
可选地,用户的特征信息包括用户的人眼信息。
可选地,校正单元903具体用于根据第一预畸变模型,通过中央处理器CPU、图形处理器GPU及现场可编程逻辑门阵列FPGA中的一种或多种进行图像处理以校正投影图像。
可选地,校正单元903具体用于根据第一预畸变模型,通过硅基液晶LCOS、数字光处理技术DLP及液晶显示器LCD中的一种或多种进行光调制以校正投影图像。
本实施例中,显示设备各单元所执行的操作与前述图2和图3所示实施例中AR-HUD显示系统描述的类似,此处不再赘述。
请参阅图10,为本申请提供的特征采集设备的一个实施例的结构示意图。
一种特征采集设备,包括:
获取单元1001,用于获取第一位置信息,第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示用户的特征信息,第一位置信息用于第一设备校正投影图像,投影图像为第一设备投影的图像;
发送单元1002,用于向第一设备发送第一位置信息。
本实施例中,特征采集设备各单元所执行的操作与前述图2和图3所示实施例中人眼追踪装置描述的类似,此处不再赘述。
请参阅图11,为本申请提供的特征采集设备的另一实施例的结构示意图。
获取单元1101,用于获取第一位置信息,第一位置信息包括第一特征在预设坐标系中的位置信息,第一特征表示用户的特征信息,第一位置信息用于第一设备校正投影图像,投影图像为第一设备投影的图像;
发送单元1102,用于向第一设备发送第一位置信息。
可选地,特征采集设备还包括:
采集单元1103,用于采集第二图像信息,第二图像信息包括用户的特征信息;
处理单元1104,用于根据第二图像信息计算以获取第一位置信息。
可选地,处理单元1104具体用于通过特征识别算法计算以得到特征信息在第二图像信息中的特征位置信息;
处理单元1104具体用于通过特征位置信息计算以得到第一位置信息。
可选地,采集单元1103还用于采集深度信息,深度信息表示特征信息到第二设备的直线距离;
处理单元1104还用于通过特征位置信息计算以得到第一位置信息包括:
处理单元1104还用于通过特征位置信息和深度信息计算以得到第一位置信息。
可选地,特征信息包括用户的人眼信息。
可选地,获取单元1101还用于获取用户的注视信息,注视信息表示用户注视基准点的信息,基准点标定在第一设备投影的图像中,注视信息用于确定第一畸变量,第一畸变量用于确定第一预畸变模型,第一预畸变模型用于校正投影图像;
发送单元1102还用于向第一设备发送第一位置信息和注视信息。
本实施例中,特征采集设备各单元所执行的操作与前述图2和图3所示实施例中人眼追踪装置描述的类似,此处不再赘述。
需要说明的是,在实际应用过程中,本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。例如,特征采集设备的获取单元可以是摄像头,确定单元可以是处理器,或者,显示设备中的获取单元和确定单元都可以对应在一个处理器上,由处理器来实现获取单元和确定单元所描述的功能。
请参阅图12,为本申请提供的显示设备的另一实施例的结构示意图。
显示设备中包括处理器1201、存储器1202、总线1205、接口1204等设备,处理器1201与存储器1202、接口1204相连,总线1205分别连接处理器1201、存储器1202以及接口1204,接口1204用于接收或者发送数据,处理器1201是单核或多核中央处理单元,或者为特定集成电路,或者为被配置成实施本发明实施例的一个或多个集成电路。存储器1202可以为随机存取存储器(Random Access Memory,RAM),也可以为非易失性存储器(non-volatile memory), 例如至少一个硬盘存储器。存储器1202用于存储计算机执行指令。具体的,计算机执行指令中可以包括程序1203。
本实施例中,该处理器1201可以执行前述图2和图3所示实施例中AR-HUD显示系统所执行的操作,具体此处不再赘述。
请参阅图13,为本申请提供的特征采集设备的另一实施例的结构示意图。
特征采集设备中包括处理器1301、存储器1302、总线1305、接口1304等设备,处理器1301与存储器1302、接口1304相连,总线1305分别连接处理器1301、存储器1302以及接口1304,接口1304用于接收或者发送数据,处理器1301是单核或多核中央处理单元,或者为特定集成电路,或者为被配置成实施本发明实施例的一个或多个集成电路。存储器1302可以为随机存取存储器(Random Access Memory,RAM),也可以为非易失性存储器(non-volatile memory),例如至少一个硬盘存储器。存储器1302用于存储计算机执行指令。具体的,计算机执行指令中可以包括程序1303。
本实施例中,该处理器1301可以执行前述图2和图3所示实施例中人眼追踪装置所执行的操作,具体此处不再赘述。
应理解,本申请以上实施例中提及的处理器,或者本申请上述实施例提供的处理器,可以是中央处理单元(central processing unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还应理解,本申请中以上实施例中的处理器的数量可以是一个,也可以是多个,可以根据实际应用场景调整,此处仅仅是示例性说明,并不作限定。本申请实施例中的存储器的数量可以是一个,也可以是多个,可以根据实际应用场景调整,此处仅仅是示例性说明,并不作限定。
需要说明的是,当显示设备或者特征采集设备包括处理器(或处理单元)与存储单元时,本申请中的处理器可以是与存储单元集成在一起的,也可以是处理器与存储单元通过接口连接,可以根据实际应用场景调整,并不作限定。
本申请实施例还提供了一种计算机程序或包括计算机程序的一种计算机程序产品,该计算机程序在某一计算机上执行时,将会使所述计算机实现上述任一方法实施例中与AR-HUD显示系统或者人眼追踪装置的方法流程。
本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现上述任一方法实施例中与AR-HUD显示系统或者人眼追踪装置相关的方法流程。
在上述图2-图3中各个实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计 算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
在本申请实施例中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本发明。在本申请实施例中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,在本申请的描述中,除非另有说明,“/”表示前后关联的对象是一种“或”的关系,例如,A/B可以表示A或B;本申请中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,其中A,B可以是单数或者复数。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请实施例的保护范围之内。

Claims (32)

  1. 一种数据处理方法,其特征在于,包括:
    第一设备接收第二设备发送的第一位置信息,所述第一位置信息包括第一特征在预设坐标系中的位置信息,所述第一特征表示用户的特征信息;
    所述第一设备根据所述第一位置信息得到第一预畸变模型;
    所述第一设备根据所述第一预畸变模型校正投影图像,所述投影图像为所述第一设备投影的图像。
  2. 根据权利要求1所述的方法,其特征在于,所述第一设备根据所述第一位置信息得到第一预畸变模型包括:
    所述第一设备根据所述第一位置信息获取第二位置信息,所述第二位置信息为多个预设置位置信息中与所述第一位置信息在所述预设坐标系中的距离小于预设阈值的位置信息,所述预设置位置信息为所述第一设备预先设置的;
    所述第一设备获取所述第二位置信息对应的第一预畸变模型。
  3. 根据权利要求2所述的方法,其特征在于,所述在第一设备接收第二设备发送的第一位置信息之前,所述方法还包括:
    所述第一设备接收第三设备发送的至少两个第一图像信息,所述至少两个第一图像信息表示所述第三设备在所述预设坐标系中的不同位置采集到的所述第一设备投影的图像的信息;
    所述第一设备获取标准图像信息,所述标准图像信息表示未产生畸变的投影图像;
    所述第一设备将所述至少两个第一图像信息分别与所述标准图像信息进行比较,以得到至少两个预设置畸变量,所述预设置畸变量表示所述第一图像信息相对于所述标准图像信息的畸变量;
    所述第一设备根据所述至少两个预设置畸变量分别计算以得到至少两个第一预畸变模型,所述至少两个第一预畸变模型与所述第一图像信息一一对应。
  4. 根据权利要求1所述的方法,其特征在于,所述第一设备根据所述第一位置信息得到第一预畸变模型包括:
    所述第一设备接收所述第二设备发送的注视信息,所述注视信息表示所述用户注视基准点的信息,所述基准点标定在所述第一设备投影的图像中;
    所述第一设备根据所述注视信息确定第一视场范围,所述第一视场范围表示所述用户观察到的视场范围;
    所述第一设备根据所述注视信息和所述第一位置信息确定第一畸变量,所述第一畸变量表示人眼标定图像相对标准图像的畸变量,所述人眼标定图像表示所述第一设备的投影图像在所述用户的人眼中呈现的图像,所述标准图像为未产生畸变的投影图像;
    所述第一设备根据所述第一视场范围和所述第一畸变量得到第一预畸变模型。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述用户的特征信息包括所述用户的人眼信息。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述第一设备根据所述第一预畸变模型校正投影图像包括:
    所述第一设备根据所述第一预畸变模型,通过中央处理器CPU、图形处理器GPU及现场可编程逻辑门阵列FPGA中的一种或多种进行图像处理以校正所述投影图像。
  7. 根据权利要求1至5中任一项所述的方法,其特征在于,所述第一设备根据所述第一预畸变模型校正投影图像包括:
    所述第一设备根据所述第一预畸变模型,通过硅基液晶LCOS、数字光处理技术DLP及液晶显示器LCD中的一种或多种进行光调制以校正所述投影图像。
  8. 一种数据处理方法,其特征在于,包括:
    第二设备获取第一位置信息,所述第一位置信息包括第一特征在预设坐标系中的位置信息,所述第一特征表示用户的特征信息,所述第一位置信息用于第一设备校正投影图像,所述投影图像为所述第一设备投影的图像;
    所述第二设备向所述第一设备发送所述第一位置信息。
  9. 根据权利要求8所述的方法,其特征在于,所述第二设备获取第一位置信息包括:
    所述第二设备采集第二图像信息,所述第二图像信息包括所述用户的特征信息;
    所述第二设备根据所述第二图像信息计算以获取所述第一位置信息。
  10. 根据权利要求9所述的方法,其特征在于,所述第二设备根据所述第一图像信息计算以获取所述第一位置信息包括:
    所述第二设备通过特征识别算法计算以得到所述特征信息在所述第二图像信息中的特征位置信息;
    所述第二设备通过所述特征位置信息计算以得到第一位置信息。
  11. 根据权利要求10所述的方法,其特征在于,所述第二设备通过所述特征位置信息计算以得到第一位置信息之前,所述方法还包括:
    所述第二设备采集深度信息,所述深度信息表示所述特征信息到所述第二设备的直线距离;
    所述第二设备通过所述特征位置信息计算以得到第一位置信息包括:
    所述第二设备通过所述特征位置信息和所述深度信息计算以得到所述第一位置信息。
  12. 根据权利要求8至11中任一项所述的方法,其特征在于,所述特征信息包括所述用户的人眼信息。
  13. 根据权利要求8至12中任一项所述的方法,其特征在于,第二设备获取第一位置信息之后,所述方法还包括:
    所述第二设备获取所述用户的注视信息,所述注视信息表示所述用户注视基准点的信息,所述基准点标定在所述第一设备投影的图像中,所述注视信息用于确定第一畸变量,所述第一畸变量用于确定第一预畸变模型,所述第一预畸变模型用于校正所述投影图像;
    所述第二设备向所述第一设备发送所述第一位置信息包括:
    所述第二设备向所述第一设备发送所述第一位置信息和所述注视信息。
  14. 一种显示设备,其特征在于,包括:
    接收单元,用于接收第二设备发送的第一位置信息,所述第一位置信息包括第一特征在预设坐标系中的位置信息,所述第一特征表示用户的特征信息;
    处理单元,用于根据所述第一位置信息得到第一预畸变模型;
    校正单元,用于根据所述第一预畸变模型校正投影图像,所述投影图像为所述第一设备投影的图像。
  15. 根据权利要求14所述的显示设备,其特征在于,所述显示设备还包括:
    获取单元,用于根据所述第一位置信息获取第二位置信息,所述第二位置信息为多个预 设置位置信息中与所述第一位置信息在所述预设坐标系中的距离小于预设阈值的位置信息,所述预设置位置信息为所述第一设备预先设置的;
    所述获取单元还用于获取所述第二位置信息对应的第一预畸变模型。
  16. 根据权利要求15所述的显示设备,其特征在于,所述接收单元还用于接收第三设备发送的至少两个第一图像信息,所述至少两个第一图像信息表示所述第三设备在所述预设坐标系中的不同位置采集到的所述第一设备投影的图像的信息;
    所述获取单元还用于获取标准图像信息,所述标准图像信息表示未产生畸变的投影图像;
    所述处理单元还用于将所述至少两个第一图像信息分别与所述标准图像信息进行比较,以得到至少两个预设置畸变量,所述预设置畸变量表示所述第一图像信息相对于所述标准图像信息的畸变量;
    所述处理单元还用于根据所述至少两个预设置畸变量分别计算以得到至少两个第一预畸变模型,所述至少两个第一预畸变模型与所述第一图像信息一一对应。
  17. 根据权利要求14所述的显示设备,其特征在于,所述接收单元还用于接收所述第二设备发送的注视信息,所述注视信息表示所述用户注视基准点的信息,所述基准点标定在所述第一设备投影的图像中;
    所述显示设备还包括:
    确定单元,用于根据所述注视信息确定第一视场范围,所述第一视场范围表示所述用户观察到的视场范围;
    所述确定单元还用于根据所述注视信息和所述第一位置信息确定第一畸变量,所述第一畸变量表示人眼标定图像相对标准图像的畸变量,所述人眼标定图像表示所述第一设备的投影图像在所述用户的人眼中呈现的图像,所述标准图像为未产生畸变的投影图像;
    所述处理单元还用于根据所述第一视场范围和所述第一畸变量得到第一预畸变模型。
  18. 根据权利要求14至17中任一项所述的显示设备,其特征在于,所述用户的特征信息包括所述用户的人眼信息。
  19. 根据权利要求14至18中任一项所述的显示设备,其特征在于,所述校正单元具体用于根据所述第一预畸变模型,通过中央处理器CPU、图形处理器GPU及现场可编程逻辑门阵列FPGA中的一种或多种进行图像处理以校正所述投影图像。
  20. 根据权利要求14至18中任一项所述的显示设备,其特征在于,所述校正单元具体用于根据所述第一预畸变模型,通过硅基液晶LCOS、数字光处理技术DLP及液晶显示器LCD中的一种或多种进行光调制以校正所述投影图像。
  21. 一种特征采集设备,其特征在于,包括:
    获取单元,用于获取第一位置信息,所述第一位置信息包括第一特征在预设坐标系中的位置信息,所述第一特征表示用户的特征信息,所述第一位置信息用于第一设备校正投影图像,所述投影图像为所述第一设备投影的图像;
    发送单元,用于向所述第一设备发送所述第一位置信息。
  22. 根据权利要求21所述的特征采集设备,其特征在于,所述特征采集设备还包括:
    采集单元,用于采集第二图像信息,所述第二图像信息包括所述用户的特征信息;
    处理单元,用于根据所述第二图像信息计算以获取所述第一位置信息。
  23. 根据权利要求22所述的特征采集设备,其特征在于,所述处理单元具体用于通过特征识别算法计算以得到所述特征信息在所述第二图像信息中的特征位置信息;
    所述处理单元具体用于通过所述特征位置信息计算以得到第一位置信息。
  24. 根据权利要求23所述的特征采集设备,其特征在于,所述采集单元还用于采集深度信息,所述深度信息表示所述特征信息到所述第二设备的直线距离;
    所述处理单元还用于通过所述特征位置信息计算以得到第一位置信息包括:
    所述处理单元还用于通过所述特征位置信息和所述深度信息计算以得到所述第一位置信息。
  25. 根据权利要求21至24中任一项所述的特征采集设备,其特征在于,所述特征信息包括所述用户的人眼信息。
  26. 根据权利要求21至25中任一项所述的特征采集设备,其特征在于,所述获取单元还用于获取所述用户的注视信息,所述注视信息表示所述用户注视基准点的信息,所述基准点标定在所述第一设备投影的图像中,所述注视信息用于确定第一畸变量,所述第一畸变量用于确定第一预畸变模型,所述第一预畸变模型用于校正所述投影图像;
    所述发送单元还用于向所述第一设备发送所述第一位置信息和所述注视信息。
  27. 一种显示设备,其特征在于,所述显示设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述显示设备执行如权利要求1-7中任一项所述的方法。
  28. 一种特征采集设备,其特征在于,所述特征采集设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述特征采集设备执行如权利要求8-13中任一项所述的方法。
  29. 一种显示设备,其特征在于,包括:处理器和接口电路;
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求1-7中任一项所述的方法。
  30. 一种特征采集设备,其特征在于,包括:处理器和接口电路;
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求8-13中任一项所述的方法。
  31. 一种人机交互系统,其特征在于,包括:
    显示设备,用于执行如权利要求1-7中任一项所述的方法;
    特征采集设备,用于执行如权利要求8-13中任一项所述的方法。
  32. 一种可读存储介质,用于存储有指令,当所述指令被执行时,使如权利要求1-13中任一项所述的方法被实现。
PCT/CN2021/092269 2020-05-15 2021-05-08 一种数据处理方法及其设备 WO2021227969A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21803436.1A EP4141621A4 (en) 2020-05-15 2021-05-08 DATA PROCESSING METHOD AND CORRESPONDING DEVICE
US17/986,344 US20230077753A1 (en) 2020-05-15 2022-11-14 Data Processing Method and Device Thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010415230.XA CN113672077A (zh) 2020-05-15 2020-05-15 一种数据处理方法及其设备
CN202010415230.X 2020-05-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/986,344 Continuation US20230077753A1 (en) 2020-05-15 2022-11-14 Data Processing Method and Device Thereof

Publications (1)

Publication Number Publication Date
WO2021227969A1 true WO2021227969A1 (zh) 2021-11-18

Family

ID=78525245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092269 WO2021227969A1 (zh) 2020-05-15 2021-05-08 一种数据处理方法及其设备

Country Status (4)

Country Link
US (1) US20230077753A1 (zh)
EP (1) EP4141621A4 (zh)
CN (2) CN114415826A (zh)
WO (1) WO2021227969A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114019686A (zh) * 2021-11-23 2022-02-08 芜湖汽车前瞻技术研究院有限公司 抬头显示器的虚像显示方法、装置、设备和介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002431B (zh) * 2022-05-20 2023-10-27 广景视睿科技(深圳)有限公司 一种投影方法、控制装置和投影系统
CN116017174B (zh) * 2022-12-28 2024-02-06 江苏泽景汽车电子股份有限公司 Hud畸变矫正方法、装置及系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190025815A1 (en) * 2017-07-24 2019-01-24 Motorola Solutions, Inc. Methods and systems for controlling an object using a head-mounted display
CN109803133A (zh) * 2019-03-15 2019-05-24 京东方科技集团股份有限公司 一种图像处理方法及装置、显示装置
CN209542964U (zh) * 2019-03-12 2019-10-25 苏州车萝卜汽车电子科技有限公司 抬头显示装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITBO20060027A1 (it) * 2006-01-17 2007-07-18 Ferrari Spa Metodo di controllo di sistema hud per un veicolo stradale
DE102010040694A1 (de) * 2010-09-14 2012-03-15 Robert Bosch Gmbh Head-up-Display
KR20170135522A (ko) * 2016-05-31 2017-12-08 엘지전자 주식회사 차량용 제어장치 및 그것의 제어방법
CN107333121B (zh) * 2017-06-27 2019-02-26 山东大学 曲面屏幕上移动视点的沉浸式立体渲染投影系统及其方法
TWI657409B (zh) * 2017-12-27 2019-04-21 財團法人工業技術研究院 虛擬導引圖示與真實影像之疊合裝置及其相關疊合方法
CN108171673B (zh) * 2018-01-12 2024-01-23 京东方科技集团股份有限公司 图像处理方法、装置、车载抬头显示系统及车辆
CN109086726B (zh) * 2018-08-10 2020-01-14 陈涛 一种基于ar智能眼镜的局部图像识别方法及系统
CN109688392B (zh) * 2018-12-26 2021-11-02 联创汽车电子有限公司 Ar-hud光学投影系统及映射关系标定方法和畸变矫正方法
CN109917920B (zh) * 2019-03-14 2023-02-24 阿波罗智联(北京)科技有限公司 车载投射处理方法、装置、车载设备及存储介质

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190025815A1 (en) * 2017-07-24 2019-01-24 Motorola Solutions, Inc. Methods and systems for controlling an object using a head-mounted display
CN209542964U (zh) * 2019-03-12 2019-10-25 苏州车萝卜汽车电子科技有限公司 抬头显示装置
CN109803133A (zh) * 2019-03-15 2019-05-24 京东方科技集团股份有限公司 一种图像处理方法及装置、显示装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4141621A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114019686A (zh) * 2021-11-23 2022-02-08 芜湖汽车前瞻技术研究院有限公司 抬头显示器的虚像显示方法、装置、设备和介质

Also Published As

Publication number Publication date
EP4141621A4 (en) 2024-01-17
US20230077753A1 (en) 2023-03-16
EP4141621A1 (en) 2023-03-01
CN114415826A (zh) 2022-04-29
CN113672077A (zh) 2021-11-19

Similar Documents

Publication Publication Date Title
WO2021227969A1 (zh) 一种数据处理方法及其设备
US7783077B2 (en) Eye gaze tracker system and method
KR102555820B1 (ko) 이미지 프로젝션 방법, 장치, 기기 및 저장 매체
CN108243332B (zh) 车载抬头显示系统影像调节方法及车载抬头显示系统
TWI507729B (zh) 頭戴式視覺輔助系統及其成像方法
US20210133469A1 (en) Neural network training method and apparatus, gaze tracking method and apparatus, and electronic device
WO2020063000A1 (zh) 神经网络训练、视线检测方法和装置及电子设备
US20210368152A1 (en) Information processing apparatus, information processing method, and program
CN103517060A (zh) 一种终端设备的显示控制方法及装置
WO2023272453A1 (zh) 视线校准方法及装置、设备、计算机可读存储介质、系统、车辆
CN111880654A (zh) 一种图像显示方法、装置、穿戴设备及存储介质
WO2022257120A1 (zh) 瞳孔位置的确定方法、装置及系统
WO2022032911A1 (zh) 一种视线追踪方法及装置
JP2011248655A (ja) ユーザ視点空間映像提示装置、ユーザ視点空間映像提示方法及びプログラム
EP3402410B1 (en) Detection system
JP2017107359A (ja) 眼鏡状の光学シースルー型の両眼のディスプレイにオブジェクトを表示する画像表示装置、プログラム及び方法
CN115689920B (zh) Hud成像的辅助矫正方法、装置及矫正系统
CN114020150A (zh) 图像显示方法、装置、电子设备及介质
JP6932526B2 (ja) 画像表示装置、画像表示方法及びプログラム
US11615767B2 (en) Information processing apparatus, information processing method, and recording medium
US20230244307A1 (en) Visual assistance
CN109246411B (zh) 立体图像产生方法及其装置
CN115877573A (zh) 显示方法、头戴显示设备及存储介质
CN115811606A (zh) 一种增强现实ar眼镜及其显示方法、装置及存储介质
CN117275376A (zh) 用于车载平视显示的重影消除方法、以及相关设备和芯片

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803436

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021803436

Country of ref document: EP

Effective date: 20221122

NENP Non-entry into the national phase

Ref country code: DE