WO2021197189A1 - Augmented reality-based information display method, system and apparatus, and projection device - Google Patents

Augmented reality-based information display method, system and apparatus, and projection device

Info

Publication number
WO2021197189A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
display area
matrix
target display
real scene
Application number
PCT/CN2021/082943
Other languages
French (fr)
Chinese (zh)
Inventor
余新
康瑞
邓岳慈
弓殷强
赵鹏
Original Assignee
深圳光峰科技股份有限公司
Application filed by 深圳光峰科技股份有限公司
Publication of WO2021197189A1 publication Critical patent/WO2021197189A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/003 - Navigation within 3D models or images
    • G06T 19/006 - Mixed reality
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/61 - Scene description

Definitions

  • This application relates to the technical field of coordinate transformation, and more specifically, to an information display method, system, device, projection device, and storage medium based on augmented reality.
  • A HUD (head-up display) projects data into the user's forward field of view so that the user does not need to look down at an instrument panel. The HUD was originally developed for aviation, to avoid the situation where a pilot looking down at the data in the dashboard cannot observe the environmental information in the field ahead of the flight; it was later introduced from aircraft to the automotive field.
  • The existing HUD display method lacks intelligence. Take car driving as an example: as more driving assistance information such as road conditions, navigation, and danger warnings is added, the user's body and head move with the road conditions or with personal activity, causing the user's spatial posture to change. In that case, the image displayed by the HUD may appear offset from the real scene as seen by the user.
  • this application proposes an augmented reality-based information display method, system, device, projection equipment, and storage medium to improve the above-mentioned problems.
  • an embodiment of the present application provides an information display method based on augmented reality.
  • The method includes: acquiring real scene information collected by a scene sensing device; acquiring a target display area determined based on a user's first spatial pose, where the first spatial pose is determined based on a human eye tracking device; acquiring the coordinate transformation rule corresponding to mapping the real scene information to the target display area; generating driving guidance information based on the real scene information; and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides an augmented reality information display device.
  • The information display device includes an image perception module, a coordinate transformation module, and a display module. The image perception module is used to obtain the real scene information collected by the image perception device; the coordinate transformation module is used to obtain the target display area determined based on the user's first spatial pose, where the first spatial pose is determined based on the eye tracking device; the coordinate transformation module is also used to obtain the coordinate transformation rule corresponding to mapping the real scene information to the target display area; the display module is used to generate driving guidance information based on the real scene information, and to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides an augmented reality vehicle-mounted information display system.
  • The system includes: a scene sensing device for collecting real-scene information of the vehicle's external environment; a human eye tracking device for acquiring the user's line-of-sight movement range; an image processing device for acquiring the user's first spatial pose based on the line-of-sight movement range, acquiring the target display area determined based on the first spatial pose, acquiring the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generating driving guidance information based on the real scene information, and generating, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area; and a HUD display device for displaying the driving guidance information at the target position coordinates of the target display area.
  • An embodiment of the present application provides a projection device including one or more processors and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium having program code stored in the computer-readable storage medium, wherein the method described in the first aspect is executed when the program code is running.
  • the present application provides an information display method, system, device, projection device, and storage medium based on augmented reality, which acquires real scene information collected by an image sensing device, and then acquires a target display area determined based on the user's first spatial pose, The first spatial pose is determined based on the human eye tracking device, and then the real scene information is mapped to the coordinate transformation rules corresponding to the target display area, and the driving guidance information is generated based on the real scene information, and then the driving guidance information is displayed on the target display based on the coordinate transformation rules The corresponding location of the area.
  • Through the above method, the driving guidance information generated based on the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, which is in turn determined by the eye tracking device.
  • The eye tracking device tracks the user's first spatial pose in real time, so the system can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information. The user can thus accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, which improves the safety and comfort of driving and thereby enhances the user experience.
  • Fig. 1 shows a method flowchart of an augmented reality-based information display method proposed by an embodiment of the present application.
  • Fig. 2 shows an example of the structure of an augmented reality-based vehicle information display system based on the augmented reality-based information display method provided by this embodiment.
  • FIG. 3 shows an example diagram of the HUD virtual image plane of the HUD display device in this embodiment.
  • Figure 4 shows a schematic diagram of the relationship between the driver's eyes and the HUD virtual image plane in this embodiment.
  • FIG. 5 shows an example diagram of displaying driving guidance information through the augmented reality-based information display system proposed in this application in a dangerous scenario provided by this embodiment.
  • Fig. 6 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 7 shows an example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • FIG. 8 shows another example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • FIG. 9 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 10 shows an example diagram of adjusting the target display area based on the change vector provided by this embodiment.
  • FIG. 11 shows an example diagram of the processing procedure of the information display method based on augmented reality proposed in this embodiment.
  • FIG. 12 shows a structural block diagram of an information display device based on augmented reality proposed by an embodiment of the present application.
  • Fig. 13 shows a structural block diagram of a projection device of the present application for executing an augmented reality-based information display method according to an embodiment of the present application.
  • Fig. 14 shows a storage unit for storing or carrying program code for implementing an augmented reality-based information display method according to an embodiment of the present application.
  • A HUD (head-up display) projects information into the user's forward field of view. It was originally developed for aviation, so that a pilot does not have to look down at the data in the dashboard and thereby miss the environmental information in the field ahead of the flight; the HUD was later introduced from aircraft to the automotive field.
  • HUD is mainly divided into two types: rear-mounted (also known as Combine HUD, C-type HUD) and front-mounted (also known as Windshield HUD, W-type HUD).
  • the front-mounted HUD uses the windshield as a combiner to project the content required by the driver to the front windshield through the optical system.
  • In terms of driving safety and driving comfort, some existing HUD devices only display virtual information in front of the driver's line of sight and are not integrated with the real environment. With the addition of more driving assistance information such as road conditions, navigation, and hazard warnings, the mismatch between this virtual content and the real scene will distract the driver's attention.
  • Augmented Reality is a technology that ingeniously integrates virtual information with the real world.
  • By combining AR technology with a front-mounted HUD, an AR-HUD can resolve the separation between the traditional HUD's virtual information and the actual scene.
  • However, the existing HUD display method lacks intelligence. Take car driving as an example: as more driving assistance information such as road conditions, navigation, and danger warnings is added, the user's body and head move with the road conditions or with personal activity, causing the user's spatial posture to change. In that case, the image displayed by the HUD may appear offset from the real scene as seen by the user.
  • Therefore, the inventor proposes the method provided in this application: acquire the real scene information collected by the image sensing device, then obtain the target display area determined based on the user's first spatial pose, where the first spatial pose is determined based on the human eye tracking device; obtain the coordinate transformation rule corresponding to mapping the real scene information to the target display area; generate driving guidance information based on the real scene information; and then display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • In this way, the driving guidance information generated based on real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose as determined by the eye tracking device. Because the eye tracking device tracks the user's first spatial pose in real time, the system can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving, improving the safety and comfort of driving and thereby enhancing the user experience.
  • FIG. 1 is a method flowchart of an augmented reality-based information display method provided by an embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S110 Acquire real scene information collected by the scene sensing device.
  • the real scene information in the embodiment of the present application may be real scene information corresponding to multiple scenes.
  • multiple scenes may include, but are not limited to, driving scenes, travel scenes, and outdoor activity scenes.
  • If it is a driving scene, the real scene information can include lanes, signs, dangerous pedestrians (such as vulnerable groups like blind people, elderly people walking alone, pregnant women, or children), vehicles, and so on; if it is a tourist scene, the real scene information can include tourist destination signs, tourist routes, tourist attraction information, tourist attraction weather information, and so on; if it is an outdoor activity scene, the real scene information can include current location information and nearby convenience store information.
  • the scene sensing device may include sensing devices such as lasers and infrared radars, and may also include image acquisition devices such as cameras (including monocular cameras, binocular cameras, RGB-D cameras, etc.).
  • the real scene information corresponding to the current scene can be acquired through the scene sensing device.
  • the scene sensing device is a camera.
  • the camera can be installed on the car (optionally, the installation position can be adjusted according to the style and structure of the car or the actual needs), so that the camera can obtain the Real-life information related to driving.
  • For the working principles of the scene sensing device (including laser, infrared radar, or camera), reference may be made to the related technologies, which will not be repeated here.
  • Step S120 Obtain a target display area determined based on a first spatial pose of the user, where the first spatial pose is determined based on the eye tracking device.
  • The first spatial pose may be a human eye pose determined based on the human eye tracking device. It is understandable that if the user's line-of-sight range changes, the corresponding human eye pose can also change, while if the user's body rotates and the eye line of sight does not change, the human eye pose can remain unchanged. In the latter case, because the user's eyes can still see the real-scene information that needs attention in the driving scene (for example, road conditions, dangerous pedestrians, and vehicles), safe driving remains possible. As one approach, therefore, the first spatial pose may be determined based on the user's current eye pose, and the target display area determined based on the user's current eye pose may be obtained accordingly.
  • the eye tracking device may be a device with a camera function, such as a camera, and the details may not be limited.
  • In another manner, the user's first spatial pose may be the sitting posture of the user in the driving state, or the sitting posture after adjusting the seat (here this can be the current user adjusting the seat for the first time). It is understandable that different sitting postures correspond to different spatial poses. As one approach, the sitting posture of the user after adjusting the seat can be used as the user's first spatial pose.
  • If the change range of the user's eye pose is not less than a preset threshold, the user's first spatial pose may be re-determined according to the user's current eye pose.
  • the target display area is an area for displaying virtual image information corresponding to real scene information.
  • the target display area may be an area on the windshield of a car for displaying projected virtual image information corresponding to real scene information.
  • the target display areas corresponding to different spatial poses of the same user may be different, and the target display areas corresponding to the spatial poses of different users may be different.
  • In this embodiment, the target display area determined based on the user's first spatial pose can be acquired, so that the virtual image information corresponding to the real scene information can be displayed in the target display area. This reduces the foregoing display difference and thereby improves the accuracy of the display position of the virtual image information corresponding to the real scene information.
  • Step S130 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • the coordinate transformation rule can be used to map the coordinates of the real scene information to the corresponding coordinates of the target display area.
  • In this embodiment, the coordinate transformation rule corresponding to mapping the real scene information to the target display area can be acquired, so that the driving guidance information corresponding to the real scene information can subsequently be accurately displayed at the corresponding position of the target display area based on the coordinate transformation rule.
  • the coordinate transformation rule may include a first transformation matrix and a second transformation matrix.
  • the first transformation matrix can be used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device
  • The second transformation matrix can be used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.
  • the reference world coordinates can be understood as the relative position coordinates of the real scene information in the established coordinate system corresponding to the scene sensing device.
  • the reference world coordinates in this embodiment can be understood as the world coordinates that are relatively stationary with the car.
  • View coordinates can be understood as the relative position coordinates of the reference world coordinates in the coordinate system corresponding to the target display area.
  • As one approach, the first transformation matrix and the second transformation matrix can be obtained, and the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix can then be used as the coordinate transformation rule for mapping the real scene information to the target display area.
  • the first transformation matrix may include a first rotation matrix and a first translation vector.
  • the first rotation matrix may be used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector may be used to translate the coordinates.
  • the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device may be determined based on the first rotation matrix and the first translation vector.
  • the second transformation matrix may include a viewing angle offset matrix, a transpose matrix, and a projection matrix.
  • the projection matrix can be used to determine the mapping range of the real scene information to the target display area
  • the viewing angle offset matrix can be used to determine the degree of deviation of the user's viewing angle detected by the eye tracking device
  • The transposition matrix can be used to determine the relative position within the mapping range at which the driving guidance information is displayed.
  • the reference world coordinates may be converted into view coordinates matching the offset of the user's viewing angle in the target display area based on the viewing angle offset matrix, the transposed matrix, and the projection matrix.
  • In this embodiment, the viewing angle offset matrix may include the first view coordinates; the transposition matrix represents the transpose of the display plane's orientation and may include the spatial unit vectors of the target display area; and the projection matrix may include a field-of-view parameter, which may include a distance parameter and a scale parameter associated with the user's viewing angle.
  • FIG. 2 is a structural example diagram of a vehicle-mounted information display system based on augmented reality that is applicable to the method for displaying information based on augmented reality provided by this embodiment.
  • the vehicle-mounted information display system based on augmented reality may include a scene perception device, an image processing device, a HUD display device, and a human eye tracking device.
  • the scene sensing device can be used to collect real scene information of the external environment of the vehicle.
  • the eye tracking device can be used to obtain the user's line of sight movement range.
  • The image processing device can be used to obtain the user's first spatial pose based on the line-of-sight movement range, obtain the target display area determined based on the first spatial pose, obtain the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area.
  • the HUD display device can be used to display driving guidance information to the target position coordinates of the target display area.
  • the image processing device may be the processor chip of the vehicle system, or the processing chip of an independent vehicle computer system, or the processor chip integrated in the scene sensing device (such as lidar), etc., which is not limited herein.
  • the vehicle-mounted information display system may include a car, a driver, a human eye tracking device, a scene perception device, an image processing device, and a HUD display device with AR-HUD function.
  • the scene sensing device can be installed on the car and can obtain driving-related scene information (also can be understood as the aforementioned real scene information), the driver sits in the driving position of the car, and the eye tracking device is installed in the car.
  • The eye tracking device can track the driver's eyes within the reasonable range of their basic movement. The reasonable position here can be understood as a position where the driver's line of sight matches the driving demand during driving, such as turning left, turning right, or looking backward; the specific direction and angle of turning are not limited.
  • the HUD display device is installed on the front windshield of the car, and the position of the HUD display device can be adjusted so that the driver's eyes can see the entire virtual image corresponding to the driving scene information.
  • The image processing device can adapt to changes in the driver's eye spatial pose. In this way, the image processing device can convert, in real time, the real-scene information collected by the scene sensing device into an image that is fused with the real scene, and send the image to the HUD display device for display.
  • As one approach, the scene sensing device can obtain the position coordinates of the real scene information in the world coordinate system (O-xyz as shown in Figure 2) based on GPS positioning or other location acquisition methods. The world coordinate origin and coordinate axis directions can then be selected based on the car's traveling direction, and the reference world coordinate system relative to the car determined from that origin and those axis directions. In this way, the reference world coordinates corresponding to the coordinates of the real scene information can be obtained under the reference world coordinate system.
  • the method of selecting the origin of the world coordinate and the direction of the coordinate axis can refer to the related technology, which will not be repeated here.
  • the reference world coordinate system can be understood as a coordinate system obtained after rotating and/or translating the world coordinate system.
  • the spatial pose of the scene sensing device in the reference world coordinate system can be obtained.
  • the device calculates the perception module transformation matrix M (that is, the aforementioned first transformation matrix) based on the spatial pose in the reference world coordinate system.
  • The process of changing the world coordinate system into the reference world coordinate system involves the first rotation matrix R_M (which can also be understood as the total rotation matrix of the scene sensing device) and the first translation vector T_M. Optionally, the relationship between the perception module transformation matrix M, the first rotation matrix R_M, and the first translation vector T_M may be expressed as:
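  • The formula images are not reproduced in this text; a standard rigid-transform reconstruction consistent with the surrounding definitions (an assumption, not the patent's verbatim formula) is:

$$
M=\begin{bmatrix} R_M & T_M \\ \mathbf{0} & 1 \end{bmatrix},\qquad
R_M=R_{Mz}\,R_{My}\,R_{Mx},\qquad
T_M=\begin{bmatrix}T_{Mx}\\T_{My}\\T_{Mz}\end{bmatrix}
$$

$$
R_{Mx}=\begin{bmatrix}1&0&0\\0&\cos\alpha_M&-\sin\alpha_M\\0&\sin\alpha_M&\cos\alpha_M\end{bmatrix},\quad
R_{My}=\begin{bmatrix}\cos\beta_M&0&\sin\beta_M\\0&1&0\\-\sin\beta_M&0&\cos\beta_M\end{bmatrix},\quad
R_{Mz}=\begin{bmatrix}\cos\gamma_M&-\sin\gamma_M&0\\\sin\gamma_M&\cos\gamma_M&0\\0&0&1\end{bmatrix}
$$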
  • Here R_Mx, R_My, and R_Mz are the rotation matrices of the perception module transformation matrix M around the x-axis, y-axis, and z-axis of the world coordinate system, respectively; the corresponding Euler angles of rotation are α_M, β_M, and γ_M; and (T_Mx, T_My, T_Mz) are the coordinates of the real scene information in the reference world coordinate system.
  • As one approach, the spatial pose of the virtual image displayed on the plane where the HUD display device is located, and the spatial pose of the driver's eyes, can be determined. The aforementioned second transformation matrix can then be obtained based on the spatial pose of the displayed virtual image and the spatial pose of the driver's eyes (that is, the driver's eye pose).
  • The second transformation matrix C (which can also be understood as a virtual image perspective matrix) may include a viewing angle offset matrix T, a transposed matrix N_T, and a projection matrix P. The viewing angle offset matrix T may be determined by the driver's human eye pose; the transposed matrix N_T may be determined by the spatial position and orientation of the HUD virtual image display plane; and the projection matrix P may be determined jointly by the driver's human eye pose and the spatial position and orientation of the HUD virtual image display plane.
  • The relationship between the second transformation matrix C and the viewing angle offset matrix T, the transposed matrix N_T, and the projection matrix P can be expressed as:
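  • The corresponding formula image is missing here; under the usual projection-after-view convention, this relationship would read (an assumed reconstruction):

$$
C = P\,N_T\,T
$$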
  • the viewing angle offset matrix T may include the first view coordinates, and the first view coordinates are the position coordinates of the driver's eye pose in the reference world coordinate system.
  • (P_ex, P_ey, P_ez) can be used to represent the first view coordinates.
  • the viewing angle offset matrix T can be expressed as:
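  • The matrix image is not reproduced; assuming the standard view-translation form built from the first view coordinates (P_ex, P_ey, P_ez), it would be:

$$
T=\begin{bmatrix}1&0&0&-P_{ex}\\0&1&0&-P_{ey}\\0&0&1&-P_{ez}\\0&0&0&1\end{bmatrix}
$$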
  • The transposed matrix N_T may include the spatial unit vectors of the target display area. In this embodiment, the transposed matrix N_T may be expressed as:
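  • The matrix image is not reproduced; a standard reconstruction that stacks the virtual image plane's unit vectors row-wise (an assumption consistent with the description below) is:

$$
N_T=\begin{bmatrix}V_{rx}&V_{ry}&V_{rz}&0\\V_{ux}&V_{uy}&V_{uz}&0\\V_{nx}&V_{ny}&V_{nz}&0\\0&0&0&1\end{bmatrix}
$$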
  • Here V_r, V_u, and V_n are the spatial unit vectors of the virtual image plane corresponding to the HUD display module.
  • FIG. 3 shows an example diagram of the HUD virtual image plane of the HUD display device in this embodiment.
  • V r is the right vector
  • V u is the upper vector
  • V n is the normal vector of the HUD virtual image plane.
  • the projection matrix includes a field of view parameter, and the field of view may include a distance parameter and a scale parameter associated with the user's viewing angle.
  • the relational expression satisfied by the projection matrix P can be:
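  • The formula image is missing; the standard off-axis perspective projection matrix with near and far distances n, f and frustum extents l, r, b, t (assumed here to match the patent's P) is:

$$
P=\begin{bmatrix}\dfrac{2n}{r-l}&0&\dfrac{r+l}{r-l}&0\\[4pt]0&\dfrac{2n}{t-b}&\dfrac{t+b}{t-b}&0\\[4pt]0&0&-\dfrac{f+n}{f-n}&-\dfrac{2fn}{f-n}\\[4pt]0&0&-1&0\end{bmatrix}
$$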
  • The parameters n and f are the near and far distances of the human eye's field of view (that is, the distance parameters above), and the parameters l, r, b, and t represent the left, right, bottom, and top scales determined by the size and pose relationship between the eye and the HUD virtual image plane (that is, the scale parameters mentioned above).
  • the calculation formulas of l, r, b, and t can be expressed as:
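  • These formula images are also missing. One standard formulation (Kooima's generalized perspective projection, used here as an assumed stand-in) takes p_a, p_b, p_c as the lower-left, lower-right, and upper-left corners of the HUD virtual image plane and p_e as the eye position; with v_a = p_a - p_e, v_b = p_b - p_e, v_c = p_c - p_e and eye-to-plane distance d = -V_n · v_a:

$$
l=\frac{n\,(V_r\cdot v_a)}{d},\qquad
r=\frac{n\,(V_r\cdot v_b)}{d},\qquad
b=\frac{n\,(V_u\cdot v_a)}{d},\qquad
t=\frac{n\,(V_u\cdot v_c)}{d}
$$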
  • Furthermore, the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix can be obtained as the coordinate transformation rule for mapping the real scene information to the target display area; that is, the coordinate transformation rule can be expressed as:
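  • The formula image is missing; combining the matrices defined above, the rule applied to a homogeneous real-scene point O_w would be (assumed reconstruction):

$$
O_{\text{clip}} = C\,M\,O_w = P\,N_T\,T\,M\,O_w
$$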
  • The eye tracking device can capture the spatial pose of the driver's eyes in real time and calculate the difference between the driver's eye pose at the current moment and at the previous moment. If that difference exceeds a specified threshold, the aforementioned second transformation matrix C may be recalculated and updated. The driver's eye pose difference can be calculated in a variety of ways, for example as the mean squared error (MSE) or the mean absolute error (MAE); it is understandable that the way the pose difference is calculated can also be customized according to actual needs.
  • the designated thresholds corresponding to different calculation methods may be different.
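  • As an illustration, a minimal Python sketch of this threshold test (the function names and the 6-DoF pose layout are hypothetical, not from the patent):

```python
import numpy as np

# Hypothetical eye-pose layout: (x, y, z, yaw, pitch, roll).
def pose_difference(pose_now: np.ndarray, pose_prev: np.ndarray,
                    metric: str = "mse") -> float:
    """Difference between the current and previous eye poses."""
    err = pose_now - pose_prev
    if metric == "mse":
        return float(np.mean(err ** 2))   # mean squared error
    return float(np.mean(np.abs(err)))    # mean absolute error

DESIGNATED_THRESHOLD = 0.01  # depends on the chosen metric and units

def maybe_update_C(pose_now, pose_prev, recompute_second_matrix):
    # Recompute the second transformation matrix C only when the eye pose
    # has changed by more than the designated threshold.
    if pose_difference(pose_now, pose_prev) > DESIGNATED_THRESHOLD:
        recompute_second_matrix(pose_now)
```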
  • Step S140 Generate driving guidance information based on the real scene information.
  • the driving guide information in this embodiment may include navigation instruction information corresponding to road conditions, pedestrian warning information, and tourist attractions prompt information, etc.
  • the type and specific content of the driving guide information may not be limited.
  • FIG. 5 shows an example diagram of displaying driving guidance information, in a dangerous scene provided by this embodiment, through the augmented reality-based information display system proposed by this application.
  • the real scene information collected by the scene perception device is converted into a HUD virtual image and displayed on the HUD display device.
  • the specific content displayed is shown in the right image of Figure 5.
  • The scene seen by the driver's eyes may include lane guidance information (that is, the "navigation instructions in the virtual image" shown in FIG. 5) and pedestrian warning information (i.e., the "pedestrian prompt box in the virtual image" shown in FIG. 5).
  • the driving guide information can be generated based on the real scene information.
  • The way of prompting the driving guidance information in this embodiment is not limited; it may be, for example, icons (such as arrows), pictures, animations, voice, or video, and the driving guidance information for different prompt modes can then be generated in a corresponding manner.
  • the driving guide information in this embodiment may include at least one prompt method.
  • For example, the navigation indicator icon corresponding to the road may be displayed in combination with a voice prompt to the user, so that the user can be given more accurate driving guidance reminders, ensuring driving safety and enhancing the user experience.
  • Step S150 Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the difference between the position where the HUD displays the real-scene information and the actual position of the real-scene information can be avoided, and the accuracy and reliability of the display can be improved.
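  • To make the flow of steps S110 to S150 concrete, here is a minimal Python sketch under assumed interfaces (scene_sensor, eye_tracker, hud, and the guidance generator are hypothetical placeholders, not APIs from the patent):

```python
import numpy as np

def display_frame(scene_sensor, eye_tracker, hud, make_M, make_C, guidance_from):
    scene = scene_sensor.capture()            # S110: real scene information
    eye_pose = eye_tracker.current_pose()     # S120: first spatial pose
    M = make_M(scene_sensor.pose)             # S130: first transformation matrix
    C = make_C(eye_pose, hud.plane)           #       second transformation matrix
    for item in guidance_from(scene):         # S140: driving guidance information
        o_w = np.append(item.world_xyz, 1.0)  # homogeneous coordinates
        o_clip = C @ M @ o_w                  # apply the coordinate transformation rule
        ndc = o_clip[:3] / o_clip[3]          # perspective divide
        u = (ndc[0] + 1) / 2 * hud.width      # S150: HUD pixel position
        v = (1 - (ndc[1] + 1) / 2) * hud.height
        hud.draw(item.icon, (u, v))
```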
  • The present application provides an information display method based on augmented reality, which acquires real scene information collected by an image sensing device and then acquires a target display area determined based on the user's first spatial pose, where the first spatial pose is determined based on the human eye tracking device. The method then obtains the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • Through the above method, the driving guidance information generated based on the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, which is determined by the eye tracking device. Because the eye tracking device tracks the user's first spatial pose in real time, the system can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving the safety and comfort of driving and thereby enhancing the user experience.
  • FIG. 6 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S210 Acquire real scene information collected by the scene sensing device.
  • Step S220 Obtain a target display area determined based on the user's first spatial pose.
  • the first spatial pose is determined based on the eye tracking device, and the specific description can refer to the description in the foregoing embodiment, which will not be repeated here.
  • Step S230 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S240 Generate driving guidance information based on the real scene information.
  • Step S250 Input the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain the coordinate transformation matrix to be processed.
  • the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device may be input into the first transformation matrix, and the result obtained by the output may be used as the coordinate transformation matrix to be processed.
  • For example, suppose the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device are O_w(x, y, z). After inputting the position coordinates O_w(x, y, z) into the aforementioned first transformation matrix, we can get:
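  • With the missing formula image reconstructed (assuming homogeneous coordinates as stated below), this step would read:

$$
O' = M\,O_w = M\begin{bmatrix}x\\y\\z\\1\end{bmatrix}
$$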
  • Here O′ can be used as the coordinate transformation matrix to be processed, and O_w is expressed in homogeneous coordinates.
  • Step S260 Perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the coordinate transformation matrix to be processed may be transformed according to the aforementioned second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the specific implementation process of coordinate transformation can refer to related technologies, which will not be repeated here.
  • Here width and height are the width and height of the HUD image, both in pixels, and O_h(u, v) can be used as the relative position coordinates of the real scene information in the target display area.
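  • The formula image is missing; assuming the usual perspective divide followed by a viewport mapping, with C O′ = (x_c, y_c, z_c, w_c)^T, the pixel coordinates would be:

$$
u=\frac{\text{width}}{2}\left(\frac{x_c}{w_c}+1\right),\qquad
v=\frac{\text{height}}{2}\left(1-\frac{y_c}{w_c}\right)
$$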
  • Step S270 Display the driving guidance information at the position represented by the relative position coordinates.
  • the driving guide information corresponding to the real scene information may be displayed at the position represented by the relative position coordinates.
  • FIG. 7 shows an example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • a virtual and real scene fusion system (which can be understood as an information display system based on augmented reality in this application) can be built in related modeling software (such as Unity3D, etc.).
  • The virtual and real scene fusion system may include a car; camera 1, used to simulate the driver's eyes; script program 1, which simulates the human eye tracking device; a HUD imaging module simulated jointly by camera 2 and a plane (i.e., the simulated HUD display device); a scene perception module that acquires spatial scene information (which can be spatial scene information in different scenes), implemented by script program 2; and the information transformation, image drawing, and rendering of the image processing device, completed by script program 3.
  • The center of the bottom of the car can be selected as the coordinate origin, with the forward direction of the car as the positive direction of the Z axis, and a right-handed coordinate system is adopted. It is assumed that the driver is sitting in the driving position, the driver's eyes simulated by camera 1 are facing forward, and the pose of the HUD virtual camera simulated by camera 2 is the same as that of the driver's eyes.
  • the scene sensing device can obtain the checkerboard space corner information on the car
  • the image processing device can draw the corner point to the HUD image space (the lower left corner as shown in Figure 7 is the corner point image drawn by the image processing device to the HUD image space), and then send the drawn image to the HUD display device for processing Display (as shown in Figure 7 the corner image is sent to the HUD for virtual image display).
  • the driver's perspective scene as shown in FIG. 7 can be obtained.
  • From the virtual and real fusion result shown in FIG. 7, it can be seen that the virtual and real scenes can be accurately fused.
  • It should be noted that the eye tracking device in this embodiment may be configured with a 3D sensor for human-eye spatial pose perception and an eye tracking algorithm adapted to the output data of the sensor.
  • the 3D sensor for human eye spatial pose perception may be a binocular camera, or an RGB-D camera, etc.
  • the human eye tracking algorithm may be a computer vision algorithm, or a deep learning algorithm, etc., and the details are not limited.
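  • As one concrete possibility for the computer-vision route, the eye/head pose can be estimated from detected facial landmarks with OpenCV's solvePnP; this is a hedged sketch of that idea, not the patent's algorithm (the landmark model and inputs are assumptions):

```python
import cv2
import numpy as np

def estimate_eye_pose(face_model_3d: np.ndarray, landmarks_2d: np.ndarray,
                      camera_matrix: np.ndarray, dist_coeffs: np.ndarray):
    """face_model_3d: Nx3 generic 3D facial landmarks; landmarks_2d: Nx2 detections."""
    ok, rvec, tvec = cv2.solvePnP(face_model_3d, landmarks_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the head/eye pose
    return R, tvec               # pose of the eyes in the tracking-camera frame
```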
  • the scene sensing device can obtain real-life information in driving scenes such as lanes, navigation instructions, (dangerous) pedestrians, etc., through a camera combined with GPS, IMU sensors, and perception processing algorithms.
  • The image processing device can adjust the aforementioned first transformation matrix and second transformation matrix according to the changed eye pose, and then draw the virtual image matching the changed eye pose.
  • In this embodiment, the eye tracking device can be used to obtain the spatial pose of the driver's eyes in real time, and to determine whether the change between the human eye pose at the current moment and at the previous moment exceeds a specified threshold, where the value of the specified threshold can be set according to the actual situation.
  • If the change exceeds the specified threshold, the driver's spatial pose can be re-acquired based on the changed driver's eye pose, the target display area re-determined based on that spatial pose, and the virtual-real fusion scene graph then displayed at the corresponding position of the re-determined target display area.
  • Figure 7 shows an example of the display effect.
  • The specific value of this time interval is not limited, and in some possible implementations the judgment interval may also be changed to a time period or cycle.
  • In this way, the driver can accurately see, through the HUD virtual image plane, driving guidance information such as the navigation instructions marked on the driving lane or the warning boxes surrounding pedestrians, ensuring safe driving.
  • It should be noted that multiple HUD virtual image planes can be configured if necessary, so that when the driver's line of sight is shifted (or rotated) in any direction, the driver can still see the driving guidance information that needs attention in the current driving scene, which improves the flexibility and diversity of the information display and enhances the user experience.
  • The present application provides an information display method based on augmented reality. The driving guidance information generated based on the real scene information is transformed by the first transformation matrix and the second transformation matrix respectively, and is then displayed at the corresponding position of the target display area determined based on the user's first spatial pose as determined by the human eye tracking device. Real-time tracking of the user's first spatial pose by the eye tracking device makes it possible to adapt to changes in the human eye pose and to dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving, without needing to repeatedly confirm the accuracy of the driving guidance information. This reduces the frequent changes of sight, and the resulting fatigue, caused by checking road conditions, and improves the safety and comfort of driving.
  • FIG. 9 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S310 Acquire the real scene information collected by the scene sensing device.
  • Step S320 Obtain a target display area determined based on a first spatial pose of the user, where the first spatial pose is determined based on the eye tracking device.
  • Step S330 Detect the change of the first spatial pose by acquiring the eye posture change parameter of the eye tracking device.
  • During driving, the user's posture may change according to driving scene information such as the road conditions of the traveling vehicle. For example, the user's head may rotate toward a given direction, and in this case the user's line of sight will change (for example, the line of sight is shifted).
  • If the original HUD display method is still used to display the driving guidance information corresponding to the real scene information, display errors due to the changed position may cause safety hazards.
  • Therefore, the eye tracking device in this embodiment can detect the user's eye pose in real time. If the human eye pose changes, the parameters corresponding to the change can be obtained, and a change in the user's first spatial pose can be detected through these human eye pose change parameters. In this way, if a change in the first spatial pose is detected, the target display area is re-determined based on the changed spatial pose, ensuring the accuracy of the display position of the driving guidance information corresponding to the real scene information without requiring the user to repeatedly confirm its accuracy, which improves the flexibility of displaying the driving guidance information and thereby enhances the user experience.
  • the human eye posture change parameter may include the direction, angle, or range of the human eye's line of sight.
  • the relevant face recognition algorithm can be used to determine whether the eye pose has changed in the eye pose image collected by the eye tracking device. If there is a change, the corresponding eye pose change parameter can be obtained .
  • the change vector corresponding to the first spatial pose can be obtained according to the human eye pose change parameter.
  • the specific calculation process can be implemented with reference to related technologies, which will not be repeated here.
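  • For illustration only, one way to turn two gaze directions into such a change parameter (the names are assumed, not from the patent):

```python
import numpy as np

def gaze_change(d_now: np.ndarray, d_prev: np.ndarray):
    """Change vector and angular change between two unit gaze-direction vectors."""
    change_vec = d_now - d_prev                      # change vector of the pose
    cos_a = np.clip(np.dot(d_now, d_prev), -1.0, 1.0)
    angle_deg = float(np.degrees(np.arccos(cos_a)))  # line-of-sight angle change
    return change_vec, angle_deg
```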
  • Step S340 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S350 Generate driving guidance information based on the real scene information.
  • Step S361 If the amount of change of the human eye posture change parameter is greater than a preset threshold, update the coordinate transformation rule according to the human eye posture change parameter.
  • a preset threshold corresponding to the change vector may be configured in advance, and the preset threshold may be used to distinguish whether subsequent adjustments to the target display area are required.
  • the target display area can be adjusted based on the change vector to obtain the newly determined target display area.
  • FIG. 10 shows an example diagram of adjusting the target display area based on the change vector provided in this embodiment.
  • As shown in FIG. 10, the user's first spatial pose has changed from 22 to 22', where 22' is the current first spatial pose, determined based on the user's current human eye pose. If the change vector of the human eye pose corresponding to the current first spatial pose 22' is greater than the preset threshold, the position of the target display area on the screen 21 of the front windshield of the car can be changed from 23 to 23', where 23' is the re-determined target display area.
  • In one implementation, the coordinate transformation rule can be updated according to the human eye pose change parameters to obtain a second coordinate transformation rule corresponding to mapping the real scene information to the newly determined target display area. The specific determination process of the second coordinate transformation rule can refer to the determination principle and process of the aforementioned coordinate transformation rule, which will not be repeated here.
  • Step S362 Display the driving guidance information at the corresponding position of the target display area based on the updated coordinate transformation rule.
  • As one approach, the driving guidance information may be displayed at the corresponding position of the newly determined target display area based on the second coordinate transformation rule.
  • It should be noted that the target display area in this embodiment can be adjusted according to changes in the user's first spatial pose. For example, if it is detected that the user lowers their head, the target display area can be moved to the corresponding position on the central control display; if it is detected that the user looks at their phone frequently during driving, the target display area can be moved to the display screen of the mobile phone; or to other screens that can serve as the target display area in the driving scene, for example the windows on the left and right sides of the driving position.
  • Step S371 If the amount of change of the human eye posture change parameter is not greater than the preset threshold, display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • Optionally, the target display area determined based on the user's first spatial pose can be obtained, where the user's first spatial pose can be the sitting posture of the user; for details, please refer to the description in the foregoing embodiments.
  • step S330 may be implemented after step S340.
  • Referring to FIG. 11, an example diagram of the processing procedure of the information display method based on augmented reality proposed in this embodiment is shown.
  • the process pointed by the hollow arrow may be the initial process
  • the process pointed by the solid arrow may be the real-time continuous process.
  • The scene perception module can acquire real-scene information in real time, use it as the information to be displayed, and send it to the image processing module (which can be understood as the aforementioned image processing device). The image processing module performs coordinate transformation processing on the coordinates corresponding to the real-scene information, draws the final image, and projects the image onto the HUD display screen (that is, the aforementioned target display area) for display, so as to improve the accuracy of the display position of the driving guidance information, reduce user operations, and improve the user experience.
  • Step S364 Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • The present application provides an information display method based on augmented reality, which detects changes in the first spatial pose by acquiring the eye posture change parameters from the eye tracking device. When the change of the change vector corresponding to the eye posture change parameters is greater than the preset threshold, the target display area is re-adjusted based on the change vector, and the driving guidance information is displayed at the corresponding position of the target display area determined based on the user's first spatial pose as determined by the eye tracking device. In this way, the method can adapt to changes in the eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can view it accurately during driving.
  • an information display device 400 based on augmented reality provided by an embodiment of the present application can be run on a projection device.
  • the device 400 includes:
  • the image sensing module 410 is used to obtain real scene information collected by the image sensing device.
  • the coordinate transformation module 420 is configured to obtain the target display area determined based on the user's first spatial pose, which is determined based on the human eye tracking device.
  • the device 400 may further include a parameter change detection module, which is used to detect the change of the first spatial pose by acquiring the eye posture change parameter of the eye tracking device.
  • Further, the coordinate transformation module 420 may be specifically used to: obtain the eye posture change parameters collected by the eye tracking device; obtain the change vector corresponding to the first spatial pose based on the eye posture change parameters; and, if the amount of change of the change vector is greater than a preset threshold, adjust the target display area based on the change vector to obtain a re-determined target display area, after which the step of obtaining the target display area determined based on the user's first spatial pose is performed.
  • the coordinate transformation module 420 may also be used to obtain a coordinate transformation rule corresponding to the real scene information mapped to the target display area.
  • the coordinate transformation rule in this embodiment may include a first transformation matrix and a second transformation matrix.
  • the first transformation matrix is used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device.
  • the second transformation matrix is used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.
  • the first transformation matrix may include a first rotation matrix and a first translation vector, where the first rotation matrix is used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector is used To translate the coordinates, the first transformation matrix determines the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device based on the first rotation matrix and the first translation vector.
  • In this embodiment, the second transformation matrix may include a viewing angle offset matrix, a transposition matrix, and a projection matrix. The projection matrix is used to determine the mapping range for mapping the real scene information to the target display area; the viewing angle offset matrix is used to determine the degree of deviation of the user's viewing angle detected by the eye tracking device; and the transposed matrix is used to determine the relative position within the mapping range at which the driving guidance information is displayed. The second transformation matrix converts the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle, based on the viewing angle offset matrix, the transpose matrix, and the projection matrix.
  • In this embodiment, the viewing angle offset matrix may include the first view coordinates; the transposition matrix represents the transpose of the display plane's orientation and includes the spatial unit vectors of the target display area; and the projection matrix includes a field-angle parameter, which includes a distance parameter and a scale parameter associated with the user's viewing angle.
  • the display module 430 is configured to generate driving guidance information based on the real scene information.
  • the display module 430 may also be used to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the display module 430 may be specifically configured to input the position coordinates of the real scene information, in the coordinate system corresponding to the scene sensing device, into the first transformation matrix to obtain a coordinate transformation matrix to be processed;
  • perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area; and display the driving guidance information at the position characterized by the relative position coordinates.
  • an embodiment of the present application also provides another projection device 100 that can execute the foregoing augmented reality-based information display method.
  • the projection device 100 includes one or more processors 102 (only one is shown in the figure), a memory 104, and an image perception module 11, a coordinate transformation module 12, a human eye tracking module 14, and a display module 13 that are coupled with each other.
  • the memory 104 stores a program that can execute the content in the foregoing embodiment
  • the processor 102 can execute the program stored in the memory 104
  • the memory 104 includes the device 400 described in the foregoing embodiment.
  • the processor 102 may include one or more processing cores.
  • the processor 102 uses various interfaces and lines to connect the various parts of the entire projection device 100, and performs the device's functions by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and by calling the data stored in the memory 104.
  • the processor 102 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA).
  • the processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 102, but may be implemented by a communication chip alone.
  • the memory 104 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 104 may be used to store instructions, programs, codes, code sets or instruction sets.
  • the memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or a video playback function), instructions for implementing the foregoing method embodiments, and so on.
  • the data storage area can also store data (for example, audio and video data) created by the projection device 100 during use.
  • the image perception module 11 is used to obtain the real scene information collected by the image perception device; the coordinate transformation module 12 is used to obtain the target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the human eye tracking device 14.
  • the eye tracking device 14 is used to detect the user's eye pose in real time;
  • the coordinate transformation module 12 is also used to obtain the coordinate transformation rule corresponding to the real scene information mapped to the target display area;
  • the display module 13 is configured to generate driving guidance information based on the real scene information; the display module 13 is also configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • FIG. 14 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 500 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 500 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 500 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 500 has a storage space for the program code 510 for executing any method steps in the above-mentioned methods. These program codes can be read from or written into one or more computer program products.
  • the program code 510 may, for example, be compressed in a suitable form.
  • the present application provides an augmented reality-based information display method, system, device, projection equipment, and storage medium.
  • real scene information collected by the image sensing device is acquired; the target display area determined based on the user's first spatial pose is then acquired, the first spatial pose being determined based on the human eye tracking device.
  • the coordinate transformation rule corresponding to mapping the real scene information to the target display area is acquired, and driving guidance information is generated based on the real scene information.
  • based on the coordinate transformation rule, the driving guidance information is then displayed at the corresponding position of the target display area.
  • through the above method, the driving guidance information generated based on the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device.
  • the eye tracking device tracks the user's first spatial pose in real time, making it possible to adapt to changes in the human eye pose and to dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving driving safety and comfort and thereby enhancing the user experience.
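For illustration, the pose-change check described in the items above (parameter change detection and re-determination of the target display area) might look as follows; this is a minimal sketch assuming the eye pose is reported as a 3-D position, with the names and threshold chosen only for the example:

```python
import numpy as np

POSE_CHANGE_THRESHOLD = 0.05  # assumed threshold; units depend on the tracker

def maybe_redetermine_area(prev_pose, curr_pose, current_area, redetermine):
    """Adjust the target display area only when the change vector of the
    first spatial pose exceeds the preset threshold."""
    change_vector = np.asarray(curr_pose) - np.asarray(prev_pose)
    if np.linalg.norm(change_vector) > POSE_CHANGE_THRESHOLD:
        return redetermine(change_vector)   # re-determined target display area
    return current_area
```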

Abstract

An augmented reality-based information display method, system and apparatus, a projection device and a storage medium. The method comprises: obtaining real scene information collected by a scene sensing device (S110); obtaining a target display area determined on the basis of a first spatial pose of a user, the first spatial pose being determined on the basis of an eye tracking device (S120); obtaining coordinate transformation rules corresponding to mapping the real scene information to the target display area (S130); generating driving guidance information on the basis of the real scene information (S140); and displaying the driving guidance information in the position corresponding to the target display area on the basis of the coordinate transformation rules (S150). The driving guidance information is displayed, by means of the coordinate transformation rules, in the position corresponding to the target display area determined on the basis of the first spatial pose of the user as determined by the eye tracking device. The first spatial pose of the user is tracked in real time by the eye tracking device, which enables adaptation to changes in the eye pose, and the display area and display position of the driving guidance information are dynamically adjusted.

Description

Information display method, system and apparatus based on augmented reality, and projection device
Technical Field
This application relates to the technical field of coordinate transformation, and more specifically, to an augmented reality-based information display method, system, apparatus, projection device, and storage medium.
Background
A HUD (head-up display) can present important information on a piece of transparent glass in front of the line of sight. It was first applied to fighter aircraft, mainly so that pilots would not need to repeatedly look down at the data on the instrument panel, thereby avoiding situations in which a pilot viewing instrument data cannot observe the environment in the area ahead of the flight. To reduce accidents caused by users looking down at the dashboard or center console, the HUD was introduced from aircraft into the automotive field.
However, the existing way in which HUDs display information lacks intelligence. Taking car driving as an example, with the addition of more driving-assistance information such as road conditions, navigation, and hazard warnings, the user's body and head sway with changes in road conditions or with personal movements, so that the user's spatial pose changes. In this case, the image displayed by the HUD may appear offset from, and thus mismatched with, the real scene image that the user sees.
Summary of the Invention
In view of the above problems, this application proposes an augmented reality-based information display method, system, apparatus, projection device, and storage medium to improve upon the above problems.
In a first aspect, an embodiment of the present application provides an augmented reality-based information display method. The method includes: acquiring real scene information collected by a scene sensing device; acquiring a target display area determined based on a first spatial pose of a user, the first spatial pose being determined based on an eye tracking device; acquiring a coordinate transformation rule corresponding to mapping the real scene information to the target display area; generating driving guidance information based on the real scene information; and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
In a second aspect, an embodiment of the present application provides an augmented reality information display apparatus. The information display apparatus includes an image perception module, a coordinate transformation module, and a display module. The image perception module is used to acquire real scene information collected by an image perception device; the coordinate transformation module is used to acquire a target display area determined based on the user's first spatial pose, the first spatial pose being determined based on an eye tracking device; the coordinate transformation module is also used to acquire a coordinate transformation rule corresponding to mapping the real scene information to the target display area; the display module is used to generate driving guidance information based on the real scene information; and the display module is also used to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
In a third aspect, an embodiment of the present application provides an augmented reality in-vehicle information display system. The system includes: a scene sensing device for collecting real scene information of the environment outside the vehicle; an eye tracking device for acquiring the user's line-of-sight movement range; an image processing device for acquiring the user's first spatial pose based on the line-of-sight movement range, acquiring a target display area determined based on the first spatial pose, acquiring a coordinate transformation rule corresponding to mapping the real scene information to the target display area, generating driving guidance information based on the real scene information, and generating, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area; and a HUD display device for presenting the driving guidance information at the target position coordinates of the target display area.
In a fourth aspect, an embodiment of the present application provides a projection device, including one or more processors and a memory. One or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to execute the method described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium having program code stored therein, wherein the method described in the first aspect is executed when the program code runs.
The augmented reality-based information display method, system, apparatus, projection device, and storage medium provided by this application acquire the real scene information collected by the image perception device, then acquire the target display area determined based on the user's first spatial pose (the first spatial pose being determined based on the eye tracking device), then acquire the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and then display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule. In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device. The eye tracking device tracks the user's first spatial pose in real time, making it possible to adapt to changes in the human eye pose and to dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving driving safety and comfort and thereby enhancing the user experience.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of an augmented reality-based information display method proposed by an embodiment of the present application.
FIG. 2 shows an example structural diagram of an augmented reality-based in-vehicle information display system for the augmented reality-based information display method provided by this embodiment.
FIG. 3 shows an example diagram of the HUD virtual image plane of the HUD display device in this embodiment.
FIG. 4 shows a schematic diagram of the relationship between the driver's eyes and the HUD virtual image plane in this embodiment.
FIG. 5 shows an example diagram of displaying driving guidance information in a dangerous scene through the augmented reality-based information display system proposed in this application, as provided by this embodiment.
FIG. 6 shows a flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
FIG. 7 shows an example diagram of the display effect of the augmented reality-based information display system provided by this embodiment.
FIG. 8 shows another example diagram of the display effect of the augmented reality-based information display system provided by this embodiment.
FIG. 9 shows a flowchart of an augmented reality-based information display method proposed by yet another embodiment of the present application.
FIG. 10 shows an example diagram of adjusting the target display area based on the change vector, as provided by this embodiment.
FIG. 11 shows an example diagram of the processing flow of the information display method based on enhanced display proposed in this embodiment.
FIG. 12 shows a structural block diagram of an augmented reality-based information display apparatus proposed by an embodiment of the present application.
FIG. 13 shows a structural block diagram of a projection device of the present application for executing an augmented reality-based information display method according to an embodiment of the present application.
FIG. 14 shows a storage unit of an embodiment of the present application for storing or carrying program code that implements an augmented reality-based information display method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
A HUD (head-up display) can present important information on a piece of transparent glass in front of the line of sight. It was first applied to fighter aircraft, mainly so that pilots would not need to repeatedly look down at the data on the instrument panel, thereby avoiding situations in which a pilot viewing instrument data cannot observe the environment in the area ahead of the flight. To reduce accidents caused by users looking down at the dashboard or center console, the HUD was introduced from aircraft into the automotive field.
HUDs are mainly divided into rear-mounted (also called Combine HUD, C-type HUD) and front-mounted (also called Windshield HUD, W-type HUD) types. A front-mounted HUD uses the windshield as the combiner and projects the content the driver needs onto the front windshield through an optical system; through the windshield, the human eye can observe the HUD virtual image and the outside scene simultaneously within the head-up range, improving driving safety and comfort. However, some existing HUD devices only display virtual information in front of the driver's line of sight without fusing it with the real environment. With the addition of more driving-assistance information such as road conditions, navigation, and hazard warnings, this mismatch between virtual content and the real scene can instead distract the driver's attention.
Augmented Reality (AR) is a technology that skillfully integrates virtual information with the real world.
As one approach, with the development of autonomous driving, augmented reality, and mixed reality technologies, AR technology can be introduced into the HUD field. By combining AR technology with a front-mounted HUD, an AR-HUD can solve the problem that traditional HUD virtual information is separated from, and mismatched with, the actual scene, enriching the HUD display content while improving driving safety and comfort. However, the existing way in which HUDs display information lacks intelligence. Taking car driving as an example, with the addition of more driving-assistance information such as road conditions, navigation, and hazard warnings, the user's body and head sway with changes in road conditions or with personal movements, so that the user's spatial pose changes. In this case, the image displayed by the HUD may appear offset from, and thus mismatched with, the real scene image that the user sees.
Therefore, to improve upon the above problems, the inventors propose the approach provided in this application: real scene information collected by an image perception device is acquired; the target display area determined based on the user's first spatial pose is then acquired, the first spatial pose being determined based on an eye tracking device; the coordinate transformation rule corresponding to mapping the real scene information to the target display area is acquired; driving guidance information is generated based on the real scene information; and the driving guidance information is then displayed at the corresponding position of the target display area based on the coordinate transformation rule. In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device; the eye tracking device tracks the user's first spatial pose in real time, making it possible to adapt to changes in the human eye pose and to dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving driving safety and comfort and thereby enhancing the user experience.
Each embodiment of the present application will be described in detail below with reference to the accompanying drawings.
Please refer to FIG. 1, which is a flowchart of an augmented reality-based information display method provided by an embodiment of this application. The method of this embodiment may be executed by an augmented reality-based apparatus for processing real scene information, which may be implemented by hardware and/or software. The method includes:
Step S110: Acquire the real scene information collected by the scene sensing device.
The real scene information in the embodiments of the present application may be real scene information corresponding to a variety of scenes. Optionally, the scenes may include, but are not limited to, driving scenes, tourism scenes, and outdoor activity scenes. For example, in a driving scene, the real scene information may include lanes, signs, at-risk pedestrians (for example, vulnerable groups such as blind people, elderly people walking alone, pregnant women, or children), and vehicles; in a tourism scene, it may include destination signs, tourist routes, information on tourist attractions, and weather information for those attractions; in an outdoor activity scene, it may include the current location and information on nearby convenience stores.
Optionally, the scene sensing device may include sensing devices such as lasers and infrared radar, and may also include image acquisition devices such as cameras (including monocular cameras, binocular cameras, RGB-D cameras, etc.). As one approach, the real scene information corresponding to the current scene can be acquired through the scene sensing device. For example, if the current scene is a driving scene and the scene sensing device is a camera, the camera can be mounted on the car (optionally, the mounting position can be adjusted according to the car's design or actual needs) so that the camera can acquire driving-related real scene information in real time. For the acquisition principles and implementations of scene sensing devices (lasers, infrared radar, or cameras), reference may be made to the related art, which will not be repeated here.
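Purely as an illustration, one possible shape for the real scene information such a device reports is sketched below; the field names and the `read_detections` call are assumptions for the sketch, not an actual device API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SceneObject:
    label: str                            # e.g. "lane", "pedestrian", "vehicle"
    position: Tuple[float, float, float]  # (x, y, z) in the sensor's own coordinate system
    is_hazard: bool = False               # e.g. an at-risk pedestrian in the vehicle's path

def acquire_real_scene_info(sensor):
    """Poll the scene sensing device for the current frame's detections."""
    return [SceneObject(*d) for d in sensor.read_detections()]  # hypothetical device call
```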
Step S120: Acquire the target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the eye tracking device.
In this embodiment, the first spatial pose may be the human eye pose determined based on the eye tracking device. It can be understood that if the user's line-of-sight range changes, the corresponding eye pose may also change, whereas if the user's body turns while the line of sight remains unchanged, the eye pose can remain the same. In the latter case, since the user's eyes can still see the real scene information that requires attention in the driving scene (for example, road conditions, at-risk pedestrians, and vehicles), safe driving is still possible. Therefore, as one approach, the first spatial pose may be determined based on the user's current eye pose, and in this approach the target display area determined based on the user's current eye pose can be acquired. Optionally, the eye tracking device may be a device with an imaging function, such as a camera, and is not specifically limited.
Optionally, if the user's eye pose changes but only within a small range (for example, less than a preset threshold), the user's first spatial pose may be the user's sitting posture in the driving state, or the sitting posture after adjusting the seat (here, after the current user adjusts the seat for the first time). It can be understood that different sitting postures correspond to different spatial poses; as one approach, the user's sitting posture after adjusting the seat may be taken as the user's first spatial pose. Optionally, if the range of change of the user's eye pose is not less than the preset threshold, the user's first spatial pose may be re-determined according to the user's current eye pose.
In this embodiment, the target display area is an area for displaying virtual image information corresponding to the real scene information. Taking a driving scene as an example, the target display area may be the area on the windshield of the car used to display the projected virtual image information corresponding to the real scene information. Optionally, different spatial poses of the same user may correspond to different target display areas, and the spatial poses of different users may correspond to different target display areas.
To eliminate the display difference between the position where the virtual image information corresponding to the real scene information is displayed and the actual position of the real scene information, as one approach, the target display area determined based on the user's first spatial pose can be acquired, so that the virtual image information corresponding to the real scene information can be displayed in the target display area, reducing the aforementioned display difference and thereby improving the accuracy of the display position of the virtual image information.
Step S130: Acquire the coordinate transformation rule corresponding to mapping the real scene information to the target display area.
The coordinate transformation rule can be used to map the coordinates of the real scene information to the corresponding coordinates of the target display area. As one approach, once the real scene information and the target display area have been acquired, the coordinate transformation rule corresponding to mapping the real scene information to the target display area can be acquired, so that the driving guidance information corresponding to the real scene information can subsequently be displayed accurately at the corresponding position of the target display area based on the coordinate transformation rule.
Optionally, the coordinate transformation rule may include a first transformation matrix and a second transformation matrix. The first transformation matrix can be used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device, and the second transformation matrix can be used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle. The reference world coordinates can be understood as the relative position coordinates of the real scene information in the established coordinate system corresponding to the scene sensing device; optionally, the reference world coordinates in this embodiment can be understood as world coordinates that are stationary relative to the car. The view coordinates can be understood as the relative position coordinates of the reference world coordinates in the coordinate system corresponding to the target display area.
As one implementation, the first transformation matrix and the second transformation matrix can be acquired, and the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix can then be taken as the coordinate transformation rule corresponding to mapping the real scene information to the target display area.
Optionally, the first transformation matrix may include a first rotation matrix and a first translation vector. The first rotation matrix can be used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector can be used to translate those coordinates. As one approach, the reference world coordinates corresponding to the coordinates of the real scene information can be determined based on the first rotation matrix and the first translation vector.
Optionally, the second transformation matrix may include a viewing angle offset matrix, a transpose matrix, and a projection matrix. The projection matrix can be used to determine the mapping range for mapping the real scene information to the target display area, the viewing angle offset matrix can be used to determine the degree of deviation of the user's viewing angle detected by the eye tracking device, and the transpose matrix can be used to determine the relative position within the mapping range at which the driving guidance information is displayed. As one approach, the reference world coordinates can be converted into view coordinates in the target display area that match the offset of the user's viewing angle based on the viewing angle offset matrix, the transpose matrix, and the projection matrix. The viewing angle offset matrix may include first view coordinates; the transpose matrix represents a transpose with respect to the first transformation matrix and may include spatial unit vectors located in the target display area; and the projection matrix may include a field-of-view angle parameter, which optionally includes distance parameters and scale parameters associated with the user's viewing angle.
The following uses a driving scene as an example to illustrate this embodiment:
Please refer to FIG. 2, which is an example structural diagram of an augmented reality-based in-vehicle information display system to which the augmented reality-based information display method provided by this embodiment applies. As shown in FIG. 2, the system may include a scene sensing device, an image processing device, a HUD display device, and an eye tracking device. The scene sensing device can be used to collect real scene information of the environment outside the vehicle. The eye tracking device can be used to acquire the user's line-of-sight movement range. The image processing device can be used to acquire the user's first spatial pose based on the line-of-sight movement range, acquire the target display area determined based on the first spatial pose, acquire the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area. The HUD display device can be used to present the driving guidance information at the target position coordinates of the target display area.
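Purely as an illustration of the data flow just described, one frame of this pipeline might be wired as below; every class and method name here is an assumption for the sketch, not from the patent:

```python
def display_one_frame(scene_sensor, eye_tracker, image_processor, hud):
    real_scene = scene_sensor.collect()                       # real scene information outside the vehicle
    gaze_range = eye_tracker.line_of_sight_range()            # user's line-of-sight movement range
    pose = image_processor.first_spatial_pose(gaze_range)     # first spatial pose
    area = image_processor.target_display_area(pose)          # target display area
    rule = image_processor.transform_rule(real_scene, area)   # coordinate transformation rule
    guidance = image_processor.generate_guidance(real_scene)  # driving guidance information
    coords = image_processor.target_coords(guidance, rule)    # target position coordinates
    hud.show(guidance, coords)                                # present on the HUD display device
```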
The image processing device may be a processor chip of the in-vehicle infotainment system, a processing chip of an independent on-board computer system, or a processor chip integrated in the scene sensing device (for example, a lidar), which is not limited here.
In one implementation, the in-vehicle information display system may include a car, a driver, an eye tracking device, a scene sensing device, an image processing device, and a HUD display device with an AR-HUD function. As one implementation, the scene sensing device can be mounted on the car and can acquire driving-related scene information (which can also be understood as the aforementioned real scene information); the driver sits in the driving seat; and the eye tracking device is installed inside the car, where it can track a reasonable range of positions over which the driver's eyes typically move. Optionally, a reasonable position here can be understood as a position toward which the driver's line of sight may turn to meet driving needs, such as turning left, turning right, or turning backward; the specific direction and angle of turning are not limited. The HUD display device is installed at the front windshield, and its position can be adjusted so that the driver's eyes can see the entire virtual image corresponding to the driving scene information. Optionally, the image processing device can adapt to changes in the spatial pose of the driver's eyes; in this manner, it can convert the real scene information collected by the scene sensing device in real time into an image fused with the real scene and send it to the HUD display device for display.
As one approach, after the scene sensing device collects the real scene information, it can acquire the position coordinates of the real scene information in the world coordinate system (O-xyz as shown in FIG. 2) through position acquisition methods such as GPS positioning. The world coordinate origin and coordinate axis directions can then be selected based on the car's direction of travel, and a reference world coordinate system that is stationary relative to the car is determined from them; with this reference world coordinate system determined, the reference world coordinates corresponding to the coordinates of the real scene information can be obtained. For the selection of the world coordinate origin and axis directions, reference may be made to the related art, which will not be repeated here. It should be noted that the reference world coordinate system can be understood as a coordinate system obtained by rotating and/or translating the world coordinate system.
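A minimal sketch of fixing such a car-relative reference world frame from the vehicle's world position and direction of travel follows; the axis convention (x along the heading, z up) is an assumption for the example:

```python
import numpy as np

def reference_world_frame(car_position, heading, up=np.array([0.0, 0.0, 1.0])):
    """Return (R, t) such that p_ref = R @ p_world + t for any world point."""
    x = heading / np.linalg.norm(heading)   # axis along the direction of travel
    y = np.cross(up, x)
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                      # completes a right-handed basis
    R = np.stack([x, y, z])                 # rows are the reference-frame axes
    t = -R @ np.asarray(car_position)       # move the origin to the car
    return R, t
```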
For example, as one implementation, once the reference world coordinate system that is stationary relative to the car has been determined, the spatial pose of the scene sensing device in the reference world coordinate system can be acquired; in this manner, the perception module transformation matrix M (that is, the aforementioned first transformation matrix) can be calculated from that spatial pose. For example, it can be assumed that the process of changing the world coordinate system into the reference world coordinate system consists of a first rotation matrix (which can also be understood as the total rotation matrix of the scene sensing device) R_M and a first translation vector T_M. Optionally, the relationship among the perception module transformation matrix M, the first rotation matrix R_M, and the first translation vector T_M can be expressed as:
$$M = \begin{bmatrix} R_M & T_M \\ \mathbf{0} & 1 \end{bmatrix}$$
where
$$R_M = R_{Mz}\,R_{My}\,R_{Mx}, \qquad T_M = \begin{bmatrix} T_{Mx} & T_{My} & T_{Mz} \end{bmatrix}^{T}$$
Here R_Mx, R_My, and R_Mz are the rotation matrices of the perception module transformation matrix M about the x-, y-, and z-axes of the world coordinate system, with Euler rotation angles α_M, β_M, and γ_M respectively, and (T_Mx, T_My, T_Mz) are the coordinates of the real scene information in the reference world coordinate system.
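For illustration, a numpy sketch of assembling M from these quantities is given below; the multiplication order of the per-axis rotations is one common convention, since the text does not fix it:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def perception_matrix(alpha_m, beta_m, gamma_m, t_m):
    """Homogeneous 4x4 form of M from Euler angles and translation (T_Mx, T_My, T_Mz)."""
    M = np.eye(4)
    M[:3, :3] = rot_z(gamma_m) @ rot_y(beta_m) @ rot_x(alpha_m)  # R_Mz · R_My · R_Mx (assumed order)
    M[:3, 3] = t_m
    return M
```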
Optionally, in the reference world coordinate system, the spatial pose of the virtual image displayed on the plane where the HUD display device is located and the spatial pose of the driver's eyes can be measured; in this manner, the aforementioned second transformation matrix can be acquired based on the spatial pose of the displayed virtual image and the spatial pose of the driver's eyes (that is, the driver's eye pose).
The second transformation matrix (which can also be understood here as the perspective matrix of the virtual image) C may include a viewing angle offset matrix T, a transpose matrix N^T, and a projection matrix P. The viewing angle offset matrix T can be determined from the driver's eye pose, the transpose matrix N^T from the spatial pose of the virtual image plane of the HUD display device, and the projection matrix P jointly from the driver's eye pose and the spatial pose of the HUD virtual image plane. As one approach, the relationship among the viewing angle offset matrix T, the transpose matrix N^T, and the projection matrix P can be expressed as:
$$C = P\,N^{T}\,T$$
The viewing angle offset matrix T may include first view coordinates, which are the position coordinates of the driver's eye pose in the reference world coordinate system. As one approach, in this embodiment the first view coordinates can be denoted (P_ex, P_ey, P_ez); in this manner, the viewing angle offset matrix T can be expressed as:
$$T = \begin{bmatrix} 1 & 0 & 0 & -P_{ex} \\ 0 & 1 & 0 & -P_{ey} \\ 0 & 0 & 1 & -P_{ez} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Optionally, the transpose matrix N^T may include spatial unit vectors located in the target display area. As one approach, the transpose matrix N^T in this embodiment can be expressed as:
$$N^{T} = \begin{bmatrix} V_{rx} & V_{ry} & V_{rz} & 0 \\ V_{ux} & V_{uy} & V_{uz} & 0 \\ V_{nx} & V_{ny} & V_{nz} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Here V_r, V_u, and V_n are the spatial unit vectors of the virtual image plane corresponding to the HUD display module. For an example, please refer to FIG. 3, which shows the HUD virtual image plane of the HUD display device in this embodiment. As shown in FIG. 3, V_r is the right vector, V_u is the up vector, and V_n is the normal vector of the HUD virtual image plane.
Optionally, the projection matrix includes a field-of-view angle parameter, which may include distance parameters and scale parameters associated with the user's viewing angle. The relation satisfied by the projection matrix P can be:
$$P = \begin{bmatrix} \dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\ 0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\ 0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix}$$
Here the parameters n and f are the near and far distances of the human eye's field of view (the distance parameters above), and the parameters l, r, b, and t represent the left, right, bottom, and top scales (the scale parameters above) determined by the size and pose relationship between the eyes and the HUD virtual image plane. Optionally, please refer to FIG. 4, which shows the relationship between the driver's eyes and the HUD virtual image plane in this embodiment. As shown in FIG. 4, if d = -(v_n · v_a), the parameters l, r, b, and t can be computed as:
$$l = (v_r \cdot v_a)\,n/d, \qquad r = (v_r \cdot v_b)\,n/d$$
$$b = (v_u \cdot v_a)\,n/d, \qquad t = (v_u \cdot v_c)\,n/d$$
As one approach, once the first transformation matrix and the second transformation matrix have been acquired, the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix can be taken as the coordinate transformation rule corresponding to mapping the real scene information to the target display area; that is, in this manner the coordinate transformation rule can be expressed as:
$$F = MC = M\,P\,N^{T}\,T$$
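Continuing the sketches above, a minimal usage example composes the rule and maps one sensor-frame point to clip coordinates; all numeric values are placeholder assumptions, and the application order (first M, then C) follows steps S250/S260 below:

```python
pa = np.array([-0.4, 1.0, 0.9])   # assumed lower-left corner of the HUD virtual image plane
pb = np.array([ 0.4, 1.0, 0.9])   # assumed lower-right corner
pc = np.array([-0.4, 1.0, 1.2])   # assumed upper-left corner
pe = np.array([ 0.0, 0.0, 1.1])   # assumed driver eye position

M = perception_matrix(0.0, 0.0, 0.0, [1.5, 0.0, 1.2])  # assumed sensor pose
C = second_transform(pa, pb, pc, pe, n=0.1, f=100.0)
o_w = np.array([10.0, -1.0, 0.5, 1.0])   # a real-scene point in homogeneous coordinates
clip = C @ (M @ o_w)                     # to be divided by clip[3] and mapped to pixels
```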
It should be noted that, in this embodiment, the eye tracking device can capture the spatial pose of the driver's eyes in real time and calculate the pose difference of the driver's eyes between the current instant and the previous instant. Optionally, when the pose difference exceeds a preset specified threshold (the specific value is not limited), the aforementioned second transformation matrix C can be recalculated and updated. The pose difference of the driver's eyes can be calculated in various ways, for example as the mean squared error (MSE) or the mean absolute error (MAE); it can be understood that a custom calculation of the eye pose difference can also be defined according to actual needs. Optionally, different calculation methods may correspond to different specified thresholds.
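A minimal sketch of this update check, using MSE as the pose-difference metric (MAE or a custom metric would slot in the same way); the threshold value and the recompute callback are illustrative:

```python
import numpy as np

def mse(prev_pose, curr_pose):
    return float(np.mean((np.asarray(prev_pose) - np.asarray(curr_pose)) ** 2))

def maybe_update_C(prev_pose, curr_pose, C, recompute_C, threshold=1e-4):
    if mse(prev_pose, curr_pose) > threshold:
        return recompute_C(curr_pose)   # re-derive the second transformation matrix
    return C
```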
Step S140: Generate driving guidance information based on the real scene information.
The driving guidance information in this embodiment may include navigation instructions corresponding to road conditions, pedestrian warning information, tourist attraction prompts, and so on; the types and specific content of the driving guidance information are not limited. For example, FIG. 5 shows an example of displaying driving guidance information in a dangerous scene through the augmented reality-based information display system proposed by this application. As shown in FIG. 5, the image processing device can convert the real scene information collected by the scene sensing device into a HUD virtual image displayed on the HUD display device; the specific displayed content is shown in the right-hand image of FIG. 5. In this case, the scene seen by the driver's eyes may include lane guidance information (the "navigation instructions in the virtual image" shown in FIG. 5) and pedestrian warning information (the "pedestrian prompt box in the virtual image" shown in FIG. 5).
As one approach, once the real scene information has been acquired, the driving guidance information can be generated based on it. Optionally, the presentation of the driving guidance information in this embodiment is not limited; for example, it may be presented as icons (such as arrows), pictures, animations, speech, or video, and driving guidance information for each presentation mode can be generated in a corresponding manner. Optionally, for the principles of generating driving guidance information from real scene information for each presentation mode, reference may be made to the related art, which will not be repeated here. Optionally, the driving guidance information in this embodiment may combine at least one presentation mode; for example, on the basis of displaying the navigation icon corresponding to the road, the user may additionally be prompted by voice, so that driving guidance can be given more accurately, ensuring driving safety and improving the user experience.
Step S150: Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
Optionally, by displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule, differences between the position where the HUD displays the real scene information and its actual position can be avoided, improving the accuracy and reliability of the display.
The augmented reality-based information display method provided by this application acquires the real scene information collected by the image perception device, then acquires the target display area determined based on the user's first spatial pose (the first spatial pose being determined based on the eye tracking device), then acquires the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and then displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule. In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device. The eye tracking device tracks the user's first spatial pose in real time, making it possible to adapt to changes in the human eye pose and to dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving driving safety and comfort and thereby enhancing the user experience.
Please refer to FIG. 6, which is a flowchart of an augmented reality-based information display method provided by another embodiment of this application. The method of this embodiment may be executed by an augmented reality-based apparatus for processing real scene information, which may be implemented by hardware and/or software. The method includes:
Step S210: Acquire the real scene information collected by the scene sensing device.
Step S220: Acquire the target display area determined based on the user's first spatial pose.
The first spatial pose is determined based on the eye tracking device; for a detailed description, refer to the foregoing embodiment, which will not be repeated here.
Step S230: Acquire the coordinate transformation rule corresponding to mapping the real scene information to the target display area.
Step S240: Generate driving guidance information based on the real scene information.
Step S250: Input the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain the coordinate transformation matrix to be processed.
As one approach, the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device can be input into the first transformation matrix, and the output can be taken as the coordinate transformation matrix to be processed. For example, in a specific application scenario, assume the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device are O_w(x, y, z). Optionally, after inputting the position coordinates O_w(x, y, z) into the aforementioned first transformation matrix, one obtains:
O_w = (x, y, z, 1)ᵀ

O' = M · O_w
where M is the first transformation matrix (the scene perception module matrix), O' serves as the coordinate transformation matrix to be processed, and O_w is expressed in homogeneous coordinates.
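As a non-authoritative illustration of this step, the following Python sketch applies a 4×4 first transformation matrix M to a point from the scene sensing device in homogeneous coordinates; the concrete rotation and translation values are hypothetical placeholders, not values from this application.

```python
import numpy as np

def to_homogeneous(point_xyz):
    """Append w = 1 so a 3D point can be multiplied by a 4x4 matrix."""
    x, y, z = point_xyz
    return np.array([x, y, z, 1.0])

def first_transform(M, point_xyz):
    """Apply the first transformation matrix M (rotation plus translation)
    to a point expressed in the scene sensing device's coordinate system."""
    return M @ to_homogeneous(point_xyz)

# Hypothetical first transformation matrix M = [[R, t], [0, 1]].
R = np.eye(3)                      # assumed rotation (identity for the sketch)
t = np.array([0.0, 1.2, 2.0])      # assumed translation, in meters
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

O_w = (1.0, 0.5, 10.0)             # example point ahead of the vehicle
O_prime = first_transform(M, O_w)  # the coordinate matrix to be processed, O'
print(O_prime)                     # -> [ 1.   1.7  12.   1. ]
```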
Step S260: Perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix, to obtain the relative position coordinates of the real scene information in the target display area.

In one implementation, the coordinate transformation matrix to be processed may be transformed according to the aforementioned second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area. Optionally, for the specific implementation of the coordinate transformation, refer to the related art, which is not repeated here.

For example, continuing the above example, suppose that in the HUD image of the target display area of the HUD display device, the position coordinate corresponding to O_w(x, y, z) is denoted O_h(u, v). Then, after the coordinate transformation matrix to be processed O' is transformed according to the second transformation matrix, the following is obtained:
O_h(u, v): the result of C · O', normalized and scaled into HUD image pixel coordinates by width and height
where width and height are the width and height of the HUD image, both in pixels. In this way, O_h(u, v) can be used as the relative position coordinates of the real scene information in the target display area.
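The mapping from O' to the pixel coordinates O_h(u, v) can be sketched as below. This is only one plausible convention: the perspective divide and the [-1, 1]-to-pixel viewport scaling are assumptions of the sketch, not details fixed by this application.

```python
import numpy as np

def second_transform(C, O_prime, width, height):
    """Map the coordinate matrix to be processed O' into HUD image pixels.

    C stands in for the second transformation matrix (viewing-angle offset,
    transpose and projection combined). The clip-space result is
    perspective-divided, then scaled by the HUD image width and height.
    """
    clip = C @ O_prime                 # homogeneous clip-space point
    ndc = clip[:3] / clip[3]           # perspective divide
    u = (ndc[0] + 1.0) * 0.5 * width   # assumed [-1, 1] -> [0, width] mapping
    v = (1.0 - ndc[1]) * 0.5 * height  # image v axis assumed to point down
    return u, v

# Example with a placeholder C (identity keeps the sketch short).
C = np.eye(4)
O_prime = np.array([0.1, -0.05, 5.0, 1.0])
print(second_transform(C, O_prime, width=1280, height=480))  # -> (704.0, 252.0)
```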
Step S270: Display the driving guidance information at the position represented by the relative position coordinates.

Optionally, in this manner, the driving guidance information corresponding to the real scene information may be displayed at the position represented by the relative position coordinates.

This embodiment is described below with a specific example:
Please refer to FIG. 7, which shows an example of the display effect of the augmented reality-based information display system provided by this embodiment. As shown in FIG. 7, a virtual-real scene fusion system (which can be understood as the augmented reality-based information display system of this application) can be built in modeling software (for example, Unity3D). The system may include a car; camera 1, which simulates the driver's eyes; script 1, which simulates the eye tracking device; a HUD imaging module simulated jointly by camera 2 and a plane (that is, the aforementioned HUD display device); the spatial scene information (possibly from different scenes) acquired by an image perception module simulated by a checkerboard (implemented by script 2); and the information transformation, image drawing and rendering of the image processing device, completed by script 3.
Optionally, in this simulated virtual-real scene fusion system, the center of the bottom of the car can be selected as the coordinate origin, the forward direction of the car as the positive Z axis, and a right-handed coordinate system is adopted. Assume the driver sits in the driving seat with the eyes simulated by camera 1 facing forward, and the pose of the HUD virtual camera simulated by camera 2 is the same as that of the driver's eyes. In this setup, the scene sensing device can acquire the spatial corner-point information of the checkerboard from the car, and the image processing device can draw the corner points into the HUD image space (the lower-left corner of FIG. 7 is the corner-point image drawn into the HUD image space by the image processing device) and then send the drawn image to the HUD display device for display (as shown in FIG. 7, the corner-point image is sent to the HUD for virtual image display). In this way, the driver's-view scene shown in FIG. 7 is obtained, and the enlarged view of the virtual-real fusion result in FIG. 7 shows that the virtual and real scenes can be fused accurately.
The eye tracking device in this embodiment may be configured with a 3D sensor for sensing the spatial pose of the human eye and an eye tracking algorithm adapted to the sensor's output data. Optionally, the 3D sensor may be a binocular camera, an RGB-D camera, or the like, and the eye tracking algorithm may be a computer vision algorithm, a deep learning algorithm, or the like; neither is specifically limited. In one implementation, the scene sensing device can acquire real scene information in the driving scene, such as lanes, navigation instructions and (dangerous) pedestrians, through a camera combined with GPS, IMU sensors and perception processing algorithms. Optionally, if the user's eye pose changes, the image processing device can adjust the aforementioned first transformation matrix and second transformation matrix according to the changed eye pose, draw an image adapted to the changed eye pose, and display the virtual image information through the HUD display device.
In one implementation, if the spatial pose of the driver's eyes changes during driving, for example the driver's line of sight moves as shown in FIG. 5, the eye tracking device can acquire the spatial pose of the driver's eyes in real time and determine whether the change between the eye pose at the current moment and that at the previous moment exceeds a specified threshold, where the value of the specified threshold can be set according to the actual situation. Optionally, if it is determined that the change of the driver's eye pose exceeds the specified threshold, the driver's spatial pose can be re-acquired based on the changed eye pose, the target display area can be re-determined from that spatial pose, and the virtual-real fusion scene graph can then be displayed at the corresponding position of the re-determined target display area.
For example, as one implementation, under the display effect shown in FIG. 7, if the driver's eye pose has changed relative to the previous moment and the range of the change exceeds the specified threshold, the example display effect shown in FIG. 8 can be obtained. The specific value of the moment is not limited; in some possible implementations, the judgment interval may also be a time period or a cycle. As shown in FIG. 8, when the change of the driver's eye pose exceeds the specified threshold, the position at which the virtual-real scene fusion graph is displayed in the driver's-view scene is shifted, so that when the driver's eye pose changes, the driving guidance information corresponding to the real scene information of the driving scene can be presented to the user at the best viewing angle. This avoids driving safety accidents caused by the user's line-of-sight deviation and improves driving safety and flexibility, thereby enhancing the user experience.
It should be noted that, in this embodiment, if the driver's eyes can still see the HUD virtual image plane after the line of sight moves, the driver can accurately see, through the HUD virtual image plane, driving guidance information such as navigation instructions marked on the driving lane or warning boxes surrounding pedestrians, ensuring safe driving. Optionally, if the driver's eyes cannot see the HUD virtual image plane in front of the vehicle after the line of sight moves, in some possible implementations multiple HUD virtual image planes can be configured as needed, so that wherever the line of sight shifts (or rotates), the driver can still see the driving guidance information requiring attention in the current driving scene, improving the flexibility and diversity of the information display and enhancing the user experience.
The information display method based on augmented reality provided by this application transforms the driving guidance information generated from the real scene information through the first transformation matrix and the second transformation matrix, and displays it at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device. The eye tracking device tracks the user's first spatial pose in real time, so the method can adapt to changes in the eye pose and dynamically adjust the display area and display position of the driving guidance information. The user can therefore accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, without repeatedly confirming its accuracy, reducing the fatigue from frequent gaze shifts caused by checking road conditions, navigation and other driving guidance information, and improving driving safety and comfort.
Please refer to FIG. 9, a flowchart of an augmented reality-based information display method provided by yet another embodiment of this application. The method of this embodiment may be executed by an augmented reality-based apparatus for processing real scene information, which may be implemented in hardware and/or software. The method includes:

Step S310: Acquire the real scene information collected by the scene sensing device.

Step S320: Acquire a target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the eye tracking device.

Step S330: Detect the change of the first spatial pose by acquiring the eye posture change parameter from the eye tracking device.
It is understandable that, while the user is driving, the user's posture may change with the driving scene information, such as the road conditions of the lane being travelled; for example, the user's head may turn forward, backward, left or right. In such cases the user's range of sight changes (for example, the line of sight shifts). If the original HUD display manner is still used to display the driving guidance information corresponding to the real scene information, display errors in position may create safety hazards.

As a way to mitigate this problem, the eye tracking device in this embodiment can detect the user's eye posture in real time. If it detects that the user's current eye posture has changed compared with the previous moment (or the previous period), the parameter corresponding to the eye pose change can be acquired. Optionally, in this manner, the change of the user's first spatial pose can be detected through the eye posture change (that is, eye pose change) parameter, so that if a change of the first spatial pose is detected, the target display area is re-determined based on the changed spatial pose. This guarantees the accuracy of the display position of the driving guidance information corresponding to the real scene information without requiring the user to repeatedly confirm it, improves the flexibility of displaying the driving guidance information, and thereby enhances the user experience.

Optionally, the eye posture change parameter may include the direction, angle or range of the eye's line of sight. In one implementation, a face recognition algorithm can be used to determine whether the eye pose in the eye pose images collected by the eye tracking device has changed; if so, the corresponding eye posture change parameter can be acquired.
In one implementation, the change vector corresponding to the first spatial pose can be obtained from the eye posture change parameter. Optionally, for the specific calculation, refer to the related art, which is not repeated here; a sketch of one possible calculation is given below.
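This minimal sketch assumes the eye pose is represented as a (position, gaze direction) pair; that representation and the threshold value are illustrative assumptions, not specifics of this application.

```python
import numpy as np

def eye_pose_change_vector(prev_pose, curr_pose):
    """One possible change vector: the position delta concatenated with the
    gaze-direction delta. Each pose is assumed to be a (position, gaze)
    pair of 3D vectors."""
    d_pos = curr_pose[0] - prev_pose[0]
    d_gaze = curr_pose[1] - prev_pose[1]
    return np.concatenate([d_pos, d_gaze])

def pose_change_exceeds(prev_pose, curr_pose, threshold=0.05):
    """Compare the magnitude of the change vector with a preset threshold;
    the threshold is set according to the actual application."""
    change = eye_pose_change_vector(prev_pose, curr_pose)
    return np.linalg.norm(change) > threshold

# Example: the eye moves 2 cm left with a slight change in gaze direction.
prev = (np.array([0.0, 1.2, 0.3]), np.array([0.0, 0.0, 1.0]))
curr = (np.array([-0.02, 1.2, 0.3]), np.array([0.05, 0.0, 0.999]))
print(pose_change_exceeds(prev, curr))  # True -> update the transformation rule
```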
Step S340: Acquire the coordinate transformation rule for mapping the real scene information to the target display area.

Step S350: Generate driving guidance information based on the real scene information.

Step S361: If the amount of change of the eye posture change parameter is greater than the preset threshold, update the coordinate transformation rule according to the eye posture change parameter.
Optionally, a preset threshold for the change vector can be configured in advance; the preset threshold is used to decide whether a subsequent adjustment of the target display area is required. As one implementation, if the change value of the change vector is greater than the preset threshold, the target display area can be adjusted based on the change vector to obtain a re-determined target display area.
For example, in a specific application scenario, please refer to FIG. 10, which shows an example of adjusting the target display area based on the change vector in this embodiment. As shown in FIG. 10, the user's first spatial pose changes from 22 to 23', where 23' is the user's current first spatial pose, determined based on the user's current eye pose. In one implementation, if the change vector of the eye posture corresponding to the user's current first spatial pose 23' is detected to be greater than the preset threshold, the position of the target display area on the screen 21 of the car's front windshield changes from 23 to 23', where 23' is the re-determined target display area.

Optionally, if the user's first spatial pose has changed, the coordinate transformation rule can be updated according to the eye posture change parameter in the above manner, obtaining a second coordinate transformation rule for mapping the real scene information to the re-determined target display area. For the specific determination of the second coordinate transformation rule, refer to the principle and process of determining the aforementioned coordinate transformation rule, which is not repeated here.
Step S362: Display the driving guidance information at the corresponding position of the target display area based on the updated coordinate transformation rule.

Optionally, once the re-determined target display area is acquired, the driving guidance information can be displayed at the corresponding position of the re-determined target display area based on the second coordinate transformation rule.
As one implementation, the target display area in this embodiment can be adjusted according to changes in the user's first spatial pose. For example, if it is detected that the user is looking down, the target display area can be moved to the corresponding position on the central control display; if it is detected that the user looks at the mobile phone frequently while driving, the target display area can be moved to the phone's display; or to other screens usable as the target display area in the driving scene, for example the windows on the left and right of the driving seat. A sketch of such a dispatch policy follows.
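In the sketch below, the surfaces, inputs and thresholds are hypothetical and only illustrate how the target display area could follow the user's pose; none of them are specified by this application.

```python
from enum import Enum, auto

class DisplaySurface(Enum):
    HUD = auto()
    CENTER_CONSOLE = auto()
    PHONE = auto()
    SIDE_WINDOW = auto()

def pick_target_display(head_down, phone_glance_rate, gaze_side=None):
    """Hypothetical policy for choosing the target display area from cues
    derived from the user's first spatial pose. The 0.3 glance-rate cut-off
    is an illustrative assumption."""
    if head_down:
        return DisplaySurface.CENTER_CONSOLE
    if phone_glance_rate > 0.3:
        return DisplaySurface.PHONE
    if gaze_side in ("left", "right"):
        return DisplaySurface.SIDE_WINDOW
    return DisplaySurface.HUD

print(pick_target_display(head_down=False, phone_glance_rate=0.5))  # PHONE
```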
Step S371: If the amount of change of the eye posture change parameter is not greater than the preset threshold, display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.

As another implementation, if the change value of the change vector is not greater than the preset threshold, the target display area determined based on the user's first spatial pose can be acquired. In this case the user's first spatial pose can be, for example, the user's sitting posture; for details, refer to the description in the foregoing embodiments.

Optionally, in this embodiment, the order of the steps is not limited; for example, step S330 may be performed after step S340.

Exemplarily, a specific implementation flow is shown below:
As shown in FIG. 11, an example of the processing flow of the augmented-reality-based information display method proposed in this embodiment is given. In FIG. 11, the flow indicated by hollow arrows is the initialization flow, and the flow indicated by solid arrows is the real-time continuous flow. As one implementation, the coordinate system is first established; the spatial poses of the scene perception module (which can be understood as the aforementioned scene sensing device) and the HUD virtual image plane are then measured, and the spatial position coordinates of the driver's eyes are initialized; the scene perception module matrix M and the HUD imaging matrix C are then computed separately, and the total transformation matrix (that is, the aforementioned coordinate transformation rule) F = CM is obtained. Optionally, the scene perception module can acquire real scene information in real time and send it, as the information to be displayed, to the image processing module (which can be understood as the aforementioned image processing device); the image processing module performs coordinate transformation on the coordinates corresponding to the real scene information, draws the resulting image, and projects it onto the HUD display screen (that is, the aforementioned target display area) for display, improving the accuracy of the display position of the driving guidance information, reducing user operations, and thereby enhancing the user experience.
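The flow of FIG. 11 can be condensed into the following sketch: the initialization step composes the total transformation F = CM once, and the per-frame step pushes each sensed point through F into HUD pixels. The viewport convention matches the earlier sketches and is an assumption; when the eye pose change exceeds the threshold, C (and hence F) would be recomputed before the next frame.

```python
import numpy as np

def total_transform(C, M):
    """Total transformation rule F = CM: the scene perception module matrix M
    composed with the HUD imaging matrix C."""
    return C @ M

def render_frame(F, scene_points, width, height):
    """Real-time flow: map each sensed world point to HUD pixel coordinates."""
    pixels = []
    for p in scene_points:
        clip = F @ np.append(p, 1.0)        # homogeneous world point
        ndc = clip[:3] / clip[3]            # perspective divide
        u = (ndc[0] + 1.0) * 0.5 * width    # assumed viewport mapping
        v = (1.0 - ndc[1]) * 0.5 * height
        pixels.append((u, v))
    return pixels

# Initialization with placeholder matrices, then one simulated frame.
M = np.eye(4)                               # assumed scene perception matrix
C = np.eye(4)                               # assumed HUD imaging matrix
F = total_transform(C, M)
print(render_frame(F, [np.array([0.2, 0.0, 8.0])], width=1280, height=480))
```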
Step S364: Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.

The information display method based on augmented reality provided by this application detects the change of the first spatial pose by acquiring the eye posture change parameter from the eye tracking device and, when the change value of the change vector corresponding to that parameter is greater than the preset threshold, re-adjusts the target display area based on the change vector. The driving guidance information is thus displayed at the corresponding position of the target display area determined from the user's first spatial pose as determined by the eye tracking device. The eye tracking device tracks the user's first spatial pose in real time, so the method can adapt to changes in the eye pose and dynamically adjust the display area and display position of the driving guidance information. The user can therefore accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, without repeatedly confirming its accuracy, reducing the fatigue from frequent gaze shifts caused by checking road conditions, navigation and other driving guidance information, and improving driving safety and comfort.
Please refer to FIG. 12. An embodiment of this application provides an augmented reality-based information display apparatus 400, which can run on a projection device. The apparatus 400 includes:

An image perception module 410, configured to acquire the real scene information collected by the image sensing device.

A coordinate transformation module 420, configured to acquire the target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the eye tracking device.

Optionally, the apparatus 400 may further include a parameter change detection module, configured to detect the change of the first spatial pose by acquiring the eye posture change parameter from the eye tracking device. In this manner, the coordinate transformation module 420 can be specifically configured to acquire the eye posture change parameter collected by the eye tracking device; obtain the change vector corresponding to the first spatial pose based on the eye posture change parameter; and, if the amount of change of the change vector is greater than the preset threshold, adjust the target display area based on the change vector to obtain a re-determined target display area. Optionally, if the amount of change of the change vector is not greater than the preset threshold, the step of acquiring the target display area determined based on the user's first spatial pose is performed.

In one implementation, the coordinate transformation module 420 can also be configured to acquire the coordinate transformation rule for mapping the real scene information to the target display area.
Optionally, the coordinate transformation rule in this embodiment may include a first transformation matrix and a second transformation matrix. The first transformation matrix is used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device, and the second transformation matrix is used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.

Optionally, the first transformation matrix may include a first rotation matrix and a first translation vector. The first rotation matrix is used to rotate the coordinates of the real scene information collected by the scene sensing device, the first translation vector is used to translate those coordinates, and the first transformation matrix determines, based on the first rotation matrix and the first translation vector, the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device.

Optionally, the second transformation matrix may include a viewing-angle offset matrix, a transpose matrix and a projection matrix. The projection matrix is used to determine the mapping range for mapping the real scene information to the target display area, the viewing-angle offset matrix is used to determine the degree of offset of the user's viewing angle detected by the eye tracking device, and the transpose matrix is used to determine the relative position within the mapping range at which the driving guidance information is displayed. Based on the viewing-angle offset matrix, the transpose matrix and the projection matrix, the second transformation matrix converts the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.

Optionally, the viewing-angle offset matrix may include first view coordinates; the transpose matrix represents a transpose with respect to the first transformation matrix and includes a spatial unit vector located in the target display area; and the projection matrix includes a field-of-view parameter, which includes a distance parameter and a scale parameter associated with the user's viewing angle.
A display module 430, configured to generate driving guidance information based on the real scene information.

In one implementation, the display module 430 can also be configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.

Optionally, the display module 430 can be specifically configured to: input the position coordinates of the real scene information, in the coordinate system corresponding to the scene sensing device, into the first transformation matrix to obtain the coordinate transformation matrix to be processed; transform the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area; and display the driving guidance information at the position represented by the relative position coordinates.

It should be noted that the apparatus embodiments in this application correspond to the foregoing method embodiments; for the specific principles of the apparatus embodiments, refer to the foregoing method embodiments, which are not repeated here.
A projection device provided by this application is described below with reference to FIG. 13.

Please refer to FIG. 13. Based on the foregoing augmented reality-based information display method, system and apparatus, an embodiment of this application further provides another projection device 100 that can execute the foregoing augmented reality-based information display method. The projection device 100 includes one or more (only one is shown in the figure) processors 102, a memory 104, an image perception module 11, a coordinate transformation module 12, an eye tracking module 14 and a display module 13, which are coupled to one another. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104; the memory 104 includes the apparatus 400 described in the foregoing embodiments.

The processor 102 may include one or more processing cores. The processor 102 uses various interfaces and lines to connect the parts of the projection device 100, and performs the various functions of the projection device 100 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 104 and by calling the data stored in the memory 104. Optionally, the processor 102 may be implemented in at least one of the following hardware forms: digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like. The CPU mainly handles the operating system, user interface and application programs; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understandable that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.

The memory 104 may include random access memory (RAM) or read-only memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets or instruction sets. The memory 104 may include a program storage area and a data storage area. The program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function or a video/image playback function), and instructions for implementing the foregoing method embodiments. The data storage area may also store data created by the projection device 100 during use (for example, audio and video data).

The image perception module 11 is configured to acquire the real scene information collected by the image sensing device; the coordinate transformation module 12 is configured to acquire the target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the eye tracking device 14; the eye tracking device 14 is configured to detect the user's eye pose in real time; the coordinate transformation module 12 is further configured to acquire the coordinate transformation rule for mapping the real scene information to the target display area; the display module 13 is configured to generate driving guidance information based on the real scene information; and the display module 13 is further configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
Please refer to FIG. 14, which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of this application. The computer-readable medium 500 stores program code that can be invoked by a processor to execute the methods described in the foregoing method embodiments.

The computer-readable storage medium 500 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk or ROM. Optionally, the computer-readable storage medium 500 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 500 has storage space for program code 510 that executes any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 510 may, for example, be compressed in a suitable form.
In summary, this application provides an augmented reality-based information display method, system, apparatus, projection device and storage medium. The method acquires the real scene information collected by the image sensing device, then acquires a target display area determined from the user's first spatial pose, the first spatial pose being determined by an eye tracking device; it then acquires the coordinate transformation rule for mapping the real scene information to the target display area, generates driving guidance information from the real scene information, and displays the driving guidance information at the corresponding position of the target display area according to the coordinate transformation rule. In this way, the driving guidance information generated from the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose as tracked by the eye tracking device. Because the eye tracking device tracks the user's first spatial pose in real time, the system can adapt to changes in the eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, improving driving safety and comfort and thereby enhancing the user experience.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (12)

  1. An information display method based on augmented reality, characterized in that the method comprises:
    acquiring real scene information collected by a scene sensing device;
    acquiring a target display area determined based on a first spatial pose of a user, the first spatial pose being determined based on an eye tracking device;
    acquiring a coordinate transformation rule for mapping the real scene information to the target display area;
    generating driving guidance information based on the real scene information;
    displaying the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  2. The method according to claim 1, characterized in that the coordinate transformation rule comprises a first transformation matrix and a second transformation matrix, the first transformation matrix being used to determine reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device, and the second transformation matrix being used to convert the reference world coordinates into view coordinates in the target display area that match an offset of the user's viewing angle.
  3. The method according to claim 2, characterized in that the first transformation matrix comprises a first rotation matrix and a first translation vector, the first rotation matrix being used to rotate the coordinates of the real scene information collected by the scene sensing device, the first translation vector being used to translate the coordinates, and the first transformation matrix determining, based on the first rotation matrix and the first translation vector, the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device.
  4. The method according to claim 2, characterized in that the second transformation matrix comprises a viewing-angle offset matrix, a transpose matrix and a projection matrix, the projection matrix being used to determine a mapping range for mapping the real scene information to the target display area, the viewing-angle offset matrix being used to determine a degree of offset of the user's viewing angle detected by the eye tracking device, the transpose matrix being used to determine a relative position within the mapping range at which the driving guidance information is displayed, and the second transformation matrix converting the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle based on the viewing-angle offset matrix, the transpose matrix and the projection matrix.
  5. The method according to claim 4, characterized in that the viewing-angle offset matrix comprises first view coordinates, the transpose matrix represents a transpose with respect to the first transformation matrix and comprises a spatial unit vector located in the target display area, and the projection matrix comprises a field-of-view parameter, the field-of-view parameter comprising a distance parameter and a scale parameter associated with the user's viewing angle.
  6. The method according to any one of claims 2-5, characterized in that displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule comprises:
    inputting position coordinates of the real scene information, in a coordinate system corresponding to the scene sensing device, into the first transformation matrix to obtain a coordinate transformation matrix to be processed;
    performing coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain relative position coordinates of the real scene information in the target display area;
    displaying the driving guidance information at a position represented by the relative position coordinates.
  7. The method according to claim 1, characterized in that, before acquiring the coordinate transformation rule for mapping the real scene information to the target display area, the method further comprises:
    detecting a change of the first spatial pose by acquiring an eye posture change parameter from the eye tracking device;
    the method further comprising:
    if an amount of change of the eye posture change parameter is greater than a preset threshold, updating the coordinate transformation rule according to the eye posture change parameter.
  8. The method according to claim 7, characterized in that the method further comprises:
    if the amount of change of the eye posture change parameter is not greater than the preset threshold, performing the step of acquiring the coordinate transformation rule for mapping the real scene information to the target display area.
  9. An information display apparatus based on augmented reality, characterized in that the information display apparatus comprises an image perception module, a coordinate transformation module and a display module:
    the image perception module being configured to acquire real scene information collected by an image sensing device;
    the coordinate transformation module being configured to acquire a target display area determined based on a first spatial pose of a user, the first spatial pose being determined based on an eye tracking device;
    the coordinate transformation module being further configured to acquire a coordinate transformation rule for mapping the real scene information to the target display area;
    the display module being configured to generate driving guidance information based on the real scene information;
    the display module being further configured to display the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  10. A vehicle-mounted information display system based on augmented reality, characterized in that the system comprises:
    a scene sensing device, configured to collect real scene information of the environment outside a vehicle;
    an eye tracking device, configured to acquire a movement range of the user's line of sight;
    an image processing device, configured to acquire the user's first spatial pose based on the movement range of the line of sight, acquire a target display area determined based on the first spatial pose, acquire a coordinate transformation rule for mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, target position coordinates at which the driving guidance information is displayed in the target display area;
    a HUD display device, configured to present the driving guidance information at the target position coordinates of the target display area.
  11. A projection device, characterized by comprising one or more processors and a memory;
    one or more programs being stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to execute the method according to any one of claims 1-8.
  12. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, wherein the method according to any one of claims 1-8 is executed when the program code is run by a processor.
PCT/CN2021/082943 2020-03-31 2021-03-25 Augmented reality-based information display method, system and apparatus, and projection device WO2021197189A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010243557.3A CN113467600A (en) 2020-03-31 2020-03-31 Information display method, system and device based on augmented reality and projection equipment
CN202010243557.3 2020-03-31

Publications (1)

Publication Number Publication Date
WO2021197189A1 true WO2021197189A1 (en) 2021-10-07

Family

ID=77865577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082943 WO2021197189A1 (en) 2020-03-31 2021-03-25 Augmented reality-based information display method, system and apparatus, and projection device

Country Status (2)

Country Link
CN (1) CN113467600A (en)
WO (1) WO2021197189A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489332A (en) * 2022-01-07 2022-05-13 北京经纬恒润科技股份有限公司 Display method and system of AR-HUD output information
CN115061565A (en) * 2022-05-10 2022-09-16 华为技术有限公司 Method and device for adjusting display equipment
CN114911445A (en) * 2022-05-16 2022-08-16 歌尔股份有限公司 Display control method of virtual reality device, and storage medium
CN114915772B (en) * 2022-07-13 2022-11-01 沃飞长空科技(成都)有限公司 Method and system for enhancing visual field of aircraft, aircraft and storage medium
CN116301527B (en) * 2023-03-13 2023-11-21 北京力控元通科技有限公司 Display control method and device, electronic equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103182984A (en) * 2011-12-28 2013-07-03 财团法人车辆研究测试中心 Vehicle image display system and correction method thereof
WO2018167966A1 (en) * 2017-03-17 2018-09-20 マクセル株式会社 Ar display device and ar display method
CN108711298A (en) * 2018-05-20 2018-10-26 福州市极化律网络科技有限公司 A kind of mixed reality road display method
CN110304057A (en) * 2019-06-28 2019-10-08 威马智慧出行科技(上海)有限公司 Car crass early warning, air navigation aid, electronic equipment, system and automobile
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (en) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 Positioning processing method, device, equipment and medium based on magnetic sensor
CN114371779A (en) * 2021-12-31 2022-04-19 北京航空航天大学 Visual enhancement method for sight depth guidance
CN114371779B (en) * 2021-12-31 2024-02-20 北京航空航天大学 Visual enhancement method for sight depth guidance
CN114041741A (en) * 2022-01-13 2022-02-15 杭州堃博生物科技有限公司 Data processing unit, processing device, surgical system, surgical instrument, and medium
CN114041741B (en) * 2022-01-13 2022-04-22 杭州堃博生物科技有限公司 Data processing unit, processing device, surgical system, surgical instrument, and medium
CN114494594A (en) * 2022-01-18 2022-05-13 中国人民解放军63919部队 Astronaut operating equipment state identification method based on deep learning
CN114494594B (en) * 2022-01-18 2023-11-28 中国人民解放军63919部队 Deep learning-based astronaut operation equipment state identification method
CN114387198A (en) * 2022-03-24 2022-04-22 青岛市勘察测绘研究院 Fusion display method, device and medium for image and live-action model
CN115002440B (en) * 2022-05-09 2023-06-09 北京城市网邻信息技术有限公司 AR-based image acquisition method and device, electronic equipment and storage medium
CN115002440A (en) * 2022-05-09 2022-09-02 北京城市网邻信息技术有限公司 AR-based image acquisition method and device, electronic equipment and storage medium
CN115467387A (en) * 2022-05-24 2022-12-13 中联重科土方机械有限公司 Auxiliary control system and method for engineering machinery and engineering machinery
CN115202476A (en) * 2022-06-30 2022-10-18 泽景(西安)汽车电子有限责任公司 Display image adjusting method and device, electronic equipment and storage medium
CN115202476B (en) * 2022-06-30 2023-04-11 泽景(西安)汽车电子有限责任公司 Display image adjusting method and device, electronic equipment and storage medium
CN114820396B (en) * 2022-07-01 2022-09-13 泽景(西安)汽车电子有限责任公司 Image processing method, device, equipment and storage medium
CN114820396A (en) * 2022-07-01 2022-07-29 泽景(西安)汽车电子有限责任公司 Image processing method, device, equipment and storage medium
CN115218919A (en) * 2022-09-21 2022-10-21 泽景(西安)汽车电子有限责任公司 Optimization method and system of air track line and display
CN116152883A (en) * 2022-11-28 2023-05-23 润芯微科技(江苏)有限公司 Vehicle-mounted eyeball identification and front glass intelligent local display method and system
CN116152883B (en) * 2022-11-28 2023-08-11 润芯微科技(江苏)有限公司 Vehicle-mounted eyeball identification and front glass intelligent local display method and system
CN116126150A (en) * 2023-04-13 2023-05-16 北京千种幻影科技有限公司 Simulated driving system and method based on live-action interaction
CN116486051A (en) * 2023-04-13 2023-07-25 中国兵器装备集团自动化研究所有限公司 Multi-user display cooperation method, device, equipment and storage medium
CN116486051B (en) * 2023-04-13 2023-11-28 中国兵器装备集团自动化研究所有限公司 Multi-user display cooperation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113467600A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2021197189A1 (en) Augmented reality-based information display method, system and apparatus, and projection device
WO2021197190A1 (en) Information display method, system and apparatus based on augmented reality, and projection device
CN104883554B (en) The method and system of live video is shown by virtually having an X-rayed instrument cluster
US8994558B2 (en) Automotive augmented reality head-up display apparatus and method
US11338807B2 (en) Dynamic distance estimation output generation based on monocular video
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
US20070003162A1 (en) Image generation device, image generation method, and image generation program
CN107554425A (en) A kind of vehicle-mounted head-up display AR HUD of augmented reality
WO2022241638A1 (en) Projection method and apparatus, and vehicle and ar-hud
KR20150087619A (en) Apparatus and method for guiding lane change based on augmented reality
EP3906527B1 (en) Image bounding shape using 3d environment representation
JPWO2009144994A1 (en) VEHICLE IMAGE PROCESSING DEVICE AND VEHICLE IMAGE PROCESSING METHOD
US20210019942A1 (en) Gradual transitioning between two-dimensional and three-dimensional augmented reality images
US11256104B2 (en) Intelligent vehicle point of focus communication
US9836814B2 (en) Display control apparatus and method for stepwise deforming of presentation image radially by increasing display ratio
WO2017169273A1 (en) Information processing device, information processing method, and program
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
US11227366B2 (en) Heads up display (HUD) content control system and methodologies
US10573083B2 (en) Non-transitory computer-readable storage medium, computer-implemented method, and virtual reality system
US20220044032A1 (en) Dynamic adjustment of augmented reality image
US7599546B2 (en) Image information processing system, image information processing method, image information processing program, and automobile
CN115525152A (en) Image processing method, system, device, electronic equipment and storage medium
KR20180008345A (en) Device and method for producing contents, and computer program thereof
WO2017169272A1 (en) Information processing device, information processing method, and program
US20240042857A1 (en) Vehicle display system, vehicle display method, and computer-readable non-transitory storage medium storing vehicle display program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21780976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21780976

Country of ref document: EP

Kind code of ref document: A1