WO2021197189A1 - Augmented reality-based information display method, system and apparatus, and projection device

Augmented reality-based information display method, system and apparatus, and projection device

Info

Publication number
WO2021197189A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
display area
matrix
target display
real scene
Prior art date
Application number
PCT/CN2021/082943
Other languages
English (en)
Chinese (zh)
Inventor
余新
康瑞
邓岳慈
弓殷强
赵鹏
Original Assignee
深圳光峰科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司
Publication of WO2021197189A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G06F3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/003 - Navigation within 3D models or images
    • G06T19/006 - Mixed reality
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/61 - Scene description

Definitions

  • This application relates to the technical field of coordinate transformation, and more specifically, to an information display method, system, device, projection device, and storage medium based on augmented reality.
  • A HUD (head-up display) was originally developed for aircraft: it projects key flight data into the pilot's forward field of view, so that the pilot does not need to look down at the data in the dashboard and thereby miss environmental information in the field ahead of the flight.
  • The HUD was later introduced from aircraft into the automotive field.
  • However, the existing HUD display method lacks intelligence. Take car driving as an example: with the addition of more driving assistance information such as road conditions, navigation, and danger warnings, the user's body and head may move with the road conditions or with other personal activity, causing the user's spatial pose to change. In this case, the image displayed by the HUD may appear offset from the real scene as seen by the user.
  • this application proposes an augmented reality-based information display method, system, device, projection equipment, and storage medium to improve the above-mentioned problems.
  • an embodiment of the present application provides an information display method based on augmented reality.
  • The method includes: acquiring real scene information collected by a scene sensing device; acquiring a target display area determined based on a user's first spatial pose, the first spatial pose being determined based on a human eye tracking device; acquiring a coordinate transformation rule corresponding to mapping the real scene information to the target display area; generating driving guidance information based on the real scene information; and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides an augmented reality information display device.
  • The information display device includes an image perception module, a coordinate transformation module, and a display module. The image perception module is used to obtain the real scene information collected by the image perception device; the coordinate transformation module is used to obtain the target display area determined based on the user's first spatial pose, the first spatial pose being determined based on the eye tracking device, and to obtain the coordinate transformation rule corresponding to mapping the real scene information to the target display area; the display module is used to generate driving guidance information based on the real scene information and to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides an augmented reality vehicle-mounted information display system.
  • The system includes: a scene sensing device for collecting real scene information of the vehicle's external environment; a human eye tracking device for acquiring the movement range of the user's line of sight; an image processing device for acquiring the user's first spatial pose based on the movement range of the line of sight, acquiring a target display area determined based on the first spatial pose, acquiring the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generating driving guidance information based on the real scene information, and generating, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area; and a HUD display device for displaying the driving guidance information at the target position coordinates of the target display area.
  • An embodiment of the present application provides a projection device, including one or more processors and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium having program code stored in the computer-readable storage medium, wherein the method described in the first aspect is executed when the program code is running.
  • The present application provides an information display method, system, device, projection device, and storage medium based on augmented reality. The method acquires real scene information collected by an image sensing device, acquires a target display area determined based on the user's first spatial pose (the first spatial pose being determined based on the human eye tracking device), acquires the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and then displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose obtained by the eye tracking device.
  • Because the eye tracking device tracks the user's first spatial pose in real time, the method can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving. This improves the safety and comfort of driving and thereby enhances the user experience.
  • Fig. 1 shows a method flowchart of an augmented reality-based information display method proposed by an embodiment of the present application.
  • Fig. 2 shows an example of the structure of an augmented reality-based vehicle information display system based on the augmented reality-based information display method provided by this embodiment.
  • FIG. 3 shows an example diagram of the HUD virtual image plane of the HUD display device in this embodiment.
  • Figure 4 shows a schematic diagram of the relationship between the driver's eyes and the HUD virtual image plane in this embodiment.
  • FIG. 5 shows an example diagram of displaying driving guidance information through the augmented reality-based information display system proposed in this application in a dangerous scenario provided by this embodiment.
  • Fig. 6 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 7 shows an example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • FIG. 8 shows another example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • FIG. 9 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 10 shows an example diagram of adjusting the target display area based on the change vector provided by this embodiment.
  • FIG. 11 shows an example diagram of the processing procedure of the augmented reality-based information display method proposed in this embodiment.
  • FIG. 12 shows a structural block diagram of an information display device based on augmented reality proposed by an embodiment of the present application.
  • Fig. 13 shows a structural block diagram of a projection device of the present application for executing an augmented reality-based information display method according to an embodiment of the present application.
  • Fig. 14 shows a storage unit for storing or carrying program code for implementing an augmented reality-based information display method according to an embodiment of the present application.
  • A HUD (head-up display) was first applied to aircraft so that the pilot does not need to look down at the data in the dashboard, and can therefore still observe the environmental information in the field ahead of the flight while viewing flight data.
  • Later, the HUD was introduced from aircraft into the automotive field.
  • HUD is mainly divided into two types: rear-mounted (also known as Combine HUD, C-type HUD) and front-mounted (also known as Windshield HUD, W-type HUD).
  • the front-mounted HUD uses the windshield as a combiner to project the content required by the driver to the front windshield through the optical system.
  • In terms of driving safety and driving comfort, some existing HUD devices only display virtual information in front of the driver's line of sight, without integrating it with the real environment. With the addition of more driving assistance information such as road conditions, navigation, and hazard warnings, the mismatch between this virtual content and the real scene will distract the driver's attention.
  • Augmented Reality is a technology that ingeniously integrates virtual information with the real world.
  • AR-HUD can solve the separation of traditional HUD virtual information and actual scenes through the combination of AR technology and front-mounted HUD.
  • However, the existing HUD display method lacks intelligence. Take car driving as an example: with the addition of more driving assistance information such as road conditions, navigation, and danger warnings, the user's body and head may move with the road conditions or with other personal activity, causing the user's spatial pose to change. In this case, the image displayed by the HUD may appear offset from the real scene as seen by the user.
  • Therefore, the inventor proposes the method provided in this application: real scene information is collected by the image sensing device, a target display area determined based on the user's first spatial pose is obtained, where the first spatial pose is determined based on the human eye tracking device, the coordinate transformation rule corresponding to mapping the real scene information to the target display area is obtained, driving guidance information is generated based on the real scene information, and the driving guidance information is then displayed at the corresponding position of the target display area based on the coordinate transformation rule.
  • In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose obtained by the eye tracking device. Because the eye tracking device tracks the user's first spatial pose in real time, the method can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, which improves the safety and comfort of driving and thereby enhances the user experience.
  • FIG. 1 is a method flowchart of an augmented reality-based information display method provided by an embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S110 Acquire real scene information collected by the scene sensing device.
  • the real scene information in the embodiment of the present application may be real scene information corresponding to multiple scenes.
  • multiple scenes may include, but are not limited to, driving scenes, travel scenes, and outdoor activity scenes.
  • In a driving scene, the real scene information can include lanes, signs, dangerous pedestrians (such as vulnerable groups like blind people, elderly people walking alone, pregnant women, or children), vehicles, etc.; in a tourism scene, the real scene information can include tourist destination signs, tourist routes, tourist attraction information, attraction weather information, etc.; in an outdoor activity scene, the real scene information can include current location information and nearby convenience store information.
  • the scene sensing device may include sensing devices such as lasers and infrared radars, and may also include image acquisition devices such as cameras (including monocular cameras, binocular cameras, RGB-D cameras, etc.).
  • the real scene information corresponding to the current scene can be acquired through the scene sensing device.
  • the scene sensing device is a camera.
  • For example, the camera can be installed on the car (optionally, the installation position can be adjusted according to the style and structure of the car or actual needs), so that the camera can obtain real scene information related to driving.
  • For the specific working principle of the scene sensing device (including the laser, infrared radar, or camera), reference may be made to related technologies, which will not be repeated here.
  • Step S120 Obtain a target display area determined based on a first spatial pose of the user, where the first spatial pose is determined based on the eye tracking device.
  • As a way, the first spatial pose may be a human eye pose determined based on the human eye tracking device. It is understandable that if the user's line-of-sight range changes, the corresponding human eye pose can also change; if the user's body rotates but the eye line of sight does not change, the human eye pose can remain unchanged. In the latter case, because the user's eyes can still see the real scene information that needs attention in the driving scene (for example, road conditions, dangerous pedestrians, vehicles, etc.), safe driving is still possible. Therefore, as a way, the first spatial pose may be determined based on the user's current eye pose, and the target display area determined based on the user's current eye pose may be obtained.
  • the eye tracking device may be a device with a camera function, such as a camera, and the details may not be limited.
  • Optionally, the user's first spatial pose may be the sitting posture of the user in a driving state, or the sitting posture after adjusting the seat (for example, when the current user adjusts the seat for the first time). It is understandable that different sitting postures correspond to different spatial poses. As a way, the sitting posture of the user after adjusting the seat can be used as the user's first spatial pose.
  • If the change range of the user's eye pose is not less than a preset threshold, the user's first spatial pose may be re-determined according to the user's current eye pose.
  • the target display area is an area for displaying virtual image information corresponding to real scene information.
  • the target display area may be an area on the windshield of a car for displaying projected virtual image information corresponding to real scene information.
  • the target display areas corresponding to different spatial poses of the same user may be different, and the target display areas corresponding to the spatial poses of different users may be different.
  • As a way, the target display area determined based on the user's first spatial pose can be acquired, so that the virtual image information corresponding to the real scene information can be displayed in that target display area, reducing the foregoing display difference and thereby improving the accuracy of the display position of the virtual image information corresponding to the real scene information.
  • Step S130 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • the coordinate transformation rule can be used to map the coordinates of the real scene information to the corresponding coordinates of the target display area.
  • As a way, the coordinate transformation rule corresponding to mapping the real scene information to the target display area can be acquired, so that the driving guidance information corresponding to the real scene information can subsequently be displayed accurately at the corresponding position of the target display area based on the coordinate transformation rule.
  • the coordinate transformation rule may include a first transformation matrix and a second transformation matrix.
  • the first transformation matrix can be used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device
  • The second transformation matrix can be used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.
  • the reference world coordinates can be understood as the relative position coordinates of the real scene information in the established coordinate system corresponding to the scene sensing device.
  • the reference world coordinates in this embodiment can be understood as the world coordinates that are relatively stationary with the car.
  • View coordinates can be understood as the relative position coordinates of the reference world coordinates in the coordinate system corresponding to the target display area.
  • As a way, the first transformation matrix and the second transformation matrix can be obtained, and the product of the parameters represented by the first transformation matrix and the second transformation matrix can be used as the coordinate transformation rule for mapping the real scene information to the coordinates corresponding to the target display area, as sketched below.
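  • As a minimal sketch (the matrix values here are placeholders, not values from the application), the rule is simply the product of the two matrices applied to a homogeneous scene point:

```python
import numpy as np

first_transform = np.eye(4)     # placeholder for the first transformation matrix (M)
second_transform = np.eye(4)    # placeholder for the second transformation matrix (C)
coordinate_rule = second_transform @ first_transform

scene_point = np.array([2.0, 0.5, 10.0, 1.0])   # O_w in homogeneous coordinates
view_point = coordinate_rule @ scene_point       # coordinates in the target display area
```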
  • the first transformation matrix may include a first rotation matrix and a first translation vector.
  • the first rotation matrix may be used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector may be used to translate the coordinates.
  • the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device may be determined based on the first rotation matrix and the first translation vector.
  • the second transformation matrix may include a viewing angle offset matrix, a transpose matrix, and a projection matrix.
  • the projection matrix can be used to determine the mapping range of the real scene information to the target display area
  • the viewing angle offset matrix can be used to determine the degree of deviation of the user's viewing angle detected by the eye tracking device
  • The transposition matrix can be used to determine the relative position, within the mapping range, at which the driving guidance information is displayed.
  • the reference world coordinates may be converted into view coordinates matching the offset of the user's viewing angle in the target display area based on the viewing angle offset matrix, the transposed matrix, and the projection matrix.
  • As a way, the viewing angle offset matrix may include first view coordinates; the transposition matrix represents the transpose of an orientation matrix and may include the spatial unit vectors of the target display area; and the projection matrix may include a field-of-view parameter, which may include a distance parameter and a scale parameter associated with the user's viewing angle.
  • FIG. 2 is a structural example diagram of a vehicle-mounted information display system based on augmented reality that is applicable to the method for displaying information based on augmented reality provided by this embodiment.
  • the vehicle-mounted information display system based on augmented reality may include a scene perception device, an image processing device, a HUD display device, and a human eye tracking device.
  • the scene sensing device can be used to collect real scene information of the external environment of the vehicle.
  • the eye tracking device can be used to obtain the user's line of sight movement range.
  • The image processing device can be used to obtain the user's first spatial pose based on the movement range of the line of sight, obtain the target display area determined based on the first spatial pose, obtain the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area.
  • the HUD display device can be used to display driving guidance information to the target position coordinates of the target display area.
  • the image processing device may be the processor chip of the vehicle system, or the processing chip of an independent vehicle computer system, or the processor chip integrated in the scene sensing device (such as lidar), etc., which is not limited herein.
  • the vehicle-mounted information display system may include a car, a driver, a human eye tracking device, a scene perception device, an image processing device, and a HUD display device with AR-HUD function.
  • the scene sensing device can be installed on the car and can obtain driving-related scene information (also can be understood as the aforementioned real scene information), the driver sits in the driving position of the car, and the eye tracking device is installed in the car.
  • The eye tracking device can track the driver's eyes within a reasonable position of their basic movement range. The reasonable position here can be understood as a position where the driver's line of sight matches the driving demand during driving, such as turning to the left, turning to the right, or turning backward; the specific direction and angle of turning are not limited.
  • the HUD display device is installed on the front windshield of the car, and the position of the HUD display device can be adjusted so that the driver's eyes can see the entire virtual image corresponding to the driving scene information.
  • The image processing device can adapt to changes in the spatial pose of the driver's eyes; in this way, the image processing device can convert, in real time, the real scene information collected by the scene sensing device into an image fused with the real scene, which is sent to the HUD display device for display.
  • the scene sensing device can obtain the position coordinates of the real scene information in the world coordinate system (O-xyz as shown in Figure 2) based on GPS positioning and other location acquisition methods, and then can be based on the car Select the world coordinate origin and coordinate axis direction for the traveling direction, and determine the reference world coordinate system relative to the car according to the world coordinate origin and the coordinate axis direction.
  • In this way, the reference world coordinates corresponding to the coordinates of the real scene information can be obtained in the reference world coordinate system.
  • the method of selecting the origin of the world coordinate and the direction of the coordinate axis can refer to the related technology, which will not be repeated here.
  • the reference world coordinate system can be understood as a coordinate system obtained after rotating and/or translating the world coordinate system.
  • the spatial pose of the scene sensing device in the reference world coordinate system can be obtained.
  • The perception module transformation matrix M (that is, the aforementioned first transformation matrix) can then be calculated based on that spatial pose in the reference world coordinate system.
  • Optionally, the transformation from the world coordinate system to the reference world coordinate system involves the first rotation matrix R_M (which can also be understood as the total rotation matrix of the scene sensing device) and the first translation vector T_M. The relationship between the perception module transformation matrix M, the first rotation matrix R_M, and the first translation vector T_M may be expressed as shown below.
  • R_Mx, R_My, and R_Mz are the rotation matrices of the perception module transformation matrix M about the x-axis, y-axis, and z-axis of the world coordinate system, respectively, with corresponding Euler rotation angles, and (T_Mx, T_My, T_Mz) are the coordinates of the real scene information in the reference world coordinate system.
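  • A plausible reconstruction of the formula, assuming the standard homogeneous rigid-body convention and writing the Euler angles about the three axes as α_M, β_M, γ_M (placeholder symbols), is:

```latex
M =
\begin{bmatrix} R_M & T_M \\ 0 & 1 \end{bmatrix},
\qquad
R_M = R_{Mz}(\gamma_M)\, R_{My}(\beta_M)\, R_{Mx}(\alpha_M),
\qquad
T_M = \begin{bmatrix} T_{Mx} \\ T_{My} \\ T_{Mz} \end{bmatrix},
\qquad
\text{e.g.}\quad
R_{Mx}(\alpha_M) =
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos\alpha_M & -\sin\alpha_M \\
0 & \sin\alpha_M & \cos\alpha_M
\end{bmatrix}
```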
  • Optionally, the spatial pose of the virtual image plane displayed by the HUD display device and the spatial pose of the driver's eyes can be determined; in this way, the aforementioned second transformation matrix can be obtained based on the spatial pose of the displayed virtual image and the spatial pose of the driver's eyes (that is, the driver's eye pose).
  • The second transformation matrix C (which can also be understood as a virtual image perspective matrix) may include a viewing angle offset matrix T, a transposition matrix N_T, and a projection matrix P.
  • The viewing angle offset matrix T may be determined by the driver's eye pose.
  • The transposition matrix N_T may be determined by the spatial position and orientation of the virtual image plane displayed by the HUD.
  • The projection matrix P may be determined jointly by the driver's eye pose and the spatial position and orientation of the virtual image plane displayed by the HUD.
  • The relationship between the second transformation matrix C and these three matrices can be expressed as shown below.
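  • A plausible reading of this relationship, assuming C is formed by applying the offset, the orientation, and then the projection, is:

```latex
C = P \, N_T \, T
```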
  • the viewing angle offset matrix T may include the first view coordinates, and the first view coordinates are the position coordinates of the driver's eye pose in the reference world coordinate system.
  • Optionally, (P_ex, P_ey, P_ez) can be used to represent the first view coordinates.
  • the viewing angle offset matrix T can be expressed as:
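  • Assuming the standard look-at convention, in which the eye position is subtracted from the reference world coordinates, the viewing angle offset matrix would take the form:

```latex
T =
\begin{bmatrix}
1 & 0 & 0 & -P_{ex} \\
0 & 1 & 0 & -P_{ey} \\
0 & 0 & 1 & -P_{ez} \\
0 & 0 & 0 & 1
\end{bmatrix}
```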
  • The transposition matrix N_T may include the spatial unit vectors of the target display area.
  • In this embodiment, the transposition matrix N_T may be expressed in terms of V_r, V_u, and V_n, the spatial unit vectors of the virtual image plane corresponding to the HUD display module, as shown below.
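  • Assuming the usual view-orientation form, in which the rows are the unit vectors of the HUD virtual image plane, N_T would be:

```latex
N_T =
\begin{bmatrix}
V_{rx} & V_{ry} & V_{rz} & 0 \\
V_{ux} & V_{uy} & V_{uz} & 0 \\
V_{nx} & V_{ny} & V_{nz} & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
```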
  • FIG. 3 shows an example diagram of the HUD virtual image plane of the HUD display device in this embodiment.
  • V_r is the right vector, V_u is the up vector, and V_n is the normal vector of the HUD virtual image plane.
  • Optionally, the projection matrix includes a field-of-view parameter, and the field-of-view parameter may include a distance parameter and a scale parameter associated with the user's viewing angle.
  • The parameters n and f are the near and far distances of the human eye's field of view (that is, the distance parameters above), and the parameters l, r, b, and t represent the left, right, bottom, and top extents (that is, the scale parameters above) determined by the size and pose relationship between the human eye and the HUD virtual image plane.
  • The relational expression satisfied by the projection matrix P can be written as shown below.
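  • Assuming the standard off-axis perspective frustum defined by the bounds l, r, b, t at the near distance n, the projection matrix would take the form below; l, r, b, and t would then follow from the offsets of the eye position relative to the edges of the HUD virtual image plane, scaled to the near distance.

```latex
P =
\begin{bmatrix}
\dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\
0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{bmatrix}
```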
  • In this embodiment, the product of the parameters represented by the first transformation matrix and the second transformation matrix can be used as the coordinate transformation rule for mapping the real scene information to the target display area; that is, the coordinate transformation rule can be expressed as shown below.
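  • Combining the matrices above, a plausible form of the overall rule, mapping a scene point O_w (in homogeneous coordinates) to view coordinates in the target display area, is:

```latex
O_{view} = C \, M \, O_w = P \, N_T \, T \, M \, O_w
```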
  • the eye tracking device can capture the spatial pose of the driver's eyes in real time, and calculate the difference between the current moment and the previous moment in the driver's eye pose.
  • the aforementioned second transformation matrix C may be recalculated and updated.
  • The driver's eye pose difference can be calculated in a variety of ways; for example, it can be the mean square error (MSE) or the mean absolute error (MAE). It is understandable that how the driver's eye pose difference is calculated can also be customized according to actual needs.
  • the designated thresholds corresponding to different calculation methods may be different.
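  • As a minimal sketch (the pose representation and the threshold value are illustrative, not taken from the application), the pose-difference check could look like:

```python
import numpy as np

def eye_pose_difference(current: np.ndarray, previous: np.ndarray, method: str = "mse") -> float:
    """Difference between two eye-pose vectors, e.g. [x, y, z, yaw, pitch, roll]."""
    err = current - previous
    if method == "mse":            # mean square error
        return float(np.mean(err ** 2))
    if method == "mae":            # mean absolute error
        return float(np.mean(np.abs(err)))
    raise ValueError(f"unknown method: {method}")

# Example: recompute the second transformation matrix C only when the pose
# difference between the current and previous moment exceeds the threshold.
eyes_prev = np.array([0.00, 1.20, 0.40, 0.0, 0.0, 0.0])
eyes_now  = np.array([0.05, 1.22, 0.41, 0.1, 0.0, 0.0])
THRESHOLD = 0.01                   # illustrative value only
needs_update = eye_pose_difference(eyes_now, eyes_prev, "mse") > THRESHOLD
```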
  • Step S140 Generate driving guidance information based on the real scene information.
  • the driving guide information in this embodiment may include navigation instruction information corresponding to road conditions, pedestrian warning information, and tourist attractions prompt information, etc.
  • the type and specific content of the driving guide information may not be limited.
  • FIG. 5 there is shown an example diagram of displaying driving guidance information through the augmented reality-based information display system proposed by this application in a dangerous scene provided by this embodiment.
  • the real scene information collected by the scene perception device is converted into a HUD virtual image and displayed on the HUD display device.
  • the specific content displayed is shown in the right image of Figure 5.
  • The scene seen by the driver's eyes may include lane guidance information (that is, the "navigation instructions in the virtual image" shown in FIG. 5) and pedestrian warning information (that is, the "pedestrian prompt box in the virtual image" shown in FIG. 5).
  • the driving guide information can be generated based on the real scene information.
  • The way of prompting the driving guidance information in this embodiment is not limited; for example, it may be text, icons (such as arrows), pictures, animations, voice, or video, and driving guidance information for different prompt modes can be generated in a corresponding manner.
  • the driving guide information in this embodiment may include at least one prompt method.
  • For example, the navigation indicator icon corresponding to the road may be displayed in combination with a voice prompt to the user, so that the user can be given more accurate driving guidance reminders, ensuring driving safety and enhancing the user experience.
  • Step S150 Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the difference between the position where the HUD displays the real-scene information and the actual position of the real-scene information can be avoided, and the accuracy and reliability of the display can be improved.
  • The present application provides an information display method based on augmented reality, which acquires real scene information collected by an image sensing device, acquires a target display area determined based on the user's first spatial pose, where the first spatial pose is determined based on the human eye tracking device, obtains the coordinate transformation rule corresponding to mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and then displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • In this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose obtained by the eye tracking device.
  • Because the eye tracking device tracks the user's first spatial pose in real time, the method can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, which improves the safety and comfort of driving and thereby enhances the user experience.
  • FIG. 6 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S210 Acquire real scene information collected by the scene sensing device.
  • Step S220 Obtain a target display area determined based on the user's first spatial pose.
  • the first spatial pose is determined based on the eye tracking device, and the specific description can refer to the description in the foregoing embodiment, which will not be repeated here.
  • Step S230 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S240 Generate driving guidance information based on the real scene information.
  • Step S250 Input the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain the coordinate transformation matrix to be processed.
  • the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device may be input into the first transformation matrix, and the result obtained by the output may be used as the coordinate transformation matrix to be processed.
  • For example, suppose the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device are O_w(x, y, z).
  • After inputting the position coordinates O_w(x, y, z) into the aforementioned first transformation matrix, the result O' is obtained as shown below, where O' can be used as the coordinate transformation matrix to be processed and O_w is expressed in homogeneous coordinates.
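  • In homogeneous form this step would read, under the same assumptions as above:

```latex
O' = M \, O_w = M \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
```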
  • Step S260 Perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the coordinate transformation matrix to be processed may be transformed according to the aforementioned second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the specific implementation process of coordinate transformation can refer to related technologies, which will not be repeated here.
  • Here, width and height are the width and height of the HUD image, both in pixels, and O_h(u, v) can be used as the relative position coordinates of the real scene information in the target display area; a sketch of the overall mapping is given below.
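  • A minimal end-to-end sketch of steps S250 to S270 (the NDC-to-pixel convention in the last two lines is an assumption, since the application does not spell it out):

```python
import numpy as np

def scene_point_to_hud_pixel(point_xyz, M, C, width, height):
    """Map a scene point O_w(x, y, z) to HUD image coordinates O_h(u, v).

    M: 4x4 first transformation matrix (perception module transform).
    C: 4x4 second transformation matrix (P @ N_T @ T).
    """
    o_w = np.array([*point_xyz, 1.0])        # homogeneous coordinates
    o_prime = M @ o_w                        # step S250: coordinate matrix to be processed
    clip = C @ o_prime                       # step S260: view/clip coordinates
    ndc = clip[:3] / clip[3]                 # perspective division
    u = (ndc[0] + 1.0) * 0.5 * width         # assumed NDC -> pixel mapping
    v = (1.0 - ndc[1]) * 0.5 * height
    return u, v

# Example call with identity matrices as placeholders for M and C.
u, v = scene_point_to_hud_pixel((2.0, 0.5, 10.0), np.eye(4), np.eye(4), 1280, 480)
```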
  • Step S270 Display the driving guidance information at the position represented by the relative position coordinates.
  • the driving guide information corresponding to the real scene information may be displayed at the position represented by the relative position coordinates.
  • FIG. 7 shows an example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • a virtual and real scene fusion system (which can be understood as an information display system based on augmented reality in this application) can be built in related modeling software (such as Unity3D, etc.).
  • The virtual-real scene fusion system may include a car, camera 1 (used to simulate the driver's eyes), script program 1 simulating the human eye tracking device, a HUD imaging module simulated jointly by camera 2 and a plane (that is, the HUD virtual image plane), the spatial scene information (which may be spatial scene information in different scenes) acquired by the scene perception module (implemented by script program 2), and the information transformation, image rendering, and drawing of the image processing device completed by script program 3.
  • the center position of the bottom of the car can be selected as the coordinate origin
  • the forward direction of the car is the positive direction of the Z axis
  • The right-hand coordinate system is adopted. It is assumed that the driver is sitting in the driving position, the driver's eyes simulated by camera 1 are facing forward, and the pose of the HUD virtual camera simulated by camera 2 is the same as that of the driver's eyes.
  • the scene sensing device can obtain the checkerboard space corner information on the car
  • the image processing device can draw the corner point to the HUD image space (the lower left corner as shown in Figure 7 is the corner point image drawn by the image processing device to the HUD image space), and then send the drawn image to the HUD display device for processing Display (as shown in Figure 7 the corner image is sent to the HUD for virtual image display).
  • the driver's perspective scene as shown in FIG. 7 can be obtained.
  • From the virtual-real fusion result shown in FIG. 7, it can be seen that the virtual and real scenes can be accurately fused.
  • Optionally, the eye tracking device in this embodiment may be configured with a 3D sensor for perceiving the spatial pose of the human eye and an eye tracking algorithm adapted to the output data of the sensor.
  • the 3D sensor for human eye spatial pose perception may be a binocular camera, or an RGB-D camera, etc.
  • the human eye tracking algorithm may be a computer vision algorithm, or a deep learning algorithm, etc., and the details are not limited.
  • the scene sensing device can obtain real-life information in driving scenes such as lanes, navigation instructions, (dangerous) pedestrians, etc., through a camera combined with GPS, IMU sensors, and perception processing algorithms.
  • The image processing device can adjust the aforementioned first transformation matrix and second transformation matrix according to the changed eye pose, and then draw a virtual-real fusion image that matches the changed eye pose.
  • Optionally, the eye tracking device can be used to obtain the spatial pose of the driver's eyes in real time and determine whether the change between the human eye pose at the current moment and that at the previous moment exceeds a specified threshold, where the value of the specified threshold can be set according to the actual situation.
  • If it exceeds the specified threshold, the driver's spatial pose can be re-acquired based on the changed eye pose, the target display area is then re-determined based on this spatial pose, and the virtual-real fusion scene image is displayed at the corresponding position of the re-determined target display area.
  • Figure 7 shows an example of the display effect.
  • The specific value of the time is not limited, and in some possible implementations the judgment interval may also be changed to a time period or cycle.
  • In this way, the driver can accurately see, through the HUD virtual image plane, the navigation instructions marked on the driving lane, the warning boxes surrounding pedestrians, and other driving guidance information, ensuring safe driving.
  • Optionally, multiple HUD virtual image planes can be configured if necessary, so that when the driver's line of sight shifts (or rotates) in any direction, the driving guidance information that needs attention in the current driving scene can still be seen, which improves the flexibility and diversity of information display and enhances the user experience.
  • the present application provides an information display method based on augmented reality.
  • In this method, the driving guidance information generated based on the real scene information is transformed by the first transformation matrix and the second transformation matrix respectively, and is then displayed at the corresponding position of the target display area determined based on the user's first spatial pose obtained by the human eye tracking device. The eye tracking device tracks the user's first spatial pose in real time, so the method can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information. The user can therefore accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy, which reduces the frequent changes of line of sight and the fatigue caused by checking road conditions and improves the safety and comfort of driving.
  • FIG. 9 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S310 Acquire the real scene information collected by the scene sensing device.
  • Step S320 Obtain a target display area determined based on a first spatial pose of the user, where the first spatial pose is determined based on the eye tracking device.
  • Step S330 Detect the change of the first spatial pose by acquiring the eye posture change parameter of the eye tracking device.
  • During driving, the user's posture may change according to driving scene information such as the road conditions of the traveling vehicle.
  • For example, the user's head may rotate toward a certain direction, in which case the user's line of sight will change (for example, the line of sight shifts).
  • If the original HUD display method is still used to display the driving guidance information corresponding to the real scene information, errors in the display position may occur and cause safety hazards.
  • the eye tracking device in this embodiment can detect the user's eye posture in real time. If the human eye pose changes, then the parameters corresponding to the human eye pose change can be obtained.
  • In this way, a change in the user's first spatial pose can be detected through the eye pose change parameter, so that if a change in the first spatial pose is detected, the target display area is re-determined based on the changed spatial pose. This ensures the accuracy of the display position of the driving guidance information corresponding to the real scene information without requiring the user to repeatedly confirm its accuracy, improving the flexibility of displaying the driving guidance information and thereby enhancing the user experience.
  • the human eye posture change parameter may include the direction, angle, or range of the human eye's line of sight.
  • As a way, a relevant face recognition algorithm can be used to determine, from the eye pose images collected by the eye tracking device, whether the eye pose has changed; if there is a change, the corresponding eye pose change parameter can be obtained.
  • the change vector corresponding to the first spatial pose can be obtained according to the human eye pose change parameter.
  • the specific calculation process can be implemented with reference to related technologies, which will not be repeated here.
  • Step S340 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S350 Generate driving guidance information based on the real scene information.
  • Step S361 If the amount of change of the human eye posture change parameter is greater than a preset threshold, update the coordinate transformation rule according to the human eye posture change parameter.
  • a preset threshold corresponding to the change vector may be configured in advance, and the preset threshold may be used to distinguish whether subsequent adjustments to the target display area are required.
  • the target display area can be adjusted based on the change vector to obtain the newly determined target display area.
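  • A control-flow sketch of steps S361/S371 (the magnitude measure and the helper callbacks are assumptions, not the application's API):

```python
import numpy as np

def handle_eye_pose_change(change_vector, preset_threshold, coord_rule, display_area,
                           redetermine_area, update_rule):
    """Re-determine the target display area and update the coordinate
    transformation rule only when the change vector exceeds the threshold."""
    if np.linalg.norm(change_vector) > preset_threshold:   # assumed magnitude measure
        display_area = redetermine_area(change_vector)     # caller-supplied callback
        coord_rule = update_rule(display_area)             # caller-supplied callback
    return coord_rule, display_area
```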
  • FIG. 10 shows an example diagram of adjusting the target display area based on the change vector provided in this embodiment.
  • As shown in FIG. 10, the user's first spatial pose has changed from 22 to 22', where 22' is the current first spatial pose, determined based on the user's current eye pose.
  • If the change vector of the eye pose corresponding to the current first spatial pose 22' is greater than the preset threshold, the position of the target display area on the screen 21 of the front windshield of the car can be changed from 23 to 23', where 23' is the re-determined target display area.
  • The coordinate transformation rule can be updated according to the eye pose change parameters to obtain a second coordinate transformation rule corresponding to mapping the real scene information to the newly determined target display area, where the specific determination process of the second coordinate transformation rule can refer to the determination principle and process of the aforementioned coordinate transformation rule, which will not be repeated here.
  • Step S362 Display the driving guidance information at the corresponding position of the target display area based on the updated coordinate transformation rule.
  • the driving guidance information may be displayed at the corresponding position of the newly determined target display area based on the second coordinate change rule.
  • the target display area in this embodiment can be adjusted according to the change of the user's first spatial pose. For example, if it is detected that the user has a gesture such as bowing, the target display area can be displayed in the corresponding position on the central control display; if it is detected that the user looks at the phone more frequently during driving, the target display area can be displayed to On the display screen of the mobile phone; or other screens that can be used as the target display area in the driving scene, for example, the windows on the left and right sides of the driving position.
  • Step S371 If the amount of change of the human eye posture change parameter is not greater than the preset threshold, display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the target display area determined based on the user's first spatial pose can be obtained.
  • The user's first spatial pose can be the sitting posture of the user; for details, please refer to the description in the foregoing embodiment.
  • step S330 may be implemented after step S340.
  • Referring to FIG. 11, an example diagram of the processing procedure of the augmented reality-based information display method proposed in this embodiment is shown.
  • the process pointed by the hollow arrow may be the initial process
  • the process pointed by the solid arrow may be the real-time continuous process.
  • The scene perception module can acquire real scene information in real time and send it, as the information to be displayed, to the image processing module (which can be understood as the aforementioned image processing device). The image processing module performs coordinate transformation processing on the coordinates corresponding to the real scene information, draws the final image, and projects the image onto the HUD display screen (that is, the aforementioned target display area) for display, so as to improve the accuracy of the display position of the driving guidance information, reduce user operations, and improve the user experience.
  • Step S364 Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • The present application provides an information display method based on augmented reality, which detects a change of the first spatial pose by acquiring the eye pose change parameter from the eye tracking device; when the change of the change vector corresponding to the eye pose change parameter is greater than the preset threshold, the target display area is re-adjusted based on the change vector, and the driving guidance information is displayed at the corresponding position of the target display area determined based on the user's first spatial pose obtained by the eye tracking device. In this way, the method can adapt to changes in the eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving.
  • an information display device 400 based on augmented reality provided by an embodiment of the present application can be run on a projection device.
  • the device 400 includes:
  • the image sensing module 410 is used to obtain real scene information collected by the image sensing device.
  • the coordinate transformation module 420 is configured to obtain the target display area determined based on the user's first spatial pose, which is determined based on the human eye tracking device.
  • the device 400 may further include a parameter change detection module, which is used to detect the change of the first spatial pose by acquiring the eye posture change parameter of the eye tracking device.
  • Optionally, the coordinate transformation module 420 may be specifically used to: obtain the eye pose change parameters collected by the eye tracking device; obtain the change vector corresponding to the first spatial pose based on the eye pose change parameters; and, if the amount of change of the change vector is greater than a preset threshold, adjust the target display area based on the change vector to obtain a re-determined target display area.
  • the step of obtaining the target display area determined based on the user's first spatial pose is performed.
  • the coordinate transformation module 420 may also be used to obtain a coordinate transformation rule corresponding to the real scene information mapped to the target display area.
  • the coordinate transformation rule in this embodiment may include a first transformation matrix and a second transformation matrix.
  • the first transformation matrix is used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device.
  • the second transformation matrix is used to convert the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle.
  • the first transformation matrix may include a first rotation matrix and a first translation vector, where the first rotation matrix is used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector is used To translate the coordinates, the first transformation matrix determines the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device based on the first rotation matrix and the first translation vector.
  • the second transformation matrix may include a viewing angle offset matrix, a transposition matrix, and a projection matrix.
  • The projection matrix is used to determine the mapping range for mapping the real scene information to the target display area; the viewing angle offset matrix is used to determine the degree of deviation of the user's viewing angle detected by the eye tracking device; the transposition matrix is used to determine the relative position within the mapping range where the driving guidance information is displayed; and the second transformation matrix converts the reference world coordinates into view coordinates in the target display area that match the offset of the user's viewing angle based on the viewing angle offset matrix, the transposition matrix, and the projection matrix.
  • the viewing angle offset matrix may include first view coordinates; the transposition matrix represents a transposition with respect to the first transformation matrix and includes a spatial unit vector located in the target display area; the projection matrix includes a field angle parameter, which includes a distance parameter and a scale parameter associated with the user's viewing angle.
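The text does not give the concrete form of these three factors. The sketch below uses conventional constructions (a translation by the negated first view coordinates, a row-basis matrix of the display area's unit vectors, and a standard perspective projection) purely to illustrate their roles; the specific formulas are assumptions, not the patented construction.

```python
# Assumed constructions for the three factors of the second transformation matrix.
import numpy as np


def viewing_angle_offset_matrix(first_view_coords):
    """Offset by the first view coordinates (the tracked eye position)."""
    V = np.eye(4)
    V[:3, 3] = -np.asarray(first_view_coords)
    return V


def transposition_matrix(right, up, forward):
    """Rows are spatial unit vectors located in the target display area."""
    B = np.eye(4)
    B[:3, :3] = np.vstack([right, up, forward])
    return B


def projection_matrix(fov_deg, aspect, near=0.1, far=100.0):
    """Perspective projection; fov and aspect stand in for the distance and scale parameters."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    P = np.zeros((4, 4))
    P[0, 0] = f / aspect
    P[1, 1] = f
    P[2, 2] = (far + near) / (near - far)
    P[2, 3] = 2.0 * far * near / (near - far)
    P[3, 2] = -1.0
    return P
```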
  • the display module 430 is configured to generate driving guidance information based on the real scene information.
  • the display module 430 may also be used to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the display module 430 may be specifically configured to input the position coordinates of the real scene information, in the coordinate system corresponding to the scene sensing device, into the first transformation matrix to obtain a coordinate matrix to be processed; to transform the coordinate matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area; and to display the driving guidance information at the position characterized by those relative position coordinates.
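Chaining the two transformations in the order just described might look like the following sketch. The matrices are taken as inputs, and the final clip-space-to-relative-coordinate step is an assumption about how view coordinates are turned into a position inside the target display area.

```python
# Illustrative end-to-end mapping of a detected scene point to a relative
# position in the target display area; not the patented implementation.
import numpy as np


def scene_point_to_display_position(point_sensor, M1, V, B, P):
    world = M1 @ np.append(point_sensor, 1.0)   # coordinate matrix to be processed
    clip = P @ B @ V @ world                    # apply the second transformation
    ndc = clip[:3] / clip[3]                    # normalised device coordinates
    u = (ndc[0] + 1.0) / 2.0                    # relative horizontal position (0..1)
    v = (ndc[1] + 1.0) / 2.0                    # relative vertical position (0..1)
    return u, v
```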
  • an embodiment of the present application also provides another projection device 100 that can execute the foregoing augmented reality-based information display method.
  • the projection device 100 includes one or more processors 102 (only one is shown in the figure), a memory 104, an image perception module 11, a coordinate transformation module 12, a human eye tracking module 14, and a display module 13, which are coupled to each other.
  • the memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
  • the memory 104 includes the device 400 described in the foregoing embodiment.
  • the processor 102 may include one or more processing cores.
  • the processor 102 uses various interfaces and lines to connect the various parts of the entire projection device 100, and performs various functions and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and by calling data stored in the memory 104.
  • the processor 102 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA).
  • the processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
  • the memory 104 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 104 may be used to store instructions, programs, codes, code sets or instruction sets.
  • the memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or a video/image playback function), instructions for implementing the foregoing method embodiments, and the like.
  • the data storage area can also store data (for example, audio and video data) created by the projection device 100 during use.
  • the image perception module 11 is used to obtain the real scene information collected by the image perception device; the coordinate transformation module 12 is used to obtain the target display area determined based on the user's first spatial pose, where the first spatial pose is determined based on the human eye tracking module 14.
  • the human eye tracking module 14 is used to detect the user's eye pose in real time.
  • the coordinate transformation module 12 is also used to obtain the coordinate transformation rule corresponding to the real scene information mapped to the target display area.
  • the display module 13 is configured to generate driving guidance information based on the real scene information, and is also configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • FIG. 14 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 500 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 500 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 500 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 500 has storage space for program code 510 for executing any of the method steps in the above-described methods. The program code can be read from or written to one or more computer program products.
  • the program code 510 may, for example, be compressed in a suitable form.
  • the present application provides an augmented reality-based information display method, system, device, projection equipment, and storage medium.
  • in the method, the real scene information collected by the scene sensing device is obtained; the target display area determined based on the user's first spatial pose is obtained, where the first spatial pose is determined based on the human eye tracking device; the coordinate transformation rule corresponding to the real scene information mapped to the target display area is obtained; the driving guidance information is generated based on the real scene information; and the driving guidance information is then displayed at the corresponding position of the target display area based on the coordinate transformation rule.
  • in this way, the driving guidance information generated based on the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose as determined by the eye tracking device.
  • because the eye tracking device tracks the user's first spatial pose in real time, the method can adapt to changes in the human eye pose and dynamically adjust the display area and display position of the driving guidance information, so that the user can accurately and conveniently view, while driving, the virtual driving guidance information corresponding to the driving scene; this improves driving safety and comfort and thereby enhances the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to an augmented reality-based information display method, system, and apparatus, a projection device, and a storage medium. The method comprises the steps of: obtaining real scene information collected by a scene sensing device (S110); obtaining a target display area determined based on a first spatial pose of a user, the first spatial pose being determined based on an eye tracking device (S120); obtaining coordinate transformation rules corresponding to the real scene information mapped to the target display area (S130); generating driving guidance information based on the real scene information (S140); and displaying the driving guidance information at a position corresponding to the target display area based on the coordinate transformation rules (S150). The driving guidance information is displayed, by means of the coordinate transformation rules, at the position corresponding to the target display area determined based on the user's first spatial pose as determined by the eye tracking device. The user's first spatial pose is tracked in real time by the eye tracking device, which allows adaptation to changes in the eye pose, and the display area and display position of the driving guidance information are adjusted dynamically.
PCT/CN2021/082943 2020-03-31 2021-03-25 Procédé, système et appareil d'affichage d'informations basé sur la réalité augmentée, et dispositif de projection WO2021197189A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010243557.3A CN113467600A (zh) 2020-03-31 2020-03-31 基于增强现实的信息显示方法、系统、装置及投影设备
CN202010243557.3 2020-03-31

Publications (1)

Publication Number Publication Date
WO2021197189A1 true WO2021197189A1 (fr) 2021-10-07

Family

ID=77865577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082943 WO2021197189A1 (fr) 2020-03-31 2021-03-25 Procédé, système et appareil d'affichage d'informations basé sur la réalité augmentée, et dispositif de projection

Country Status (2)

Country Link
CN (1) CN113467600A (fr)
WO (1) WO2021197189A1 (fr)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114041741A (zh) * 2022-01-13 2022-02-15 杭州堃博生物科技有限公司 数据处理部、处理装置、手术系统、设备与介质
CN114305686A (zh) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 基于磁传感器的定位处理方法、装置、设备与介质
CN114371779A (zh) * 2021-12-31 2022-04-19 北京航空航天大学 一种视线深度引导的视觉增强方法
CN114387198A (zh) * 2022-03-24 2022-04-22 青岛市勘察测绘研究院 一种影像与实景模型的融合显示方法、装置及介质
CN114489332A (zh) * 2022-01-07 2022-05-13 北京经纬恒润科技股份有限公司 Ar-hud输出信息的显示方法及系统
CN114494594A (zh) * 2022-01-18 2022-05-13 中国人民解放军63919部队 基于深度学习的航天员操作设备状态识别方法
CN114816049A (zh) * 2022-03-30 2022-07-29 联想(北京)有限公司 一种增强现实的引导方法及装置、电子设备、存储介质
CN114820396A (zh) * 2022-07-01 2022-07-29 泽景(西安)汽车电子有限责任公司 图像处理方法、装置、设备及存储介质
CN115002440A (zh) * 2022-05-09 2022-09-02 北京城市网邻信息技术有限公司 基于ar的图像采集方法、装置、电子设备及存储介质
CN115097628A (zh) * 2022-06-24 2022-09-23 北京经纬恒润科技股份有限公司 一种行车信息显示方法、装置及系统
CN115202476A (zh) * 2022-06-30 2022-10-18 泽景(西安)汽车电子有限责任公司 显示图像的调整方法、装置、电子设备及存储介质
CN115218919A (zh) * 2022-09-21 2022-10-21 泽景(西安)汽车电子有限责任公司 航迹线的优化方法、系统和显示器
CN115467387A (zh) * 2022-05-24 2022-12-13 中联重科土方机械有限公司 用于工程机械的辅助控制系统、方法及工程机械
CN116126150A (zh) * 2023-04-13 2023-05-16 北京千种幻影科技有限公司 一种基于实景交互的模拟驾驶系统及方法
CN116152883A (zh) * 2022-11-28 2023-05-23 润芯微科技(江苏)有限公司 一种车载眼球识别和前玻璃智能局部显示的方法和系统
CN116486051A (zh) * 2023-04-13 2023-07-25 中国兵器装备集团自动化研究所有限公司 一种多用户展示协同方法、装置、设备及存储介质
CN117934777A (zh) * 2024-01-26 2024-04-26 扬州自在岛生态旅游投资发展有限公司 一种基于虚拟现实的空间布置系统及方法
WO2024124480A1 (fr) * 2022-12-15 2024-06-20 京东方科技集团股份有限公司 Système et procédé d'affichage d'interface utilisateur, dispositif informatique et support de stockage
CN118259699A (zh) * 2024-05-27 2024-06-28 北京易诚高科科技发展有限公司 一种智能座舱的多屏联动控制方法
WO2024153227A1 (fr) * 2023-01-20 2024-07-25 闪耀现实(无锡)科技有限公司 Procédé et appareil d'affichage d'image sur un dispositif d'affichage en visiocasque, dispositif et support

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114742891A (zh) * 2022-03-30 2022-07-12 青岛虚拟现实研究院有限公司 一种用于vr显示设备的定位系统
CN114742872A (zh) * 2022-03-30 2022-07-12 青岛虚拟现实研究院有限公司 一种基于ar技术的视频透视系统
CN115061565A (zh) * 2022-05-10 2022-09-16 华为技术有限公司 调节显示设备的方法和装置
CN114911445B (zh) * 2022-05-16 2024-07-30 歌尔股份有限公司 虚拟现实设备的显示控制方法、虚拟现实设备及存储介质
CN114915772B (zh) * 2022-07-13 2022-11-01 沃飞长空科技(成都)有限公司 飞行器的视景增强方法、系统、飞行器及存储介质
CN115984514A (zh) * 2022-10-21 2023-04-18 长城汽车股份有限公司 一种增强显示的方法、装置、电子设备及存储介质
CN118092828A (zh) * 2022-11-25 2024-05-28 北京罗克维尔斯科技有限公司 信息显示方法、装置、设备、存储介质及车辆
CN115762293A (zh) * 2022-12-26 2023-03-07 北京东方瑞丰航空技术有限公司 一种基于虚拟现实定位器定位的航空训练方法和系统
CN116301527B (zh) * 2023-03-13 2023-11-21 北京力控元通科技有限公司 显示控制方法及装置、电子设备及介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103182984A (zh) * 2011-12-28 2013-07-03 财团法人车辆研究测试中心 车用影像显示系统及其校正方法
WO2018167966A1 (fr) * 2017-03-17 2018-09-20 マクセル株式会社 Dispositif d'affichage de réalité augmentée et procédé d'affichage de réalité augmentée
CN108711298A (zh) * 2018-05-20 2018-10-26 福州市极化律网络科技有限公司 一种混合现实道路显示方法
CN110304057A (zh) * 2019-06-28 2019-10-08 威马智慧出行科技(上海)有限公司 汽车碰撞预警、导航方法、电子设备、系统及汽车
CN110703904A (zh) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 一种基于视线跟踪的增强虚拟现实投影方法及系统

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (zh) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 基于磁传感器的定位处理方法、装置、设备与介质
CN114371779A (zh) * 2021-12-31 2022-04-19 北京航空航天大学 一种视线深度引导的视觉增强方法
CN114371779B (zh) * 2021-12-31 2024-02-20 北京航空航天大学 一种视线深度引导的视觉增强方法
CN114489332A (zh) * 2022-01-07 2022-05-13 北京经纬恒润科技股份有限公司 Ar-hud输出信息的显示方法及系统
CN114041741B (zh) * 2022-01-13 2022-04-22 杭州堃博生物科技有限公司 数据处理部、处理装置、手术系统、设备与介质
CN114041741A (zh) * 2022-01-13 2022-02-15 杭州堃博生物科技有限公司 数据处理部、处理装置、手术系统、设备与介质
CN114494594B (zh) * 2022-01-18 2023-11-28 中国人民解放军63919部队 基于深度学习的航天员操作设备状态识别方法
CN114494594A (zh) * 2022-01-18 2022-05-13 中国人民解放军63919部队 基于深度学习的航天员操作设备状态识别方法
CN114387198A (zh) * 2022-03-24 2022-04-22 青岛市勘察测绘研究院 一种影像与实景模型的融合显示方法、装置及介质
CN114816049A (zh) * 2022-03-30 2022-07-29 联想(北京)有限公司 一种增强现实的引导方法及装置、电子设备、存储介质
CN115002440B (zh) * 2022-05-09 2023-06-09 北京城市网邻信息技术有限公司 基于ar的图像采集方法、装置、电子设备及存储介质
CN115002440A (zh) * 2022-05-09 2022-09-02 北京城市网邻信息技术有限公司 基于ar的图像采集方法、装置、电子设备及存储介质
CN115467387A (zh) * 2022-05-24 2022-12-13 中联重科土方机械有限公司 用于工程机械的辅助控制系统、方法及工程机械
CN115097628B (zh) * 2022-06-24 2024-05-07 北京经纬恒润科技股份有限公司 一种行车信息显示方法、装置及系统
CN115097628A (zh) * 2022-06-24 2022-09-23 北京经纬恒润科技股份有限公司 一种行车信息显示方法、装置及系统
CN115202476B (zh) * 2022-06-30 2023-04-11 泽景(西安)汽车电子有限责任公司 显示图像的调整方法、装置、电子设备及存储介质
CN115202476A (zh) * 2022-06-30 2022-10-18 泽景(西安)汽车电子有限责任公司 显示图像的调整方法、装置、电子设备及存储介质
CN114820396A (zh) * 2022-07-01 2022-07-29 泽景(西安)汽车电子有限责任公司 图像处理方法、装置、设备及存储介质
CN114820396B (zh) * 2022-07-01 2022-09-13 泽景(西安)汽车电子有限责任公司 图像处理方法、装置、设备及存储介质
CN115218919A (zh) * 2022-09-21 2022-10-21 泽景(西安)汽车电子有限责任公司 航迹线的优化方法、系统和显示器
CN116152883A (zh) * 2022-11-28 2023-05-23 润芯微科技(江苏)有限公司 一种车载眼球识别和前玻璃智能局部显示的方法和系统
CN116152883B (zh) * 2022-11-28 2023-08-11 润芯微科技(江苏)有限公司 一种车载眼球识别和前玻璃智能局部显示的方法和系统
WO2024124480A1 (fr) * 2022-12-15 2024-06-20 京东方科技集团股份有限公司 Système et procédé d'affichage d'interface utilisateur, dispositif informatique et support de stockage
WO2024153227A1 (fr) * 2023-01-20 2024-07-25 闪耀现实(无锡)科技有限公司 Procédé et appareil d'affichage d'image sur un dispositif d'affichage en visiocasque, dispositif et support
CN116126150A (zh) * 2023-04-13 2023-05-16 北京千种幻影科技有限公司 一种基于实景交互的模拟驾驶系统及方法
CN116486051A (zh) * 2023-04-13 2023-07-25 中国兵器装备集团自动化研究所有限公司 一种多用户展示协同方法、装置、设备及存储介质
CN116486051B (zh) * 2023-04-13 2023-11-28 中国兵器装备集团自动化研究所有限公司 一种多用户展示协同方法、装置、设备及存储介质
CN117934777A (zh) * 2024-01-26 2024-04-26 扬州自在岛生态旅游投资发展有限公司 一种基于虚拟现实的空间布置系统及方法
CN118259699A (zh) * 2024-05-27 2024-06-28 北京易诚高科科技发展有限公司 一种智能座舱的多屏联动控制方法

Also Published As

Publication number Publication date
CN113467600A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2021197189A1 (fr) Procédé, système et appareil d'affichage d'informations basé sur la réalité augmentée, et dispositif de projection
WO2021197190A1 (fr) Procédé, système et appareil d'affichage d'informations basés sur la réalité augmentée et dispositif de projection
CN108140235B (zh) 用于产生图像视觉显示的系统和方法
CN104883554B (zh) 通过虚拟透视仪器群集显示直播视频的方法和系统
US8994558B2 (en) Automotive augmented reality head-up display apparatus and method
WO2022241638A1 (fr) Procédé et appareil de projection, et véhicule et ar-hud
US20070003162A1 (en) Image generation device, image generation method, and image generation program
CN107554425A (zh) 一种增强现实车载平视显示器ar‑hud
KR20150087619A (ko) 증강 현실 기반의 차로 변경 안내 장치 및 방법
US10747007B2 (en) Intelligent vehicle point of focus communication
JP2006242859A (ja) 車両用情報表示装置
US9836814B2 (en) Display control apparatus and method for stepwise deforming of presentation image radially by increasing display ratio
EP3811326B1 (fr) Système et méthodologies de commande de contenu d'affichage tête haute (hud)
WO2017169273A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US20220044032A1 (en) Dynamic adjustment of augmented reality image
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
US20240042857A1 (en) Vehicle display system, vehicle display method, and computer-readable non-transitory storage medium storing vehicle display program
US7599546B2 (en) Image information processing system, image information processing method, image information processing program, and automobile
US20190088024A1 (en) Non-transitory computer-readable storage medium, computer-implemented method, and virtual reality system
CN115525152A (zh) 图像处理方法及系统、装置、电子设备和存储介质
KR20180008345A (ko) 컨텐츠 제작 장치, 방법 및 컴퓨터 프로그램
JP2020019369A (ja) 車両用表示装置、方法、及びコンピュータ・プログラム
WO2024138467A1 (fr) Système d'affichage de réalité augmentée basé sur des caméras multi-vues et un suivi de clôture
US20240208415A1 (en) Display control device and display control method
JP2019081480A (ja) ヘッドアップディスプレイ装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21780976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21780976

Country of ref document: EP

Kind code of ref document: A1