CN113467601A - Information display method, system and device based on augmented reality and projection equipment - Google Patents

Information display method, system and device based on augmented reality and projection equipment

Info

Publication number
CN113467601A
Authority
CN
China
Prior art keywords: information, display area, target display, scene, acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010244728.4A
Other languages
Chinese (zh)
Inventor
余新
康瑞
邓岳慈
弓殷强
赵鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Appotronics Corp Ltd
Original Assignee
Appotronics Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Appotronics Corp Ltd filed Critical Appotronics Corp Ltd
Priority to CN202010244728.4A priority Critical patent/CN113467601A/en
Priority to PCT/CN2021/082944 priority patent/WO2021197190A1/en
Publication of CN113467601A publication Critical patent/CN113467601A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/61Scene description

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the present application disclose an augmented reality-based information display method, system and apparatus, a projection device, and a storage medium. The method comprises the following steps: acquiring real-scene information collected by a scene sensing device; acquiring a target display area determined based on a first spatial pose of a user; acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area; generating driving guidance information based on the real-scene information; and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule. By means of the coordinate transformation rule, the method displays the driving guidance information generated from the real-scene information at the corresponding position of a target display area determined from the user's first spatial pose, so that during driving the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene. This reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.

Description

Information display method, system and device based on augmented reality and projection equipment
Technical Field
The present disclosure relates to the field of coordinate transformation technologies, and more particularly, to an augmented reality-based information display method, system, apparatus, projection device, and storage medium.
Background
A head-up display (HUD) can present important information on a piece of transparent glass in the line of sight ahead. It was first applied to fighter aircraft, its main purpose being to spare the pilot from frequently lowering his attention to read the data on the instrument panel, and thus to avoid the situation in which the pilot, while reading that data, cannot observe the environment in the field of view ahead of the aircraft. In order to reduce accidents caused by the user looking down at the dashboard or center console, HUDs have been introduced from aircraft into the automotive field.
However, the way in which current HUDs display information is rather limited. Taking automobile driving as an example, as more driving-assistance information such as road conditions, navigation and danger warnings is added, a problem appears when a taller user drives a vehicle whose HUD has been customized for a user of ordinary height (i.e., shorter than the taller user): the position at which the vehicle's HUD displays environmental information, as seen by the taller user, differs from the actual position of that environmental information. This brings considerable inconvenience to driving and degrades the user experience.
Disclosure of Invention
In view of the above problems, the present application provides an augmented reality-based information display method, system and apparatus, a projection device, and a storage medium to address these problems.
In a first aspect, an embodiment of the present application provides an augmented reality-based information display method, the method comprising: acquiring real-scene information collected by a scene sensing device; acquiring a target display area determined based on a first spatial pose of a user; acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area; generating driving guidance information based on the real-scene information; and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
In a second aspect, an embodiment of the present application provides an augmented reality-based information display apparatus, the apparatus comprising an image sensing module, a coordinate transformation module and a display module. The image sensing module is configured to acquire the real-scene information collected by the image sensing device; the coordinate transformation module is configured to acquire a target display area determined based on a first spatial pose of a user; the coordinate transformation module is further configured to acquire a coordinate transformation rule corresponding to mapping the real-scene information to the target display area; the display module is configured to generate driving guidance information based on the real-scene information; and the display module is further configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
In a third aspect, an embodiment of the present application provides an augmented reality-based vehicle-mounted information display system, the system comprising: a scene sensing device configured to collect real-scene information of the environment outside the vehicle; an image processing device configured to acquire the real-scene information collected by the scene sensing device, acquire a target display area determined based on a first spatial pose of a user, acquire a coordinate transformation rule corresponding to mapping the real-scene information to the target display area, generate driving guidance information based on the real-scene information, and generate, based on the coordinate transformation rule, target position coordinates at which the driving guidance information is to be displayed in the target display area; and a HUD display device configured to display the driving guidance information at the target position coordinates of the target display area.
In a fourth aspect, an embodiment of the present application provides a projection device, including a data acquisition module, a projection module, one or more processors, and a memory, where one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of the first aspect described above.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium having program code stored therein, where the program code, when executed, performs the method of the first aspect described above.
According to the augmented reality-based information display method, system, apparatus, projection device, and storage medium provided by the present application, the real-scene information collected by the scene sensing device is acquired, a target display area determined based on a first spatial pose of the user is acquired, a coordinate transformation rule corresponding to mapping the real-scene information to the target display area is acquired, driving guidance information is generated based on the real-scene information, and the driving guidance information is displayed at the corresponding position of the target display area based on the coordinate transformation rule. In this way, the driving guidance information generated from the real-scene information is displayed, through the coordinate transformation rule, at the corresponding position of a target display area determined from the user's first spatial pose, so that during driving the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene without repeatedly confirming its accuracy. This reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 shows a flowchart of a method for displaying information based on augmented reality according to an embodiment of the present application.
Fig. 2 is a diagram showing a configuration example of an augmented reality-based in-vehicle information display system suitable for the augmented reality-based information display method provided in the present embodiment.
Fig. 3 shows an example of displaying driving guidance information in a dangerous scene through the augmented reality-based in-vehicle information display system provided in the present application.
Fig. 4 shows a flowchart of a method for displaying augmented reality-based information according to another embodiment of the present application.
Fig. 5 is a diagram illustrating an example of a display effect of the augmented reality-based in-vehicle information display system provided by the present embodiment.
Fig. 6 shows a flowchart of a method for displaying augmented reality-based information according to another embodiment of the present application.
Fig. 7 is a diagram illustrating an example of a target display area determined based on a first spatial pose of a user according to the present embodiment.
Fig. 8 shows another exemplary diagram of the target display area determined based on the first spatial pose of the user provided by the present embodiment.
Fig. 9 shows an exemplary diagram of the target display area determined based on the spatial pose of the user provided by the present embodiment.
Fig. 10 is a flowchart illustrating a method for displaying augmented reality-based information according to still another embodiment of the present application.
Fig. 11 is a diagram showing an exemplary processing procedure of the augmented reality-based information display method proposed in the present embodiment.
Fig. 12 is a block diagram illustrating a structure of an augmented reality-based information display device according to an embodiment of the present application.
Fig. 13 is a block diagram illustrating a projection apparatus for performing an augmented reality-based information display method according to an embodiment of the present application.
Fig. 14 illustrates a storage unit for storing or carrying program codes for implementing an augmented reality-based information display method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A head-up display (HUD) can present important information on a piece of transparent glass in the line of sight ahead. It was first applied to fighter aircraft, its main purpose being to spare the pilot from frequently lowering his attention to read the data on the instrument panel, and thus to avoid the situation in which the pilot, while reading that data, cannot observe the environment in the field of view ahead of the aircraft. In order to reduce accidents caused by the user looking down at the dashboard or center console, HUDs have been introduced from aircraft into the automotive field.
HUDs are primarily classified into the aftermarket type (also known as Combiner HUD or C-type HUD) and the front-mounted type (also known as Windshield HUD or W-type HUD). The front-mounted HUD projects the content the driver needs onto the front windshield, which serves as the combiner of its optical system, so that within the head-up range the human eye can observe the HUD virtual image and the external scene simultaneously through the windshield, improving driving safety and comfort. However, some existing HUD devices merely display virtual information in front of the driver's line of sight without blending it into the real environment. As more driving-assistance information such as road conditions, navigation and danger warnings is added, the mismatch between the virtual content and the real scene can distract the driver.
Augmented Reality (AR) is a technology that skillfully fuses virtual information with the real world.
As one approach, with the development of autonomous driving and of augmented reality and mixed reality technology, AR techniques can be introduced into the HUD field. By combining AR techniques with a front-mounted HUD, an AR-HUD can solve the problem that the virtual information of a traditional HUD is separated from, and mismatched with, the actual scene, improving driving safety and comfort while enriching the HUD display content. However, the way in which current HUDs display information is rather limited. Taking automobile driving as an example, as more driving-assistance information such as road conditions, navigation and danger warnings is added, when a taller user drives a vehicle whose HUD has been customized for a user of ordinary height (i.e., shorter than the taller user), the position at which the vehicle's HUD displays environmental information, as seen by the taller user, differs from the actual position of that environmental information, which brings considerable inconvenience to driving and degrades the user experience.
Therefore, in order to address the above problems, the inventors propose that the driving guidance information provided by the present application can be displayed, through a coordinate transformation rule, at the corresponding position of a target display area determined based on a first spatial pose of the user. In this way, the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy, which reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Please refer to fig. 1, which is a flowchart illustrating an augmented reality-based information display method according to an embodiment of the present application. The method of this embodiment may be performed by an augmented reality-based apparatus for processing real-scene information, which may be implemented in hardware and/or software. The method includes:
Step S110: acquiring the real-scene information collected by the scene sensing device.
The real-scene information in the embodiments of the present application may be real-scene information corresponding to any of a plurality of scenes. Optionally, the plurality of scenes may include, but are not limited to, driving scenes, tourism scenes, outdoor activity scenes, and the like. For example, if the scene is a driving scene, the real-scene information may include lanes, signboards, dangerous pedestrians (e.g., blind people, elderly people walking alone, pregnant women, children or other vulnerable groups), vehicles, and the like; if the scene is a tourism scene, the real-scene information may include tourist-site signs, tour routes, scenic-spot information, scenic-spot weather information, and the like; in the case of an outdoor activity scene, the real-scene information may include current location information, information about nearby convenience stores, and the like.
Optionally, the scene sensing device may include sensing devices such as laser radar and infrared radar, and may also include image acquisition devices such as cameras (including monocular cameras, binocular cameras, RGB-D cameras, and the like). As one approach, the scene sensing device may collect the real-scene information corresponding to the current scene. For example, assuming that the current scene is a driving scene and the scene sensing device is a camera, the camera may be mounted on the automobile (optionally, the mounting position may be adjusted according to the model and structure of the automobile or actual needs), so that the camera can collect real-scene information related to driving in real time. For the principle and implementation of collecting real-scene information with the scene sensing device (whether laser radar, infrared radar or camera), reference may be made to the related art, which is not repeated here.
Step S120: a target display area determined based on a first spatial pose of a user is obtained.
Optionally, the first spatial pose of the user may be the sitting posture of the user in the driving state, or the sitting posture after the seat has been adjusted (here, the seat may be adjusted for the current user for the first time). It can be understood that different sitting postures of the user correspond to different spatial poses; as one approach, the sitting posture of the user after the seat has been adjusted may be taken as the first spatial pose of the user.
In this embodiment, the target display area is an area for displaying virtual image information corresponding to the real-scene information. Taking the driving scene as an example, the target display area may be an area on a windshield of the automobile for displaying projected virtual image information corresponding to the real scene information. Optionally, the target display areas corresponding to different spatial poses of the same user may be different, and the target display areas corresponding to spatial poses of different users may be different.
In order to eliminate the difference between the position at which the virtual image information corresponding to the real-scene information is displayed and the actual position of the real-scene information, as one approach, a target display area determined based on the first spatial pose of the user may be acquired, so that the virtual image information corresponding to the real-scene information can be displayed in that target display area. This reduces the display difference and improves the accuracy of the position at which the virtual image information corresponding to the real-scene information is displayed.
Step S130: acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
The coordinate transformation rule may be used to map the coordinates of the real-scene information to the corresponding coordinates of the target display area. As one approach, once the real-scene information and the target display area have been acquired, the coordinate transformation rule corresponding to mapping the real-scene information to the target display area may be acquired, so that the driving guidance information corresponding to the real-scene information can subsequently be displayed accurately at the corresponding position of the target display area based on the coordinate transformation rule.
Optionally, the coordinate transformation rule may include a first transformation matrix and a second transformation matrix. The first transformation matrix may be used to determine reference world coordinates corresponding to coordinates of the real-scene information collected by the scene sensing device, and the second transformation matrix may be used to convert the reference world coordinates into view coordinates within the target display area. The reference world coordinate may be understood as a relative position coordinate of the real-scene information in the established coordinate system corresponding to the scene sensing device, and optionally, the reference world coordinate in this embodiment may be understood as a world coordinate relatively stationary with respect to the vehicle. The view coordinates may be understood as relative position coordinates in a coordinate system corresponding to the target display area with reference to world coordinates.
As one implementation, a first transformation matrix and a second transformation matrix may be obtained, and the product of the first transformation matrix and the second transformation matrix may then be used as the coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
Optionally, the first transformation matrix may include a first rotation matrix and a first translation vector. The first rotation matrix may be configured to rotate coordinates of the real-scene information collected by the scene sensing device, and the first translational vector may be configured to translate the coordinates. As one approach, reference world coordinates corresponding to coordinates of the live-action information collected by the scene sensing device may be determined based on the first rotation matrix and the first translational vector.
Optionally, the second transformation matrix may include a view matrix and a projection matrix. The projection matrix may be used to determine a mapping range for mapping the real-scene information to the target display area, and the view matrix may be used to determine a relative position within the mapping range for displaying the driving guidance information (which may be understood as the aforementioned virtual image information corresponding to the real-scene information). As one approach, the reference world coordinates may be converted to view coordinates within the target display area based on the mapping range and the relative position. Wherein the view matrix may include a second rotation matrix and a second translation vector. The second rotation matrix can be used for rotating the reference world coordinates, and the second translation vector can be used for translating the reference world coordinates; the projection matrix may include field angle parameters, and the field angles may include a horizontal field angle and a vertical field angle.
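For clarity, the following is a minimal sketch (not part of the patent; the class and attribute names are assumptions) of how the coordinate transformation rule described above could be represented as the pair of matrices and composed into a single mapping:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CoordinateTransformRule:
    """Hypothetical container for the two matrices described above."""
    first: np.ndarray   # first transformation matrix M: sensor coordinates to reference world coordinates (4x4)
    second: np.ndarray  # second transformation matrix C = P @ V: reference world coordinates to view coordinates (4x4)

    def total(self) -> np.ndarray:
        # The overall rule is the product of the two matrices; it is applied to points
        # of the real-scene information expressed in homogeneous coordinates.
        return self.second @ self.first
```

The concrete contents of the first and second transformation matrices are developed in the driving-scenario example below.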
The following takes a driving scenario as an example to exemplarily explain the present embodiment:
Fig. 2 is a diagram illustrating the structure of an augmented reality-based vehicle-mounted information display system suitable for the augmented reality-based information display method provided in this embodiment. As shown in fig. 2, the augmented reality-based in-vehicle information display system may include a scene sensing device, an image processing device, and a HUD display device. The scene sensing device may be used to collect the real-scene information of the environment outside the vehicle. The image processing device may be configured to acquire the real-scene information collected by the scene sensing device, acquire a target display area determined based on a first spatial pose of the user, acquire a coordinate transformation rule corresponding to mapping the real-scene information to the target display area, generate driving guidance information based on the real-scene information, and generate, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is displayed in the target display area. The HUD display device may be used to present the driving guidance information at the target position coordinates of the target display area.
The image processing device may be a processor chip of the in-vehicle infotainment system, a processing chip of a separate on-board computer system, a processor chip integrated in the scene sensing device (e.g., a lidar), or the like, which is not limited here.
In one implementation, the in-vehicle information display system may include an automobile, a driver, a scene sensing device, an image processing device, and a HUD display device with AR-HUD functionality. As one embodiment, the scene sensing device may be installed in the automobile to collect scene information related to driving (which may also be understood as the aforementioned real-scene information); the driver sits in the driving seat; and the HUD display device is installed at the front windshield of the automobile, its position being adjustable so that the driver's eyes can see the entire virtual image corresponding to the driving scene information. The image processing device may convert the real-scene information collected by the scene sensing device into an image fused with the real scene and send the image to the HUD display device for display.
As one approach, after collecting the real-scene information, the scene sensing device may acquire the position coordinates of the real-scene information in a world coordinate system (e.g., O-xyz shown in fig. 2) by a positioning means such as GPS. A world coordinate origin and coordinate-axis directions may then be selected based on the traveling direction of the automobile, and a reference world coordinate system that is relatively stationary with respect to the automobile may be determined from that origin and those axis directions; by determining this reference world coordinate system, the reference world coordinates corresponding to the coordinates of the real-scene information can be obtained. For the manner of selecting the world coordinate origin and the coordinate-axis directions, reference may be made to the related art, which is not repeated here. It should be noted that the reference world coordinate system can be understood as a coordinate system obtained by rotating and/or translating the world coordinate system.
For example, as one embodiment, once a reference world coordinate system that is relatively stationary with respect to the automobile has been determined, the spatial poses of the scene sensing device and of the driver's eyes in that reference world coordinate system may be acquired. In this way, the sensing-module transformation matrix M (namely the first transformation matrix) can be calculated from the spatial pose of the scene sensing device in the reference world coordinate system. For example, it may be assumed that the transformation from the world coordinate system to the reference world coordinate system consists of a first rotation matrix R_M (which may also be understood as the overall rotation matrix of the scene sensing device) and a first translation vector T_M. Optionally, the relationship between the sensing-module transformation matrix M, the first rotation matrix R_M and the first translation vector T_M can be expressed as:

M = [ R_M  T_M ; 0  1 ]

where R_M can be obtained by composing R_Mx, R_My and R_Mz (for example, R_M = R_Mz · R_My · R_Mx), and T_M = (T_Mx, T_My, T_Mz)^T.

Here R_Mx, R_My and R_Mz are the rotation matrices of the sensing-module transformation matrix M about the x, y and z axes of the world coordinate system, with Euler angles α_M, β_M and γ_M in turn, and (T_Mx, T_My, T_Mz) are the coordinates of the real-scene information in the reference world coordinate system.
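As an illustration of the construction above, the following is a minimal numpy sketch (not from the patent) that builds M from the Euler angles and the translation vector; the Z-Y-X multiplication order is an assumption, since the embodiment does not fix the Euler convention:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def sensing_module_matrix(alpha_m, beta_m, gamma_m, t_m):
    """4x4 homogeneous sensing-module transformation matrix M built from the
    Euler angles (alpha_M, beta_M, gamma_M) and the translation vector T_M."""
    M = np.eye(4)
    M[:3, :3] = rot_z(gamma_m) @ rot_y(beta_m) @ rot_x(alpha_m)  # R_M (assumed Z-Y-X order)
    M[:3, 3] = t_m                                               # T_M = (T_Mx, T_My, T_Mz)
    return M
```

The same pattern applies to the rotation part of the view matrix introduced below.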
Optionally, the HUD display device in this embodiment may be modeled as the inverse of a virtual camera. In this way, assuming that the pose of the virtual camera is the same as the pose of the driver's eyes, the second transformation matrix may be calculated from the relevant parameters of the HUD display device and the pose of the virtual camera. The second transformation matrix C (which may also be understood here as the imaging matrix of the virtual camera) may include a view matrix V and a projection matrix P, and the relationship between the three may be expressed as:
C=PV。
wherein the view matrix V may include a second rotation matrix R_H^T and a second translation vector T_H. The second rotation matrix R_H^T can be understood as the overall rotation matrix that converts the coordinates of the real-scene information in the reference world coordinate system into coordinates in the coordinate system of the HUD display device; optionally, the second translation vector T_H can be used to translate the reference world coordinates, or to translate the coordinates in the coordinate system of the HUD display device. Optionally, the relationship between the view matrix V, the second rotation matrix R_H^T and the second translation vector T_H in this embodiment can be expressed as:

V = [ R_H^T  T_H ; 0  1 ]

where R_H can be obtained by composing R_Hx, R_Hy and R_Hz (for example, R_H = R_Hz · R_Hy · R_Hx), and T_H = (T_Hx, T_Hy, T_Hz)^T.

Here R_Hx, R_Hy and R_Hz can be understood as the rotation matrices about the x, y and z axes of the reference world coordinate system, with Euler angles α_H, β_H and γ_H in turn, and (T_Hx, T_Hy, T_Hz) are the coordinates of the pose of the virtual camera in the reference world coordinate system.
Optionally, the projection matrix P may satisfy the following relation (written here in one common convention):

P = [ 1/tan(hFOV/2)  0  0  0 ;  0  1/tan(vFOV/2)  0  0 ;  0  0  (f+n)/(n-f)  2fn/(n-f) ;  0  0  -1  0 ]

Optionally, the projection matrix includes field-angle parameters; the field angles may include a horizontal field angle and a vertical field angle, where hFOV and vFOV denote the horizontal and vertical field angles respectively, and n and f can be understood as the distances of the assumed near and far clipping planes. For example, as shown in fig. 2, the distance from the plane in which the virtual image O_h lies to the center of the front windshield can be understood as the distance of the assumed near clipping plane, and the distance from the plane in which the virtual image O_w lies to the center of the front windshield can be understood as the distance of the assumed far clipping plane. It should be understood that this is only an example, and in actual implementation the distances of the near and far clipping planes can be adjusted according to actual requirements.
As one approach, when the first transformation matrix and the second transformation matrix have been obtained, the product of the two may be used as the coordinate transformation rule corresponding to mapping the real-scene information to the target display area, that is, the coordinate transformation rule may be expressed as:
F=CM=PVM。
step S140: and generating driving guide information based on the real scene information.
The driving guidance information in this embodiment may include navigation instruction information corresponding to road conditions, pedestrian warning information, tourist attraction prompt information, and the like, and the type and specific content of the driving guidance information may not be limited. For example, as shown in fig. 3, an example of displaying driving guidance information through the augmented reality-based information display system proposed in the present application in a dangerous scene provided by the present embodiment is shown, as shown in fig. 3, the image processing device may convert the real scene information collected by the scene sensing device into a virtual HUD image to be displayed on the HUD display device, and the specific content of the display is shown in the right image of fig. 3, in this case, the scene seen by the eyes of the driver may include lane guidance information (i.e., "navigation instruction in virtual image" shown in fig. 3) and pedestrian warning information (i.e., "pedestrian prompting box in virtual image" shown in fig. 3).
As one way, when the real-scene information is obtained, the driving guidance information may be generated based on the real-scene information, optionally, the prompting manner of the driving guidance information in this embodiment may not be limited, for example, the driving guidance information may be prompted in the form of an icon (for example, an arrow), a picture, an animation, a voice, a video, or the like, and then the driving guidance information in different prompting manners may be generated in a corresponding manner. Optionally, for the generation principle of generating the driving guidance information in each prompting mode based on the real-scene information, reference may be made to related technologies, which are not described herein again. Optionally, the driving guidance information in this embodiment may include at least one prompting mode, for example, the user may be prompted by combining with voice on the basis of displaying the navigation indication icon corresponding to the road, so that the driving guidance prompt may be performed on the user more accurately, driving safety is guaranteed, and user experience is improved.
Step S150: displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
Optionally, displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule avoids the difference between the position at which the HUD displays the real-scene information and the actual position of the real-scene information, improving display accuracy and reliability.
According to the augmented reality-based information display method provided by this embodiment, the real-scene information collected by the scene sensing device is acquired, a target display area determined based on a first spatial pose of the user is acquired, a coordinate transformation rule corresponding to mapping the real-scene information to the target display area is acquired, driving guidance information is generated based on the real-scene information, and the driving guidance information is displayed at the corresponding position of the target display area based on the coordinate transformation rule. Because the driving guidance information generated from the real-scene information is displayed, through the coordinate transformation rule, at the corresponding position of a target display area determined from the user's first spatial pose, the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy. This reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Please refer to fig. 4, which is a flowchart illustrating an augmented reality-based information display method according to another embodiment of the present application. The method of this embodiment may be performed by an augmented reality-based apparatus for processing real-scene information, which may be implemented in hardware and/or software. The method includes:
step S210: and acquiring the live-action information acquired by the scene sensing device.
Step S220: a target display area determined based on a first spatial pose of a user is obtained.
Step S230: and acquiring a coordinate transformation rule corresponding to the mapping of the live-action information to the target display area.
Step S240: and generating driving guide information based on the real scene information.
Step S250: inputting the position coordinates of the real-scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain a coordinate transformation matrix to be processed.
As one approach, the position coordinates of the real-scene information in the coordinate system corresponding to the scene sensing device may be input into the first transformation matrix, and the output may be used as the coordinate transformation matrix to be processed. For example, in a specific application scenario, assume that the position coordinates of the real-scene information in the coordinate system corresponding to the scene sensing device are O_w(x, y, z). Optionally, after the position coordinates O_w(x, y, z) are input into the first transformation matrix, the following can be obtained:

O_W = (x, y, z, 1)^T

O' = M · O_W

where O' serves as the coordinate transformation matrix to be processed, and O_W denotes the position coordinates in homogeneous form.
Step S260: performing coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real-scene information within the target display area.
As one mode, the coordinate transformation matrix to be processed may be subjected to coordinate transformation according to the aforementioned second transformation matrix, so as to obtain the relative position coordinates of the real-scene information in the target display area. Optionally, the specific implementation process of the coordinate transformation may refer to related technologies, and is not described herein again.
For example, continuing the above example, assume that the position coordinates in the HUD image of the target display area of the HUD display device corresponding to the position coordinates O_w(x, y, z) are denoted O_h(u, v). After the coordinate transformation matrix O' to be processed is transformed according to the second transformation matrix, the following can be obtained (written here in one common convention):

(x_c, y_c, z_c, w_c)^T = C · O' = P · V · O'

u = (x_c / w_c + 1) / 2 · width,   v = (y_c / w_c + 1) / 2 · height

where width and height are the width and height of the HUD image, in pixels. In this manner, O_h(u, v) may be used as the relative position coordinates of the real-scene information within the target display area.
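Steps S250 and S260 can be summarized in a short sketch under the same assumptions (homogeneous coordinates and the usual normalized-device-coordinate to pixel mapping; the function and parameter names are illustrative):

```python
import numpy as np

def to_hud_pixel(F, p_world, width, height):
    """Map a point O_w = (x, y, z) in the coordinate system of the scene sensing device to the
    relative position O_h = (u, v), in pixels, within the HUD image of the target display area."""
    o_w = np.array([p_world[0], p_world[1], p_world[2], 1.0])   # homogeneous coordinates O_W
    clip = F @ o_w                                              # apply the total transformation F = PVM
    ndc = clip[:3] / clip[3]                                    # perspective divide
    u = (ndc[0] + 1.0) / 2.0 * width
    v = (ndc[1] + 1.0) / 2.0 * height
    return u, v

# Example (illustrative values): a point 5 m ahead of the vehicle mapped into a 1280x480 HUD image.
# u, v = to_hud_pixel(F, (0.4, -0.2, 5.0), width=1280, height=480)
```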
Step S270: displaying the driving direction information at a location characterized by the relative location coordinates.
Alternatively, in this manner, the driving guidance information corresponding to the live-action information may be displayed at the position represented by the relative position coordinates.
The present embodiment is described below with a specific example:
referring to fig. 5, an exemplary diagram of a display effect of the augmented reality-based information display system provided in the embodiment is shown. As shown in fig. 5, a virtual-real scene fusion system (which can be understood as an augmented reality-based information display system in the present application) may be built in related modeling software (for example, Unity3D, etc.), and the virtual-real scene fusion system may include an automobile, a video camera 1 (for simulating eyes of a driver), a video camera 2 and a HUD imaging module (i.e., the aforementioned HUD display device) simulated by a plane together, spatial scene information (which may be spatial scene information in different scenes) acquired by an image sensing module simulated by a checkerboard, and information transformation and image rendering and rendering of an image processing device performed by a program script.
Optionally, in this simulated virtual-real scene fusion system, the central position of the bottom of the vehicle may be selected as the origin of coordinates, the vehicle forward direction is the positive direction of the Z axis, a right-hand coordinate system is adopted, it is assumed that the driver sits at the driving position, the eyes of the driver simulated by the camera 1 face the front, the pose of the HUD virtual camera simulated by the camera 2 is the same as the pose of the eyes of the driver, in this way, the scene sensing device may acquire the spatial corner information of the checkerboard on the vehicle, the image processing device may draw the corner into the HUD image space (the lower left corner shown in fig. 5 is the corner image drawn into the HUD image space by the image processing device), and then the drawn image is sent to the HUD module for display (the image is sent to the HUD for virtual image display as shown in fig. 5). In this way, the driver view scene shown in fig. 5 can be obtained, and the virtual and real scenes can be accurately fused as can be seen from the virtual and real fusion result enlarged view shown in fig. 5.
According to the augmented reality-based information display method provided by this embodiment, the driving guidance information generated from the real-scene information is transformed by the first transformation matrix and the second transformation matrix and then displayed at the corresponding position of the target display area determined based on the user's first spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy. This reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Please refer to fig. 6, which is a flowchart illustrating an augmented reality-based information display method according to another embodiment of the present application. The method of this embodiment may be performed by an augmented reality-based apparatus for processing real-scene information, which may be implemented in hardware and/or software. The method includes:
step S310: and acquiring the live-action information acquired by the scene sensing device.
Step S320: and acquiring a target display area re-determined based on the changed spatial pose.
It can be understood that during driving the posture of the user may change; for example, the sitting posture may change (including tilting the body left or right, adjusting the seat height up or down, or adjusting the inclination of the seat back forward or backward), or the user's head may sway with changes in the road conditions. In such cases the first spatial pose of the user changes, and if the original HUD display configuration were still used to display the driving guidance information corresponding to the real-scene information, the resulting position error could create a safety hazard.
To improve on this, in this embodiment the spatial pose of the user is detected in real time, so that if a change in the first spatial pose is detected, the target display area is re-determined based on the changed spatial pose. This maintains the accuracy of the display position of the driving guidance information corresponding to the real-scene information without requiring the user to repeatedly confirm its accuracy, improves the flexibility of displaying the driving guidance information, and further improves the user experience.
For example, as one implementation, please refer to fig. 7, which shows an example of the target display area determined based on the first spatial pose of the user according to this embodiment. As shown in fig. 7, if the spatial pose corresponding to the current sitting posture of the user 22 is the first spatial pose, the target display area 23 shown in fig. 7 can be displayed on the screen 21 of the front windshield of the automobile. Alternatively, if the spatial pose of the user changes from 22 to 22' as shown in fig. 8, a target display area 23' as shown in fig. 8 may be displayed on the screen 21, where the target display area 23' is the target display area re-determined based on the changed spatial pose 22'.
Optionally, in this embodiment, a correspondence between the change range of the user's spatial pose and the change range of the target display area may be preset. For example, the change of the spatial pose of the user may be divided into ranges A, B, C, D and E, and the corresponding change levels of the target display area may be set to 1, 2, 3, 4 and 5, where a larger value means a larger adjustment and one unit corresponds to a change of 5°; optionally, the larger the change range of the pose, the larger the corresponding change of the target display area. In this way, if a change in the first spatial pose of the user is detected, the change range of the spatial pose can be determined from the changed parameters, and the corresponding change of the target display area can be determined from that range.
It can be understood that if the change in the first spatial pose of the user is small, that is, it does not reach any of the corresponding change ranges, the display position of the target display area may be left unadjusted.
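A small sketch of the preset correspondence described above; the band labels and the 5° unit follow the example, while the function name and the behavior when no band is reached are assumptions:

```python
# Hypothetical preset: change ranges A..E of the user's spatial pose map to change levels 1..5
# of the target display area, one level corresponding to a 5-degree adjustment.
SHIFT_LEVEL = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
DEGREES_PER_LEVEL = 5.0

def display_area_shift(pose_change_band):
    """Return the adjustment of the target display area in degrees, or None when the change
    in the first spatial pose is too small to reach any band (display position unchanged)."""
    level = SHIFT_LEVEL.get(pose_change_band)
    return None if level is None else level * DEGREES_PER_LEVEL
```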
Step S330: acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
Optionally, if the first spatial pose of the user has changed, a second coordinate transformation rule corresponding to mapping the real-scene information to the re-determined target display area may be obtained in the manner described above. For the specific determination of the second coordinate transformation rule, reference may be made to the principle and process of determining the coordinate transformation rule described above, which are not repeated here.
Step S340: generating driving guidance information based on the real-scene information.
Step S350: displaying the driving guidance information at the corresponding position of the re-determined target display area based on the coordinate transformation rule.
Optionally, once the target display area has been re-determined based on the changed spatial pose, the driving guidance information may be displayed at the corresponding position of the re-determined target display area based on the second coordinate transformation rule.
As one implementation, the target display area in this embodiment may be adjusted according to the change in the first spatial pose of the user. For example, if it is detected that the user is in a head-down posture, the target display area may be displayed at a corresponding position on the central control display screen; if it is detected that the user frequently looks at a mobile phone while driving, the target display area may be displayed on the screen of the mobile phone; other screens available in the driving scene, such as the windows on the left and right sides of the driving position, may also serve as target display areas.
As yet another implementation, more than one target display area may be set at the same time in this embodiment, so that other occupants can assist the driving user in driving safely when the driver is fatigued or has poor visibility. For example, as shown in fig. 9, the front windshield 21 may be divided into two regions, a first display region 211 and a second display region 212. In this arrangement, assuming that the target display area corresponding to the spatial pose of the driver 221 is 231, the target display area 232 is the target display area corresponding to the spatial pose of the front-passenger user 222. The content displayed in the target display area 231 may be the same as the content displayed in the target display area 232, and the display state of the target display area 232 may be turned off or on as actually needed; for example, when the driver 221 is in a fatigued mental state, the display function of the target display area 232 may be turned on. The display position of the target display area 232 within the second display region 212 may change as the spatial pose of the user 222 changes; for the specific principle of this change, reference may be made to the corresponding description above, which is not repeated here.
Optionally, if the display state of the target display area 232 is on, the front-passenger user 222 can promptly remind the driver 221 when dangerous driving information is found during driving. This realizes a mode in which the driver is reminded with the assistance of other occupants, improving driving safety and comfort while reducing the sight-switching fatigue caused by checking driving guidance information such as road conditions and navigation, promoting interaction among the occupants during driving, and improving the user experience. Optionally, for the implementation principle of the display function of the target display area 232, reference may be made to the description in the foregoing embodiments, which is not repeated here.
According to the augmented reality-based information display method provided by this embodiment, the driving guidance information generated from the real-scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, or at the corresponding position of the target display area re-determined based on the user's changed spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy. This reduces the sight-switching fatigue caused by frequently checking driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Please refer to fig. 10, which is a flowchart illustrating an augmented reality-based information display method according to yet another embodiment of the present application. The method of this embodiment may be performed by an augmented reality-based apparatus for processing real-scene information, which may be implemented in hardware and/or software. The method includes:
step S410: and acquiring the live-action information acquired by the scene sensing device.
Step S420: and detecting the change of the first space pose by acquiring sitting posture adjusting parameters of the electric seat.
Optionally, in this embodiment, the seat of the vehicle may be an electric seat, and in this way, if the user adjusts the position of the electric seat, the electric seat may automatically generate an adjustment parameter, and the parameter may be used as a sitting posture adjustment parameter of the user. Then, as one mode, a change in the first spatial posture of the user can be detected by acquiring the sitting posture adjustment parameter of the power seat.
Step S430: and acquiring sitting posture adjusting parameters of the electric seat.
Optionally, the sitting posture adjusting parameters of the electric seat can be acquired by reading the data automatically generated by the electric seat, or a camera can be installed, the sitting posture adjusting parameters of the electric seat are acquired by the camera, and optionally, the specific acquisition mode can be unlimited.
Step S440: acquiring a change vector corresponding to the first spatial pose based on the sitting posture adjustment parameters.
After the sitting posture adjustment parameters of the electric seat are obtained, a change vector corresponding to the first spatial pose may be calculated from them. Optionally, the specific calculation may follow the related art and is not described in detail here.
Step S450: adjusting the target display area based on the change vector to obtain a re-determined target display area.
As one approach, the display position of the target display area may be adjusted according to the change vector corresponding to the first spatial pose, yielding a re-determined target display area. Optionally, the specific adjustment principle may refer to the description in the foregoing embodiments and is not repeated here.
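As a rough sketch of steps S430 to S450, the snippet below maps the electric seat's adjustment parameters to an eye-point change vector and then shifts the target display area accordingly. The kinematic mapping, the torso-length constant, and the gains are assumptions for illustration only; the patent leaves the exact calculation to the related art.

```python
import numpy as np

def change_vector_from_seat(delta_slide_m, delta_lift_m, delta_recline_rad,
                            torso_length_m=0.55):
    # Assumed kinematics: sliding the seat moves the eye point along the vehicle's
    # forward axis, lifting moves it vertically, and reclining the backrest rotates
    # the torso, which shifts the eye point both forward/backward and up/down.
    dx = -delta_slide_m + torso_length_m * np.sin(delta_recline_rad)
    dz = delta_lift_m + torso_length_m * (np.cos(delta_recline_rad) - 1.0)
    return np.array([dx, 0.0, dz])   # change vector of the first spatial pose (x, y, z)

def re_determine_target_area(center_uv, change_vector, gain=(400.0, 400.0)):
    # Shift the display position of the target display area in proportion to the
    # lateral (y) and vertical (z) components of the eye-point displacement;
    # the gains would be calibrated for the actual HUD geometry.
    dx, dy, dz = change_vector
    return (center_uv[0] + gain[0] * dy, center_uv[1] - gain[1] * dz)

# Example: the seat is slid 5 cm backwards and lifted 2 cm.
vec = change_vector_from_seat(delta_slide_m=0.05, delta_lift_m=0.02, delta_recline_rad=0.0)
new_center = re_determine_target_area((640.0, 360.0), vec)
```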
Step S460: acquiring the coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
Step S470: generating the driving guidance information based on the real-scene information.
Optionally, the order of the steps in this embodiment is not limited; for example, step S470 may be performed immediately after step S410.
By way of example, a specific implementation flow is shown below:
Fig. 11 shows an example of the processing procedure of the augmented-reality-based information display method proposed in this embodiment. In fig. 11, the flow indicated by the hollow arrows is the initial flow, and the flow indicated by the solid arrows is the real-time, continuously running flow. As an embodiment, a coordinate system may first be established, the spatial positions of the scene sensing device and of the driver's eyes may be measured, the scene sensing device matrix M and the HUD imaging matrix C may be calculated, and the total transformation matrix (i.e., the aforementioned coordinate transformation rule) F = CM may then be obtained. Optionally, the scene sensing device may acquire the real-scene information in real time and send it to the image processing device as the information to be displayed; the image processing device performs the coordinate transformation on the coordinates corresponding to the real-scene information, draws the resulting image, and projects it onto the HUD display screen (i.e., the target display area). This improves the accuracy of the position at which the driving guidance information is displayed, reduces user operations, and further improves the user experience.
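The two-phase structure of fig. 11 might look roughly like the following sketch. The matrices M and C are shown as identity placeholders, and the draw and project_to_hud callables stand in for the image processing device and the HUD hardware; only the split between a one-time initial flow and a per-frame real-time flow is taken from the description above.

```python
import numpy as np

def initialise(scene_device_matrix: np.ndarray, hud_imaging_matrix: np.ndarray) -> np.ndarray:
    # Initial flow (hollow arrows): after measuring the sensor pose and the driver's
    # eye position once, combine the two matrices into the total transformation F = C @ M.
    return hud_imaging_matrix @ scene_device_matrix

def run_frame(F: np.ndarray, scene_points: np.ndarray, draw, project_to_hud):
    # Real-time flow (solid arrows): repeated for every frame of real-scene information.
    homogeneous = np.c_[scene_points, np.ones(len(scene_points))]
    view = (F @ homogeneous.T).T
    view = view[:, :3] / view[:, 3:4]     # perspective divide into view coordinates
    image = draw(view)                    # render the driving guidance overlay
    project_to_hud(image)                 # display it on the HUD screen

# Usage sketch with identity placeholders for M and C:
F = initialise(np.eye(4), np.eye(4))
```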
Step S480: displaying the driving guidance information at the corresponding position of the re-determined target display area based on the coordinate transformation rule.
According to the augmented-reality-based information display method of this embodiment, the change of the user's first spatial pose is detected by acquiring the sitting posture adjustment parameters of the electric seat, so that the driving guidance information generated from the real-scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose. The user can thus view the virtual driving guidance information corresponding to the driving scene accurately and conveniently while driving, without repeatedly confirming its accuracy, which reduces the sight-switching fatigue caused by frequently viewing driving guidance information such as road conditions and navigation and improves driving safety and comfort.
Referring to fig. 12, an augmented-reality-based information display apparatus 500 provided in an embodiment of the present application may run on a projection device. The apparatus 500 includes:
The image perception module 510 is configured to acquire the real-scene information collected by the scene sensing device.
The coordinate transformation module 520 is configured to acquire a target display area determined based on the first spatial pose of the user.
Optionally, if it is detected that the first spatial pose is changed, the coordinate transformation module 520 may be configured to obtain a target display area re-determined based on the changed spatial pose.
As one approach, the change in the first spatial pose may be detected by acquiring the sitting posture adjustment parameters of the electric seat. In this case, the coordinate transformation module 520 may be specifically configured to acquire the sitting posture adjustment parameters of the electric seat; acquire a change vector corresponding to the first spatial pose based on the sitting posture adjustment parameters; and adjust the target display area based on the change vector to obtain a re-determined target display area.
As one way, the coordinate transformation module 520 may be further configured to obtain a coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
Optionally, the coordinate transformation rule may include a first transformation matrix and a second transformation matrix, the first transformation matrix is used to determine a world reference coordinate corresponding to a coordinate of the real-scene information collected by the scene sensing device, and the second transformation matrix is used to convert the world reference coordinate into a view coordinate in the target display area.
The first transformation matrix may include a first rotation matrix for rotating the coordinates of the real-scene information collected by the scene sensing device and a first translation vector for translating those coordinates; based on the first rotation matrix and the first translation vector, the first transformation matrix determines the world reference coordinates corresponding to the coordinates of the real-scene information collected by the scene sensing device.
The second transformation matrix may include a view matrix for determining the mapping range for mapping the real-scene information to the target display area, and a projection matrix for determining the relative position, within that mapping range, at which the driving guidance information is displayed; based on the mapping range and the relative position, the second transformation matrix converts the world reference coordinates into view coordinates within the target display area.
The view matrix may include a second rotation matrix for rotating the world reference coordinates and a second translation vector for translating them; the projection matrix includes field-angle parameters, namely a horizontal field angle and a vertical field angle.
As an embodiment, the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix may be taken as the coordinate transformation rule corresponding to mapping the real-scene information to the target display area.
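Under a standard computer-graphics reading of these terms, the two matrices could be assembled as below. The rotation matrices, translation vectors, field angles, and near/far planes are placeholder values assumed for illustration; the patent itself only specifies that the rule is the product of the two transformation matrices.

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    # 4x4 homogeneous transform built from a rotation matrix and a translation vector.
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def projection(h_fov_rad: float, v_fov_rad: float, near=0.1, far=200.0) -> np.ndarray:
    # Perspective projection parameterised by horizontal and vertical field angles.
    P = np.zeros((4, 4))
    P[0, 0] = 1.0 / np.tan(h_fov_rad / 2.0)
    P[1, 1] = 1.0 / np.tan(v_fov_rad / 2.0)
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0
    return P

# First transformation matrix: sensor coordinates -> world reference coordinates.
T1 = rigid_transform(np.eye(3), np.array([0.0, 1.2, 0.5]))     # placeholder extrinsics
# Second transformation matrix: view matrix (second rotation/translation) then projection.
view = rigid_transform(np.eye(3), np.array([0.0, -1.1, 0.0]))  # placeholder eye pose
T2 = projection(np.deg2rad(20.0), np.deg2rad(8.0)) @ view
# Coordinate transformation rule: the product of the two matrices.
F = T2 @ T1
```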
The display module 530 is configured to generate driving guidance information based on the real-scene information.
As one approach, the display module 530 may be further configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
Optionally, the display module 530 may be specifically configured to input the position coordinates of the real-scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain a coordinate transformation matrix to be processed; perform coordinate transformation on the matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real-scene information within the target display area; and display the driving guidance information at the location characterized by the relative position coordinates.
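Applied to a single point, that two-stage mapping might look like the following sketch, where the final normalised view coordinates are converted into pixel coordinates inside the target display area; the pixel conversion and the default area dimensions are assumptions added for illustration.

```python
import numpy as np

def relative_position(point_sensor_xyz, T1, T2, area_width_px=800, area_height_px=300):
    # Stage 1: first transformation matrix, sensor coordinates -> world reference coordinates.
    world = T1 @ np.append(point_sensor_xyz, 1.0)
    # Stage 2: second transformation matrix, world reference -> view coordinates.
    view = T2 @ world
    ndc = view[:3] / view[3]                       # normalised coordinates in [-1, 1]
    # Relative position coordinates inside the target display area, in pixels.
    u = (ndc[0] + 1.0) / 2.0 * area_width_px
    v = (1.0 - ndc[1]) / 2.0 * area_height_px
    return u, v
```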
Optionally, if it is detected that the first spatial pose is changed, the driving guidance information may be displayed at a corresponding position of the re-determined target display area based on a coordinate transformation rule corresponding to the changed spatial pose.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
A projection apparatus provided by the present application will be described with reference to fig. 13.
Referring to fig. 13, based on the foregoing augmented-reality-based information display method, system, and apparatus, an embodiment of the present application further provides a projection device 100 capable of performing the augmented-reality-based information display method. The projection device 100 includes one or more processors 102 (only one is shown), a memory 104, an image perception module 11, a coordinate transformation module 12, and a display module 13, which are coupled to each other. The memory 104 stores a program that can execute the content of the foregoing embodiments, the processor 102 can execute the program stored in the memory 104, and the stored program includes the apparatus 500 described in the foregoing embodiments.
The processor 102 may include one or more processing cores. The processor 102 connects the various parts of the projection device 100 through various interfaces and lines, and performs the functions of the projection device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking the data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and so on; the GPU renders and draws the display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 102 but implemented by a separate communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or a video playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the projection device 100 in use (such as audio and video data), and the like.
The image perception module 11 is configured to acquire the real-scene information collected by the scene sensing device; the coordinate transformation module 12 is configured to acquire the target display area determined based on the user's first spatial pose; the coordinate transformation module 12 is further configured to acquire the coordinate transformation rule corresponding to mapping the real-scene information to the target display area; the display module 13 is configured to generate the driving guidance information based on the real-scene information; and the display module 13 is further configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
Referring to fig. 14, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 600 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 600 includes a non-volatile computer-readable storage medium. The computer readable storage medium 600 has storage space for program code 610 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 610 may be compressed, for example, in a suitable form.
In summary, according to the augmented-reality-based information display method, system, apparatus, projection device, and storage medium provided above, the real-scene information collected by the scene sensing device is acquired, the target display area determined based on the user's first spatial pose is acquired, the coordinate transformation rule corresponding to mapping the real-scene information to the target display area is acquired, the driving guidance information is generated based on the real-scene information, and the driving guidance information is then displayed at the corresponding position of the target display area based on the coordinate transformation rule. Because the driving guidance information generated from the real-scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose, the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, without repeatedly confirming its accuracy. This reduces the sight-switching fatigue caused by frequently viewing driving guidance information such as road conditions and navigation, and improves driving safety and comfort.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An augmented reality-based information display method, the method comprising:
acquiring real-scene information collected by a scene sensing device;
acquiring a target display area determined based on a first spatial pose of a user;
acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area;
generating driving guidance information based on the real-scene information;
and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
2. The method of claim 1, wherein the coordinate transformation rule comprises a first transformation matrix and a second transformation matrix, the first transformation matrix is used for determining world reference coordinates corresponding to coordinates of the real-scene information collected by the scene sensing device, and the second transformation matrix is used for converting the world reference coordinates into view coordinates within the target display area.
3. The method of claim 2, wherein the first transformation matrix comprises a first rotation matrix and a first translation vector, the first rotation matrix is used for rotating the coordinates of the real-scene information collected by the scene sensing device, the first translation vector is used for translating the coordinates, and the first transformation matrix determines the world reference coordinates corresponding to the coordinates of the real-scene information collected by the scene sensing device based on the first rotation matrix and the first translation vector.
4. The method of claim 2, wherein the second transformation matrix comprises a view matrix and a projection matrix, the projection matrix is used for determining a mapping range for mapping the real-scene information to the target display area, the view matrix is used for determining a relative position within the mapping range for displaying the driving guidance information, and the second transformation matrix converts the world reference coordinates into view coordinates within the target display area based on the mapping range and the relative position.
5. The method of claim 4, wherein the view matrix comprises a second rotation matrix for rotating the world reference coordinates and a second translation vector for translating the world reference coordinates; the projection matrix comprises field angle parameters including a horizontal field angle and a vertical field angle.
6. The method according to any one of claims 2 to 5, wherein the displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule includes:
inputting the position coordinates of the real-scene information in a coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain a coordinate transformation matrix to be processed;
performing coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real-scene information within the target display area;
and displaying the driving guidance information at the location characterized by the relative position coordinates.
7. The method according to claim 6, wherein the obtaining of the coordinate transformation rule corresponding to the mapping of the real-scene information to the target display area comprises:
and acquiring a product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix as a coordinate transformation rule corresponding to the mapping of the real-scene information to the target display area.
8. The method of claim 1, wherein if a change in the first spatial pose is detected, the obtaining a target display area determined based on the first spatial pose of the user comprises:
acquiring a target display area re-determined based on the changed spatial pose;
the displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule includes:
displaying the driving guidance information at the corresponding position of the re-determined target display area based on the coordinate transformation rule.
9. The method of claim 8, further comprising:
detecting a change of the first spatial pose by acquiring sitting posture adjustment parameters of the electric seat;
wherein the acquiring a target display area re-determined based on the changed spatial pose includes:
acquiring the sitting posture adjustment parameters of the electric seat;
acquiring a change vector corresponding to the first spatial pose based on the sitting posture adjustment parameters;
and adjusting the target display area based on the change vector to obtain a re-determined target display area.
10. An information display device based on augmented reality, characterized in that the information display device comprises an image perception module, a coordinate transformation module and a display module, wherein:
the image sensing module is used for acquiring the live-action information acquired by the image sensing device;
the coordinate transformation module is used for acquiring a target display area determined based on a first space pose of a user;
the coordinate transformation module is further used for acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area;
the display module is used for generating driving guidance information based on the real-scene information;
the display module is further used for displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
11. An augmented reality based in-vehicle information display system, the system comprising:
the scene sensing device is used for collecting real-scene information of the environment outside the vehicle;
the image processing device is used for acquiring the real-scene information collected by the scene sensing device, acquiring a target display area determined based on a first spatial pose of a user, acquiring a coordinate transformation rule corresponding to mapping the real-scene information to the target display area, generating driving guidance information based on the real-scene information, and generating target position coordinates at which the driving guidance information is displayed in the target display area based on the coordinate transformation rule;
and the HUD display device is used for displaying the driving guidance information at the target position coordinates of the target display area.
12. A projection device comprising one or more processors and memory;
one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-9.
13. A computer-readable storage medium, having a program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-9.
CN202010244728.4A 2020-03-31 2020-03-31 Information display method, system and device based on augmented reality and projection equipment Pending CN113467601A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010244728.4A CN113467601A (en) 2020-03-31 2020-03-31 Information display method, system and device based on augmented reality and projection equipment
PCT/CN2021/082944 WO2021197190A1 (en) 2020-03-31 2021-03-25 Information display method, system and apparatus based on augmented reality, and projection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244728.4A CN113467601A (en) 2020-03-31 2020-03-31 Information display method, system and device based on augmented reality and projection equipment

Publications (1)

Publication Number Publication Date
CN113467601A true CN113467601A (en) 2021-10-01

Family

ID=77865430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244728.4A Pending CN113467601A (en) 2020-03-31 2020-03-31 Information display method, system and device based on augmented reality and projection equipment

Country Status (2)

Country Link
CN (1) CN113467601A (en)
WO (1) WO2021197190A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (en) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 Positioning processing method, device, equipment and medium based on magnetic sensor
CN114581627B (en) * 2022-03-04 2024-04-16 合众新能源汽车股份有限公司 ARHUD-based imaging method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200445B (en) * 2010-03-23 2013-03-13 财团法人资讯工业策进会 Real-time augmented reality device and method thereof
US20120224060A1 (en) * 2011-02-10 2012-09-06 Integrated Night Vision Systems Inc. Reducing Driver Distraction Using a Heads-Up Display
KR20120113579A (en) * 2011-04-05 2012-10-15 현대자동차주식회사 Apparatus and method for displaying road guide information on the windshield
US8514101B2 (en) * 2011-12-02 2013-08-20 GM Global Technology Operations LLC Driving maneuver assist on full windshield head-up display
CN102542868B (en) * 2012-01-09 2014-03-19 中国人民解放军空军军训器材研究所 Visual simulation method and device
CN107230199A (en) * 2017-06-23 2017-10-03 歌尔科技有限公司 Image processing method, device and augmented reality equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114760458A (en) * 2022-04-28 2022-07-15 中南大学 Method for synchronizing tracks of virtual camera and real camera of high-reality augmented reality studio
CN114760458B (en) * 2022-04-28 2023-02-24 中南大学 Method for synchronizing tracks of virtual camera and real camera of high-reality augmented reality studio
WO2023216580A1 (en) * 2022-05-10 2023-11-16 华为技术有限公司 Method and apparatuses for adjusting display device
WO2024083102A1 (en) * 2022-10-21 2024-04-25 长城汽车股份有限公司 Augmented display method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2021197190A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
WO2021197189A1 (en) Augmented reality-based information display method, system and apparatus, and projection device
CN113467601A (en) Information display method, system and device based on augmented reality and projection equipment
AU2020202551B2 (en) Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
JP4246195B2 (en) Car navigation system
US11715238B2 (en) Image projection method, apparatus, device and storage medium
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
US20160307374A1 (en) Method and system for providing information associated with a view of a real environment superimposed with a virtual object
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
JPWO2009144994A1 (en) VEHICLE IMAGE PROCESSING DEVICE AND VEHICLE IMAGE PROCESSING METHOD
CN109968979B (en) Vehicle-mounted projection processing method and device, vehicle-mounted equipment and storage medium
US10747007B2 (en) Intelligent vehicle point of focus communication
KR101691034B1 (en) Apparatus and method for synthesizing additional information during rendering object in 3d graphic terminal
JP7156937B2 (en) Image processing device and image processing method
CN109448050B (en) Method for determining position of target point and terminal
WO2017169273A1 (en) Information processing device, information processing method, and program
US20190141310A1 (en) Real-time, three-dimensional vehicle display
KR102107706B1 (en) Method and apparatus for processing image
CN115525152A (en) Image processing method, system, device, electronic equipment and storage medium
EP3811326A1 (en) Heads up display (hud) content control system and methodologies
CN114489332A (en) Display method and system of AR-HUD output information
JP6345381B2 (en) Augmented reality system
CN115493614B (en) Method and device for displaying flight path line, storage medium and electronic equipment
WO2017169272A1 (en) Information processing device, information processing method, and program
CN110347241B (en) AR head-up display optical system capable of realizing normal live-action display
JP6827609B2 (en) Information display control devices and methods, as well as programs and recording media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination