WO2021197190A1 - Augmented reality-based information display method, system, apparatus and projection device - Google Patents

Augmented reality-based information display method, system, apparatus and projection device

Info

Publication number
WO2021197190A1
WO2021197190A1, PCT/CN2021/082944, CN2021082944W
Authority
WO
WIPO (PCT)
Prior art keywords
display area
information
target display
real scene
coordinate transformation
Prior art date
Application number
PCT/CN2021/082944
Other languages
English (en)
French (fr)
Inventor
余新
康瑞
邓岳慈
弓殷强
赵鹏
Original Assignee
深圳光峰科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司
Publication of WO2021197190A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/61Scene description

Definitions

  • This application relates to the technical field of coordinate transformation, and more specifically, to an information display method, system, device, projection device, and storage medium based on augmented reality.
  • a HUD (head up display), also known as a head-up display, was first applied in aircraft so that the pilot does not need to look down at the data in the dashboard, thereby avoiding the situation in which the pilot, while viewing instrument data, cannot observe the environmental information in the field of view ahead of the flight.
  • the HUD was later introduced from aircraft into the automotive field.
  • however, the existing HUD displays information in a single, fixed manner. Taking car driving as an example, with the addition of more auxiliary driving information such as road conditions, navigation, and danger warnings, when a taller user drives a vehicle whose HUD has been calibrated for a user of normal (lower) height, the position at which the vehicle's HUD displays environmental information may differ from the actual position of that environmental information as seen by the taller user, which brings a lot of inconvenience to the user's driving and reduces the user's experience.
  • this application proposes an augmented reality-based information display method, system, device, projection equipment, and storage medium to improve the above-mentioned problems.
  • in a first aspect, an embodiment of the present application provides an augmented reality-based information display method. The method includes: acquiring real scene information collected by a scene sensing device; acquiring a target display area determined based on a user's first spatial pose; acquiring a coordinate transformation rule for mapping the real scene information to the target display area; generating driving guidance information based on the real scene information; and displaying the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides an augmented reality information display device.
  • the information display device includes an image perception module, a coordinate transformation module, and a display module. The image perception module is used to obtain the real scene information collected by the image perception device; the coordinate transformation module is used to obtain a target display area determined based on the user's first spatial pose; the coordinate transformation module is also used to obtain a coordinate transformation rule for mapping the real scene information to the target display area; the display module is configured to generate driving guidance information based on the real scene information; and the display module is also configured to display the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  • an embodiment of the present application provides a vehicle-mounted information display system based on augmented reality.
  • the system includes: a scene sensing device for collecting real scene information of the vehicle's external environment; an image processing device for acquiring the real scene information collected by the scene sensing device, acquiring the target display area determined based on the user's first spatial pose, acquiring the coordinate transformation rule for mapping the real scene information to the target display area, generating driving guidance information based on the real scene information, and generating, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is to be displayed in the target display area; and a HUD display device for displaying the driving guidance information at the target position coordinates of the target display area.
  • in another aspect, an embodiment of the present application provides a projection device, including a data acquisition module, a projection module, one or more processors, and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the method described in the first aspect.
  • in another aspect, an embodiment of the present application provides a computer-readable storage medium having program code stored therein, wherein the program code, when run, performs the method described in the first aspect.
  • the present application provides an information display method, system, device, projection device, and storage medium based on augmented reality, which acquires real scene information collected by an image sensing device, acquires a target display area determined based on the user's first spatial pose, acquires a coordinate transformation rule for mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and then displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • in this way, the driving guidance information generated from the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose, so that during driving the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene without repeatedly confirming its accuracy. This reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation, and improves the safety and comfort of driving.
  • Fig. 1 shows a method flowchart of an augmented reality-based information display method proposed by an embodiment of the present application.
  • FIG. 2 shows a structural example diagram of an augmented reality-based vehicle information display system suitable for the augmented reality-based information display method provided by this embodiment.
  • FIG. 3 shows an example diagram of displaying driving guidance information in a dangerous scene, provided by this embodiment, through the augmented reality-based on-board information display system proposed by this application.
  • Fig. 4 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 5 shows an example diagram of the display effect of the vehicle-mounted information display system based on augmented reality provided by this embodiment.
  • Fig. 6 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 7 shows an example diagram of the target display area determined based on the user's first spatial pose provided by this embodiment.
  • FIG. 8 shows another example diagram of the target display area determined based on the user's first spatial pose provided by this embodiment.
  • FIG. 9 shows an example diagram of the target display area determined based on the user's spatial pose provided by this embodiment.
  • FIG. 10 shows a method flowchart of an augmented reality-based information display method proposed by another embodiment of the present application.
  • FIG. 11 shows an example diagram of the processing procedure of the augmented reality-based information display method proposed in this embodiment.
  • FIG. 12 shows a structural block diagram of an information display device based on augmented reality proposed by an embodiment of the present application.
  • Fig. 13 shows a structural block diagram of a projection device of the present application for executing an augmented reality-based information display method according to an embodiment of the present application.
  • Fig. 14 shows a storage unit for storing or carrying program code for implementing an augmented reality-based information display method according to an embodiment of the present application.
  • a HUD (head up display), also known as a head-up display, was first applied in aircraft so that the pilot does not need to look down at the data in the dashboard, thereby avoiding the situation in which the pilot, while viewing instrument data, cannot observe the environmental information in the field of view ahead of the flight. The HUD was later introduced from aircraft into the automotive field.
  • HUDs are mainly divided into two types: rear-mounted (also known as Combiner HUD, C-type HUD) and front-mounted (also known as Windshield HUD, W-type HUD).
  • the front-mounted HUD uses the windshield as a combiner to project the content required by the driver to the front windshield through the optical system.
  • in terms of driving safety and driving comfort, some existing HUD devices only display virtual information in front of the driver's line of sight, without integrating it with the real environment. With the addition of more driving assistance information such as road conditions, navigation, and hazard warnings, the mismatch between this virtual content and the real scene will distract the driver's attention.
  • Augmented Reality (AR) is a technology that ingeniously integrates virtual information with the real world.
  • an AR-HUD, which combines AR technology with a front-mounted HUD, can resolve the separation between the traditional HUD's virtual information and the actual scene.
  • however, the existing HUD displays information in a single, fixed manner. Taking car driving as an example, with the addition of more auxiliary driving information such as road conditions, navigation, and danger warnings, when a taller user drives a vehicle whose HUD has been calibrated for a user of normal (lower) height, the position at which the vehicle's HUD displays environmental information may differ from the actual position of that environmental information as seen by the taller user, which brings a lot of inconvenience to the user's driving and reduces the user's experience.
  • in view of this, the inventor proposes the solution provided by this application: the real scene information collected by the image sensing device is acquired, the target display area determined based on the user's first spatial pose is acquired, the coordinate transformation rule for mapping the real scene information to the target display area is acquired, driving guidance information is generated based on the real scene information, and the driving guidance information is then displayed at the corresponding position of the target display area based on the coordinate transformation rule. In this way, the driving guidance information generated from the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose, so that during driving the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene without repeatedly confirming its accuracy, which reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation and improves the safety and comfort of driving.
  • FIG. 1 is a method flowchart of an augmented reality-based information display method provided by an embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S110 Acquire real scene information collected by the scene sensing device.
  • the real scene information in the embodiment of the present application may be real scene information corresponding to multiple scenes.
  • multiple scenes may include, but are not limited to, driving scenes, travel scenes, and outdoor activity scenes.
  • if it is a driving scene, the real scene information can include lanes, signs, dangerous pedestrians (such as vulnerable groups like blind people, elderly people walking alone, pregnant women, or children), vehicles, and so on; if it is a tourist scene, the real scene information can include tourist destination signs, tourist routes, tourist attraction information, tourist attraction weather information, and so on; if it is an outdoor activity scene, the real scene information can include current location information and nearby convenience store information.
  • the scene sensing device may include sensing devices such as lasers and infrared radars, and may also include image acquisition devices such as cameras (including monocular cameras, binocular cameras, RGB-D cameras, etc.).
  • the real scene information corresponding to the current scene can be acquired through the scene sensing device.
  • the scene sensing device is a camera.
  • the camera can be installed on the car (optionally, the installation position can be adjusted according to the style and structure of the car or actual needs), so that the camera can obtain the real scene information related to driving.
  • for the specific implementation of the scene sensing device (including the laser, infrared radar, or camera), reference may be made to related technologies, which will not be repeated here.
  • Step S120 Obtain a target display area determined based on the user's first spatial pose.
  • the user's first spatial pose may be the sitting posture of the user in the driving state, or the sitting posture after the seat is adjusted (here, the current user may be adjusting the seat for the first time). It is understandable that different sitting postures correspond to different spatial poses. As one way, the sitting posture of the user after adjusting the seat can be used as the user's first spatial pose.
  • the target display area is an area for displaying virtual image information corresponding to real scene information.
  • the target display area may be an area on the windshield of a car for displaying projected virtual image information corresponding to real scene information.
  • the target display areas corresponding to different spatial poses of the same user may be different, and the target display areas corresponding to the spatial poses of different users may be different.
  • therefore, the target display area determined based on the user's first spatial pose can be acquired, so that the virtual image information corresponding to the real scene information can be displayed in that target display area, which reduces the foregoing display difference and thereby improves the accuracy of the display position of the virtual image information corresponding to the real scene information.
  • Step S130 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • the coordinate transformation rule can be used to map the coordinates of the real scene information to the corresponding coordinates of the target display area.
  • the coordinate transformation rule for mapping the real scene information to the target display area can be acquired, so that the driving guidance information corresponding to the real scene information can subsequently be displayed accurately at the corresponding position of the target display area based on the coordinate transformation rule.
  • the coordinate transformation rule may include a first transformation matrix and a second transformation matrix.
  • the first transformation matrix may be used to determine the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device
  • the second transformation matrix may be used to convert the reference world coordinates into view coordinates in the target display area.
  • the reference world coordinates can be understood as the relative position coordinates of the real scene information in the established coordinate system corresponding to the scene sensing device.
  • the reference world coordinates in this embodiment can be understood as the world coordinates that are relatively stationary with the car.
  • View coordinates can be understood as the relative position coordinates of the reference world coordinates in the coordinate system corresponding to the target display area.
  • specifically, the first transformation matrix and the second transformation matrix can be obtained, and the product of the parameters represented by the first transformation matrix and the second transformation matrix can then be used as the coordinate transformation rule for mapping the real scene information to the target display area.
  • the first transformation matrix may include a first rotation matrix and a first translation vector.
  • the first rotation matrix may be used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector may be used to translate the coordinates.
  • the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device may be determined based on the first rotation matrix and the first translation vector.
  • the second transformation matrix may include a view matrix and a projection matrix.
  • the projection matrix can be used to determine the mapping range of the real scene information to the target display area
  • the view matrix can be used to determine the relative position, within the mapping range, at which the driving guidance information (which can be understood as the aforementioned virtual image information corresponding to the real scene information) is displayed.
  • the reference world coordinates can be converted into view coordinates in the target display area based on the mapping range and the relative position.
  • the view matrix may include a second rotation matrix and a second translation vector.
  • the second rotation matrix can be used to rotate the reference world coordinates, and the second translation vector can be used to translate the reference world coordinates;
  • the projection matrix can include field-of-view parameters, and the field of view can include a horizontal field of view and a vertical field of view.
  • FIG. 2 is a structural example diagram of a vehicle-mounted information display system based on augmented reality that is applicable to the method for displaying information based on augmented reality provided by this embodiment.
  • the augmented reality-based vehicle information display system may include a scene sensing device, an image processing device, and a HUD display device.
  • the scene sensing device can be used to collect real scene information of the external environment of the vehicle.
  • the image processing device can be used to acquire the real scene information collected by the scene sensing device, acquire the target display area determined based on the user's first spatial pose, acquire the coordinate transformation rule for mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, the target position coordinates at which the driving guidance information is to be displayed in the target display area.
  • the HUD display device can be used to display driving guidance information to the target position coordinates of the target display area.
  • the image processing device may be the processor chip of the vehicle system, or the processing chip of an independent vehicle computer system, or the processor chip integrated in the scene sensing device (such as lidar), etc., which is not limited herein.
  • the vehicle-mounted information display system may include a car, a driver, a scene sensing device, an image processing device, and a HUD display device with AR-HUD function.
  • the scene sensing device can be installed on the car and can obtain driving-related scene information (which can also be understood as the aforementioned real scene information); the driver sits in the driving position of the car; and the HUD display device is installed at the position of the front windshield of the car and can be adjusted so that the driver's eyes can see the entire virtual image corresponding to the driving scene information.
  • the image processing device can convert the real scene information collected by the scene perception device into an image fused with the real scene, and the fused image is sent to the HUD display device for display.
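  • purely as an illustration (not part of the patent text), the cooperation of the three devices described above can be sketched per frame as follows; the class and method names are assumptions introduced for readability:

```python
# Illustrative sketch of the per-frame flow between the scene sensing device,
# the image processing device, and the HUD display device. All object and
# method names here are hypothetical, not an API defined by the application.
def render_frame(scene_sensor, image_processor, hud_display, user_pose):
    scene_info = scene_sensor.capture()                       # real scene information
    area = image_processor.target_display_area(user_pose)     # from the first spatial pose
    rule = image_processor.coordinate_transform_rule(area)    # coordinate transformation rule described below
    guidance = image_processor.generate_guidance(scene_info)  # driving guidance information
    coords = image_processor.map_to_area(guidance, rule)      # target position coordinates
    hud_display.show(guidance, coords)                        # display on the HUD
```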
  • specifically, the scene sensing device can obtain the position coordinates of the real scene information in the world coordinate system (O-xyz as shown in Figure 2) based on GPS positioning and other location acquisition methods. A world coordinate origin and coordinate axis directions can then be selected based on the traveling direction of the car, and a reference world coordinate system relative to the car can be determined from that origin and those axis directions. The reference world coordinates corresponding to the coordinates of the real scene information can then be obtained in the reference world coordinate system.
  • the method of selecting the origin of the world coordinate and the direction of the coordinate axis can refer to the related technology, which will not be repeated here.
  • the reference world coordinate system can be understood as a coordinate system obtained after rotating and/or translating the world coordinate system.
  • the spatial poses of the scene sensing device and of the driver's eyes in the reference world coordinate system can be acquired.
  • the sensing module transformation matrix M (i.e., the aforementioned first transformation matrix) describes the change from the world coordinate system to the reference world coordinate system, which involves the first rotation matrix R_M (which can also be understood as the total rotation matrix of the scene sensing device) and the first translation vector T_M. The relationship between the sensing module transformation matrix M, the first rotation matrix R_M, and the first translation vector T_M may be expressed in homogeneous form as M = [ R_M  T_M ; 0  1 ], where R_M is composed of the rotation matrices R_Mx, R_My, and R_Mz about the three coordinate axes.
  • R_Mx, R_My, and R_Mz are the rotation matrices of the perception module transformation matrix M about the x-axis, y-axis, and z-axis of the world coordinate system, respectively; the corresponding Euler angles of rotation are α_M, β_M, and γ_M, and (T_Mx, T_My, T_Mz) are the coordinates of the real scene information in the reference world coordinate system.
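  • as a minimal illustration only, such a first transformation matrix could be assembled from the Euler angles and the translation vector as sketched below; the composition order of the axis rotations (z·y·x here) is an assumption, since the application does not reproduce the full expression at this point:

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Compose rotations about the x-, y- and z-axes (angles in radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma),  np.cos(gamma), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx  # assumed composition order

def sensing_module_matrix(alpha, beta, gamma, t):
    """4x4 homogeneous transform M with rotation R_M and translation T_M."""
    M = np.eye(4)
    M[:3, :3] = rotation_from_euler(alpha, beta, gamma)
    M[:3, 3] = t
    return M
```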
  • the HUD display device in this embodiment can be modeled as a reverse virtual camera. It is assumed that the pose of the virtual camera is the same as the pose of the driver's eyes. In this way, the aforementioned second transformation matrix can be calculated based on the relevant parameters of the HUD display device and the pose of the virtual camera.
  • the second transformation matrix C (which can also be understood here as the imaging matrix of the virtual camera) can include the view matrix V and the projection matrix P, and the relationship between the three can be expressed as C = P·V.
  • the view matrix V may comprise the second rotation matrix R_H^T and the second translation vector T_H. The second rotation matrix R_H^T can be understood as the total rotation matrix for converting coordinates in the reference world coordinate system into coordinates in the coordinate system where the HUD display device is located, and the second translation vector T_H can be used to translate the reference world coordinates into that coordinate system. The relationship between the view matrix V, the second rotation matrix R_H^T, and the second translation vector T_H in this embodiment can be expressed in the usual homogeneous view-matrix form, in which V combines R_H^T with a translation term derived from T_H.
  • R_Hx, R_Hy, and R_Hz can be understood as the rotation matrices about the x-axis, y-axis, and z-axis of the reference world coordinate system, respectively; the corresponding Euler angles of rotation are α_H, β_H, and γ_H, and (T_Hx, T_Hy, T_Hz) are the coordinates of the pose of the virtual camera in the reference world coordinate system.
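  • as an illustrative sketch only, one conventional way to form such a view matrix from the virtual camera pose is the inverse-pose form below, where R_H is the camera's rotation and T_H its position in the reference world coordinate system; the exact convention used in the application is an assumption:

```python
import numpy as np

def view_matrix(R_H: np.ndarray, T_H: np.ndarray) -> np.ndarray:
    """Assumed world-to-camera (view) matrix for a virtual camera whose pose in
    the reference world frame is the rotation R_H and the position T_H."""
    V = np.eye(4)
    V[:3, :3] = R_H.T                    # R_H^T rotates reference-world axes into HUD/camera axes
    V[:3, 3] = -R_H.T @ np.asarray(T_H)  # translation term derived from the camera position
    return V
```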
  • the projection matrix P satisfies a relational expression determined by the field-of-view parameters and the near and far clipping distances. The projection matrix includes field-of-view parameters, and the field of view can include a horizontal field of view and a vertical field of view, denoted hFOV and vFOV respectively.
  • n and f can be understood as assumed clipping distances: the distance between the plane where the virtual image O_h is located and the center of the front windshield (as shown in Fig. 2) can be understood as the assumed near clipping plane distance, and the distance between the plane where the virtual image O_w is located and the center of the front windshield of the car can be understood as the assumed far clipping plane distance. It is understandable that this description is only an example, and the near and far clipping plane distances can be adjusted according to actual needs in actual implementation.
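  • for illustration only, a symmetric perspective projection matrix built from hFOV, vFOV and the assumed near/far distances n and f could look like the OpenGL-style sketch below; the exact sign and normalization conventions of the application's projection matrix are not reproduced here and are assumptions:

```python
import numpy as np

def projection_matrix(hfov, vfov, n, f):
    """Assumed OpenGL-style perspective projection from the horizontal/vertical
    fields of view (radians) and the near/far clipping distances n and f."""
    P = np.zeros((4, 4))
    P[0, 0] = 1.0 / np.tan(hfov / 2.0)
    P[1, 1] = 1.0 / np.tan(vfov / 2.0)
    P[2, 2] = (f + n) / (n - f)
    P[2, 3] = 2.0 * f * n / (n - f)
    P[3, 2] = -1.0
    return P
```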
  • in this way, the product of the parameters represented by the first transformation matrix and the second transformation matrix can be obtained as the coordinate transformation rule for mapping the real scene information to the target display area; that is, the coordinate transformation rule can be expressed as the composition C·M = P·V·M.
  • Step S140 Generate driving guidance information based on the real scene information.
  • the driving guide information in this embodiment may include navigation instruction information corresponding to road conditions, pedestrian warning information, and tourist attractions prompt information, etc.
  • the type and specific content of the driving guide information may not be limited.
  • referring to FIG. 3, it shows an example diagram of displaying driving guidance information in a dangerous scene, provided by this embodiment, through the augmented reality-based information display system proposed in this application.
  • the image processing device can convert the real scene information collected by the scene perception device into a HUD virtual image for display on the HUD display device.
  • the specific content displayed is shown in the right image in Figure 3.
  • the scene seen by the driver's eyes may include lane guidance information (that is, the "navigation instructions in the virtual image" shown in FIG. 3) and pedestrian warning information (that is, the "pedestrian prompt box in the virtual image" shown in FIG. 3).
  • the driving guide information can be generated based on the real scene information.
  • the manner of prompting the driving guidance information in this embodiment is not limited; optionally, it may be graphics (such as arrows), pictures, animations, voice, or video, etc., and the driving guidance information for the different prompt manners can then be generated in a corresponding way.
  • the driving guide information in this embodiment may include at least one prompt method.
  • for example, the navigation indicator icon corresponding to the road may be displayed in combination with a voice prompt to the user, so that the user can be given more accurate driving guidance reminders, ensuring driving safety and enhancing the user experience.
  • Step S150 Display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the difference between the position where the HUD displays the real-scene information and the actual position of the real-scene information can be avoided, and the accuracy and reliability of the display can be improved.
  • the present application provides an information display method based on augmented reality, which acquires the real scene information collected by the image sensing device, acquires the target display area determined based on the user's first spatial pose, acquires the coordinate transformation rule for mapping the real scene information to the target display area, generates driving guidance information based on the real scene information, and then displays the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • in this way, the driving guidance information generated from the real scene information is displayed, via the coordinate transformation rule, at the corresponding position of the target display area determined from the user's first spatial pose, so that during driving the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene without repeatedly confirming its accuracy, which reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation and improves the safety and comfort of driving.
  • FIG. 4 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S210 Acquire real scene information collected by the scene sensing device.
  • Step S220 Obtain a target display area determined based on the user's first spatial pose.
  • Step S230 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S240 Generate driving guidance information based on the real scene information.
  • Step S250 Input the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain the coordinate transformation matrix to be processed.
  • the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device may be input into the first transformation matrix, and the result obtained by the output may be used as the coordinate transformation matrix to be processed.
  • for example, suppose the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device are O_w = (x, y, z). After the position coordinates O_w are input into the aforementioned first transformation matrix, we can get O' = M·O_w, where O_w uses homogeneous coordinates and O' can be used as the coordinate transformation matrix to be processed.
  • Step S260 Perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the coordinate transformation matrix to be processed may be transformed according to the aforementioned second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area.
  • the specific implementation process of coordinate transformation can refer to related technologies, which will not be repeated here.
  • in the resulting expression, width and height are the width and height of the HUD image (both in pixels), and O_h = (u, v) can be used as the relative position coordinates of the real scene information in the target display area.
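  • purely as an illustration of steps S250 and S260, the chain from a scene point to HUD pixel coordinates could be sketched as below, assuming the matrices introduced earlier and a conventional perspective divide plus viewport mapping; the exact normalization used in the application is not reproduced and is an assumption:

```python
import numpy as np

def project_to_hud(O_w, M, V, P, width, height):
    """Map a scene point O_w = (x, y, z) to HUD pixel coordinates (u, v).

    M      -- first transformation matrix (sensor coordinates -> reference world coordinates)
    V, P   -- view and projection matrices forming the second transformation matrix C = P @ V
    width, height -- HUD image size in pixels
    """
    O_w_h = np.append(np.asarray(O_w, dtype=float), 1.0)  # homogeneous coordinates
    O_prime = M @ O_w_h                                    # coordinate matrix to be processed (O')
    clip = P @ (V @ O_prime)                               # apply the second transformation matrix
    ndc = clip[:3] / clip[3]                               # perspective divide (assumed convention)
    u = (ndc[0] + 1.0) * 0.5 * width                       # assumed viewport mapping to pixels
    v = (1.0 - ndc[1]) * 0.5 * height
    return u, v
```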
  • Step S270 Display the driving guidance information at the position represented by the relative position coordinates.
  • the driving guide information corresponding to the real scene information may be displayed at the position represented by the relative position coordinates.
  • FIG. 5 shows an example diagram of the display effect of the information display system based on augmented reality provided by this embodiment.
  • a virtual and real scene fusion system (which can be understood as an information display system based on augmented reality in this application) can be built in related modeling software (such as Unity3D, etc.).
  • the virtual and real scene fusion system can include a car, camera 1 (used to simulate the driver's eyes), camera 2 together with a plane that simulates the HUD imaging module (that is, the aforementioned HUD display device), the spatial scene information obtained by an image perception module simulated by a checkerboard (which can be the spatial scene information in different scenes), and the information transformation and image rendering of the image processing device, completed by program scripts.
  • the center position of the bottom of the car can be selected as the coordinate origin
  • the forward direction of the car is the positive direction of the Z axis
  • a right-handed coordinate system is adopted. It is assumed that the driver is sitting in the driving position, the driver's eyes simulated by camera 1 face forward, and the pose of the HUD virtual camera simulated by camera 2 is the same as that of the driver's eyes.
  • the scene sensing device can obtain the checkerboard spatial corner information on the car.
  • the image processing device can draw the corner point to the HUD image space (the lower left corner shown in Figure 5 is the corner point image drawn by the image processing device to the HUD image space), and then send the drawn image to the HUD module for display (As shown in Figure 5, the corner image is sent to the HUD for virtual image display).
  • the driver's perspective scene as shown in FIG. 5 can be obtained.
  • from the virtual and real fusion result shown in FIG. 5, it can be seen that the virtual and real scenes can be accurately fused.
  • the present application provides an information display method based on augmented reality.
  • in this method, the driving guidance information generated based on real scene information is, after coordinate transformation through the first transformation matrix and the second transformation matrix, displayed at the corresponding position of the target display area determined based on the user's first spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy, which reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation and improves the safety and comfort of driving.
  • FIG. 6 is a method flowchart of an augmented reality-based information display method provided by another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S310 Acquire the real scene information collected by the scene sensing device.
  • Step S320 Obtain the target display area re-determined based on the changed spatial pose.
  • during actual driving, the user's posture may change: for example, the sitting posture may change (tilting the body left or right, adjusting the seat height up or down, or adjusting the tilt of the seat back forward or backward), or the user's head may sway as road conditions change. In such cases the user's first spatial pose changes, and if the original HUD display manner is still used to display the driving guidance information corresponding to the real scene information, safety hazards may arise from errors in the display position.
  • therefore, this embodiment detects the user's spatial pose in real time, so that if a change in the first spatial pose is detected, the target display area is re-determined based on the changed spatial pose, thereby ensuring the accuracy of the display position of the driving guidance information corresponding to the real scene information without requiring the user to repeatedly confirm its accuracy, which improves the flexibility of displaying the driving guidance information and thereby enhances the user experience.
  • FIG. 7 shows an example diagram of the target display area determined based on the user's first spatial pose provided in this embodiment.
  • the screen 21 of the front windshield of the car can display the target display area 23 as shown in Fig. 7.
  • when the user's spatial pose changes, the screen 21 may display the target display area 23' as shown in FIG. 8, where the target display area 23' is the target display area re-determined based on the changed spatial pose 22'.
  • the correspondence between the change range of the user's spatial pose and the change range of the target display area may be preset.
  • for example, the change range of the user's spatial pose can be graded as A, B, C, D, and E, and the change ranges of the target display area corresponding to A, B, C, D, and E can be set to 1, 2, 3, 4, and 5 respectively (assuming that a larger value corresponds to a larger change range, with one unit corresponding to a change of 5°). Optionally, assuming the change ranges satisfy A > B > C > D > E, a larger change range of the spatial pose corresponds to a larger change range of the target display area.
  • in this way, the change range of the spatial pose can be determined based on the changed parameters, and the corresponding change range of the target display area can then be determined from that change range.
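  • as a purely hypothetical sketch of such a preset correspondence (the concrete grades, their ordering, and the 5° unit below are illustrative assumptions based on the example above):

```python
# Hypothetical lookup table from a spatial-pose change grade to the change
# range of the target display area, following the example pairing A..E -> 1..5
# with one unit corresponding to 5 degrees. Values are illustrative only.
POSE_CHANGE_TO_AREA_GRADE = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
DEGREES_PER_UNIT = 5.0

def display_area_change_deg(pose_change_grade: str) -> float:
    """Return the change range of the target display area (degrees) for a grade."""
    return POSE_CHANGE_TO_AREA_GRADE[pose_change_grade] * DEGREES_PER_UNIT
```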
  • the display position of the target display area may not be adjusted.
  • Step S330 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • specifically, a second coordinate transformation rule for mapping the real scene information to the re-determined target display area can be acquired; for the specific determination process of the second coordinate transformation rule, reference may be made to the determination principle and process of the aforementioned coordinate transformation rule, which will not be repeated here.
  • Step S340 Generate driving guidance information based on the real scene information.
  • Step S350 Display the driving guidance information at the corresponding position of the newly determined target display area based on the coordinate transformation rule.
  • the driving guidance information may be displayed at the corresponding position of the re-determined target display area based on the second coordinate transformation rule.
  • the target display area in this embodiment can be adjusted according to the change of the user's first spatial pose. For example, if it is detected that the user is in a posture such as lowering the head, the target display area can be displayed at the corresponding position on the central control display; if it is detected that the user looks at the mobile phone more frequently during driving, the target display area can be displayed on the display screen of the mobile phone; or it can be displayed on other screens that can serve as the target display area in the driving scene, for example, the windows on the left and right sides of the driving position.
  • At least one target display area can be set at the same time, so that the driving user can be assisted by other users to drive safely when the driving user is in a fatigue state or a poor vision state.
  • the front windshield 21 can be divided into two areas, including a first display area 211 and a second display area 212.
  • the target display area 232 is the target display area corresponding to the spatial pose of the user 222 of the co-pilot.
  • the content displayed in the target display area 231 can be the same as the content displayed in the target display area 232.
  • the display state of the target display area 232 can be turned off or turned on according to actual needs; for example, the main driver (i.e., the driver 221) may choose to turn on the display function of the target display area 232 when his or her mental state is relatively fatigued.
  • the display position of the target display area 232 in the second display area 212 may change with the change of the spatial pose of the user 222, and the specific change principle may refer to the foregoing corresponding description, which will not be repeated here.
  • the co-pilot user 222 can promptly remind the driver user 221, so as to realize the reminder of the driver through the assistance of other users.
  • for the implementation principle of the display function of the target display area 232, reference may be made to the description in the foregoing embodiments; the details are not described herein again.
  • the information display method based on augmented reality thus displays the driving guidance information generated from real scene information, via the coordinate transformation rules, at the corresponding position of the target display area determined based on the user's first spatial pose, or at the corresponding position of the target display area re-determined based on the user's changed spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy, which reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation and improves the safety and comfort of driving.
  • FIG. 10 is a method flowchart of an augmented reality-based information display method provided by still another embodiment of this application.
  • the method of this embodiment may be executed by an augmented reality-based device for processing real-scene information, and the device may be implemented by hardware and/or software, and the method includes:
  • Step S410 Acquire real scene information collected by the scene sensing device.
  • Step S420 Detect the change of the first spatial posture by acquiring the sitting posture adjustment parameters of the electric seat.
  • the seat of the car in this embodiment may be an electric seat.
  • when the user adjusts the seat, the electric seat can automatically generate adjustment parameters, which can be used as the user's sitting posture adjustment parameters. Then, as one way, the change of the user's first spatial pose can be detected by obtaining the sitting posture adjustment parameters of the electric seat.
  • Step S430 Obtain the sitting posture adjustment parameters of the electric seat.
  • the sitting posture adjustment parameters of the power seat can be obtained by reading the data automatically generated by the power seat, or a camera can be installed and the sitting posture adjustment parameters of the power seat collected through the camera; the specific acquisition method is not limited here.
  • Step S440 Obtain a change vector corresponding to the first spatial posture based on the sitting posture adjustment parameter.
  • the change vector corresponding to the first spatial posture can be obtained based on the sitting posture adjustment parameters.
  • the specific calculation process can be implemented with reference to related technologies, which will not be repeated here.
  • Step S450 Adjust the target display area based on the change vector to obtain a newly determined target display area.
  • the display position of the target display area may be adjusted based on the change vector corresponding to the first spatial pose to obtain the newly determined target display area.
  • for the specific adjustment principle, reference may be made to the description in the foregoing embodiments, which will not be repeated here.
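  • as an illustration only, the mapping from seat adjustment parameters to a pose change vector and the corresponding shift of the target display area could be sketched as follows; the parameter names and conversion factors are hypothetical, since the application leaves the concrete calculation to related technologies:

```python
import numpy as np

def change_vector_from_seat(seat_params: dict) -> np.ndarray:
    """Hypothetical mapping from power-seat adjustment parameters to a change
    vector of the user's first spatial pose (values in metres, illustrative)."""
    return np.array([
        seat_params.get("slide_mm", 0.0) / 1000.0,   # fore/aft seat slide
        seat_params.get("lift_mm", 0.0) / 1000.0,    # seat height adjustment
        seat_params.get("recline_deg", 0.0) * 0.01,  # crude recline-to-offset factor (assumed)
    ])

def adjust_display_area(area_center: np.ndarray, change_vec: np.ndarray) -> np.ndarray:
    """Shift the centre of the target display area by the pose change vector."""
    return np.asarray(area_center, dtype=float) + change_vec
```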
  • Step S460 Obtain a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • Step S470 Generate driving guidance information based on the real scene information.
  • step S470 may be implemented after step S410.
  • the process indicated by the hollow arrow may be the initial process, and the process indicated by the solid arrow may be the real-time, continuous process.
  • for example, the scene perception device can acquire real scene information in real time, use it as the information to be displayed, and send it to the image processing device, which then projects it onto the HUD display screen (that is, the aforementioned target display area) for display, so as to improve the accuracy of the display position of the driving guidance information, reduce user operations, and improve the user experience.
  • Step S480 Display the driving guidance information at the corresponding position of the newly determined target display area based on the coordinate transformation rule.
  • the present application thus provides an information display method based on augmented reality, which detects the change of the user's first spatial pose by acquiring the sitting posture adjustment parameters of the electric seat, and displays the driving guidance information generated from real scene information, via the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene during driving without repeatedly confirming its accuracy, which reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation and improves the safety and comfort of driving.
  • an information display device 500 based on augmented reality provided by an embodiment of the present application can be run on a projection device, and the device 500 includes:
  • the image sensing module 510 is used to obtain real scene information collected by the image sensing device.
  • the coordinate transformation module 520 is configured to obtain a target display area determined based on the user's first spatial pose.
  • the coordinate transformation module 520 may be used to obtain the target display area re-determined based on the changed spatial pose.
  • the change of the first spatial posture can be detected by acquiring the sitting posture adjustment parameters of the electric seat.
  • the coordinate transformation module 520 may be specifically used to obtain the sitting posture adjustment parameters of the electric seat, obtain the change vector corresponding to the first spatial pose based on the sitting posture adjustment parameters, and adjust the target display area based on the change vector to obtain the re-determined target display area.
  • the coordinate transformation module 520 may also be used to obtain a coordinate transformation rule corresponding to the real scene information mapped to the target display area.
  • the coordinate transformation rule may include a first transformation matrix and a second transformation matrix, where the first transformation matrix is used to determine reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device, and The second transformation matrix is used to transform the reference world coordinates into view coordinates in the target display area.
  • the first transformation matrix may include a first rotation matrix and a first translation vector, the first rotation matrix is used to rotate the coordinates of the real scene information collected by the scene sensing device, and the first translation vector is used to The coordinates are translated, and the first transformation matrix determines a reference world coordinate corresponding to the coordinates of the real scene information collected by the scene sensing device based on the first rotation matrix and the first translation vector.
  • the second transformation matrix may include a view matrix and a projection matrix, the projection matrix is used to determine the mapping range for mapping the real scene information to the target display area, and the view matrix is used to determine the display in the mapping range For the relative position of the driving guidance information, the second transformation matrix converts the reference world coordinates into view coordinates in the target display area based on the mapping range and the relative position.
  • the view matrix may include a second rotation matrix and a second translation vector, the second rotation matrix is used to rotate the reference world coordinates, and the second translation vector is used to translate the reference world coordinates;
  • the projection matrix includes a field of view parameter, and the field of view includes a horizontal field of view and a vertical field of view.
  • the product of the parameter represented by the first transformation matrix and the parameter represented by the second transformation matrix may be obtained as a coordinate transformation rule corresponding to the mapping of the real scene information to the target display area.
  • the display module 530 is configured to generate driving guidance information based on the real scene information.
  • the display module 530 may also be used to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • the display module 530 may be specifically configured to input the position coordinates of the real scene information in the coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain the coordinate transformation matrix to be processed; to perform coordinate transformation on the coordinate transformation matrix to be processed according to the second transformation matrix to obtain the relative position coordinates of the real scene information in the target display area; and to display the driving guidance information at the position represented by the relative position coordinates.
  • the driving guidance information may be displayed in the corresponding position of the newly determined target display area based on the coordinate transformation rule corresponding to the changed spatial pose.
  • an embodiment of the present application also provides another projection device 100 that can execute the foregoing augmented reality-based information display method.
  • the projection device 100 includes one or more (only one shown in the figure) processor 102, a memory 104, an image sensing module 11, a coordinate transformation module 12, and a display module 13 coupled with each other.
  • the memory 104 stores a program that can execute the content in the foregoing embodiment
  • the processor 102 can execute the program stored in the memory 104
  • the program stored in the memory 104 may include the apparatus 500 described in the foregoing embodiments.
  • the processor 102 may include one or more processing cores.
  • the processor 102 uses various interfaces and lines to connect the various parts of the entire projection device 100, and performs various functions of the projection device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and calling the data stored in the memory 104.
  • optionally, the processor 102 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA).
  • the processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 102, but may be implemented by a communication chip alone.
  • the memory 104 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 104 may be used to store instructions, programs, codes, code sets or instruction sets.
  • the memory 104 may include a storage program area and a storage data area, where the storage program area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, and a video/image playback function), instructions for implementing the foregoing method embodiments, and so on.
  • the data storage area can also store data (for example, audio and video data) created by the projection device 100 during use.
  • the image sensing module 11 is used to obtain the real scene information collected by the image sensing device; the coordinate transformation module 12 is used to obtain the target display area determined based on the user's first spatial pose; the coordinate transformation module 12 is also used to obtain the coordinate transformation rule corresponding to the mapping of the real scene information to the target display area; the display module 13 is configured to generate driving guidance information based on the real scene information; and the display module 13 is also configured to display the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule.
  • FIG. 14 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 600 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 600 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 600 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 600 has storage space for the program code 610 for executing any method steps in the above-mentioned methods. These program codes can be read from or written into one or more computer program products.
  • the program code 610 may, for example, be compressed in a suitable form.
  • the present application provides an augmented reality-based information display method, system, apparatus, projection device, and storage medium.
  • the real scene information collected by the image sensing device is obtained, then the target display area determined based on the user's first spatial pose is obtained, then the coordinate transformation rule corresponding to the mapping of the real scene information to the target display area is obtained, then driving guidance information is generated based on the real scene information, and the driving guidance information is finally displayed at the corresponding position of the target display area based on the coordinate transformation rule.
  • in this way, the driving guidance information generated from the real scene information is displayed, through the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, so that the user can accurately and conveniently view the virtual driving guidance information corresponding to the driving scene while driving, without repeatedly confirming its accuracy; this reduces the fatigue caused by frequently shifting sight between road conditions and driving guidance information such as navigation, and improves driving safety and comfort.
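To make the division of labour among the modules above easier to follow, here is a minimal, hypothetical Python sketch wiring the image sensing, coordinate transformation and display responsibilities into one update step. None of these class or function names come from the application; they are illustrative assumptions only.

```python
class InformationDisplayApparatus:
    """Toy orchestration of the apparatus described above.

    The injected callables stand in for the image sensing module (510),
    the coordinate transformation module (520) and the display module (530);
    their names and signatures are assumptions, not the application's API.
    """

    def __init__(self, sense, determine_display_area, derive_rule, generate_guidance, draw):
        self.sense = sense                                     # 510: acquire real scene information
        self.determine_display_area = determine_display_area   # 520: area from the first spatial pose
        self.derive_rule = derive_rule                         # 520: coordinate transformation rule
        self.generate_guidance = generate_guidance             # 530: driving guidance information
        self.draw = draw                                       # 530: display at the mapped position

    def step(self, user_pose):
        real_scene = self.sense()
        area = self.determine_display_area(user_pose)
        rule = self.derive_rule(area)
        guidance = self.generate_guidance(real_scene)
        self.draw(guidance, rule, area)
```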

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Navigation (AREA)

Abstract

Embodiments of the present application disclose an augmented reality-based information display method, system, apparatus, projection device and storage medium. The method comprises: acquiring real scene information collected by a scene sensing device; acquiring a target display area determined based on a first spatial pose of a user; acquiring a coordinate transformation rule corresponding to mapping the real scene information to the target display area; generating driving guidance information based on the real scene information; and displaying the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule. In this method, the driving guidance information generated from the real scene information is displayed, by means of the coordinate transformation rule, at the corresponding position of the target display area determined based on the user's first spatial pose, so that the user can accurately and conveniently view virtual driving guidance information corresponding to the driving scene while driving; this reduces the eye-strain caused by frequently shifting sight between road conditions and driving guidance information such as navigation, and improves driving safety and comfort.

Description

基于增强现实的信息显示方法、系统、装置及投影设备 技术领域
本申请涉及坐标变换技术领域,更具体地,涉及一种基于增强现实的信息显示方法、系统、装置、投影设备以及存储介质。
背景技术
HUD(head up display)为平视显示(或称抬头显示),能够将重要信息在视线前方的一块透明玻璃上显示,最早应用于战斗机上,其主要目的是为了让飞行员不需要频繁集中注意力看低头看仪表盘中的数据,从而避免飞行员在观看仪表盘中的数据时,不能观察到飞行前方领域的环境信息。为了减少用户低头看仪表盘或中控台引发的事故,HUD从飞机引入至汽车领域。
然而,现有的HUD显示信息的方式较为单一,以汽车驾驶为例,随着路况、导航以及危险预警等更多辅助驾驶信息的加入,当身高较高的用户驾驶一辆为普通身高(低于较高身高)的用户定制的车辆时,可能会导致身高较高的用户看到该车辆的HUD显示环境信息的位置与环境信息的实际位置存在差异,给用户驾驶带来诸多不便,降低用户体验。
发明内容
鉴于上述问题,本申请提出了一种基于增强现实的信息显示方法、系统、装置、投影设备以及存储介质,以改善上述问题。
第一方面,本申请实施例提供了一种基于增强现实的信息显示方法,所述方法包括:获取场景感知装置采集的实景信息;获取基于用户的第一空间位姿确定的目标显示区域;获取所述实景信息映射到所述目标显示区域对应的坐标变换规则;基于所述实景信息生成驾驶指引信息;基于所述坐标变换规则将所述驾驶指引信息显示在所述目标显示区域的对应位置。
第二方面,本申请实施例提供了一种增强现实的信息显示装置,所述信息显示装置包括图像感知模块、坐标变换模块以及显示模块:所述图像感知模块用于获取图像感知装置采集的实景信息;所述坐标变换模块用于获取基于用户的第一空间位姿确定的目标显示区域;所述坐标变换模块还用于获取所述实景信息映射到所述目标显示区域对应的坐标变换规则;所述显示模块用于基于所述实景信息生成驾驶指引信息;所述显示模块还用于基于所述坐标变换规则将所述驾驶指引信息显示在所述目标显示区域的对应位置。
第三方面,本申请实施例提供了一种基于增强现实的车载信息显示系统,所述系统包括:场景感知装置,用于采集车辆外部环境的实景信息;图像处理装置,用于获取所述场景感知装置采集的实景信息,获取基于用户的第一空间位姿确定的目标显示区域,获取所述实景信息映射到所述目标显示区域 对应的坐标变换规则,基于所述实景信息生成驾驶指引信息,基于所述坐标变换规则生成所述驾驶指引信息显示在所述目标显示区域的目标位置坐标;HUD显示装置,用于将所述驾驶指引信息展示到所述目标显示区域的所述目标位置坐标处。
第四方面,本申请实施例提供了一种投影设备,包括数据采集模块、投影模块、一个或多个处理器以及存储器;一个或多个程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行上述第一方面所述的方法。
第五方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有程序代码,其中,在所述程序代码运行时执行上述第一方面所述的方法。
本申请提供的一种基于增强现实的信息显示方法、系统、装置、投影设备以及存储介质,通过获取图像感知装置采集的实景信息,继而获取基于用户的第一空间位姿确定的目标显示区域,再获取实景信息映射到目标显示区域对应的坐标变换规则,再基于实景信息生成驾驶指引信息,然后基于坐标变换规则将驾驶指引信息显示在目标显示区域的对应位置。从而通过上述方式实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了本申请一实施例提出的一种基于增强现实的信息显示方法的方法流程图。
图2示出了适用于本实施例提供的基于增强现实的信息显示方法的基于增强现实的车载信息显示系统的结构示例图。
图3示出了本实施例提供的危险场景下通过本申请提出的基于增强现实的车载信息显示系统显示驾驶指引信息的示例图。
图4示出了本申请另一实施例提出的一种基于增强现实的信息显示方法的方法流程图。
图5示出了本实施例提供的基于增强现实的车载信息显示系统的显示效果示例图。
图6示出了本申请又一实施例提出的一种基于增强现实的信息显示方法的方法流程图。
图7示出了本实施例提供的基于用户的第一空间位姿确定的目标显示区域的一示例图。
图8示出了本实施例提供的基于用户的第一空间位姿确定的目标显示区域的另一示例图。
图9示出了本实施例提供的基于用户的空间位姿确定的目标显示区域的示例图。
图10示出了本申请再一实施例提出的一种基于增强现实的信息显示方法的方法流程图。
图11示出了本实施例中提出的基于增强显示的信息显示方法的处理过程示例图。
图12示出了本申请实施例提出的一种基于增强现实的信息显示装置的结构框图。
图13示出了本申请的用于执行根据本申请实施例的一种基于增强现实的信息显示方法的投影设备的结构框图。
图14示出了本申请实施例的用于保存或者携带实现根据本申请实施例的一种基于增强现实的信息显示方法的程序代码的存储单元。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
HUD(head up display)为平视显示(或称抬头显示),能够将重要信息在视线前方的一块透明玻璃上显示,最早应用于战斗机上,其主要目的是为了让飞行员不需要频繁集中注意力看低头看仪表盘中的数据,从而避免飞行员在观看仪表盘中的数据时,不能观察到飞行前方领域的环境信息。为了减少用户低头看仪表盘或中控台引发的事故,HUD从飞机引入至汽车领域。
HUD主要分后装(也被称为Combine HUD,C型HUD)和前装(也被称为Windshield HUD,W型HUD)两种。其中,前装HUD将挡风玻璃作为组合器,把驾驶员需要的内容通过光学系统投射至前挡风玻璃,人眼通过挡风玻璃可以在平视范围内同时观察到HUD虚像和外界景象,提高行车安全性和驾驶舒适度。但是,一些现有HUD设备仅在驾驶员视线前方显示虚拟信息,并没有与真实环境融合。随着路况、导航、危险预警等更多辅助驾驶信息的加入,这种虚拟内容与真实场景的不匹配反而会导致驾驶员注意力的分散。
增强现实(Augmented Reality,AR)是一种将虚拟信息与真实世界巧妙融合的技术。
作为一种方式,伴随着自动驾驶和增强现实、混合现实技术的发展,可以将AR技术引入HUD领域,AR-HUD通过AR技术与前装HUD的结合能够解决传统HUD虚拟信息与实际场景分离、不匹配的问题,在丰富HUD显 示内容的同时提高驾驶的安全性与舒适度。然而,现有的HUD显示信息的方式较为单一,以汽车驾驶为例,随着路况、导航以及危险预警等更多辅助驾驶信息的加入,当身高较高的用户驾驶一辆为普通身高(低于较高身高)的用户定制的车辆时,可能会导致身高较高的用户看到该车辆的HUD显示环境信息的位置与环境信息的实际位置存在差异,给用户驾驶带来诸多不便,降低用户体验。
因此,为了改善上述问题,发明人提出了本申请提供的可以通过获取图像感知装置采集的实景信息,继而获取基于用户的第一空间位姿确定的目标显示区域,再获取实景信息映射到目标显示区域对应的坐标变换规则,再基于实景信息生成驾驶指引信息,然后基于坐标变换规则将驾驶指引信息显示在目标显示区域的对应位置,实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
下面将结合附图具体描述本申请的各实施例。
请参阅图1,为本申请一实施例提供的一种基于增强现实的信息显示方法的方法流程图。本实施例的方法可以由基于增强现实的处理实景信息的装置来执行,该装置可以通过硬件和/或软件的方式实现,所述方法包括:
步骤S110:获取场景感知装置采集的实景信息。
其中,本申请实施例中的实景信息可以为与多种场景对应的实景信息。可选的,多种场景可以包括但不限于驾驶场景,旅游场景以及户外活动场景等。例如,若为驾驶场景,实景信息可以包括车道、标识牌、危险行人(例如盲人、独自行走的老人、孕妇或儿童等弱势群体)与车辆等;若为旅游场景,实景信息可以包括旅游地标识、旅游路线、旅游景点信息以及旅游景点天气信息等;若为户外活动场景,实景信息可以包括当前位置信息以及附近便利店信息等。
可选的,场景感知装置可以包括激光、红外雷达等感应装置,也可以包括相机(包括单目摄像机、双目摄像机以及RGB-D相机等)等图像采集装置。作为一种方式,可以通过场景感知装置获取与当前的场景对应的实景信息。例如,假设当前的场景为驾驶场景,场景感知装置为相机,相机可以安装在汽车上(可选的,安装位置可以根据汽车的款式结构或者是实际需要进行调整),使得相机可以实时获取到与驾驶相关的实景信息。其中,对于场景感知装置(包括激光、红外雷达或是相机)采集实景信息的采集原理及实现可以参考相关技术,在此不再赘述。
步骤S120:获取基于用户的第一空间位姿确定的目标显示区域。
可选的,用户的第一空间位姿可以为用户处于驾驶状态下的坐姿,或者为调节好座椅(这里可以为当前用户首次调节好座椅)后的坐姿。可以理解的是,用户的坐姿不同,对应的空间位姿不同,作为一种方式,可以将用户调节好座椅后的坐姿状态作为该用户的第一空间位姿。
本实施例中,目标显示区域为用于显示与实景信息对应的虚拟图像信息的区域。以驾驶场景为例,目标显示区域可以为汽车的挡风玻璃上用于显示投射的与实景信息对应的虚拟图像信息的区域。可选的,同一用户的不同空间位姿对应的目标显示区域可以不同,不同用户的空间位姿对应的目标显示区域可以不同。
为了消除显示与实景信息对应的虚拟图像信息的位置与实景信息的实际位置的显示差异,作为一种方式,可以获取基于用户的第一空间位姿确定的目标显示区域,以便可以在目标显示区域显示与实景信息对应的虚拟图像信息,降低前述显示差异,进而提升显示与实景信息对应的虚拟图像信息的显示位置的准确性。
步骤S130:获取所述实景信息映射到所述目标显示区域对应的坐标变换规则。
其中,坐标变换规则可以用于将实景信息的坐标映射为目标显示区域的对应坐标。作为一种方式,在获取了实景信息以及目标显示区域的情况下,可以获取实景信息映射到目标显示区域对应的坐标变换规则,以便后续可以基于坐标变换规则将与实景信息对应的驾驶指引信息准确的显示在目标显示区域的对应位置。
可选的,坐标变换规则可以包括第一变换矩阵以及第二变换矩阵。其中,第一变换矩阵可以用于确定与场景感知装置采集的实景信息的坐标对应的参考世界坐标,第二变换矩阵可以用于将参考世界坐标转换为目标显示区域内的视图坐标。其中,参考世界坐标可以理解为实景信息在建立的与场景感知装置对应的坐标系下的相对位置坐标,可选的,本实施例中的参考世界坐标可以理解为与汽车相对静止的世界坐标。视图坐标可以理解为参考世界坐标在目标显示区域对应的坐标系下的相对位置坐标。
作为一种实现方式,可以获取第一变换矩阵以及第二变换矩阵,继而将第一变换矩阵所表征的参数与第二变换矩阵所表征的参数的乘积作为实景信息映射到目标显示区域对应的坐标变换规则。
可选的,第一变换矩阵可以包括第一旋转矩阵以及第一平移向量。其中,第一旋转矩阵可以用于对场景感知装置采集的实景信息的坐标进行旋转,第一平移向量可以用于对该坐标进行平移。作为一种方式,可以基于第一旋转矩阵以及第一平移向量确定与场景感知装置采集的实景信息的坐标对应的参考世界坐标。
可选的,第二变换矩阵可以包括视图矩阵以及投影矩阵。其中,投影矩阵可以用于确定将实景信息映射到目标显示区域的映射范围,视图矩阵可以用于确定该映射范围内显示驾驶指引信息(可以理解为前述的与实景信息对应的虚拟图像信息)的相对位置。作为一种方式,可以基于该映射范围以及该相对位置将参考世界坐标转换为目标显示区域内的视图坐标。其中,视图矩阵可以包括第二旋转矩阵以及第二平移向量。其中,第二旋转矩阵可以用于对参考世界坐标进行旋转,第二平移向量可以用于对参考世界坐标进行平移;投影矩阵可以包括视场角参数,视场角可以包括水平视场角和垂直视场角。
下面以驾驶场景为例,对本实施例进行示例性的说明:
请参阅图2,为适用于本实施例提供的基于增强现实的信息显示方法的基于增强现实的车载信息显示系统的结构示例图。如图2所示,该基于增强现实的车载信息显示系统可以包括场景感知装置、图像处理装置以及HUD显示装置。其中,场景感知装置可以用于采集车辆外部环境的实景信息。图像处理装置可以用于获取场景感知装置采集的实景信息,获取基于用户的第一空间位姿确定的目标显示区域,获取实景信息映射到目标显示区域对应的坐标变换规则,基于实景信息生成驾驶指引信息,基于坐标变换规则生成驾驶指引信息显示在目标显示区域的目标位置坐标。HUD显示装置可以用于将驾驶指引信息展示到目标显示区域的目标位置坐标处。
其中,图像处理装置可以是车机系统的处理器芯片,或者是独立的车载计算机系统的处理芯片,或者是集成在场景感知装置(例如激光雷达)中的处理器芯片等,在此不作限定。
在一种实现方式中,该车载信息显示系统可以包括汽车、驾驶员、场景感知装置、图像处理装置以及具备AR-HUD功能的HUD显示装置。作为一种实施方式,场景感知装置可以安装在汽车上并能获取到与驾驶相关的场景信息(也可以理解为前述的实景信息),驾驶员坐在汽车的驾驶位,HUD显示装置安装在汽车前挡风玻璃,HUD显示装置的位置可以进行调节,以使得驾驶员的眼睛能够看到与驾驶场景信息对应的整个虚像,图像处理装置可以将场景感知装置采集到的实景信息转换为与真实场景的实景信息相融合的图像,并发送给HUD显示装置进行显示。
作为一种方式,场景感知装置在采集得到实景信息之后,可以基于GPS定位等位置获取方式获取实景信息在世界坐标系(如图2所示的O-xyz)下的位置坐标,继而可以基于汽车的行进方向选定世界坐标原点及坐标轴方向,根据世界坐标原点以及坐标轴方向确定与汽车相对静止的参考世界坐标系,通过确定与汽车相对静止的参考世界坐标系,可以获得参考世界坐标系下与实景信息的坐标对应的参考世界坐标。其中,世界坐标原点以及坐标轴方向的选取方式可以参考相关技术,在此不再赘述。需要说明的是,参考世界坐标系可以理解为将世界坐标系进行旋转和/或平移之后得到的坐标系。
例如,作为一种实施方式,在确定了与汽车相对静止的参考世界坐标系的情况下,可以获取场景感知装置以及驾驶员眼睛在参考世界坐标系下的空间位姿,在这种方式下,可以根据场景感知装置在参考世界坐标系下的空间位姿计算感知模块变换矩阵M(即前述的第一变换矩阵)。示例性的,可以假设将世界坐标系变化为参考世界坐标系的变化过程包括第一旋转矩阵(也可以理解为场景感知装置的总旋转矩阵)R M以及第一平移向量T M,可选的,感知模块变换矩阵M与第一旋转矩阵R M以及第一平移向量T M之间的关系可以表示为:
Figure PCTCN2021082944-appb-000001
其中,
Figure PCTCN2021082944-appb-000002
Figure PCTCN2021082944-appb-000003
其中,R Mx、R My、R Mz分别为感知模块变换矩阵M绕世界坐标系的x轴、y轴、z轴的旋转矩阵,旋转的欧拉角度依次为α M、β M、γ M,(T Mx,T My,T Mz)为实景信息在参考世界坐标系下的坐标。
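As a numerical illustration of the first transformation matrix, the sketch below (not part of the original application) assembles a perception-module transform M from a rotation described by Euler angles about the x, y and z axes and a translation T_M. The composition order of the three axis rotations, the use of numpy, and the sample pose values are assumptions made purely for illustration.

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Rotations about the x, y and z axes by Euler angles (radians).
    The composition order Rz @ Ry @ Rx is an illustrative assumption."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [ 0,            1, 0           ],
                   [-np.sin(beta), 0, np.cos(beta)]])
    rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma),  np.cos(gamma), 0],
                   [0,              0,             1]])
    return rz @ ry @ rx

def homogeneous(rotation, translation):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 homogeneous matrix."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

# Hypothetical pose of the scene sensing device in the reference world frame.
R_M = rotation_from_euler(np.radians(2.0), np.radians(0.0), np.radians(1.5))
T_M = np.array([0.0, 1.4, 2.1])   # metres, an assumed mounting position
M = homogeneous(R_M, T_M)
```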
可选的,本实施例中的HUD显示装置可以为逆向的虚拟相机模型,假设虚拟相机的位姿与驾驶员眼睛的位姿相同,在这种方式下,可以基于HUD显示装置的相关参数以及虚拟相机的位姿计算前述的第二变换矩阵。第二变换矩阵(这里也可以理解为虚拟相机的成像矩阵)C可以包括视图矩阵V以及投影矩阵P,三者之间的关系可以表示为:
C=PV。
其中,视图矩阵V可以包括第二旋转矩阵R H T以及第二平移向量T H,第二旋转矩阵R H T可以理解为将实景信息在参考世界坐标系下的坐标转换为HUD显示装置所在坐标系下的坐标的总旋转矩阵,可选的,第二平移向量T H可以用于对参考世界坐标进行平移或者对HUD显示装置所在坐标系下的坐标进行平移。可选的,本实施例中的视图矩阵V与第二旋转矩阵R H T以及第二平移向量T H之间的关系可以表示为:
Figure PCTCN2021082944-appb-000004
其中,
Figure PCTCN2021082944-appb-000005
Figure PCTCN2021082944-appb-000006
其中,R Hx、R Hy、R Hz可以理解为分别绕参考世界坐标系的x轴、y轴、z轴的旋转矩阵,旋转的欧拉角度依次为α H、β H、γ H,(T Hx,T Hy,T Hz)为虚拟相机的位姿在参考世界坐标系下的坐标。
可选的,投影矩阵P满足的关系式可以为:
Figure PCTCN2021082944-appb-000007
可选的,投影矩阵包括视场角参数,视场角可以包括水平视场角和垂直视场角,hFOV、vFOV可以分别表示水平视场角以及垂直视场角,n、f可以理解为假定的远近裁剪面的距离,例如,如图2所示的虚像O h所在的平面距离车前挡风玻璃的中心的距离可以理解为假定的近裁剪面的距离,虚像Ow所在的平面距离车前挡风玻璃的中心的距离可以理解为假定的远裁剪面的距离。可以理解的是,此处仅作为示例进行说明,实际实现时可以根据实际需要调整远近裁剪面的距离。
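To illustrate how a projection matrix parameterised by a horizontal field of view hFOV, a vertical field of view vFOV and near/far clipping distances n, f could be formed, the following sketch uses one common OpenGL-style perspective convention. The exact matrix used in the application is the one given by the image formula above and may differ from this convention; the HUD parameter values are invented for the example.

```python
import numpy as np

def perspective(h_fov_deg, v_fov_deg, n, f):
    """Perspective projection from horizontal/vertical fields of view and
    near/far clipping distances. An OpenGL-style clip-space convention is assumed."""
    sx = 1.0 / np.tan(np.radians(h_fov_deg) / 2.0)   # horizontal scaling
    sy = 1.0 / np.tan(np.radians(v_fov_deg) / 2.0)   # vertical scaling
    return np.array([
        [sx,  0.0,  0.0,                   0.0],
        [0.0, sy,   0.0,                   0.0],
        [0.0, 0.0, -(f + n) / (f - n),    -2.0 * f * n / (f - n)],
        [0.0, 0.0, -1.0,                   0.0],
    ])

# Hypothetical HUD parameters: a 10° x 4° virtual image, near/far planes at 2.5 m and 20 m.
P = perspective(10.0, 4.0, 2.5, 20.0)
```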
作为一种方式,在获取了第一变换矩阵以及第二变换矩阵的情况下,可以将第一变换矩阵所表征的参数与第二变换矩阵所表征的参数的乘积获取作为实景信息映射到目标显示区域对应的坐标变换规则,即此种方式下可以将坐标变换规则表示为:
F=CM=PVM。
步骤S140:基于所述实景信息生成驾驶指引信息。
本实施例中的驾驶指引信息可以包括与路况对应的导航指示信息,行人预警信息以及旅游景点提示信息等,驾驶指引信息的种类及具体内容可以不作限定。例如,如图3所示,示出了本实施例提供的危险场景下通过本申请提出的基于增强现实的信息显示系统显示驾驶指引信息的示例图,如图3所示,图像处理装置可以将场景感知装置采集的实景信息转换为HUD虚像在HUD显示装置进行显示,显示的具体内容如图3的右侧图像所示,此种情况下,驾驶员眼睛看到的场景可以包括车道指引信息(即图3中所示的“虚像中的导航指示”)以及行人预警信息(即图3中所示的“虚像中的行人提示框”)。
作为一种方式,在获取了实景信息的情况下,可以基于实景信息生成驾驶指引信息,可选的,本实施例中驾驶指引信息的提示方式可以不做限定,例如,可以是以图标(例如箭头)、图片、动画、语音或是视频等方式进行提示,那么对于不同提示方式的驾驶指引信息,可以以对应的方式进行生成。可选的,对于基于实景信息生成与每一种提示方式的驾驶指引信息的生成原理可以参考相关技术,在此不再赘述。可选的,本实施例中的驾驶指引信息可以包括至少一种提示方式,例如,可以在显示与道路对应的导航指示图标的基础上,结合语音提示用户,以便可以更加准确的对用户进行驾驶指引提示,保障行车安全,提升用户体验。
步骤S150:基于所述坐标变换规则将所述驾驶指引信息显示在所述目标显示区域的对应位置。
可选的,通过基于坐标变换规则将驾驶指引信息显示在目标显示区域的对应位置,可以避免HUD显示实景信息的位置与实景信息的实际位置存在差异, 提升显示的准确性与可靠性。
本申请提供的一种基于增强现实的信息显示方法,通过获取图像感知装置采集的实景信息,继而获取基于用户的第一空间位姿确定的目标显示区域,再获取实景信息映射到目标显示区域对应的坐标变换规则,再基于实景信息生成驾驶指引信息,然后基于坐标变换规则将驾驶指引信息显示在目标显示区域的对应位置。实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
请参阅图4,为本申请另一实施例提供的一种基于增强现实的信息显示方法的方法流程图。本实施例的方法可以由基于增强现实的处理实景信息的装置来执行,该装置可以通过硬件和/或软件的方式实现,所述方法包括:
步骤S210:获取场景感知装置采集的实景信息。
步骤S220:获取基于用户的第一空间位姿确定的目标显示区域。
步骤S230:获取所述实景信息映射到所述目标显示区域对应的坐标变换规则。
步骤S240:基于所述实景信息生成驾驶指引信息。
步骤S250:将所述实景信息在所述场景感知装置对应的坐标系中的位置坐标输入所述第一变换矩阵,得到待处理坐标变换矩阵。
作为一种方式,可以将实景信息在场景感知装置对应的坐标系中的位置坐标输入第一变换矩阵,将输出得到的结果作为待处理坐标变换矩阵。例如,在一个具体的应用场景中,假设实景信息在场景感知装置对应的坐标系中的位置坐标为O w(x,y,z),可选的,在将位置坐标O w(x,y,z)输入到前述第一变换矩阵之后,可以得到:
Figure PCTCN2021082944-appb-000008
Figure PCTCN2021082944-appb-000009
其中,O'可以作为待处理坐标变换矩阵,O W使用了齐次坐标。
步骤S260:将所述待处理坐标变换矩阵按照所述第二变换矩阵进行坐标变换,得到所述实景信息在所述目标显示区域内的相对位置坐标。
作为一种方式,可以将待处理坐标变换矩阵按照前述的第二变换矩阵进行坐标变换,得到实景信息在目标显示区域内的相对位置坐标。可选的,坐标变换的具体实现过程可以参考相关技术,在此不再赘述。
例如,以上述示例为例,假设HUD显示装置的目标显示区域的HUD图像 中,与位置坐标O w(x,y,z)相对应的位置坐标表示为O h(u,v),那么在将待处理坐标变换矩阵O'按照第二变换矩阵进行坐标变换后,可以得到:
Figure PCTCN2021082944-appb-000010
其中,width和height为HUD图像的宽度和高度,单位可以均为像素。在这种方式下,可以将O h(u,v)作为实景信息在目标显示区域内的相对位置坐标。
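Putting the pieces together, a point O_w in the sensing device's frame can be carried through M, then through the virtual camera's view and projection matrices, and finally scaled by the HUD image width and height to obtain pixel coordinates (u, v). The sketch below continues the earlier sketches (reusing rotation_from_euler, homogeneous, M and P); the eye pose, the camera-axis convention and the normalisation from clip space to pixels are illustrative assumptions rather than the application's exact formulas.

```python
import numpy as np

# Hypothetical virtual-camera (eye) pose; the 180° yaw maps the vehicle's forward +z
# onto the camera's -z axis under the assumed OpenGL-style convention.
V = homogeneous(rotation_from_euler(0.0, np.pi, 0.0), np.array([0.0, -1.2, 0.5]))
F = P @ V @ M                                   # total transform F = C M = P V M

def world_to_hud_pixel(o_w, F, width, height):
    """Map a 3D point in the scene sensing device frame to HUD image pixels."""
    clip = F @ np.append(o_w, 1.0)               # homogeneous coordinates
    ndc = clip[:3] / clip[3]                     # perspective divide under the assumed convention
    u = (ndc[0] * 0.5 + 0.5) * width             # scale to pixel columns
    v = (1.0 - (ndc[1] * 0.5 + 0.5)) * height    # flip y so the origin is top-left (assumption)
    return u, v

u, v = world_to_hud_pixel(np.array([0.5, 0.0, 8.0]), F, width=1280, height=480)
```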
步骤S270:在所述相对位置坐标所表征的位置处显示所述驾驶指引信息。
可选的,在这种方式下,可以在相对位置坐标所表征的位置处显示与实景信息对应的驾驶指引信息。
下面以一个具体的示例对本实施例进行说明:
请参阅图5,示出了本实施例提供的基于增强现实的信息显示系统的显示效果示例图。如图5所示,可以在相关的建模软件(例如Unity3D等)中搭建虚实场景融合系统(可以理解为本申请中的基于增强现实的信息显示系统),该虚实场景融合系统可以包括汽车,摄像机1(用于模拟驾驶员的眼睛)、摄像机2和平面一起模拟的HUD成像模块(即前述的HUD显示装置)、由棋盘格模拟的图像感知模块所获取的空间场景信息(可以是不同场景下的空间场景信息)、以及由程序脚本完成的图像处理装置的信息变换和图像绘制、渲染。
可选的,在模拟的该虚实场景融合系统中,可以选定汽车底部的中心位置为坐标原点,以汽车前向为Z轴正方向,采用右手坐标系,假定驾驶员坐在驾驶位,摄像头1模拟的驾驶员的眼睛朝向前方,摄像头2模拟的HUD虚拟相机的位姿与驾驶员眼睛的位姿相同,在这种方式下,场景感知装置可以在汽车上获取棋盘格的空间角点信息,图像处理装置可以把角点绘制到HUD图像空间(如图5所示的左下角即为图像处理装置绘制到HUD图像空间的角点图像),然后将绘制后的图像发送给HUD模块进行显示(如图5所示的将角点图像输送给HUD进行虚像显示)。通过这种方式,可以得到如图5所示的驾驶员视角场景,通过图5所示的虚实融合结果放大图,可以看出,虚实场景可以准确融合。
本申请提供的一种基于增强现实的信息显示方法,通过将基于实景信息生成得到的驾驶指引信息通过第一变换矩阵以及第二变换矩阵分别进行坐标变换后显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
请参阅图6,为本申请又一实施例提供的一种基于增强现实的信息显示方法的方法流程图。本实施例的方法可以由基于增强现实的处理实景信息的装置来执行,该装置可以通过硬件和/或软件的方式实现,所述方法包括:
步骤S310:获取场景感知装置采集的实景信息。
步骤S320:获取基于变化后的空间位姿重新确定的目标显示区域。
可以理解的是,在用户驾驶的过程中,用户的姿态会发生变化,例如用户的坐姿会发生变化(包括左右倾斜身体或上下调节座椅的高度或者是前后调节座椅靠背的倾斜程度等),或者用户的头部会随着行进路况的变化而晃动,那么在这种方式下,用户的第一空间位姿可以发生变化,而若仍旧采用原来的HUD显示方式显示与实景信息对应的驾驶指引信息,可能会因位置显示误差造成安全隐患。
作为一种改善这一问题的方式,本实施例采取实时检测用户的空间位姿,以便于若检测到第一空间位姿发生变化,则基于变化后的空间位姿重新确定目标显示区域,从而可以保证与实景信息对应的驾驶指引信息的显示位置的准确性,而不需要用户反复确认驾驶指引信息的准确性,提升了显示驾驶指引信息的灵活性,进而提升用户体验。
例如,作为一种实施方式,请参阅图7,示出了本实施例提供的基于用户的第一空间位姿确定的目标显示区域的一示例图。如图7所示,若用户22当前的坐姿对应的空间位姿为第一空间位姿,在这种方式下,汽车的前置挡风玻璃的屏幕21上可以显示如图7所示的目标显示区域23。可选的,若该用户的空间位姿由22变化为如图8所示的位姿22’时,屏幕21上可以显示如图8所示的目标显示区域23’,目标显示区域23’为基于变化后的空间位姿22’重新确定的目标显示区域。
可选的,本实施例中可以预先设置用户的空间位姿的变化幅度与目标显示区域的变化范围的对应关系。例如,可以设置用户的空间位姿包括变化幅度A、B、C、D以及E,变化幅度A、B、C、D以及E对应的目标显示区域的变化范围可以设置为1、2、3、4、5(假设数值越大对应的变化范围越大,单位数值对应的变化范围为5°),可选的,假设变化幅度A>B>C>D>E,变化幅度越大,对应的变化范围越大。那么在这种方式下,若检测到用户的第一空间位姿发生变化,可以基于变化的参数确定空间位姿的变化幅度,进而根据变化幅度确定对应的目标显示区域的变化范围。
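As a toy illustration of the grade-to-range mapping described above (grades A–E mapped to adjustment ranges 1–5, with each unit corresponding to 5°), one might write something like the following. The thresholds used to classify a pose change into a grade are invented for the example and are not specified in the application.

```python
# Hypothetical mapping from pose-change grade to display-area adjustment range.
GRADE_TO_RANGE = {"E": 1, "D": 2, "C": 3, "B": 4, "A": 5}   # larger grade -> larger range
DEGREES_PER_UNIT = 5.0

def classify_change(delta_deg):
    """Classify the magnitude of a pose change (in degrees) into a grade.
    The thresholds below are illustrative assumptions."""
    for grade, threshold in (("A", 20.0), ("B", 15.0), ("C", 10.0), ("D", 5.0), ("E", 2.0)):
        if delta_deg >= threshold:
            return grade
    return None   # change too small: leave the target display area unchanged

def display_area_shift(delta_deg):
    grade = classify_change(delta_deg)
    if grade is None:
        return 0.0
    return GRADE_TO_RANGE[grade] * DEGREES_PER_UNIT

print(display_area_shift(12.0))   # -> 15.0 degrees of adjustment for a grade-C change
```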
可以理解的是,假设用户的第一空间位姿的变化范围很小,即没有达到任意对应的变化幅度,在这种方式下,可以不调节目标显示区域的显示位置。
步骤S330:获取所述实景信息映射到所述目标显示区域对应的坐标变换规则。
可选的,若用户的第一空间位姿发生了变化,在上述方式下,可以获取实景信息映射到重新确定的目标显示区域对应的第二坐标变换规则,其中,第二坐标变换规则的具体确定过程可以参照前述坐标变换规则的确定原理以及确定过程,在此不再赘述。
步骤S340:基于所述实景信息生成驾驶指引信息。
步骤S350:基于所述坐标变换规则将所述驾驶指引信息显示在所述重新确定的目标显示区域的对应位置。
可选的,在基于变化后的空间位姿重新确定目标显示区域的基础上,可以 基于第二坐标变化规则将驾驶指引信息显示在重新确定的目标显示区域的对应位置。
作为一种实施方式,本实施例中的目标显示区域可以根据用户的第一空间位姿的变化而进行调整。例如,若检测到用户存在低头等姿态时,目标显示区域可以显示在中控显示屏上的对应位置;若检测到用户在驾驶的过程中看手机的频率较高,可以将目标显示区域显示到手机的显示屏上;或者是驾驶场景下其他可以用于作为目标显示区域的屏幕,例如,位于驾驶位左右两侧的车窗等。
作为又一种实施方式,本实施例中可以同时设置至少一处目标显示区域,以使得驾驶用户在疲劳状态或者是视觉不佳状态下可以由其他用户辅助其进行安全驾驶。例如,如图9所示,可以将车前挡风玻璃21划分为两个区域,包括第一显示区域211以及第二显示区域212,在这种方式下,假设驾驶员221的空间位姿对应的目标显示区域为231,那么目标显示区域232为与副驾的用户222的空间位姿对应的目标显示区域。其中,目标显示区域231内所显示的内容与目标显示区域232内显示的内容可以相同,可选的,目标显示区域232的显示状态可以根据实际需要进行关闭或者开启,例如,在主驾(即驾驶员221)精神状态较为疲惫时可以选择开启目标显示区域232的显示功能。目标显示区域232在第二显示区域212内的显示位置可以随着用户222的空间位姿的变化而变化,具体变化原理可以参照前述对应描述,在此不再赘述。
可选的,若目标显示区域232的显示状态处于开启状态,在驾驶的过程中,若发现较为危险的驾驶信息,副驾用户222可以及时提醒驾驶员用户221,从而实现通过其他用户辅助提醒驾驶员的方式,实现多重提升驾驶的安全性与舒适性,同时减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升驾驶过程中用户之间的互动,提升用户友好体验。可选的,若目标显示区域232的显示功能的实现原理可以参照前述实施例中的描述,在此不再赘述。
本申请提供的一种基于增强现实的信息显示方法,实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,或者显示在基于用户变化后的空间位姿重新确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
请参阅图10,为本申请再一实施例提供的一种基于增强现实的信息显示方法的方法流程图。本实施例的方法可以由基于增强现实的处理实景信息的装置来执行,该装置可以通过硬件和/或软件的方式实现,所述方法包括:
步骤S410:获取场景感知装置采集的实景信息。
步骤S420:通过获取电动座椅的坐姿调节参数检测所述第一空间位姿的变化。
可选的,本实施例中汽车的座椅可以为电动座椅,在这种方式下,若用户调节了电动座椅的位置,电动座椅可以自动生成调节参数,可以将该参数作为用户的坐姿调节参数。那么,作为一种方式,可以通过获取电动座椅的坐姿调 节参数检测用户的第一空间位姿的变化。
步骤S430:获取所述电动座椅的坐姿调节参数。
可选的,可以通过读取电动座椅自动生成的数据获取电动座椅的坐姿调节参数,或者可以安装摄像头,通过摄像头采集电动座椅的坐姿调节参数,可选的,具体获取方式可以不作限定。
步骤S440:基于所述坐姿调节参数获取所述第一空间位姿对应的变化向量。
在获取了电动座椅的坐姿调节参数后,可以基于坐姿调节参数获取与第一空间位姿对应的变化向量。可选的,具体计算过程可以参考相关技术实现,在此不再赘述。
步骤S450:基于所述变化向量调整所述目标显示区域,得到重新确定的目标显示区域。
作为一种方式,可以基于与第一空间位姿对应的变化向量调整目标显示区域的显示位置,以得到重新确定的目标显示区域。可选的,具体调整原理可以参照前述实施例中的描述,在此不再赘述。
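To connect the power-seat adjustment parameters to the re-determined display area, a small sketch might derive a change vector for the eye position and then feed it back into the view transform. All parameter names, the linear model and the recline-to-head-shift factor here are invented assumptions, shown only to make the data flow concrete.

```python
import numpy as np

def change_vector_from_seat(delta_height_m, delta_slide_m, delta_recline_deg):
    """Turn power-seat adjustment parameters into an approximate eye-position change.
    The linear model and the 0.6 m torso-length factor are illustrative assumptions."""
    head_shift_from_recline = 0.6 * np.sin(np.radians(delta_recline_deg))
    return np.array([0.0,
                     delta_height_m,
                     -delta_slide_m - head_shift_from_recline])

# Example: seat raised 3 cm and reclined by 5 degrees.
delta_eye = change_vector_from_seat(0.03, 0.0, 5.0)
new_eye_position = np.array([0.0, 1.2, -0.5]) + delta_eye   # previous eye position is hypothetical
# The target display area and the view matrix V would then be re-derived from
# new_eye_position, and the total transform F recomputed as in the earlier sketches.
```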
步骤S460:获取所述实景信息映射到所述目标显示区域对应的坐标变换规则。
步骤S470:基于所述实景信息生成驾驶指引信息。
可选的,本实施例中,各个步骤之间的先后顺序可以不作限定,例如,步骤S470可以在步骤S410之后实施。
示例性的,下面示出了一种具体的实施流程:
如图11所示,示出了本实施例中提出的基于增强显示的信息显示方法的处理过程示例图。在图11中,空心箭头所指向的流程可以为初始流程,实心箭头所指向的流程可以为实时持续流程。作为一种实施方式,可以先确立坐标系,继而测定场景感知装置以及驾驶员眼睛的空间位姿,再分别计算场景感知装置矩阵M以及HUD成像矩阵C,然后得到总变换矩阵(即前述的坐标变换规则)F=CM。可选的,场景感知装置可以实时获取实景信息,将实景信息作为待显示信息并发送给图像处理装置,图像处理装置对实景信息对应的坐标进行坐标变化处理并绘制最终得到的图像,将该图像投射到HUD显示屏幕(即前述的目标现实区域)中进行显示,以实现提升显示驾驶指引信息的显示位置的准确性,减少用户操作,进而提升用户体验。
步骤S480:基于所述坐标变换规则将所述驾驶指引信息显示在所述重新确定的目标显示区域的对应位置。
本申请提供的一种基于增强现实的信息显示方法,通过获取电动座椅的坐姿调节参数检测用户的第一空间位姿的变化,实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
请参阅图12,本申请实施例提供的一种基于增强现实的信息显示装置500, 可以运行于投影设备,所述装置500包括:
图像感知模块510,用于获取图像感知装置采集的实景信息。
坐标变换模块520,用于获取基于用户的第一空间位姿确定的目标显示区域。
可选的,若检测到第一空间位姿发生变化,坐标变换模块520可以用于获取基于变化后的空间位姿重新确定的目标显示区域。
作为一种方式,可以通过获取电动座椅的坐姿调节参数检测所述第一空间位姿的变化。在这种方式下,坐标变换模块520具体可以用于获取所述电动座椅的坐姿调节参数;基于所述坐姿调节参数获取所述第一空间位姿对应的变化向量;基于所述变化向量调整所述目标显示区域,得到重新确定的目标显示区域。
作为一种方式,坐标变换模块520还可以用于获取所述实景信息映射到所述目标显示区域对应的坐标变换规则。
可选的,所述坐标变换规则可以包括第一变换矩阵和第二变换矩阵,所述第一变换矩阵用于确定与所述场景感知装置采集的实景信息的坐标对应的参考世界坐标,所述第二变换矩阵用于将所述参考世界坐标转换为所述目标显示区域内的视图坐标。
所述第一变换矩阵可以包括第一旋转矩阵以及第一平移向量,所述第一旋转矩阵用于对所述场景感知装置采集的实景信息的坐标进行旋转,所述第一平移向量用于对所述坐标进行平移,所述第一变换矩阵基于所述第一旋转矩阵以及所述第一平移向量确定与所述场景感知装置采集的实景信息的坐标对应的参考世界坐标。
所述第二变换矩阵可以包括视图矩阵以及投影矩阵,所述投影矩阵用于确定将所述实景信息映射到所述目标显示区域的映射范围,所述视图矩阵用于确定所述映射范围内显示所述驾驶指引信息的相对位置,所述第二变换矩阵基于所述映射范围以及所述相对位置将所述参考世界坐标转换为所述目标显示区域内的视图坐标。
所述视图矩阵可以包括第二旋转矩阵以及第二平移向量,所述第二旋转矩阵用于对所述参考世界坐标进行旋转,所述第二平移向量用于对所述参考世界坐标进行平移;所述投影矩阵包括视场角参数,所述视场角包括水平视场角和垂直视场角。
作为一种实施方式,可以将所述第一变换矩阵所表征的参数与所述第二变换矩阵所表征的参数的乘积获取作为所述实景信息映射到所述目标显示区域对应的坐标变换规则。
显示模块530,用于基于所述实景信息生成驾驶指引信息。
作为一种方式,显示模块530还可以用于基于所述坐标变换规则将所述驾驶指引信息显示在所述目标显示区域的对应位置。
可选的,显示模块530具体可以用于将所述实景信息在所述场景感知装置对应的坐标系中的位置坐标输入所述第一变换矩阵,得到待处理坐标变换矩阵;将所述待处理坐标变换矩阵按照所述第二变换矩阵进行坐标变换,得到所述实 景信息在所述目标显示区域内的相对位置坐标;在所述相对位置坐标所表征的位置处显示所述驾驶指引信息。
可选的,若检测到所述第一空间位姿发生变化,可以基于变化后的空间位姿对应的坐标变换规则将所述驾驶指引信息显示在所述重新确定的目标显示区域的对应位置。
需要说明的是,本申请中装置实施例与前述方法实施例是相互对应的,装置实施例中具体的原理可以参见前述方法实施例中的内容,此处不再赘述。
下面将结合图13对本申请提供的一种投影设备进行说明。
请参阅图13,基于上述的基于增强现实的信息显示方法、系统、装置,本申请实施例还提供的另一种可以执行前述基于增强现实的信息显示方法的投影设备100。投影设备100包括相互耦合的一个或多个(图中仅示出一个)处理器102、存储器104、图像感知模块11、坐标变换模块12以及显示模块13。其中,该存储器104中存储有可以执行前述实施例中内容的程序,而处理器102可以执行该存储器104中存储的程序,存储器104包括前述实施例中所描述的装置500。
其中,处理器102可以包括一个或者多个处理核。处理器102利用各种接口和线路连接整个投影设备100内的各个部分,通过运行或执行存储在存储器104内的指令、程序、代码集或指令集,以及调用存储在存储器104内的数据,执行投影设备100的各种功能和处理数据。可选地,处理器102可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器102可集成中央处理器(Central Processing Unit,CPU)、视频图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器102中,单独通过一块通信芯片进行实现。
存储器104可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。存储器104可用于存储指令、程序、代码、代码集或指令集。存储器104可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、视频图像播放功能等)、用于实现上述各个方法实施例的指令等。存储数据区还可以存储投影设备100在使用中所创建的数据(例如音视频数据)等。
图像感知模块11用于获取图像感知装置采集的实景信息;坐标变换模块12用于获取基于用户的第一空间位姿确定的目标显示区域;所述坐标变换模块12还用于获取所述实景信息映射到所述目标显示区域对应的坐标变换规则;所述显示模块13用于基于所述实景信息生成驾驶指引信息;所述显示模块13还用于基于所述坐标变换规则将所述驾驶指引信息显示在所述目标显示区域的对应位置。
请参考图14,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质600中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质600可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质600包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质600具有执行上述方法中的任何方法步骤的程序代码610的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码610可以例如以适当形式进行压缩。
综上所述,本申请提供的一种基于增强现实的信息显示方法、系统、装置、投影设备以及存储介质,通过获取图像感知装置采集的实景信息,继而获取基于用户的第一空间位姿确定的目标显示区域,再获取实景信息映射到目标显示区域对应的坐标变换规则,再基于实景信息生成驾驶指引信息,然后基于坐标变换规则将驾驶指引信息显示在目标显示区域的对应位置。从而通过上述方式实现了将基于实景信息生成得到的驾驶指引信息通过坐标变换规则显示在基于用户的第一空间位姿确定的目标显示区域的对应位置,以使用户在驾驶的过程中可以准确便捷的查看与驾驶场景对应的虚拟驾驶指引信息,而不需要反复确认驾驶指引信息的准确性,减少了因查看路况以及导航等驾驶指引信息导致的视线频繁转换疲劳,提升了驾驶的安全性与舒适性。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (13)

  1. An augmented reality-based information display method, characterized in that the method comprises:
    acquiring real scene information collected by a scene sensing device;
    acquiring a target display area determined based on a first spatial pose of a user;
    acquiring a coordinate transformation rule corresponding to mapping the real scene information to the target display area;
    generating driving guidance information based on the real scene information; and
    displaying the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  2. The method according to claim 1, characterized in that the coordinate transformation rule comprises a first transformation matrix and a second transformation matrix, the first transformation matrix being used to determine reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device, and the second transformation matrix being used to convert the reference world coordinates into view coordinates within the target display area.
  3. The method according to claim 2, characterized in that the first transformation matrix comprises a first rotation matrix and a first translation vector, the first rotation matrix being used to rotate the coordinates of the real scene information collected by the scene sensing device, the first translation vector being used to translate the coordinates, and the first transformation matrix determining, based on the first rotation matrix and the first translation vector, the reference world coordinates corresponding to the coordinates of the real scene information collected by the scene sensing device.
  4. The method according to claim 2, characterized in that the second transformation matrix comprises a view matrix and a projection matrix, the projection matrix being used to determine a mapping range for mapping the real scene information to the target display area, the view matrix being used to determine a relative position, within the mapping range, at which the driving guidance information is displayed, and the second transformation matrix converting the reference world coordinates into view coordinates within the target display area based on the mapping range and the relative position.
  5. The method according to claim 4, characterized in that the view matrix comprises a second rotation matrix and a second translation vector, the second rotation matrix being used to rotate the reference world coordinates, and the second translation vector being used to translate the reference world coordinates; and the projection matrix comprises field-of-view parameters, the field of view comprising a horizontal field of view and a vertical field of view.
  6. The method according to any one of claims 2-5, characterized in that displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule comprises:
    inputting position coordinates of the real scene information in a coordinate system corresponding to the scene sensing device into the first transformation matrix to obtain a to-be-processed coordinate transformation matrix;
    performing coordinate transformation on the to-be-processed coordinate transformation matrix according to the second transformation matrix to obtain relative position coordinates of the real scene information within the target display area; and
    displaying the driving guidance information at the position characterized by the relative position coordinates.
  7. The method according to claim 6, characterized in that acquiring the coordinate transformation rule corresponding to mapping the real scene information to the target display area comprises:
    taking the product of the parameters represented by the first transformation matrix and the parameters represented by the second transformation matrix as the coordinate transformation rule corresponding to mapping the real scene information to the target display area.
  8. The method according to claim 1, characterized in that, if a change in the first spatial pose is detected, acquiring the target display area determined based on the first spatial pose of the user comprises:
    acquiring a target display area re-determined based on the changed spatial pose;
    and displaying the driving guidance information at the corresponding position of the target display area based on the coordinate transformation rule comprises:
    displaying the driving guidance information at the corresponding position of the re-determined target display area based on the coordinate transformation rule.
  9. The method according to claim 8, characterized in that the method further comprises:
    detecting the change in the first spatial pose by acquiring sitting-posture adjustment parameters of a power seat;
    and acquiring the target display area re-determined based on the changed spatial pose comprises:
    acquiring the sitting-posture adjustment parameters of the power seat;
    obtaining, based on the sitting-posture adjustment parameters, a change vector corresponding to the first spatial pose; and
    adjusting the target display area based on the change vector to obtain the re-determined target display area.
  10. An augmented reality-based information display apparatus, characterized in that the information display apparatus comprises an image sensing module, a coordinate transformation module and a display module:
    the image sensing module is configured to acquire real scene information collected by an image sensing device;
    the coordinate transformation module is configured to acquire a target display area determined based on a first spatial pose of a user;
    the coordinate transformation module is further configured to acquire a coordinate transformation rule corresponding to mapping the real scene information to the target display area;
    the display module is configured to generate driving guidance information based on the real scene information; and
    the display module is further configured to display the driving guidance information at a corresponding position of the target display area based on the coordinate transformation rule.
  11. An augmented reality-based vehicle-mounted information display system, characterized in that the system comprises:
    a scene sensing device configured to collect real scene information of the environment outside a vehicle;
    an image processing device configured to acquire the real scene information collected by the scene sensing device, acquire a target display area determined based on a first spatial pose of a user, acquire a coordinate transformation rule corresponding to mapping the real scene information to the target display area, generate driving guidance information based on the real scene information, and generate, based on the coordinate transformation rule, target position coordinates at which the driving guidance information is to be displayed in the target display area; and
    a HUD display device configured to present the driving guidance information at the target position coordinates of the target display area.
  12. A projection device, characterized by comprising one or more processors and a memory;
    one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1-9.
  13. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, wherein the method according to any one of claims 1-9 is performed when the program code is run by a processor.
PCT/CN2021/082944 2020-03-31 2021-03-25 基于增强现实的信息显示方法、系统、装置及投影设备 WO2021197190A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010244728.4 2020-03-31
CN202010244728.4A CN113467601A (zh) 2020-03-31 2020-03-31 基于增强现实的信息显示方法、系统、装置及投影设备

Publications (1)

Publication Number Publication Date
WO2021197190A1 true WO2021197190A1 (zh) 2021-10-07

Family

ID=77865430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082944 WO2021197190A1 (zh) 2020-03-31 2021-03-25 基于增强现实的信息显示方法、系统、装置及投影设备

Country Status (2)

Country Link
CN (1) CN113467601A (zh)
WO (1) WO2021197190A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (zh) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 基于磁传感器的定位处理方法、装置、设备与介质
CN114581627A (zh) * 2022-03-04 2022-06-03 合众新能源汽车有限公司 基于arhud的成像方法和系统
CN114715175A (zh) * 2022-05-06 2022-07-08 Oppo广东移动通信有限公司 目标对象的确定方法、装置、电子设备以及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114760458B (zh) * 2022-04-28 2023-02-24 中南大学 高真实感增强现实演播室虚拟与现实相机轨迹同步的方法
CN115061565A (zh) * 2022-05-10 2022-09-16 华为技术有限公司 调节显示设备的方法和装置
CN115984514A (zh) * 2022-10-21 2023-04-18 长城汽车股份有限公司 一种增强显示的方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200445A (zh) * 2010-03-23 2011-09-28 财团法人资讯工业策进会 实时扩增实境装置及其实时扩增实境方法
CN102542868A (zh) * 2012-01-09 2012-07-04 中国人民解放军空军军训器材研究所 视景模拟方法及装置
US20120224060A1 (en) * 2011-02-10 2012-09-06 Integrated Night Vision Systems Inc. Reducing Driver Distraction Using a Heads-Up Display
CN102735253A (zh) * 2011-04-05 2012-10-17 现代自动车株式会社 用于在挡风玻璃上显示道路引导信息的装置和方法
CN103129466A (zh) * 2011-12-02 2013-06-05 通用汽车环球科技运作有限责任公司 在全挡风玻璃平视显示器上的驾驶操纵辅助
CN107230199A (zh) * 2017-06-23 2017-10-03 歌尔科技有限公司 图像处理方法、装置和增强现实设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200445A (zh) * 2010-03-23 2011-09-28 财团法人资讯工业策进会 实时扩增实境装置及其实时扩增实境方法
US20120224060A1 (en) * 2011-02-10 2012-09-06 Integrated Night Vision Systems Inc. Reducing Driver Distraction Using a Heads-Up Display
CN102735253A (zh) * 2011-04-05 2012-10-17 现代自动车株式会社 用于在挡风玻璃上显示道路引导信息的装置和方法
CN103129466A (zh) * 2011-12-02 2013-06-05 通用汽车环球科技运作有限责任公司 在全挡风玻璃平视显示器上的驾驶操纵辅助
CN102542868A (zh) * 2012-01-09 2012-07-04 中国人民解放军空军军训器材研究所 视景模拟方法及装置
CN107230199A (zh) * 2017-06-23 2017-10-03 歌尔科技有限公司 图像处理方法、装置和增强现实设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (zh) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 基于磁传感器的定位处理方法、装置、设备与介质
CN114581627A (zh) * 2022-03-04 2022-06-03 合众新能源汽车有限公司 基于arhud的成像方法和系统
CN114581627B (zh) * 2022-03-04 2024-04-16 合众新能源汽车股份有限公司 基于arhud的成像方法和系统
CN114715175A (zh) * 2022-05-06 2022-07-08 Oppo广东移动通信有限公司 目标对象的确定方法、装置、电子设备以及存储介质

Also Published As

Publication number Publication date
CN113467601A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2021197189A1 (zh) 基于增强现实的信息显示方法、系统、装置及投影设备
WO2021197190A1 (zh) 基于增强现实的信息显示方法、系统、装置及投影设备
US11715238B2 (en) Image projection method, apparatus, device and storage medium
CA3069114C (en) Parking assistance method and parking assistance device
JP5397373B2 (ja) 車両用画像処理装置、車両用画像処理方法
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
JP6695049B2 (ja) 表示装置及び表示制御方法
KR101921969B1 (ko) 차량용 증강현실 헤드업 디스플레이 장치 및 방법
WO2014199574A1 (ja) 車載表示装置およびプログラム製品
US11525694B2 (en) Superimposed-image display device and computer program
JP2015523624A (ja) 道路を基準にした風景のビデオ画像から仮想表示面を生成する方法
JP2007127437A (ja) 情報表示装置
JP2009232310A (ja) 車両用画像処理装置、車両用画像処理方法、車両用画像処理プログラム
KR20170048781A (ko) 차량용 증강현실 제공 장치 및 그 제어방법
EP3811326B1 (en) Heads up display (hud) content control system and methodologies
JP2022095303A (ja) 周辺画像表示装置、表示制御方法
CN115525152A (zh) 图像处理方法及系统、装置、电子设备和存储介质
JP6186905B2 (ja) 車載表示装置およびプログラム
US20240042857A1 (en) Vehicle display system, vehicle display method, and computer-readable non-transitory storage medium storing vehicle display program
JP2023165721A (ja) 表示制御装置
US20200152157A1 (en) Image processing unit, and head-up display device provided with same
CN115493614A (zh) 航迹线的显示方法、装置、存储介质及电子设备
JP2020019369A (ja) 車両用表示装置、方法、及びコンピュータ・プログラム
WO2023145852A1 (ja) 表示制御装置、表示システム、及び表示制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21780047

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21780047

Country of ref document: EP

Kind code of ref document: A1