CN111932687B - In-vehicle mixed reality display method and device - Google Patents


Info

Publication number
CN111932687B
CN111932687B (granted publication of application CN202011089648.2A)
Authority
CN
China
Prior art keywords
vehicle
computing unit
model
environment data
scene
Prior art date
Legal status
Active
Application number
CN202011089648.2A
Other languages
Chinese (zh)
Other versions
CN111932687A (en)
Inventor
陈翔
樊潇
Current Assignee
Ningbo Joynext Technology Corp
Original Assignee
Ningbo Joynext Technology Corp
Priority date
Filing date
Publication date
Application filed by Ningbo Joynext Technology Corp filed Critical Ningbo Joynext Technology Corp
Priority to CN202011089648.2A
Publication of CN111932687A
Application granted
Publication of CN111932687B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources to service a request
    • G06F9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5044 - Allocation of resources considering hardware capabilities
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; processor configuration, e.g. pipelining
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering

Abstract

The invention discloses an in-vehicle mixed reality display method and device that provide an immersive viewing experience for drivers and occupants. The method comprises the following steps: acquiring in-vehicle environment data and out-of-vehicle environment data, and, after receiving a trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked; if they are networked and the computing power of the vehicle-mounted computing unit is sufficient, requesting the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; if they are networked but the computing power of the vehicle-mounted computing unit is insufficient, packaging the in-vehicle environment data and the target scene and sending them to the roadside computing unit, which renders the display data; if they are not networked, generating the display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud; and identifying the occupant's viewing angle and line of sight from the in-vehicle environment data, and playing the video stream corresponding to the display data on the window glass display screen at the corresponding position.

Description

In-vehicle mixed reality display method and device
Technical Field
The invention relates to the technical field of the Internet of Vehicles, and in particular to an in-vehicle mixed reality display method and device.
Background
With economic development, the automobile has become an indispensable means of travel, and ever more vehicles are on the road. In the prior art, drivers and occupants can only watch content on the display screen of the head unit. For them, the picture content is monotonous, and because the screen's position is fixed, they must keep looking in one particular direction for long periods, which makes for a poor experience.
Disclosure of Invention
The invention aims to provide an in-vehicle mixed reality display method and device that give drivers and occupants an immersive viewing experience.
To achieve the above object, a first aspect of the invention provides an in-vehicle mixed reality display method in which display screens are arranged in the glass interlayers of a plurality of vehicle windows, the method comprising:
acquiring in-vehicle environment data and out-of-vehicle environment data, and, after receiving a trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked;
if they are networked and the computing power of the vehicle-mounted computing unit is sufficient, requesting the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction;
if they are networked but the computing power of the vehicle-mounted computing unit is insufficient, packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit, which renders display data based on the rendering model;
if they are not networked, generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction;
and identifying the occupant's viewing angle and line of sight from the in-vehicle environment data, and playing the video stream corresponding to the display data on the window glass display screen at the corresponding position.
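This dispatch flow can be summarized in a short sketch. The following Python is an illustration only; every interface name (`is_networked_with`, `compute_sufficient`, `fetch_rendering_model`, and so on) is a hypothetical stand-in, since the patent defines no API:

```python
# Illustrative sketch of the rendering-dispatch flow; all names here are
# hypothetical assumptions, not interfaces defined by the patent.

def pack(in_env, target_scene):
    # Bundle the in-vehicle environment data with the requested scene.
    return {"in_env": in_env, "target_scene": target_scene}

def generate_display_data(trigger, in_env, out_env, vehicle, rsu, cloud):
    """Pick a rendering path based on connectivity and compute budget."""
    scene = trigger["target_scene"]

    if rsu is not None and vehicle.is_networked_with(rsu):
        if vehicle.compute_sufficient():
            # Networked, sufficient on-board compute: fetch the pre-built
            # rendering model and render locally on the vehicle GPU.
            model = rsu.fetch_rendering_model()
            return vehicle.gpu_render(model, in_env, scene)
        # Networked, insufficient compute: offload rendering to the RSU.
        return rsu.render(pack(in_env, scene))

    # No roadside unit in range: model and render in real time on the
    # vehicle GPU or in the cloud (the fallback ladder detailed below).
    renderer = cloud if cloud.reachable() else vehicle
    return renderer.model_and_render(in_env, out_env, scene)
```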
Preferably, the step of acquiring the in-vehicle environment data and the out-of-vehicle environment data and, after receiving the trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked comprises:
collecting the out-of-vehicle environment data with an exterior camera and the in-vehicle environment data with an interior camera and sensors, the out-of-vehicle environment data comprising exterior environment scene information, and the in-vehicle environment data comprising light intensity information, occupant position information, current vehicle position information, and occupant viewing angle and line-of-sight information;
and the occupant sending the trigger instruction, which includes the target scene, to the vehicle-mounted communication unit; the vehicle-mounted communication unit detecting whether a roadside computing unit exists within a preset range and, when one exists, networking the vehicle-mounted computing unit with the roadside computing unit.
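A minimal sketch of this detection step is given below. The beacon format and the 500 m threshold are illustrative assumptions; the patent only says "a preset range":

```python
import math

# Hypothetical sketch of roadside-unit discovery within a preset range.

PRESET_RANGE_M = 500.0  # assumed value for illustration

def nearest_rsu(vehicle_pos, rsu_beacons):
    """Return the closest roadside computing unit within range, or None.

    vehicle_pos: (x, y) in a local metric frame
    rsu_beacons: iterable of (rsu_id, (x, y)) pairs advertised over V2X
    """
    best_id, best_dist = None, PRESET_RANGE_M
    for rsu_id, pos in rsu_beacons:
        dist = math.hypot(pos[0] - vehicle_pos[0], pos[1] - vehicle_pos[1])
        if dist <= best_dist:
            best_id, best_dist = rsu_id, dist
    return best_id

# Example: nearest_rsu((0, 0), [("rsu-7", (120, 90)), ("rsu-8", (800, 10))])
# returns "rsu-7"; "rsu-8" is outside the preset range.
```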
Preferably, before the step of requesting the rendering model from the roadside computing unit, the method further comprises:
the roadside computing unit storing a static scene 3D model and a dynamic scene 3D model of the surrounding environment, the static scene 3D model comprising one or more of a road model, a building model, and a traffic facility model, and the dynamic scene 3D model comprising a vehicle model and/or a pedestrian model;
both the static scene 3D model and the dynamic scene 3D model being built by modeling images captured by an infrared camera and an RGB camera.
Further, after receiving the trigger instruction, the method further comprises:
when the vehicle is in a navigation state, the vehicle-mounted communication unit preloading the static scene 3D models stored by the roadside computing units at waypoints along the route, requesting the dynamic scene 3D model only when the vehicle approaches the corresponding roadside computing unit, and the vehicle-mounted GPU then rendering display data from the dynamic scene 3D model, the static scene 3D model, the in-vehicle environment data, and the target scene in the trigger instruction.
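A sketch of this two-stage preloading, again with hypothetical names (`fetch_static_model`, `fetch_dynamic_model`) and an assumed proximity threshold:

```python
# Hypothetical sketch of route-based model preloading; the threshold and
# all method names are illustrative assumptions, not the patent's API.

APPROACH_THRESHOLD_M = 200.0

class ModelPreloader:
    def __init__(self, route_rsus):
        self.route_rsus = route_rsus   # RSUs at waypoints along the route
        self.static_cache = {}         # rsu_id -> static scene 3D model
        self.dynamic_cache = {}        # rsu_id -> dynamic scene 3D model

    def preload_static(self):
        # Static geometry (roads, buildings, traffic facilities) changes
        # slowly, so it can be fetched well in advance for every waypoint.
        for rsu in self.route_rsus:
            self.static_cache[rsu.id] = rsu.fetch_static_model()

    def update(self, distance_to):
        # Dynamic content (vehicles, pedestrians) is only valid near the
        # RSU that observes it, so fetch it just before arrival.
        for rsu in self.route_rsus:
            if distance_to(rsu) <= APPROACH_THRESHOLD_M:
                self.dynamic_cache[rsu.id] = rsu.fetch_dynamic_model()
```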
Further, the step of packaging the in-vehicle environment data and the target scene in the trigger instruction, sending them to the roadside computing unit, and having the roadside computing unit render display data based on the rendering model comprises:
when the computing power of the roadside computing unit is sufficient, packaging the in-vehicle environment data and the trigger instruction and sending them through the vehicle-mounted communication unit to the roadside computing unit, which extracts the occupant position information and occupant viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the rendering model;
when the computing power of the roadside computing unit is insufficient, packaging the in-vehicle environment data and the trigger instruction and sending them through the vehicle-mounted communication unit to the roadside computing unit, which forwards them to the cloud; the cloud extracts the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the static scene 3D model and the dynamic scene 3D model.
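The roadside unit's side of this offload can be sketched as follows; all interfaces are illustrative assumptions:

```python
# Hypothetical sketch of the roadside unit's offload handler: render
# locally when compute allows, otherwise forward the package to the cloud.

def extract_occupant_pose(in_env):
    # Assumed helper: pull occupant position and gaze from in-vehicle data.
    return in_env["occupant_position"], in_env["gaze_direction"]

def rsu_handle_request(rsu, cloud, package):
    in_env, trigger = package
    if rsu.compute_sufficient():
        pose = extract_occupant_pose(in_env)
        return rsu.render(rsu.rendering_model, pose, trigger["target_scene"])
    # Insufficient roadside compute: delegate to the cloud, which renders
    # with the static and dynamic scene 3D models.
    return cloud.render(package)
```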
Further, if the two are not networked, the step of generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the trigger instruction, comprises:
if the vehicle-mounted computing unit and the roadside computing unit are not networked, further judging whether network communication between the vehicle-mounted communication unit and the cloud is unobstructed;
if it is, sending the out-of-vehicle environment data to the cloud to build the static scene 3D model and the dynamic scene 3D model, extracting the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracting the target scene from the trigger instruction, and rendering display data with the static scene 3D model and the dynamic scene 3D model;
if it is not and the computing power of the vehicle-mounted computing unit is sufficient, having the vehicle-mounted GPU build the static scene 3D model and the dynamic scene 3D model from the out-of-vehicle environment data, extract the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extract the target scene from the trigger instruction, and render display data with the two models;
if it is not and the computing power of the vehicle-mounted computing unit is insufficient, having the vehicle-mounted GPU build the two models in the same way but render the display data at reduced resolution; or, directly informing the user that the rendering function cannot be provided.
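A sketch of this offline fallback ladder follows. Treating computing power as a scalar "headroom" ratio, and the 1.0/0.5 tier values, are assumptions made purely for illustration:

```python
# Hypothetical sketch of the offline fallback ladder described above.

def render_without_rsu(vehicle, cloud, in_env, out_env, trigger):
    scene = trigger["target_scene"]
    if cloud.reachable():
        # Cloud link is clear: build the scene models and render remotely.
        cloud.upload(out_env)
        return cloud.model_and_render(in_env, scene)
    headroom = vehicle.compute_headroom()
    if headroom >= 1.0:        # sufficient on-board compute
        return vehicle.gpu_model_and_render(in_env, out_env, scene)
    if headroom >= 0.5:        # slightly insufficient: drop resolution
        return vehicle.gpu_model_and_render(in_env, out_env, scene,
                                            resolution_scale=0.5)
    return None                # severely insufficient: refuse, notify user
```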
Illustratively, the roadside computing unit is an RSU computing unit.
Illustratively, the vehicle-mounted communication unit is a vehicle-mounted T-box or OBU.
Compared with the prior art, the in-vehicle mixed reality display method provided by the invention has the following beneficial effects:
In the in-vehicle mixed reality display method, the exterior camera acquires the out-of-vehicle environment data in real time, the interior camera and sensors acquire the in-vehicle environment data in real time, and the vehicle-mounted computing unit preprocesses both in real time. The system then judges, based on the current vehicle position, whether a roadside computing unit capable of networking exists within a preset distance. If one exists, it further judges whether the computing power of the vehicle-mounted computing unit is sufficient: if sufficient, it actively requests the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; if insufficient, it packages the in-vehicle environment data and the target scene in the trigger instruction and sends them to the roadside computing unit, which renders the display data based on the rendering model. If no roadside computing unit exists near the current vehicle position, display data are generated by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction. Once the display data are generated, the system identifies the occupant's viewing angle and line of sight from the in-vehicle environment data and plays the video stream corresponding to the display data on the window glass display screen at the corresponding position.
The invention thus provides several rendering paths for the display data; any of them can be selected flexibly according to the vehicle's situation, which improves operating efficiency while giving drivers and occupants an immersive viewing experience.
A second aspect of the invention provides an in-vehicle mixed reality display device applying the in-vehicle mixed reality display method of the above technical solution, the display device comprising:
an acquisition unit for acquiring the in-vehicle environment data and the out-of-vehicle environment data and, after receiving the trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked;
a processing unit for requesting the rendering model from the roadside computing unit if they are networked and the computing power of the vehicle-mounted computing unit is sufficient, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; or,
packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit if they are networked but the computing power of the vehicle-mounted computing unit is insufficient, so that the roadside computing unit renders display data based on the rendering model; or,
generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction, if they are not networked;
and a display unit for identifying the occupant's viewing angle and line of sight from the in-vehicle environment data and playing the video stream corresponding to the display data on the window glass display screen at the corresponding position.
Compared with the prior art, the beneficial effects of the in-vehicle mixed reality display device provided by the invention are the same as those of the in-vehicle mixed reality display method described above and are not repeated here.
A third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the in-vehicle mixed reality display method described above.
Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the invention are the same as those of the in-vehicle mixed reality display method described above and are not repeated here.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic overall flow chart of a display method of mixed reality in a vehicle according to an embodiment of the present invention;
FIG. 2 is a detailed flow diagram of an in-vehicle mixed reality display method according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to FIG. 1, this embodiment provides an in-vehicle mixed reality display method in which display screens are arranged in the glass interlayers of a plurality of vehicle windows. The method comprises: acquiring in-vehicle environment data and out-of-vehicle environment data, and, after receiving a trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked; if they are networked and the computing power of the vehicle-mounted computing unit is sufficient, requesting the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; if they are networked but the computing power of the vehicle-mounted computing unit is insufficient, packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit, which renders display data based on the rendering model; if they are not networked, generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction; and identifying the occupant's viewing angle and line of sight from the in-vehicle environment data, and playing the video stream corresponding to the display data on the window glass display screen at the corresponding position.
In the in-vehicle mixed reality display method provided by this embodiment, the exterior camera acquires the out-of-vehicle environment data in real time, the interior camera acquires the in-vehicle environment data in real time, and the vehicle-mounted computing unit preprocesses both in real time. The vehicle-mounted communication unit then determines, based on the current vehicle position, whether a roadside computing unit capable of networking exists within a preset distance. If the vehicle-mounted computing unit and the roadside computing unit are networked, the system further determines whether the computing power of the vehicle-mounted computing unit is sufficient: if sufficient, it actively requests the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; if not, it packages the in-vehicle environment data and the target scene in the trigger instruction and sends them to the roadside computing unit, which renders the display data based on the rendering model. If no roadside computing unit exists at the current vehicle position, display data are generated by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction. Once the display data are generated, the system identifies the occupant's viewing angle and line of sight from the in-vehicle environment data and plays the video stream corresponding to the display data on the window glass display screen at the corresponding position.
This embodiment therefore provides several rendering paths for the display data; any of them can be selected flexibly according to the vehicle's situation, improving operating efficiency while giving drivers and occupants an immersive viewing experience. For example, on a rainy day, occupants can select a sunny-day target scene through the trigger instruction; data computed in real time by the vehicle-mounted computing unit and cloud data delivered over 5G are then fused with the rendering model to display a sunny-day effect inside the vehicle, either on the windshield display screen alone or on the display screen of any window.
In the above embodiment, the step of acquiring the in-vehicle environment data and the out-of-vehicle environment data and, after receiving the trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked comprises:
collecting the out-of-vehicle environment data with an exterior camera and the in-vehicle environment data with an interior camera and sensors, the out-of-vehicle environment data comprising exterior environment scene information, and the in-vehicle environment data comprising light intensity information, occupant position information, current vehicle position information, and occupant viewing angle and line-of-sight information; the occupant sending the trigger instruction, which includes the target scene, to the vehicle-mounted communication unit; and the vehicle-mounted communication unit detecting whether a roadside computing unit exists within a preset range and, when one exists, networking the vehicle-mounted computing unit with the roadside computing unit.
In specific implementation, the acquired out-of-vehicle environment data include exterior environment scene information, and the acquired in-vehicle environment data include light intensity information, occupant position information, current vehicle position information, and occupant viewing angle and line-of-sight information. It should be noted that extracting occupant position and viewing angle/line-of-sight information from video in the in-vehicle environment data is prior art in this field and is not described further here.
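Although gaze extraction itself is prior art, the later step of routing the rendered stream to the right window can be sketched simply. The 2D cabin frame and the window set below are illustrative assumptions:

```python
# Hypothetical sketch: choose the window display screen whose outward
# normal best aligns with the occupant's gaze.

WINDOW_NORMALS = {
    "windshield":  (0.0, 1.0),
    "left_front":  (-1.0, 0.0),
    "right_front": (1.0, 0.0),
    "rear":        (0.0, -1.0),
}

def select_window(gaze_dir):
    """gaze_dir: unit (x, y) gaze vector in the cabin frame."""
    def alignment(item):
        _, normal = item
        return gaze_dir[0] * normal[0] + gaze_dir[1] * normal[1]
    name, _ = max(WINDOW_NORMALS.items(), key=alignment)
    return name

# Example: select_window((0.96, 0.28)) returns "right_front", so the
# rendered video stream is routed to the right front window display.
```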
In the above embodiment, before the step of requesting the rendering model from the roadside computing unit, the method further comprises:
the roadside computing unit storing a static scene 3D model and a dynamic scene 3D model of the surrounding environment, the static scene 3D model comprising one or more of a road model, a building model, and a traffic facility model, and the dynamic scene 3D model comprising a vehicle model and/or a pedestrian model; both models being built by modeling images captured by an infrared camera and an RGB camera.
In specific implementation, each roadside computing unit stores the static and dynamic scene 3D models within a radius of n meters, and these models can be obtained by training on video images. Each roadside computing unit has a dedicated infrared camera, RGB camera, and LiDAR sensor: the infrared and RGB cameras collect video images, while the LiDAR sensor senses dynamic and/or static objects within n meters; the dynamic scene 3D model is built from the sensed dynamic-object data, and the static scene 3D model from the static-object data.
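The split between static and dynamic stores can be pictured with a small data-structure sketch; the field names and the 300 m radius are assumptions for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a roadside unit's model store, split by object
# dynamics as described above.

@dataclass
class SceneModelStore:
    radius_m: float                                      # coverage radius n
    static_models: dict = field(default_factory=dict)    # roads, buildings
    dynamic_models: dict = field(default_factory=dict)   # vehicles, people

    def ingest_lidar(self, detections):
        # Route each sensed object to the static or dynamic store.
        for obj in detections:
            bucket = self.dynamic_models if obj["moving"] else self.static_models
            bucket[obj["id"]] = obj["mesh"]              # rebuilt 3D geometry

store = SceneModelStore(radius_m=300.0)
store.ingest_lidar([{"id": "bldg-1", "moving": False, "mesh": object()},
                    {"id": "car-42", "moving": True,  "mesh": object()}])
```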
In the above embodiment, after receiving the trigger instruction, the method further comprises: when the vehicle is in a navigation state, the vehicle-mounted communication unit preloads the static scene 3D models stored by the roadside computing units at waypoints along the route and requests the dynamic scene 3D model only when the vehicle approaches the corresponding roadside computing unit; the vehicle-mounted GPU then renders display data from the dynamic scene 3D model, the static scene 3D model, the in-vehicle environment data, and the target scene in the trigger instruction.
Specifically, the step of packaging the in-vehicle environment data and the target scene in the trigger instruction, sending them to the roadside computing unit, and having the roadside computing unit render display data based on the rendering model proceeds as follows:
when the computing power of the roadside computing unit is sufficient, the vehicle-mounted communication unit packages the in-vehicle environment data and the trigger instruction and sends them to the roadside computing unit, which extracts the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the rendering model; when the computing power of the roadside computing unit is insufficient, the vehicle-mounted communication unit packages the in-vehicle environment data and the trigger instruction and sends them to the roadside computing unit, which forwards them to the cloud; the cloud extracts the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the static scene 3D model and the dynamic scene 3D model.
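The "packaging" step might be serialized as below; the patent specifies no wire format, so this JSON layout is purely an assumption:

```python
import json
import time

# Hypothetical sketch of packaging the in-vehicle environment data and the
# trigger instruction for transmission over the V2X link.

def pack_render_request(in_env, trigger):
    package = {
        "timestamp": time.time(),
        "occupant_position": in_env["occupant_position"],
        "gaze": in_env["gaze_direction"],
        "vehicle_position": in_env["vehicle_position"],
        "light_intensity": in_env["light_intensity"],
        "target_scene": trigger["target_scene"],  # e.g. "sunny" or "rainy"
    }
    return json.dumps(package).encode("utf-8")
```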
Specifically, if the two are not networked, the step of generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the trigger instruction, proceeds as follows:
if the vehicle-mounted computing unit and the roadside computing unit are not networked, it is further judged whether network communication between the vehicle-mounted communication unit and the cloud is unobstructed. If it is, the out-of-vehicle environment data are sent to the cloud to build the static and dynamic scene 3D models, the occupant position information and viewing angle and line-of-sight information are extracted from the in-vehicle environment data, the target scene is extracted from the trigger instruction, and display data are rendered with the two models. If it is not and the computing power of the vehicle-mounted computing unit is sufficient, the vehicle-mounted GPU builds the static and dynamic scene 3D models from the out-of-vehicle environment data, extracts the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the two models. If it is not and the computing power of the vehicle-mounted computing unit is insufficient, the vehicle-mounted GPU builds the two models in the same way but renders the display data at reduced resolution; or, the user is directly informed that the rendering function cannot be provided.
Illustratively, the roadside computing unit is an RSU computing unit, and the vehicle-mounted communication unit is a vehicle-mounted T-box or OBU.
Referring to FIG. 2, the above embodiment is now illustrated for ease of understanding:
An occupant in the vehicle issues a trigger instruction through multimodal interaction. The exterior camera collects the out-of-vehicle environment scene information, while the interior camera collects light intensity information, occupant position information, current vehicle position information, occupant viewing angle and line-of-sight information, and so on. The vehicle-mounted computing unit then preprocesses this information and judges whether a roadside computing unit exists near the current vehicle position:
if the roadside computing unit exists, whether the computing power of the vehicle-mounted computing unit is sufficient or not is continuously judged after networking, at the moment, if the computing power of the vehicle-mounted computing unit is sufficient, the vehicle-mounted computing unit is actively requested to acquire a rendering model, driver position information and driver visual angle and visual line information are extracted from environment data in the vehicle, a target scene (such as a sunny mode or a rainy mode) is extracted from a trigger instruction, the vehicle-mounted GPU is enabled to render display data matched with the visual angle and visual line of the driver according to the rendering model, the driver visual angle and visual line information and the target scene, and then video streams corresponding to the display data are played on a vehicle window glass display screen at the corresponding position, so that the driver achieves a better viewing effect. If the calculation power of the vehicle-mounted computing unit is insufficient, the vehicle-mounted computing unit packages and sends the position information of the driver and passengers, the visual angle and sight information of the driver and passengers and the target scene in the trigger instruction to the roadside computing unit, at the moment, whether the calculation power of the roadside computing unit is sufficient needs to be continuously judged, if the calculation power is sufficient, the roadside computing unit executes corresponding rendering operation, if the calculation power is insufficient, the roadside computing unit forwards the packaged data sent by the vehicle-mounted computing unit to the cloud end to execute the corresponding rendering operation, and then the packaged data are returned to a vehicle window display screen corresponding to a vehicle end to be played in a video stream mode. And if the vehicle is in a navigation state, the vehicle-mounted communication unit can pre-load the static scene 3D model stored by the passing point roadside calculation unit based on the navigation route, and request to load the dynamic scene 3D model when the vehicle approaches, such as within m meters, and then render display data matched with the visual angle and the visual line of the driver and the passenger according to the dynamic scene 3D model, the static scene 3D model, the visual angle and the visual line information of the driver and the passenger and the target scene through the vehicle-mounted GPU, and play the video stream on a corresponding vehicle window display screen. Particularly, if the target scene selected by the user is in a sunny mode, the played video stream can be subjected to illumination and material reflection rendering according to the virtual sun position at the current time so as to achieve the display effect of simulating sunny days, the picture can be rendered in real time according to the visual angle and sight information of drivers and passengers during rendering, the displayed picture angle is corrected, the rendered target scene video stream which accords with the visual angle of an observer is finally simulated, and then the video stream is pushed to the vehicle-mounted T-box and played by the corresponding vehicle window display screen.
If no roadside computing unit exists, it is further judged whether network communication between the vehicle-mounted communication unit and the cloud is unobstructed (for example, whether the 4G/5G signal is good). If communication is unobstructed and the computing power of the vehicle-mounted computing unit is sufficient, the vehicle-mounted GPU builds the static and dynamic scene 3D models from the out-of-vehicle environment data, the occupant position information and viewing angle and line-of-sight information are extracted from the in-vehicle environment data, the target scene is extracted from the trigger instruction, and display data are rendered after the viewing angle is corrected. If communication is unobstructed but the computing power of the vehicle-mounted computing unit is insufficient, the rendering is executed in the cloud after viewing-angle correction. If communication is not unobstructed, the rendering is performed by the vehicle-mounted GPU (when its computing power is sufficient), performed at reduced resolution (when its computing power is slightly insufficient), or the rendering function is refused outright (when its computing power is severely insufficient). In this way, data throughput under poor network conditions is reduced, improving rendering real-time performance.
Example two
This embodiment provides an in-vehicle mixed reality display device, comprising:
an acquisition unit for acquiring the in-vehicle environment data and the out-of-vehicle environment data and, after receiving the trigger instruction, judging whether the vehicle-mounted computing unit and the roadside computing unit are networked;
a processing unit for requesting the rendering model from the roadside computing unit if they are networked and the computing power of the vehicle-mounted computing unit is sufficient, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; or,
packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit if they are networked but the computing power of the vehicle-mounted computing unit is insufficient, so that the roadside computing unit renders display data based on the rendering model; or,
generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction, if they are not networked;
and a display unit for identifying the occupant's viewing angle and line of sight from the in-vehicle environment data and playing the video stream corresponding to the display data on the window glass display screen at the corresponding position.
Compared with the prior art, the beneficial effects of the in-vehicle mixed reality display device provided by the embodiment of the invention are the same as the beneficial effects of the in-vehicle mixed reality display method provided by the first embodiment, and are not repeated herein.
EXAMPLE III
This embodiment provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program performs the steps of the in-vehicle mixed reality display method described above.
Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the embodiment are the same as the beneficial effects of the in-vehicle mixed reality display method provided by the technical scheme, and are not repeated herein.
It will be understood by those skilled in the art that all or part of the steps of the method of the invention may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method of the embodiment. The storage medium may be a ROM/RAM, magnetic disk, optical disk, memory card, or the like.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. An in-vehicle mixed reality display method, wherein display screens are arranged in the glass interlayers of a plurality of vehicle windows, the method comprising:
acquiring in-vehicle environment data and out-of-vehicle environment data; after receiving a trigger instruction including a target scene selected by an occupant, detecting, by a vehicle-mounted communication unit, whether a roadside computing unit exists within a preset range, and networking the vehicle-mounted computing unit with the roadside computing unit when one exists;
if the vehicle-mounted computing unit and the roadside computing unit are networked and the computing power of the vehicle-mounted computing unit is sufficient, actively requesting the rendering model from the roadside computing unit, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction;
if the vehicle-mounted computing unit and the roadside computing unit are networked but the computing power of the vehicle-mounted computing unit is insufficient, packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit, which renders display data based on the rendering model;
if the vehicle-mounted computing unit and the roadside computing unit are not networked, generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction;
after receiving the trigger instruction including the target scene selected by the occupant, the method further comprising: when the vehicle is in a navigation state, preloading, by the vehicle-mounted communication unit, the static scene 3D models stored by the roadside computing units at waypoints along the route, requesting the dynamic scene 3D model when the vehicle approaches the corresponding roadside computing unit, and then rendering, by the vehicle-mounted GPU, display data from the dynamic scene 3D model, the static scene 3D model, the in-vehicle environment data, and the target scene in the trigger instruction;
and identifying the occupant's viewing angle and line of sight from the in-vehicle environment data, and playing a video stream corresponding to the display data on the window glass display screen at the corresponding position, the display data having a display effect corresponding to the target scene.
2. The method according to claim 1, wherein, after receiving the trigger instruction including the target scene selected by the occupant, detecting, by the vehicle-mounted communication unit, whether a roadside computing unit exists within the preset range, and networking the vehicle-mounted computing unit with the roadside computing unit when one exists, comprises:
collecting the out-of-vehicle environment data with an exterior camera and the in-vehicle environment data with an interior camera and sensors, the out-of-vehicle environment data comprising exterior environment scene information, and the in-vehicle environment data comprising light intensity information, occupant position information, current vehicle position information, and occupant viewing angle and line-of-sight information;
and the occupant sending the trigger instruction to the vehicle-mounted communication unit, and the vehicle-mounted communication unit detecting whether a roadside computing unit exists within the preset range and networking the vehicle-mounted computing unit with the roadside computing unit when one exists.
3. The method of claim 2, further comprising, before requesting the rendering model from the roadside computing unit:
storing, by the roadside computing unit, a static scene 3D model and a dynamic scene 3D model of the surrounding environment, the static scene 3D model comprising one or more of a road model, a building model, and a traffic facility model, and the dynamic scene 3D model comprising a vehicle model and/or a pedestrian model;
both the static scene 3D model and the dynamic scene 3D model being built by modeling images captured by an infrared camera and an RGB camera.
4. The method according to claim 3, wherein packaging the in-vehicle environment data and the target scene in the trigger instruction, sending them to the roadside computing unit, and rendering display data by the roadside computing unit based on the rendering model comprises:
when the computing power of the roadside computing unit is sufficient, packaging the in-vehicle environment data and the trigger instruction and sending them through the vehicle-mounted communication unit to the roadside computing unit, which extracts the occupant position information and occupant viewing angle and line-of-sight information from the in-vehicle environment data, extracts the target scene from the trigger instruction, and renders display data with the rendering model;
when the computing power of the roadside computing unit is insufficient, packaging the in-vehicle environment data and the trigger instruction and sending them through the vehicle-mounted communication unit to the roadside computing unit, which forwards them to the cloud; the cloud extracting the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracting the target scene from the trigger instruction, and rendering display data with the static scene 3D model and the dynamic scene 3D model.
5. The method according to claim 3, wherein, if the vehicle-mounted computing unit and the roadside computing unit are not networked, generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the trigger instruction, comprises:
if the vehicle-mounted computing unit and the roadside computing unit are not networked, further judging whether network communication between the vehicle-mounted communication unit and the cloud is unobstructed;
if it is, sending the out-of-vehicle environment data to the cloud to build the static scene 3D model and the dynamic scene 3D model, extracting the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracting the target scene from the trigger instruction, and rendering display data with the static scene 3D model and the dynamic scene 3D model;
if it is not and the computing power of the vehicle-mounted computing unit is sufficient, building, by the vehicle-mounted GPU, the static scene 3D model and the dynamic scene 3D model from the out-of-vehicle environment data, extracting the occupant position information and viewing angle and line-of-sight information from the in-vehicle environment data, extracting the target scene from the trigger instruction, and rendering display data with the two models;
if it is not and the computing power of the vehicle-mounted computing unit is insufficient, building the two models in the same way on the vehicle-mounted GPU but rendering the display data at reduced resolution; or, directly informing the user that the rendering function cannot be provided.
6. The method according to any one of claims 1-5, wherein the roadside computing unit is an RSU computing unit.
7. The method according to any one of claims 1-5, wherein the vehicle-mounted communication unit is a vehicle-mounted T-box or OBU.
8. An in-vehicle mixed reality display device, comprising:
an acquisition unit for acquiring in-vehicle environment data and out-of-vehicle environment data, wherein, after a trigger instruction including a target scene selected by an occupant is received, a vehicle-mounted communication unit detects whether a roadside computing unit exists within a preset range and networks the vehicle-mounted computing unit with the roadside computing unit when one exists;
a processing unit for actively requesting the rendering model from the roadside computing unit if the vehicle-mounted computing unit and the roadside computing unit are networked and the computing power of the vehicle-mounted computing unit is sufficient, so that the vehicle-mounted GPU renders display data from the rendering model, the in-vehicle environment data, and the target scene in the trigger instruction; or,
packaging the in-vehicle environment data and the target scene in the trigger instruction and sending them to the roadside computing unit if the vehicle-mounted computing unit and the roadside computing unit are networked but the computing power of the vehicle-mounted computing unit is insufficient, so that the roadside computing unit renders display data based on the rendering model; or,
generating display data by real-time modeling and rendering on the vehicle-mounted GPU or in the cloud, from the in-vehicle environment data, the out-of-vehicle environment data, and the target scene in the trigger instruction, if the vehicle-mounted computing unit and the roadside computing unit are not networked;
wherein, after the trigger instruction including the target scene selected by the occupant is received and when the vehicle is in a navigation state, the vehicle-mounted communication unit preloads the static scene 3D models stored by the roadside computing units at waypoints along the route and requests the dynamic scene 3D model when the vehicle approaches the corresponding roadside computing unit, and the vehicle-mounted GPU then renders display data from the dynamic scene 3D model, the static scene 3D model, the in-vehicle environment data, and the target scene in the trigger instruction;
and a display unit for identifying the occupant's viewing angle and line of sight from the in-vehicle environment data and playing a video stream corresponding to the display data on the window glass display screen at the corresponding position, the display data having a display effect corresponding to the target scene.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 7.
CN202011089648.2A 2020-10-13 2020-10-13 In-vehicle mixed reality display method and device Active CN111932687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011089648.2A CN111932687B (en) 2020-10-13 2020-10-13 In-vehicle mixed reality display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011089648.2A CN111932687B (en) 2020-10-13 2020-10-13 In-vehicle mixed reality display method and device

Publications (2)

Publication Number Publication Date
CN111932687A CN111932687A (en) 2020-11-13
CN111932687B 2021-02-02

Family

ID=73334439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011089648.2A Active CN111932687B (en) 2020-10-13 2020-10-13 In-vehicle mixed reality display method and device

Country Status (1)

Country Link
CN (1) CN111932687B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202806308U (en) * 2012-08-21 2013-03-20 惠州市德赛西威汽车电子有限公司 Automotive windshield
DE102013222584A1 (en) * 2013-11-07 2015-05-21 Robert Bosch Gmbh Optical playback and recognition system in a vehicle
US10332320B2 (en) * 2017-04-17 2019-06-25 Intel Corporation Autonomous vehicle advanced sensing and response
CN109285373B (en) * 2018-08-31 2020-08-14 南京锦和佳鑫信息科技有限公司 Intelligent network traffic system for whole road network
CN109714421B (en) * 2018-12-28 2021-08-03 国汽(北京)智能网联汽车研究院有限公司 Intelligent networking automobile operation system based on vehicle-road cooperation
CN111431950B (en) * 2019-01-08 2023-04-07 上海科技大学 Task unloading method and device, mobile terminal, fog node and storage medium
CN110901693B (en) * 2019-10-15 2021-04-13 北京交通大学 Train operation control system based on 5G and cloud computing technology
CN110928658B (en) * 2019-11-20 2024-03-01 湖南大学 Cooperative task migration system and algorithm of vehicle edge cloud cooperative framework



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province
Applicant after: Ningbo Junlian Zhixing Technology Co.,Ltd.
Address before: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province
Applicant before: Ningbo Junlian Zhixing Technology Co.,Ltd.
GR01 Patent grant
GR01 Patent grant