CN115035239A - Method and device for constructing virtual environment, computer equipment and vehicle - Google Patents

Method and device for constructing virtual environment, computer equipment and vehicle

Info

Publication number
CN115035239A
CN115035239A
Authority
CN
China
Prior art keywords
vehicle
data
virtual environment
static
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210515734.8A
Other languages
Chinese (zh)
Other versions
CN115035239B (en)
Inventor
郭麟
布如国
汤曌
魏博源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Binli Information Technology Co Ltd
Original Assignee
Beijing Binli Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Binli Information Technology Co Ltd filed Critical Beijing Binli Information Technology Co Ltd
Priority to CN202210515734.8A priority Critical patent/CN115035239B/en
Publication of CN115035239A publication Critical patent/CN115035239A/en
Application granted granted Critical
Publication of CN115035239B publication Critical patent/CN115035239B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/003 - Navigation within 3D models or images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

The present disclosure provides a method for constructing a virtual environment. The method comprises the following steps: acquiring dynamic object attribute data of dynamic objects around a vehicle at the current moment; obtaining map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle; fusing at least the dynamic object attribute data with the static object attribute data to obtain ambient environment data of the vehicle; and performing at least one of: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data.

Description

Method and device for constructing virtual environment, computer equipment and vehicle
Technical Field
The present disclosure relates to the field of vehicle technology, and in particular, to a method and apparatus, a computer device, a vehicle, a computer-readable storage medium, and a computer program product for constructing a virtual environment.
Background
In modern society, people often need to travel back and forth between two places for work and daily life, and the vehicle, as an indispensable means of travel, plays an important role in people's lives. As the time people spend in vehicles increases, their demands on the functionality and entertainment of vehicles keep growing. How to improve the functionality and entertainment of a vehicle to meet the different needs of the people inside it is therefore a topical issue.
Virtual reality is a technique commonly used to provide entertainment in gaming systems: it creates a computer-based virtual environment that allows people to experience situations they could never encounter in real life because of spatial and physical constraints. The basic principle of virtual reality is to simulate a three-dimensional environment using a computer, special hardware devices (such as a video headset, 3D audio equipment, and force-feedback game devices), and software, so that the user can interact with the computer in the virtual world.
Accordingly, it is desirable to build virtual environments in vehicles to increase the functionality and entertainment of the vehicle.
Disclosure of Invention
It would be advantageous to provide a mechanism that alleviates, mitigates or even eliminates one or more of the above-mentioned problems.
According to an aspect of the present disclosure, there is provided a method for constructing a virtual environment, comprising: acquiring dynamic object attribute data of dynamic objects around a vehicle at the current moment; obtaining map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle; fusing at least the dynamic object attribute data with the static object attribute data to obtain ambient environment data of the vehicle; and performing at least one of the following: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data, wherein the virtual environment comprises a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance that conforms to a scene setting and differs from the real appearance of the dynamic object, and the static visual element being rendered in the virtual environment to have an appearance that conforms to the scene setting and differs from the real appearance of the static object.
According to another aspect of the present disclosure, there is provided an apparatus for constructing a virtual environment, comprising: a first acquisition module configured to acquire dynamic object attribute data of dynamic objects around a vehicle at a current time; a second obtaining module configured to obtain map navigation data of the vehicle at the current time, the map navigation data including at least static object attribute data of static objects around the vehicle; a fusion module configured to fuse at least the dynamic object attribute data with the static object attribute data to obtain ambient environment data of the vehicle; and a build module configured to perform at least one of the following operations: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data, wherein the virtual environment comprises a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance that conforms to a scene setting and differs from the real appearance of the dynamic object, and the static visual element being rendered in the virtual environment to have an appearance that conforms to the scene setting and differs from the real appearance of the static object.
According to yet another aspect of the present disclosure, there is provided a computer apparatus including: at least one processor; and at least one memory having a computer program stored thereon, wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform a method for building a virtual environment according to the present disclosure.
According to yet another aspect of the present disclosure, there is provided a vehicle comprising an apparatus for building a virtual environment according to the present disclosure or a computer device according to the present disclosure.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, causes the processor to execute a method for building a virtual environment according to the present disclosure.
According to yet another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, causes the processor to carry out a method for building a virtual environment according to the present disclosure.
According to one or more embodiments of the present disclosure, dynamic object attribute data and map navigation data around the vehicle are acquired in real time, and the virtual environment is constructed based on the ambient environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, making the obtained ambient environment data more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be fed back into the constructed virtual environment in real time, realizing an entertainment experience that combines the vehicle's real sense of motion with a fully simulated virtual visual experience.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram illustrating an example system in which various methods described herein may be implemented, according to an example embodiment;
FIG. 2 is a flowchart illustrating a method for building a virtual environment in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a real environment of a vehicle according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a virtual environment in accordance with an illustrative embodiment;
FIG. 5 is a schematic block diagram illustrating an apparatus for building a virtual environment in accordance with an illustrative embodiment; and
FIG. 6 is a block diagram illustrating an exemplary computer device that can be applied to the exemplary embodiments.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to define a positional relationship, a temporal relationship, or an importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, there may be one or more of that element. As used herein, the term "plurality" means two or more, and the term "based on" should be interpreted as "based at least in part on". Further, the terms "and/or" and "at least one of ..." encompass any and all possible combinations of the listed items.
In the related art, as automotive intelligence develops, more and more automobiles are equipped with on-board sensors (e.g., on-board lidar, millimeter-wave radar, cameras, etc.) for sensing the surrounding environment. These on-board sensors acquire environmental information around the vehicle, including the types of surrounding objects and their positions and speeds in the vehicle coordinate system. At present, the surrounding environment information acquired by the vehicle's sensors is mainly used for Advanced Driver Assistance Systems (ADAS), Autonomous Driving (AD), and the like. It is therefore desirable to use the surrounding environment information that the vehicle can already acquire to construct a virtual environment in the vehicle, increasing the vehicle's functionality and entertainment.
The present disclosure provides a method of constructing a virtual environment using the surrounding environment information that a vehicle can perceive. Dynamic object attribute data and map navigation data around the vehicle are acquired in real time, and the virtual environment is constructed based on the ambient environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, making the obtained ambient environment data more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be fed back into the constructed virtual environment in real time, realizing an entertainment experience that combines the vehicle's real sense of motion with a fully simulated virtual visual experience.
Exemplary embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram illustrating an example system 100 in which various methods described herein may be implemented, according to an example embodiment.
Referring to FIG. 1, the system 100 includes an in-vehicle system 110, a server 120, and a network 130 communicatively coupling the in-vehicle system 110 and the server 120.
In-vehicle system 110 includes a display screen 114 and an application program (APP) 112 that can be displayed via the display screen 114. The application 112 may be an application installed by default on the in-vehicle system 110 or downloaded and installed by the user 102, or an applet, i.e., a lightweight application. In the case where the application 112 is an applet, the user 102 may run the application 112 directly on the in-vehicle system 110, without installing it, by searching for the application 112 in a host application (e.g., by the name of the application 112) or by scanning a graphical code (e.g., a barcode or two-dimensional code) of the application 112. In some embodiments, the in-vehicle system 110 may include one or more processors and one or more memories (not shown) and be implemented as an in-vehicle computer. In some embodiments, the in-vehicle system 110 may include more or fewer display screens 114 (for example, none), which may be used to display the virtual environment, and/or one or more speakers or other human-computer interaction devices. In some embodiments, the in-vehicle system 110 may not communicate with the server 120.
Server 120 may represent a single server, a cluster of multiple servers, a distributed system, or a cloud server providing an underlying cloud service (such as cloud database, cloud computing, cloud storage, cloud communications). It will be understood that although the server 120 is shown in FIG. 1 as communicating with only one in-vehicle system 110, the server 120 may provide background services for multiple in-vehicle systems simultaneously.
The network 130 allows wireless communication and information exchange between the vehicle and "X" (vehicle-to-everything, where "X" may be another vehicle, the road, a pedestrian, the Internet, etc.) according to agreed communication protocols and data interaction standards. Examples of the network 130 include a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), and/or a combination of communication networks such as the Internet. The network 130 may be a wired or wireless network. In one example, the network 130 may be an in-vehicle network, an inter-vehicle network, and/or an in-vehicle mobile internet network.
For purposes of the disclosed embodiments, in the example of fig. 1, the application 112 may be an electronic map application that may provide various electronic map-based functions, such as navigation, route queries, location finding, and the like. Accordingly, the server 120 may be a server used with an electronic map application. The server 120 may provide online mapping services, such as online navigation, online route query, and online location finding, to the application 112 running in the in-vehicle system 110 based on the road network data. Alternatively, the server 120 may provide the road network data to the vehicle-mounted system 110, and the application 112 running in the vehicle-mounted system 110 provides the local map service according to the road network data.
FIG. 2 is a flowchart illustrating a method 200 for building a virtual environment, according to an example embodiment. The method 200 may be performed at an in-vehicle system (e.g., the in-vehicle system 110 shown in FIG. 1), i.e., the executing entity of the various steps of the method 200 may be the in-vehicle system 110 shown in FIG. 1. In some embodiments, the method 200 may be performed at a server (e.g., the server 120 shown in FIG. 1). In some embodiments, the method 200 may be performed by an in-vehicle system (e.g., the in-vehicle system 110) in combination with a server (e.g., the server 120). Steps 210 to 240 of the method 200 are described in detail below, taking the in-vehicle system 110 as the executing entity and referring to the real vehicle environment of FIG. 3.
Referring to fig. 2, in step 210, dynamic object attribute data of dynamic objects around a vehicle at the present time is acquired.
For example, the in-vehicle system 110 may obtain dynamic object attribute data of dynamic objects (e.g., surrounding vehicles 320) around the vehicle 310 at the current time. Here, the current time refers to the time when step 210 is executed. The dynamic objects around the vehicle may include surrounding vehicles, pedestrians, and other movable objects. The dynamic object attribute data may include at least one of a position, orientation, velocity, and type of the dynamic object.
In some examples, the dynamic object attribute data of the dynamic objects around the vehicle may be collected by sensors on the vehicle. The sensors on the vehicle may include on-board sensors and/or external sensors attached to the vehicle. The on-board sensors may include one or more of a lidar, an ultrasonic radar, a camera, and the like; the external sensors may include an additional camera or the like. The raw data (e.g., images) sensed by these sensors can be processed with known algorithms to obtain dynamic object attribute data such as the position, orientation, and speed of each dynamic object. For example, taking a camera as the sensor, the captured image may be converted into two-dimensional data and then analyzed to obtain the type, position, orientation, speed, and other data of the surrounding dynamic objects. In addition, when multiple kinds of sensors are used to collect data about the dynamic objects around the vehicle, the data collected by the different sensors can be fused through a fusion algorithm (such as Bayesian inference, D-S evidence theory, or maximum-likelihood estimation), i.e., the information with which different sensors describe the same target or environmental feature is integrated into a unified feature representation, so that the resulting dynamic object attribute data is more accurate. It should be understood that the dynamic object attribute data obtained by the sensors on the vehicle is based on the vehicle coordinate system and may be represented as coordinates.
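For illustration only (this sketch is not part of the patent text), the per-object attribute data and a two-sensor combination might look as follows in Python; the class and function names are assumptions, and the fixed-weight average merely stands in for the Bayesian inference, D-S evidence theory, or maximum-likelihood fusion mentioned above:

```python
from dataclasses import dataclass

@dataclass
class DynamicObjectAttributes:
    """Attributes of one dynamic object, in the vehicle coordinate system."""
    obj_id: int
    obj_type: str    # e.g. "car", "pedestrian"
    x: float         # metres ahead of the vehicle
    y: float         # metres to the left of the vehicle
    heading: float   # orientation in radians
    speed: float     # metres per second

def fuse_detections(lidar_objs, camera_objs, lidar_weight=0.7):
    """Combine per-sensor detections of the same object by weighted averaging.

    Detections are matched by obj_id for simplicity; a real system would use
    a data-association step (e.g. nearest-neighbour matching) and a proper
    fusion algorithm rather than this fixed-weight average.
    """
    camera_by_id = {o.obj_id: o for o in camera_objs}
    fused = []
    for lo in lidar_objs:
        co = camera_by_id.get(lo.obj_id)
        if co is None:
            fused.append(lo)        # seen by the lidar only
            continue
        w = lidar_weight
        fused.append(DynamicObjectAttributes(
            obj_id=lo.obj_id,
            obj_type=co.obj_type,   # camera is better at classification
            x=w * lo.x + (1 - w) * co.x,
            y=w * lo.y + (1 - w) * co.y,
            heading=w * lo.heading + (1 - w) * co.heading,  # naive: ignores angle wrap-around
            speed=w * lo.speed + (1 - w) * co.speed,
        ))
    return fused
```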
At step 220, map navigation data of the vehicle at the current time is obtained, wherein the map navigation data at least includes static object attribute data of static objects around the vehicle.
For example, the in-vehicle system 110 may acquire static object attribute data of static objects (e.g., the building 340 and the road 330) around the vehicle 310 at the current time. Here, the current time refers to the same moment as in step 210. Static objects around a vehicle may be roads, lanes, and objects that do not change or move for long periods of time, such as objects located along or near the road (e.g., signs, poles, fire hydrants) or landmarks (e.g., buildings, bridges). The static object attribute data may include at least one of the location, type, size, and outline of the static object. In some examples, the map navigation data may also include at least one of the vehicle's current location information, departure and destination location information, navigation path information, and the like.
In some embodiments, the map navigation data of the vehicle may be acquired by an on-board satellite navigation system, such as the BeiDou Navigation Satellite System (BDS), the Global Positioning System (GPS), or GLONASS. An on-board satellite navigation system may include a receiver and a database. The receiver receives the vehicle's current location information, in longitude and latitude coordinates, in real time. The database stores detailed road and highway map information (including the map navigation data described above) in the form of digital map data. In some examples, the map navigation data at the current time may be obtained as follows: first, the current location information of the vehicle at the current time is acquired through the receiver of the on-board satellite navigation system; then, the on-board satellite navigation system searches the database for the map information of the roads and highways around that location.
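As a rough sketch of the lookup described above (illustrative only; the function name and the dict keys are assumed, not the patent's or any navigation system's real schema):

```python
import math

def query_static_objects(database, current_lon, current_lat, radius_m=200.0):
    """Return the static objects within radius_m of the vehicle's position.

    `database` stands in for the digital map data of the on-board satellite
    navigation system: an iterable of dicts with at least 'lon', 'lat',
    'type', 'size' and 'outline' keys (all assumed, not a real schema).
    """
    m_per_deg_lat = 111_320.0   # rough metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(current_lat))
    nearby = []
    for obj in database:
        east = (obj["lon"] - current_lon) * m_per_deg_lon    # metres east of the vehicle
        north = (obj["lat"] - current_lat) * m_per_deg_lat   # metres north of the vehicle
        if math.hypot(east, north) <= radius_m:
            nearby.append(obj)
    return nearby
```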
At step 230, at least the dynamic object attribute data is fused with the static object attribute data to obtain the ambient environment data of the vehicle.
For example, the in-vehicle system 110 may fuse the dynamic object attribute data (e.g., attribute data of the surrounding vehicle 320) with the static object attribute data (e.g., attribute data of the building 340 and the road 330) to obtain the ambient environment data of the vehicle 310 (e.g., including the attribute data of the surrounding vehicle 320, the building 340, and the road 330). By fusing the dynamic and static object attribute data acquired through different channels at the current time, more comprehensive ambient environment data for the current time can be obtained. In some examples, the fusion may simply combine the dynamic object attribute data and the static object attribute data at the current time, or it may associate the two according to a specific rule.
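A minimal sketch of this simplest form of fusion, bundling both data sets into one time-stamped snapshot (all names are illustrative):

```python
import time

def fuse_ambient_data(dynamic_objects, static_objects, timestamp=None):
    """Bundle dynamic and static attribute data into one ambient-data snapshot.

    This is the simple combination mentioned above: both data sets are kept
    side by side and tagged with the shared acquisition time, so that later
    consumers can treat them as a single view of the surroundings.
    """
    return {
        "timestamp": time.time() if timestamp is None else timestamp,
        "dynamic": list(dynamic_objects),   # vehicle-frame attribute data
        "static": list(static_objects),     # map-frame attribute data
    }
```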
At step 240, at least one of the following operations is performed: building a virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to the in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data.
For example, the in-vehicle system 110 may build a virtual environment, such as the virtual environment 400 shown in FIG. 4, based at least in part on the ambient environment data. The constructed virtual environment may then be sent to the display screen 114 for display, or to another in-vehicle entertainment terminal for display. Alternatively or additionally, the in-vehicle system 110 may transmit the ambient environment data to an in-vehicle entertainment terminal so that the terminal builds the virtual environment, such as the virtual environment 400 shown in FIG. 4, based at least in part on that data. The in-vehicle system 110 may transmit the virtual environment and/or the ambient environment data to the in-vehicle entertainment terminal via a wired connection or a wireless connection (e.g., a Bluetooth or Wi-Fi connection). The in-vehicle entertainment terminal may be one or more of a tablet, a VR device (e.g., a VR headset), an AR device (e.g., an AR helmet or glasses), a gamepad, and the like. In some examples, where several in-vehicle entertainment terminals are present, the presentation of the virtual environment may differ between them. For example, on a VR headset, the virtual environment is presented on the display screen of the VR device, providing a fully immersive experience; on an AR helmet or glasses, the virtual environment is superimposed on the user's view of the real world, providing a mixed, augmented experience; on a tablet or on-board screen, the virtual environment is presented on the screen. It should be understood that the content of the virtual environments on the various in-vehicle entertainment terminals may or may not be consistent, i.e., they may have the same or different scene settings.
Further, the virtual environment may include a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment with an appearance that conforms to the scene setting and differs from the real appearance of the dynamic object, and the static visual element being rendered with an appearance that conforms to the scene setting and differs from the real appearance of the static object. Taking the real environment of the vehicle 310 in FIG. 3 as an example, the static visual element corresponding to the building 340 may be rendered with the appearance of a planet, and the dynamic visual element corresponding to the surrounding vehicle 320 may be rendered with the appearance of a spaceship.
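To make the scene-setting idea concrete, a hypothetical mapping from real object types to themed assets might look like this (the theme and asset names are invented for illustration):

```python
# Hypothetical "space" scene setting: each real object type is mapped to a
# themed asset whose appearance differs from the object's real appearance.
SPACE_THEME = {
    "building": "planet_model",
    "car": "spaceship_model",
    "pedestrian": "astronaut_model",
    "road": "star_lane_model",
}

def to_visual_element(obj_type, theme=SPACE_THEME, default="asteroid_model"):
    """Pick the themed asset that replaces an object's real appearance."""
    return theme.get(obj_type, default)

print(to_visual_element("building"))   # -> planet_model
print(to_visual_element("car"))        # -> spaceship_model
```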
According to the embodiments of the present disclosure, dynamic object attribute data and map navigation data around the vehicle are acquired in real time, and the virtual environment is constructed based on the ambient environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, making the obtained ambient environment data more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be fed back into the constructed virtual environment in real time, realizing an entertainment experience that combines the vehicle's real sense of motion with a fully simulated virtual visual experience.
In some embodiments, step 230 may include associating the dynamic object attribute data with the static object attribute data to obtain the ambient environment data of the vehicle at the current time. This makes it easier to organize the dynamic and static object attribute data together, and to restore more accurately, when constructing the virtual environment, the physical sensations and the real environment perceived by the passenger at the current time. For example, the dynamic object attribute data and the static object attribute data of objects that appear around the vehicle at the same moment, i.e., the current time, may be associated based on time.
In some other embodiments, associating the dynamic object attribute data with the static object attribute data may further include associating them via the current location information of the vehicle. The dynamic object attribute data acquired by the sensors is based on the vehicle coordinate system (i.e., the origin of the coordinate system is the vehicle's current position), whereas the static object attribute data acquired by the on-board satellite navigation system is based on the geodetic coordinate system. Associating the two through the vehicle's current location information allows the dynamic visual elements constructed from the dynamic object attribute data to be attached directly to the virtual environment constructed from the map navigation data, with the visual element corresponding to the vehicle as the reference, without performing a coordinate transformation, which facilitates the subsequent construction of the virtual environment.
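For context, the following sketch shows the kind of geodetic-to-vehicle-frame projection that this association avoids having to perform at rendering time (a flat-earth approximation; the heading convention and all names are assumptions):

```python
import math

def geodetic_to_vehicle_frame(obj_lon, obj_lat, ego_lon, ego_lat, ego_heading):
    """Project a static object's geodetic position into the vehicle frame.

    Flat-earth approximation around the ego position; ego_heading is assumed
    to be measured clockwise from north, in radians. Returns (x, y) with x
    metres ahead of the vehicle and y metres to its left.
    """
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(ego_lat))
    east = (obj_lon - ego_lon) * m_per_deg_lon
    north = (obj_lat - ego_lat) * m_per_deg_lat
    x = east * math.sin(ego_heading) + north * math.cos(ego_heading)   # forward
    y = north * math.sin(ego_heading) - east * math.cos(ego_heading)   # left
    return x, y
```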
In some embodiments, where the in-vehicle entertainment terminal includes a plurality of in-vehicle entertainment terminals, the transmitting ambient environment data to the in-vehicle entertainment terminals in step 240 may include transmitting the ambient environment data to the plurality of in-vehicle entertainment terminals in synchronization such that each in-vehicle entertainment terminal may construct the virtual environment in synchronization.
In some embodiments, transmitting the ambient environment data to the in-vehicle entertainment terminal in step 240 may include: converting the ambient environment data into a format that the in-vehicle entertainment terminal can recognize; and transmitting the format-converted ambient environment data to the in-vehicle entertainment terminal. Alternatively, the fused ambient environment data may be transmitted directly to the in-vehicle entertainment terminal and then format-converted by the terminal itself.
In some embodiments, the static object attribute data included in the map navigation data may be referred to as first static object attribute data, and the method 200 may further include: acquiring second static object attribute data of the static objects around the vehicle through sensors; and correcting the first static object attribute data with the second static object attribute data. For example, the first and second static object attribute data may be fused by a fusion algorithm (e.g., Bayesian inference, D-S evidence theory, or maximum-likelihood estimation) to correct the first static object attribute data. This is useful because the first static object attribute data obtained from the on-board satellite navigation system is typically not updated in real time and may lag behind the actual environment. The sensors may include on-board sensors and/or external sensors attached to the vehicle; the on-board sensors may include one or more of a lidar, an ultrasonic radar, a camera, and the like, and the external sensors may include an additional camera or the like. According to this embodiment, correcting the first static object attribute data acquired by the on-board satellite navigation system with the second static object attribute data collected in real time by the sensors makes the ambient environment data more accurate, so that the physical sensations and the real environment seen by the passenger at the current time can be restored better when the virtual environment is constructed.
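A minimal sketch of such a correction (the convex combination stands in for the fusion algorithms named above, and the weight is an arbitrary assumption):

```python
def correct_static_position(map_pos, sensor_pos, sensor_weight=0.6):
    """Correct a map-derived position with a fresher sensor measurement.

    A convex combination stands in for Bayesian / D-S / maximum-likelihood
    fusion; sensor_weight > 0.5 reflects that the real-time sensor data is
    trusted to offset the map's possible lag.
    """
    w = sensor_weight
    return tuple(w * s + (1 - w) * m for s, m in zip(sensor_pos, map_pos))

print(correct_static_position((10.0, 5.0), (10.4, 5.2)))  # approx (10.24, 5.12)
```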
In some embodiments, the method 200 may further include collecting pose data of the vehicle 310 via sensors on the vehicle 310, and constructing the virtual environment based at least in part on the ambient environment data in step 240 may include: constructing the virtual environment based on the ambient environment data and the pose data. The pose data may include at least one of vehicle speed, acceleration, heading angle, and the like. The sensors may include on-board sensors and/or external sensors attached to the vehicle: the on-board sensors may include one or more of a speed sensor, an acceleration sensor, a steering-wheel angle sensor, a lateral-angle sensor, a lidar, a camera, and the like, and the external sensors may include one or more of an additional camera, an Inertial Measurement Unit (IMU), and the like. In some examples, the virtual environment may also include a visual element corresponding to the vehicle 310. In that case, constructing the virtual environment based on the ambient environment data and the pose data may include changing the pose of the visual element corresponding to the vehicle 310 based on the pose data: for example, when the vehicle 310 turns left, the visual element corresponding to it in the virtual environment also turns left. The user thus sees, in the constructed virtual environment, the same change of view as inside the vehicle in the real scene, obtaining a virtual experience consistent with reality. In some examples, the perspective of the virtual environment may also be changed based on the pose data for the same purpose: for example, as the vehicle 310 turns left, the field of view of the virtual environment sweeps to the right (i.e., both the static and the dynamic visual elements move to the right). A virtual environment constructed from both the ambient environment data and the pose data can better reproduce the physical sensations of the occupants while the vehicle is moving.
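As a simple illustration of feeding pose data into the rendered view (the function name, units, and sign conventions are assumptions):

```python
def update_view_yaw(view_yaw_deg, yaw_rate_dps, dt_s):
    """Advance the virtual camera's yaw by the vehicle's measured yaw rate.

    yaw_rate_dps would come from a steering-angle sensor or an IMU. Rotating
    the camera with the vehicle makes the scene sweep past exactly as the
    real surroundings do, keeping the visual and physical sensations aligned.
    """
    return (view_yaw_deg + yaw_rate_dps * dt_s) % 360.0

# e.g. a 10 deg/s left turn sampled every 50 ms turns the view by 0.5 degrees
print(update_view_yaw(0.0, 10.0, 0.05))   # -> 0.5
```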
In some other embodiments, the pose data of the vehicle may instead be derived from the ambient environment data. For example, taking the real environment of FIG. 3, the pose data of the vehicle 310 at a given moment may be determined from the changes in position of the surrounding vehicle 320, the road 330, and/or the building 340 relative to the vehicle 310.
In some embodiments, the method 200 may further include causing in-vehicle devices to perform corresponding actions based on the virtual environment, the actions including at least one of seat vibration, audio playback, and lighting. The in-vehicle devices may include the seats, speakers, ambient lights, and the like. For example, when a collision occurs in the virtual scene, the seat vibrates accordingly and/or the car speakers play audio matching the atmosphere. Corresponding physical actions can thus be executed according to the constructed virtual environment, providing the occupants with a multi-dimensional entertainment experience. Additionally or alternatively, the actions performed may be made to conform to the scene setting: for example, when it begins to rain in the scene, the speakers may play the sound of rain and/or the ambient lights may dim or flash to create an atmosphere matching the scene.
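A sketch of dispatching such actions from virtual-environment events (the event, device, and action names are hypothetical, not from the patent):

```python
# Hypothetical mapping from virtual-environment events to cabin actions;
# none of these device or action names come from the patent itself.
EVENT_ACTIONS = {
    "collision":  [("seat", "vibrate"), ("speaker", "impact_sound")],
    "rain_start": [("speaker", "rain_sound"), ("ambient_light", "dim")],
}

def dispatch_event(event, actuators):
    """Trigger every in-vehicle device registered for a virtual event.

    `actuators` maps a device name to a callable that accepts an action
    name; here we just print, standing in for real device commands.
    """
    for device, action in EVENT_ACTIONS.get(event, []):
        handler = actuators.get(device)
        if handler is not None:
            handler(action)

dispatch_event("collision", {
    "seat": lambda a: print("seat:", a),
    "speaker": lambda a: print("speaker:", a),
})
```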
In some embodiments, the constructed virtual environment may be used in various online activities, such as gaming, video conferencing, shopping, and the like.
Although the operations are depicted in fig. 2 as occurring in a particular order, this should not be construed as requiring that such operations be performed in the particular order shown or in sequential order, nor that all illustrated operations be performed, to achieve desirable results. For example, step 220 may be performed prior to step 210, or concurrently with step 210.
Fig. 5 is a schematic block diagram illustrating an apparatus 500 for building a virtual environment, according to an example embodiment. The apparatus 500 may include a first acquisition module 510, a second acquisition module 520, a fusion module 530, and a construction module 540. The first obtaining module 510 is configured to obtain dynamic object attribute data of dynamic objects around the vehicle at a current time. The second obtaining module 520 is configured to obtain map navigation data of the vehicle at the current time, the map navigation data including at least static object attribute data of static objects around the vehicle. The fusion module 530 is configured to fuse at least the dynamic object property data with the static object property data to obtain the ambient data of the vehicle. The build module 540 is configured to perform at least one of the following operations: constructing a virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to the in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data. Wherein the virtual environment comprises a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment with an appearance conforming to the scene setting different from the real appearance of the dynamic object, the static visual element being rendered in the virtual environment with an appearance conforming to the scene setting different from the real appearance of the static object.
According to the embodiments of the present disclosure, information about the vehicle's surroundings can, on the one hand, be collected through multiple channels, making the obtained ambient environment data more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be fed back into the constructed virtual environment in real time, realizing an entertainment experience that combines the vehicle's real sense of motion with a fully simulated virtual visual experience.
It should be understood that the various modules of the apparatus 500 shown in fig. 5 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to the method 200 are equally applicable to the apparatus 500 and the modules comprised thereby. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein can be split across multiple modules, and/or at least some of the functionality of multiple modules can be combined into a single module. Performing an action by a particular module discussed herein includes the particular module itself performing the action, or the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module that performs an action can include the particular module itself and/or another module that it invokes or otherwise accesses. For example, the first acquisition module 510 and the second acquisition module 520 described above may, in some embodiments, be combined into a single module. As another example, the fusion module 530 may, in some embodiments, include the first acquisition module 510.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various modules described above with respect to FIG. 5 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the first acquisition module 510, the second acquisition module 520, the fusion module 530, and the build module 540 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip (including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, or Digital Signal Processor (DSP)), memory, one or more communication interfaces, and/or other circuitry), and may optionally execute received program code and/or include embedded firmware to perform functions.
According to an aspect of the disclosure, a computer device is provided that includes at least one memory, at least one processor, and a computer program stored on the at least one memory. The at least one processor is configured to execute the computer program to implement the steps of any of the method embodiments described above.
According to an aspect of the present disclosure, there is provided a vehicle comprising the apparatus 500 or the computer device as described above.
In some embodiments, the vehicle further comprises an in-vehicle entertainment terminal for constructing a virtual environment based at least in part on the ambient environment data described above.
According to an aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
According to an aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of any of the method embodiments described above.
Illustrative examples of such computer devices, non-transitory computer-readable storage media, and computer program products are described below in connection with FIG. 6.
FIG. 6 illustrates an example configuration of a computer device 600 that can be used to implement the methods described herein. For example, server 120 and/or in-vehicle system 110 shown in FIG. 1 may include an architecture similar to computer device 600. The apparatus 500 described above may also be implemented in whole or at least in part by a computer device 600 or similar device or system.
The computer device 600 may include at least one processor 602, memory 604, communication interface(s) 606, display device 608, other input/output (I/O) devices 610, and one or more mass storage devices 612, capable of communicating with each other, such as through a system bus 614 or other suitable connection.
Processor 602 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 602 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 602 can be configured to retrieve and execute computer readable instructions stored in the memory 604, mass storage device 612, or other computer readable medium, such as program code for an operating system 616, program code for an application program 618, program code for other programs 620, and so forth.
Memory 604 and mass storage device 612 are examples of computer readable storage media for storing instructions that are executed by processor 602 to implement the various functions described above. By way of example, memory 604 may generally include both volatile and nonvolatile memory (e.g., RAM, ROM, and the like). In addition, mass storage device 612 may generally include a hard disk drive, solid state drive, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and the like. Memory 604 and mass storage device 612 may both be referred to herein collectively as memory or computer-readable storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 602 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of programs may be stored on the mass storage device 612. These programs include an operating system 616, one or more application programs 618, other programs 620, and program data 622, which can be loaded into memory 604 for execution. Examples of such applications or program modules may include, for instance, computer program logic (e.g., computer program code or instructions) for implementing the following components/functions: method 200 (including any suitable steps of method 200), and/or additional embodiments described herein.
Although illustrated in fig. 6 as being stored in memory 604 of computer device 600, modules 616, 618, 620, and 622, or portions thereof, may be implemented using any form of computer-readable media that is accessible by computer device 600. As used herein, "computer-readable media" includes at least two types of computer-readable media, namely computer-readable storage media and communication media.
Computer-readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computer device. In contrast, communication media may embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism. Computer-readable storage media, as defined herein, does not include communication media.
One or more communication interfaces 606 are used for exchanging data with other devices, e.g., via a network or a direct connection. Such a communication interface may be one or more of the following: any type of network interface (e.g., a Network Interface Card (NIC)), a wired or wireless interface (such as an IEEE 802.11 Wireless LAN (WLAN) interface), a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a Near Field Communication (NFC) interface, etc. The communication interface 606 may facilitate communication over a variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and so forth. The communication interface 606 may also provide communication with external storage devices (not shown), such as storage arrays, network attached storage, storage area networks, and so on.
In some examples, a display device 608, such as a monitor, may be included for displaying information and images to a user. Other I/O devices 610 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so forth.
The techniques described herein may be supported by these various configurations of computer device 600 and are not limited to specific examples of the techniques described herein. The functionality may also be implemented, in whole or in part, on a "cloud" using a distributed system, for example. The cloud includes and/or represents a platform for resources. The platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud. The resources can include applications and/or data that can be used when performing computing processes on servers remote from the computer device 600. Resources may also include services provided over the internet and/or over a subscriber network such as a cellular or Wi-Fi network. The platform may abstract resources and functions to connect the computer device 600 with other computer devices. Thus, implementations of the functionality described herein may be distributed throughout the cloud. For example, the functionality may be implemented in part on the computer device 600 and in part by a platform that abstracts the functionality of the cloud.
While the disclosure has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative and exemplary rather than restrictive; the present disclosure is not limited to the disclosed embodiments. Variations of the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, the indefinite article "a" or "an" does not exclude a plurality, the term "plurality" means two or more, and the term "based on" should be construed as "based at least in part on". The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (10)

1. A method for constructing a virtual environment, comprising:
acquiring dynamic object attribute data of dynamic objects around a vehicle at the current moment;
obtaining map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle;
fusing at least the dynamic object attribute data with the static object attribute data to obtain ambient environment data of the vehicle; and
performing at least one of: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data,
wherein the virtual environment comprises a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance that conforms to a scene setting and differs from the real appearance of the dynamic object, and the static visual element being rendered in the virtual environment to have an appearance that conforms to the scene setting and differs from the real appearance of the static object.
2. The method of claim 1, wherein fusing at least the dynamic object attribute data with the static object attribute data comprises: associating the dynamic object attribute data with the static object attribute data to obtain the ambient environment data of the vehicle at the current moment.
3. The method of claim 2, wherein the map navigation data further includes current location information of the vehicle at the current time, and wherein associating the dynamic object attribute data and the static object attribute data comprises: associating the dynamic object attribute data with the static object attribute data via the current location information of the vehicle.
4. The method of claim 1, wherein the dynamic object attribute data of the dynamic objects around the vehicle is collected by sensors on the vehicle, and wherein the map navigation data of the vehicle is acquired by an on-board satellite navigation system.
5. An apparatus for building a virtual environment, comprising:
a first acquisition module configured to acquire dynamic object attribute data of a dynamic object around a vehicle at a current time;
a second obtaining module configured to obtain map navigation data of the vehicle at the current time, the map navigation data including at least static object attribute data of static objects around the vehicle;
a fusion module configured to fuse at least the dynamic object attribute data with the static object attribute data to obtain ambient environment data of the vehicle; and
a build module configured to perform at least one of the following operations: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data,
wherein the virtual environment comprises a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance that conforms to a scene setting and differs from the real appearance of the dynamic object, and the static visual element being rendered in the virtual environment to have an appearance that conforms to the scene setting and differs from the real appearance of the static object.
6. A computer device, comprising:
at least one processor; and
at least one memory having a computer program stored thereon,
wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform the method of any one of claims 1 to 4.
7. A vehicle comprising the apparatus of claim 5 or the computer device of claim 6.
8. The vehicle of claim 7, further comprising an in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 4.
10. A computer program product comprising a computer program which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 4.
CN202210515734.8A 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle Active CN115035239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210515734.8A CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210515734.8A CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Publications (2)

Publication Number Publication Date
CN115035239A 2022-09-09
CN115035239B 2023-05-09

Family

ID=83120258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210515734.8A Active CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Country Status (1)

Country Link
CN (1) CN115035239B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765989A * 2017-11-03 2019-05-17 Dynamic mapping of virtual and physical interactions
CN110717991A (en) * 2018-07-12 2020-01-21 通用汽车环球科技运作有限责任公司 System and method for in-vehicle augmented virtual reality system
CN110853393A (en) * 2019-11-26 2020-02-28 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN114461064A (en) * 2022-01-21 2022-05-10 北京字跳网络技术有限公司 Virtual reality interaction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115035239B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN109215433B (en) Vision-based driving scenario generator for automated driving simulation
EP3244591B1 (en) System and method for providing augmented virtual reality content in autonomous vehicles
US11155268B2 (en) Utilizing passenger attention data captured in vehicles for localization and location-based services
US10262234B2 (en) Automatically collecting training data for object recognition with 3D lidar and localization
JP7043755B2 (en) Information processing equipment, information processing methods, programs, and mobiles
CN107563267B (en) System and method for providing content in unmanned vehicle
US20200293041A1 (en) Method and system for executing a composite behavior policy for an autonomous vehicle
US11127373B2 (en) Augmented reality wearable system for vehicle occupants
JP7259749B2 (en) Information processing device, information processing method, program, and moving body
KR102279078B1 (en) A v2x communication-based vehicle lane system for autonomous vehicles
JP6813027B2 (en) Image processing device and image processing method
WO2021193099A1 (en) Information processing device, information processing method, and program
CN110007752A (en) The connection of augmented reality vehicle interfaces
KR20200062193A (en) Information processing device, mobile device, information processing method, mobile device control method, and program
CN114201038A (en) Integrated augmented reality system for sharing augmented reality content between vehicle occupants
JP2022132075A (en) Ground Truth Data Generation for Deep Neural Network Perception in Autonomous Driving Applications
Karle et al. EDGAR: An Autonomous Driving Research Platform--From Feature Development to Real-World Application
WO2021033591A1 (en) Information processing device, information processing method, and program
WO2021024805A1 (en) Information processing device, information processing method, and program
US20220315033A1 (en) Apparatus and method for providing extended function to vehicle
EP4358524A1 (en) Mr service platform for providing mixed reality automotive meta service, and control method therefor
CN115035239B (en) Method and device for building virtual environment, computer equipment and vehicle
JP2019117435A (en) Image generation device
US11687149B2 (en) Method for operating a mobile, portable output apparatus in a motor vehicle, context processing device, mobile output apparatus and motor vehicle
Hussein Control and communication systems for automated vehicles cooperation and coordination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant