CN115035239B - Method and device for building virtual environment, computer equipment and vehicle - Google Patents

Method and device for building virtual environment, computer equipment and vehicle

Info

Publication number
CN115035239B
CN115035239B (application CN202210515734.8A)
Authority
CN
China
Prior art keywords
vehicle
data
attribute data
object attribute
static
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210515734.8A
Other languages
Chinese (zh)
Other versions
CN115035239A (en)
Inventor
郭麟
布如国
汤曌
魏博源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Binli Information Technology Co Ltd
Original Assignee
Beijing Binli Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Binli Information Technology Co Ltd
Priority to CN202210515734.8A
Publication of CN115035239A
Application granted
Publication of CN115035239B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a method for building a virtual environment. The method comprises the following steps: acquiring dynamic object attribute data of dynamic objects around a vehicle at the current moment; acquiring map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle; fusing at least the dynamic object attribute data with the static object attribute data to obtain surrounding data of the vehicle; and performing at least one of the following operations: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data.

Description

Method and device for building virtual environment, computer equipment and vehicle
Technical Field
The present disclosure relates to the field of vehicle technology, and in particular, to a method and apparatus for building a virtual environment, a computer device, a vehicle, a computer readable storage medium, and a computer program product.
Background
In modern society, people frequently need to travel between places for work and daily life, and the vehicle, as an indispensable means of transportation, plays an important role in people's lives. As the time people spend in cars increases, the demands on the functionality and entertainment of the car continue to grow. How to improve the functionality and entertainment of vehicles to meet the different needs of vehicle occupants is therefore a topic of current interest.
Virtual reality is a technique commonly used to provide entertainment in gaming systems: by creating a computer-based virtual environment, it allows people to experience situations that they could never experience in real life because of spatial and physical constraints. The basic principle of virtual reality is to simulate a three-dimensional spatial environment by means of a computer, dedicated hardware devices (such as video headsets, 3D audio devices, force-feedback game devices, and the like), and software, so that the user can interact with the computer in the virtual world.
Accordingly, it is desirable to build a virtual environment in a vehicle to increase the functionality and entertainment of the vehicle.
Disclosure of Invention
It would be advantageous to provide a mechanism that alleviates, mitigates or even eliminates one or more of the above problems.
According to an aspect of the present disclosure, there is provided a method for constructing a virtual environment, including: acquiring dynamic object attribute data of dynamic objects around a vehicle at the current moment; acquiring map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle; fusing at least the dynamic object attribute data with the static object attribute data to obtain surrounding data of the vehicle; and performing at least one of the following operations: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data, wherein the virtual environment includes a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance conforming to a scene setting that is different from a real appearance of the dynamic object, the static visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the static object.
According to another aspect of the present disclosure, there is provided an apparatus for constructing a virtual environment, including: the first acquisition module is configured to acquire dynamic object attribute data of dynamic objects around the vehicle at the current moment; a second acquisition module configured to acquire map navigation data of the vehicle at the current time, the map navigation data including at least static object attribute data of static objects around the vehicle; a fusion module configured to fuse at least the dynamic object attribute data with the static object attribute data to obtain ambient data of the vehicle; and a build module configured to perform at least one of: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data, wherein the virtual environment includes a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance conforming to a scene setting that is different from a real appearance of the dynamic object, the static visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the static object.
According to yet another aspect of the present disclosure, there is provided a computer apparatus comprising: at least one processor; and at least one memory having stored thereon a computer program, wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform a method for building a virtual environment according to the present disclosure.
According to yet another aspect of the present disclosure, there is provided a vehicle comprising an apparatus for building a virtual environment according to the present disclosure or a computer device according to the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform a method for building a virtual environment according to the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, causes the processor to perform a method for building a virtual environment according to the present disclosure.
According to one or more embodiments of the present disclosure, attribute data of dynamic objects around the vehicle and map navigation data are acquired in real time, and the virtual environment is constructed based on the surrounding environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, so that the acquired surrounding environment data is more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle are reflected in the constructed virtual environment in real time. This provides an entertainment experience that combines the real motion sensations of the vehicle with a fully simulated virtual visual experience.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 is a schematic diagram illustrating an example system in which various methods described herein may be implemented, according to an example embodiment;
FIG. 2 is a flowchart illustrating a method for building a virtual environment in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a real environment of a vehicle according to an example embodiment;
FIG. 4 is a schematic diagram illustrating a virtual environment in accordance with an example embodiment;
FIG. 5 is a schematic block diagram illustrating an apparatus for building a virtual environment in accordance with an example embodiment; and
fig. 6 is a block diagram illustrating an exemplary computer device that can be applied to exemplary embodiments.
Detailed Description
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. As used herein, the term "plurality" means two or more, and the term "based on" should be interpreted as "based at least in part on". Furthermore, the terms "and/or" and "at least one of" encompass any and all possible combinations of the listed items.
In the related art, with the development of intelligent vehicle technology, more and more vehicles are equipped with on-board sensors (e.g., on-board lidar, millimeter-wave radar, cameras, etc.) for sensing the surrounding environment. These on-board sensors are used to obtain environmental information around the vehicle, including the types of surrounding objects and their positions and speeds in the vehicle coordinate system. The surrounding environment information acquired by a vehicle's sensors is currently used mainly for Advanced Driver Assistance Systems (ADAS), Automated Driving (AD), and the like. It is therefore desirable to use the surrounding environment information that a vehicle can obtain to build a virtual environment in the vehicle, so as to increase the functionality and entertainment of the vehicle.
In the present disclosure, a method of constructing a virtual environment using the surrounding information that a vehicle can perceive is provided. The dynamic object attribute data and the map navigation data around the vehicle are acquired in real time, and the virtual environment is constructed based on the surrounding environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, so that the acquired surrounding environment data is more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle are reflected in the constructed virtual environment in real time. This provides an entertainment experience that combines the real motion sensations of the vehicle with a fully simulated virtual visual experience.
Exemplary embodiments of the present disclosure are described in detail below with reference to the attached drawings.
FIG. 1 is a schematic diagram illustrating an example system 100 in which various methods described herein may be implemented, according to an example embodiment.
Referring to fig. 1, the system 100 includes an in-vehicle system 110, a server 120, and a network 130 communicatively coupling the in-vehicle system 110 with the server 120.
In-vehicle system 110 includes a display 114 and an application (APP) 112 displayable via display 114. The application 112 may be an application installed by default on the in-vehicle system 110 or downloaded and installed by the user 102, or a lightweight application such as an applet. In the case where the application 112 is an applet, the user 102 may run the application 112 directly on the in-vehicle system 110, without installing it, by searching for the application 112 in a host application (e.g., by the name of the application 112) or by scanning a graphical code (e.g., a bar code or two-dimensional code) of the application 112. In some embodiments, the in-vehicle system 110 may include one or more processors and one or more memories (not shown), and the in-vehicle system 110 may be implemented as an in-vehicle computer. In some embodiments, the in-vehicle system 110 may include more or fewer display screens 114 (e.g., no display screen 114), which may be used, for example, to display a virtual environment, and/or one or more speakers or other human-machine interaction devices. In some embodiments, the in-vehicle system 110 may not communicate with the server 120.
Server 120 may represent a single server, a cluster of multiple servers, a distributed system, or a cloud server providing basic cloud services (such as cloud databases, cloud computing, cloud storage, cloud communication). It will be appreciated that although server 120 is shown in fig. 1 as communicating with only one in-vehicle system 110, server 120 may provide background services for multiple in-vehicle systems simultaneously.
The network 130 allows wireless vehicle-to-everything (V2X, where "X" denotes vehicles, roads, pedestrians, the Internet, etc.) communication and information exchange in accordance with agreed communication protocols and data interaction standards. Examples of network 130 include a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), and/or a combination of communication networks such as the Internet. The network 130 may be a wired or wireless network. In one example, the network 130 may be an in-vehicle network, an inter-vehicle network, and/or an in-vehicle mobile internet.
For purposes of embodiments of the present disclosure, in the example of fig. 1, the application 112 may be an electronic map application that may provide various electronic map-based functions, such as navigation, route query, location finding, and so forth. Accordingly, the server 120 may be a server used with an electronic map application. The server 120 may provide online map services, such as online navigation, online route query, online location lookup, etc., to applications 112 running in the in-vehicle system 110 based on the road network data. Alternatively, the server 120 may also provide road network data to the in-vehicle system 110, from which local map services are provided by the application 112 running in the in-vehicle system 110.
Fig. 2 is a flowchart illustrating a method 200 for building a virtual environment in accordance with an example embodiment. The method 200 may be performed at an in-vehicle system (e.g., the in-vehicle system 110 shown in fig. 1), i.e., the subject of execution of the steps of the method 200 may be the in-vehicle system 110 shown in fig. 1. In some embodiments, the method 200 may be performed at a server (e.g., the server 120 shown in fig. 1). In some embodiments, the method 200 may be performed by an in-vehicle system (e.g., in-vehicle system 110) in combination with a server (e.g., server 120). Hereinafter, the steps 210 to 240 of the method 200 will be described in detail with reference to the real environment of the vehicle of fig. 3, taking the execution subject as the in-vehicle system 110 as an example.
Referring to fig. 2, at step 210, dynamic object attribute data of dynamic objects around a vehicle at a current time is acquired.
For example, the in-vehicle system 110 may obtain dynamic object attribute data for dynamic objects (e.g., the surrounding vehicles 320) surrounding the vehicle 310 at the current time. The current time refers to the time of executing step 210. Dynamic objects around a vehicle may include surrounding vehicles, pedestrians, and like movable objects. The dynamic object attribute data may include at least one of a position, an orientation, a speed, and a type of the dynamic object.
In some examples, the dynamic object attribute data of the dynamic objects surrounding the vehicle may be dynamic object attribute data collected by sensors on the vehicle. The sensors on the vehicle may include on-board sensors and/or external sensors attached to the vehicle. The on-board sensors may include one or more of a lidar, an ultrasonic radar, a camera, and the like. The external sensors may include an additional camera and the like. The raw data (e.g., images) sensed by the above-mentioned sensors can be processed by corresponding prior-art algorithms to obtain dynamic object attribute data such as the position, orientation, and speed of the dynamic objects. For example, taking a camera as the sensor, the acquired image may be converted into two-dimensional data and then analyzed to obtain data such as the type, position, orientation, and speed of the surrounding dynamic objects. In addition, when multiple kinds of sensors are used to collect data on the dynamic objects around the vehicle, the data collected by the different sensors can be fused through fusion algorithms (such as Bayesian inference, D-S evidence theory, and maximum likelihood estimation), that is, the information from the different sensors describing a given object or environmental characteristic is synthesized into a unified feature representation, so that the obtained dynamic object attribute data is more accurate. It should be understood here that the dynamic object attribute data obtained by the sensors on the vehicle is based on the vehicle coordinate system and may be represented as coordinates.
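By way of a non-limiting illustration only, the following Python sketch shows one possible shape for such vehicle-frame dynamic object attribute data and a deliberately simplified fusion of a camera detection with a radar detection of the same object; all class, field, and function names are invented for this example, and a fixed weighted average stands in for the probabilistic fusion methods (Bayesian inference, D-S evidence theory, maximum likelihood estimation) mentioned above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DynamicObjectAttributes:
    """Attributes of one dynamic object, expressed in the vehicle coordinate system."""
    obj_id: int
    obj_type: str                  # e.g. "vehicle", "pedestrian"
    position: Tuple[float, float]  # (x, y) in meters relative to the ego vehicle
    heading: float                 # orientation in radians
    speed: float                   # m/s

def fuse_detections(camera_det: DynamicObjectAttributes,
                    radar_det: DynamicObjectAttributes,
                    camera_weight: float = 0.4) -> DynamicObjectAttributes:
    """Toy fusion of two detections of the same object using a fixed weight.

    A production system would use Bayesian inference, D-S evidence theory or
    maximum-likelihood estimation instead of a constant weight.
    """
    w_c, w_r = camera_weight, 1.0 - camera_weight
    return DynamicObjectAttributes(
        obj_id=camera_det.obj_id,
        obj_type=camera_det.obj_type,  # the object type is usually taken from the camera
        position=(w_c * camera_det.position[0] + w_r * radar_det.position[0],
                  w_c * camera_det.position[1] + w_r * radar_det.position[1]),
        heading=camera_det.heading,
        speed=w_c * camera_det.speed + w_r * radar_det.speed,  # radar speed weighted higher
    )
```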
In step 220, map navigation data of the vehicle at the current moment is obtained, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle.
For example, the in-vehicle system 110 may obtain static object attribute data for static objects (e.g., the building 340 and the road 330) surrounding the vehicle 310 at the current time. The current time refers to the time of executing step 210. Static objects around the vehicle may be objects that do not change or move over a long period of time, such as roads, lanes, and various objects (e.g., markers, poles, fire hydrants, etc.) or landmarks (e.g., buildings, bridges, etc.) located along or near the roads. The static object attribute data may include at least one of a location, a type, a size, and an outline of the static object. In some examples, the map navigation data may further include at least one of current location information, departure location information, and destination location information of the vehicle, navigation path information, and the like.
In some embodiments, map navigation data of the vehicle may be acquired by an on-board satellite navigation system. The on-board satellite navigation system may be the BeiDou Navigation Satellite System (BDS), the Global Positioning System (GPS), GLONASS, or the like. The on-board satellite navigation system may include a receiver and a database. The receiver may receive the current location information of the vehicle in the form of longitude and latitude coordinates in real time. The database may be used to store detailed road and highway map information in the form of digital map data (including the map navigation data described above). In some examples, the map navigation data at the current time may be obtained as follows: first, the current location information of the vehicle at the current time is acquired through the receiver of the on-board satellite navigation system; then, based on the current location information, the on-board satellite navigation system searches the database for the map information of the roads and highways around the current location.
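As a rough illustration of the two-stage lookup just described (position from the receiver, then a search of the map database around that position), the following sketch assumes a simplified planar coordinate frame and an in-memory list standing in for the navigation database; the class, field, and function names are hypothetical.

```python
from dataclasses import dataclass, field
from math import hypot
from typing import List, Tuple

@dataclass
class StaticObjectAttributes:
    obj_type: str                       # e.g. "building", "road", "traffic_sign"
    position: Tuple[float, float]       # simplified planar coordinates
    size: Tuple[float, float]           # footprint (length, width) in meters
    outline: List[Tuple[float, float]] = field(default_factory=list)  # polygon outline

def query_static_objects(current_position: Tuple[float, float],
                         map_database: List[StaticObjectAttributes],
                         radius_m: float = 200.0) -> List[StaticObjectAttributes]:
    """Return the static objects stored in the map database that lie within
    `radius_m` of the vehicle's current position (as reported by the receiver)."""
    cx, cy = current_position
    return [obj for obj in map_database
            if hypot(obj.position[0] - cx, obj.position[1] - cy) <= radius_m]
```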
At step 230, at least the dynamic object attribute data is fused with the static object attribute data to obtain the surrounding environment data of the vehicle.
For example, the in-vehicle system 110 may fuse the dynamic object attribute data (e.g., the attribute data of the surrounding vehicle 320) with the static object attribute data (e.g., the attribute data of the building 340 and the road 330) to obtain the surrounding environment data of the vehicle 310 (e.g., including the attribute data of the surrounding vehicle 320, the building 340, and the road 330). By fusing the dynamic object attribute data and the static object attribute data acquired through different channels at the current time, comprehensive surrounding environment data at the current time can be obtained. In some examples, the fusion may simply combine the dynamic object attribute data and the static object attribute data at the current time, or may associate the two according to a specific rule.
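A minimal sketch of such a fusion, assuming the simplest case in which the two data sets are combined into one record and associated by a shared timestamp and ego position, might look as follows (all names are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SurroundingEnvironmentData:
    """Fused surrounding-environment data of the vehicle at one point in time."""
    timestamp: float
    ego_position: Tuple[float, float]        # current position of the vehicle
    dynamic_objects: List[object] = field(default_factory=list)
    static_objects: List[object] = field(default_factory=list)

def fuse_environment(timestamp, ego_position, dynamic_objects, static_objects):
    """Simplest possible fusion: combine both data sets into one record that is
    associated with the same timestamp and the vehicle's current position."""
    return SurroundingEnvironmentData(
        timestamp=timestamp,
        ego_position=ego_position,
        dynamic_objects=list(dynamic_objects),
        static_objects=list(static_objects),
    )
```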
At step 240, at least one of the following operations is performed: constructing a virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to the in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct a virtual environment based at least in part on the ambient environment data.
For example, the in-vehicle system 110 may construct a virtual environment based at least in part on the ambient environment data, such as the virtual environment 400 shown in fig. 4. In this case, the constructed virtual environment may be transmitted to the display screen 114 for display, or may be transmitted to other in-vehicle entertainment terminals for display. Alternatively or additionally, the in-vehicle system 110 may transmit the ambient environment data to the in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct a virtual environment based at least in part on that data, such as the virtual environment 400 shown in fig. 4. The in-vehicle system 110 may communicate the virtual environment and/or the ambient environment data to the in-vehicle entertainment terminal via a wired connection or a wireless connection (e.g., a Bluetooth connection or a Wi-Fi connection). The in-vehicle entertainment terminal may be one or more of a tablet computer, a VR device (e.g., a VR headset), an AR device (e.g., an AR headset or glasses), a gamepad, and the like. In some examples, where the in-vehicle entertainment terminal includes multiple in-vehicle entertainment terminals, the presentation forms of the virtual environment may differ. For example, on a VR headset, the virtual environment is presented in the display of the VR device, providing a fully immersive experience. On an AR headset or glasses, the virtual environment is superimposed on the user's actual visual perception of the surroundings, providing a mixed, augmented experience. On tablet computers and in-vehicle screens, the virtual environment is presented on the screen. It should be understood here that the content of the virtual environment on the various in-vehicle entertainment terminals may or may not be consistent, i.e., they may have the same or different scene settings.
Further, the virtual environment may include a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the dynamic object, the static visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the static object. Taking the real environment of the vehicle 310 in fig. 3 as an example, the static visual elements corresponding to the building 340 may be rendered to have the appearance of a planet, and the dynamic visual elements of the surrounding vehicles 320 may be rendered to have the appearance of a spacecraft.
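One simple way to realize such scene-setting appearances is a lookup table from real-world object types to themed assets, as in the following illustrative sketch; the "space" theme and the asset names are invented here and are not part of the described embodiments.

```python
# Illustrative lookup table for a "space" scene setting; the asset names are invented.
SPACE_THEME = {
    "building": "planet_model",
    "road": "star_lane_model",
    "vehicle": "spacecraft_model",
    "pedestrian": "astronaut_model",
}

def choose_asset(obj_type: str, theme: dict, default: str = "generic_marker") -> str:
    """Pick the themed asset used to render an object of the given real-world type."""
    return theme.get(obj_type, default)
```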
According to the embodiments of the present disclosure, the dynamic object attribute data and the map navigation data around the vehicle are acquired in real time, and the virtual environment is constructed based on the surrounding environment data generated by fusing the two. On the one hand, information about the vehicle's surroundings can be collected through multiple channels, so that the acquired surrounding environment data is more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be reflected in the constructed virtual environment in real time, thereby providing an entertainment experience that combines the real motion sensations of the vehicle with a fully simulated virtual visual experience.
In some embodiments, step 230 may include associating the dynamic object attribute data with the static object attribute data to obtain the surrounding environment data of the vehicle at the current time. This helps organize the dynamic object attribute data and the static object attribute data together and makes it easier, when constructing the virtual environment, to accurately restore the physical sensations of the occupants at the current time and the real environment they see. For example, the dynamic object attribute data of the dynamic objects appearing around the vehicle and the static object attribute data of the static objects at the same time, i.e., at the current time, may be associated based on time.
In some other embodiments, associating the dynamic object attribute data with the static object attribute data may further include associating the dynamic object attribute data with the static object attribute data via the current location information of the vehicle. The dynamic object attribute data acquired by the sensors is based on the vehicle coordinate system (i.e., the origin of the coordinate system is the current position of the vehicle), while the static object attribute data acquired by the on-board satellite navigation system is based on the geodetic coordinate system. Associating the two through the current location information of the vehicle allows the dynamic visual elements constructed from the dynamic object attribute data to be attached directly to the virtual environment constructed from the map navigation data, using the visual element corresponding to the vehicle itself as the reference and without performing a coordinate transformation, which facilitates the subsequent construction of the virtual environment.
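The following sketch illustrates this idea with a minimal scene graph: the ego node is placed at the map-derived position, and each dynamic visual element is attached as a child whose local offset is simply its vehicle-frame position, so no explicit vehicle-to-world transformation is performed. The node structure and field names are assumptions of this example (the dynamic objects are assumed to expose `obj_id` and `position` fields as in the earlier sketch).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneNode:
    """Minimal scene-graph node; children are positioned relative to their parent."""
    name: str
    local_position: Tuple[float, float]
    children: List["SceneNode"] = field(default_factory=list)

def build_scene(ego_map_position: Tuple[float, float], dynamic_objects) -> SceneNode:
    """Attach dynamic visual elements as children of the ego node.

    The sensor data is already expressed in the vehicle coordinate system, so the
    vehicle-frame position can be used directly as each child's local offset; no
    explicit vehicle-to-world transformation is applied here.
    """
    ego_node = SceneNode("ego_vehicle", ego_map_position)
    for obj in dynamic_objects:  # objects are assumed to expose obj_id and position
        ego_node.children.append(SceneNode(f"dynamic_{obj.obj_id}", obj.position))
    return ego_node
```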
In some embodiments, where the in-vehicle entertainment terminal includes a plurality of in-vehicle entertainment terminals, transmitting the ambient data to the in-vehicle entertainment terminal in step 240 may include synchronously transmitting the ambient data to the plurality of in-vehicle entertainment terminals, such that each in-vehicle entertainment terminal may synchronously construct the virtual environment.
In some embodiments, transmitting the ambient environment data to the in-vehicle entertainment terminal in step 240 may include: converting the surrounding environment data into a format that the in-vehicle entertainment terminal can recognize; and transmitting the format-converted data to the in-vehicle entertainment terminal. Alternatively, the fused surrounding environment data may be transmitted directly to the in-vehicle entertainment terminal and then format-converted by the in-vehicle entertainment terminal.
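As one possible sketch of the format conversion and of the synchronous transmission to multiple terminals mentioned above, the snippet below serializes the fused data to JSON and pushes the same payload to every connected terminal; the `send(bytes)` transport interface is an assumption of this example rather than a specific vehicle or Bluetooth API.

```python
import json
from dataclasses import asdict, is_dataclass

def to_terminal_format(environment_data) -> bytes:
    """Convert the fused data into a JSON byte payload the terminal can parse."""
    payload = asdict(environment_data) if is_dataclass(environment_data) else environment_data
    return json.dumps(payload).encode("utf-8")

def send_to_terminals(environment_data, transports) -> None:
    """Synchronously push the same payload to every connected entertainment terminal.

    `transports` is any iterable of objects exposing a send(bytes) method, e.g. a
    wrapper around a Bluetooth or Wi-Fi socket; that interface is assumed here.
    """
    payload = to_terminal_format(environment_data)
    for transport in transports:
        transport.send(payload)
```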
In some embodiments, the aforementioned static object attribute data included in the map navigation data may be referred to as first static object attribute data, and the method 200 may further include: collecting second static object attribute data of static objects around the vehicle through a sensor; and correcting the first static object attribute data by using the second static object attribute data. For example, the first static object attribute data and the second static object attribute data may be fused by a fusion algorithm (e.g., Bayesian inference, D-S evidence theory, maximum likelihood estimation, etc.) to correct the first static object attribute data. Because the first static object attribute data obtained by the on-board satellite navigation system is typically not updated in real time, it may lag behind the actual environment. The sensor may include an on-board sensor and/or an external sensor attached to the vehicle. The on-board sensor may include one or more of a lidar, an ultrasonic radar, a camera, and the like. The external sensor attached to the vehicle may include an additional camera and the like. According to this embodiment, the second static object attribute data collected in real time by the sensor corrects the first static object attribute data obtained through the on-board satellite navigation system, so that the surrounding environment data is more accurate and the physical sensations of the occupants at the current time, as well as the real environment they see, can be better restored when the virtual environment is constructed.
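A deliberately simplified correction step in this spirit is sketched below: each map-derived static object is matched to the nearest sensor observation within a small radius, and its position is pulled toward the observation with a fixed weight. A real implementation would use one of the probabilistic fusion methods named above; the function and parameter names are illustrative.

```python
from math import hypot

def correct_static_data(map_objects, sensed_objects, match_radius_m=5.0, sensor_weight=0.7):
    """Pull map-derived static object positions toward nearby sensor observations.

    Objects are assumed to expose a mutable `position` attribute. A production
    system would use a probabilistic fusion method rather than a fixed weight.
    """
    for map_obj in map_objects:
        best, best_dist = None, match_radius_m
        for sensed in sensed_objects:
            d = hypot(sensed.position[0] - map_obj.position[0],
                      sensed.position[1] - map_obj.position[1])
            if d <= best_dist:
                best, best_dist = sensed, d
        if best is not None:  # a matching observation was found within the radius
            map_obj.position = (
                (1 - sensor_weight) * map_obj.position[0] + sensor_weight * best.position[0],
                (1 - sensor_weight) * map_obj.position[1] + sensor_weight * best.position[1],
            )
    return map_objects
```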
In some embodiments, the method 200 may further include collecting pose data of the vehicle 310 by sensors on the vehicle 310, and constructing the virtual environment based at least in part on the ambient environment data in step 240 may include: constructing the virtual environment based on the surrounding environment data and the pose data. The pose data may include at least one of vehicle speed, acceleration, heading angle, and the like. The above-described sensors on the vehicle may include one or both of an on-board sensor and an external sensor attached to the vehicle. The on-board sensor may include one or more of a speed sensor, an acceleration sensor, a steering wheel angle sensor, a lateral angle sensor, a lidar, a camera, and the like. The external sensor attached to the vehicle may include one or more of an attached camera, an Inertial Measurement Unit (IMU) device, and the like. In some examples, the virtual environment may also include a visual element corresponding to the vehicle 310. In this case, constructing the virtual environment based on the surrounding environment data and the pose data may include changing the pose of the visual element corresponding to the vehicle 310 based on the pose data. For example, when the vehicle 310 turns left, the visual element corresponding to the vehicle 310 in the virtual environment also turns left. In this way, the user obtains, in the constructed virtual environment, a viewing-angle change consistent with that in the vehicle in the real scene, and thus a virtual experience consistent with the real scene. In some examples, the perspective of the virtual environment may also be changed based on the pose data, again so that the user obtains a viewing-angle change in the constructed virtual environment that is consistent with the vehicle in the real scene. For example, as the vehicle 310 turns left, the field of view of the virtual environment moves right (i.e., the static visual elements and the dynamic visual elements move right). A virtual environment constructed based on the surrounding environment data and the pose data can better restore the physical sensations felt by the occupants while the vehicle is moving.
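As a small illustration of feeding pose data into the rendered view, the following sketch propagates a yaw rate (as might be derived from an IMU or steering-angle sensor) into both the ego element's heading and the virtual camera yaw, so that a left turn of the vehicle produces the corresponding viewing-angle change in the virtual environment; the function signature is an assumption of this example.

```python
import math

def update_view(ego_heading: float, camera_yaw: float, yaw_rate: float, dt: float):
    """Propagate vehicle pose into the rendered view for one time step.

    `yaw_rate` (rad/s) might come from an IMU or a steering-angle sensor. The ego
    element's heading and the virtual camera yaw follow the vehicle, so when the
    vehicle turns left the scene appears to sweep to the right in the view.
    """
    new_heading = (ego_heading + yaw_rate * dt) % (2 * math.pi)
    new_camera_yaw = (camera_yaw + yaw_rate * dt) % (2 * math.pi)
    return new_heading, new_camera_yaw
```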
In some other embodiments, the pose data of the vehicle may also be generated based on the surrounding environment data of the vehicle. For example, taking the real environment of fig. 3 as an example, the pose data of the vehicle 310 at the current time may be determined based on the change in position of the surrounding vehicles 320, the road 330, and/or the building 340 relative to the vehicle 310.
In some embodiments, the method 200 may further include causing in-vehicle devices to perform corresponding actions based on the virtual environment, wherein the actions include at least one of seat vibration, audio playback, and turning on lights. The in-vehicle devices may include seats, audio equipment, ambient lights, interior lights, and the like. For example, when a collision occurs in the virtual scene, the seat vibrates accordingly and/or the car audio plays a matching sound. In this way, corresponding physical actions can be performed according to the constructed virtual environment, providing an immersive entertainment experience for the occupants. Additionally or alternatively, the actions performed may also be made to conform to the scene setting. For example, when it starts to rain in the scene, the audio may play the sound of rain and/or the ambient light may be dimmed or made to blink, creating an atmosphere that conforms to the scene setting.
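One possible way to drive such actions is an event-to-action table consulted whenever the virtual environment raises an event, as in the following sketch; the event names, the action names, and the `perform(action)` controller interface are all invented for this illustration.

```python
# Illustrative mapping from virtual-environment events to in-vehicle device actions.
EVENT_ACTIONS = {
    "collision": [("seat", "vibrate"), ("audio", "play_impact_sound")],
    "rain_start": [("audio", "play_rain_sound"), ("ambient_light", "dim")],
}

def dispatch_event(event: str, devices: dict) -> None:
    """Trigger the configured device actions for a virtual-environment event.

    `devices` maps a device name to a controller object exposing a perform(action)
    method; that interface is an assumption of this sketch.
    """
    for device_name, action in EVENT_ACTIONS.get(event, []):
        controller = devices.get(device_name)
        if controller is not None:
            controller.perform(action)
```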
In some embodiments, the built virtual environment may be used in various online activities such as gaming, video conferencing, shopping, and the like.
Although the various operations are depicted in FIG. 2 as being performed in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order, nor should it be understood as requiring that all illustrated operations be performed in order to achieve desirable results. For example, step 220 may be performed prior to step 210 or concurrently with step 210.
Fig. 5 is a schematic block diagram illustrating an apparatus 500 for building a virtual environment according to an example embodiment. The apparatus 500 may include a first acquisition module 510, a second acquisition module 520, a fusion module 530, and a build module 540. The first acquisition module 510 is configured to acquire dynamic object attribute data of dynamic objects around the vehicle at the current time. The second acquisition module 520 is configured to acquire map navigation data of the vehicle at the current moment, the map navigation data including at least static object attribute data of static objects around the vehicle. The fusion module 530 is configured to fuse at least the dynamic object attribute data with the static object attribute data to obtain surrounding environment data of the vehicle. The build module 540 is configured to perform at least one of the following operations: constructing a virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to the in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct a virtual environment based at least in part on the ambient environment data. Wherein the virtual environment includes a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the dynamic object, the static visual element being rendered in the virtual environment to have an appearance conforming to the scene setting that is different from the real appearance of the static object.
According to the embodiments of the present disclosure, on the one hand, information about the vehicle's surroundings can be collected through multiple channels, so that the acquired surrounding environment data is more comprehensive; on the other hand, it can be collected in real time, so that the physical sensations of the occupants in the vehicle can be reflected in the constructed virtual environment in real time, thereby providing an entertainment experience that combines the real motion sensations of the vehicle with a fully simulated virtual visual experience.
It should be appreciated that the various modules of the apparatus 500 shown in fig. 5 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features, and advantages described above with respect to method 200 are equally applicable to apparatus 500 and the modules that it comprises. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module. The particular module performing the actions discussed herein includes the particular module itself performing the actions, or alternatively the particular module invoking or otherwise accessing another component or module that performs the actions (or performs the actions in conjunction with the particular module). Thus, a particular module that performs an action may include that particular module itself that performs the action and/or another module that the particular module invokes or otherwise accesses that performs the action. For example, the first acquisition module 510/second acquisition module 520 described above may be combined into a single module in some embodiments. For another example, the fusion module 530 may include a first acquisition module 510 in some embodiments.
It should also be appreciated that various techniques may be described herein in the general context of software and hardware elements or program modules. The various modules described above with respect to fig. 5 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, these modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the first acquisition module 510, the second acquisition module 520, the fusion module 530, and the build module 540 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), a microcontroller, a microprocessor, a Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to an aspect of the present disclosure, a computer device is provided that includes at least one memory, at least one processor, and a computer program stored on the at least one memory. The at least one processor is configured to execute a computer program to implement the steps of any of the method embodiments described above.
According to an aspect of the present disclosure, there is provided a vehicle comprising the apparatus 500 or the computer device as described above.
In some embodiments, the vehicle further comprises an in-vehicle entertainment terminal for building a virtual environment based at least in part on the above-described ambient environment data.
According to an aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
According to an aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Illustrative examples of such computer devices, non-transitory computer readable storage media, and computer program products are described below in connection with fig. 6.
Fig. 6 illustrates an example configuration of a computer device 600 that may be used to implement the methods described herein. For example, the server 120 and/or the in-vehicle system 110 shown in fig. 1 may include an architecture similar to that of the computer device 600. The apparatus 500 described above may also be implemented, in whole or at least in part, by a computer device 600 or similar device or system.
Computer device 600 may include at least one processor 602, memory 604, communication interface(s) 606, display device 608, other input/output (I/O) devices 610, and one or more mass storage devices 612, capable of communicating with each other, such as via a system bus 614 or other suitable connection.
The processor 602 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 602 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 602 may be configured to, among other capabilities, obtain and execute computer-readable instructions stored in the memory 604, mass storage device 612, or other computer-readable medium, such as program code for the operating system 616, program code for the application programs 618, program code for the other programs 620, and so forth.
Memory 604 and mass storage device 612 are examples of computer-readable storage media for storing instructions that are executed by processor 602 to implement the various functions as previously described. For example, memory 604 may generally include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, mass storage device 612 may generally include hard disk drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Memory 604 and mass storage device 612 may both be referred to herein collectively as memory or a computer-readable storage medium, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 602 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of programs may be stored on the mass storage device 612. These programs include an operating system 616, one or more application programs 618, other programs 620, and program data 622, and may be loaded into the memory 604 for execution. Examples of such application programs or program modules may include, for example, computer program logic (e.g., computer program code or instructions) for implementing the following components/functions: method 200 (including any suitable steps of method 200), and/or additional embodiments described herein.
Although illustrated in fig. 6 as being stored in memory 604 of computer device 600, modules 616, 618, 620, and 622, or portions thereof, may be implemented using any form of computer-readable media accessible by computer device 600. As used herein, "computer-readable medium" includes at least two types of computer-readable media, namely computer-readable storage media and communication media.
Computer-readable storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information for access by a computer device. In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism. Computer-readable storage media as defined herein do not include communication media.
One or more communication interfaces 606 are used to exchange data with other devices, such as via a network, direct connection, or the like. Such communication interfaces may be one or more of the following: any type of network interface (e.g., a Network Interface Card (NIC)), a wired or wireless (such as IEEE 802.11 Wireless LAN (WLAN)) interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a Near Field Communication (NFC) interface, etc. Communication interface 606 may facilitate communication within a variety of network and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and so forth. Communication interface 606 may also provide for communication with external storage devices (not shown) such as in a storage array, network attached storage, storage area network, or the like.
In some examples, a display device 608, such as a monitor, may be included for displaying information and images to a user. Other I/O devices 610 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so on.
The techniques described herein may be supported by these various configurations of computer device 600 and are not limited to the specific examples of techniques described herein. For example, this functionality may also be implemented in whole or in part on a "cloud" using a distributed system. The cloud includes and/or represents a platform for the resource. The platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud. Resources may include applications and/or data that may be used when performing computing processes on servers remote from computer device 600. Resources may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks. The platform may abstract resources and functions to connect the computer device 600 with other computer devices. Thus, implementations of the functionality described herein may be distributed throughout the cloud. For example, the functionality may be implemented in part on computer device 600 and in part by a platform that abstracts the functionality of the cloud.
While the disclosure has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative and schematic rather than restrictive; the present disclosure is not limited to the disclosed embodiments. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps than those listed, and the indefinite article "a" or "an" does not exclude a plurality; the term "plurality" means two or more, and the term "based on" is to be interpreted as "based at least in part on". The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (11)

1. A method for building a virtual environment, comprising:
acquiring dynamic object attribute data of a dynamic object around a vehicle at the current moment, wherein the dynamic object attribute data comprises the position, the orientation, the speed and the type of the dynamic object;
acquiring map navigation data of the vehicle at the current moment, wherein the map navigation data at least comprises static object attribute data of static objects around the vehicle, and the static object attribute data comprises the position, the type, the size and the outline of the static objects;
fusing at least the dynamic object attribute data with the static object attribute data to obtain surrounding data of the vehicle; and
at least one of the following operations is performed: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data,
wherein the virtual environment includes a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment based on the dynamic object attribute data to have an appearance conforming to a scene setting different from a real appearance of the dynamic object, the static visual element being rendered in the virtual environment based on the static object attribute data to have an appearance conforming to the scene setting different from the real appearance of the static object,
wherein the static object attribute data is first static object attribute data, and wherein the method further comprises:
collecting second static object attribute data of static objects around the vehicle through a sensor on the vehicle; and
and correcting the first static object attribute data by using the second static object attribute data.
2. The method of claim 1, wherein fusing at least the dynamic object attribute data with the static object attribute data comprises: and associating the dynamic object attribute data with the static object attribute data to obtain surrounding environment data of the vehicle at the current moment.
3. The method of claim 2, wherein the map navigation data further includes current location information of the vehicle at the current time, and wherein associating the dynamic object attribute data and the static object attribute data includes: the dynamic object attribute data is associated with the static object attribute data by current location information of the vehicle.
4. The method of claim 1, wherein the dynamic object attribute data of the dynamic objects surrounding the vehicle is dynamic object attribute data acquired by sensors on the vehicle, and wherein the map navigation data of the vehicle is acquired by an in-vehicle satellite navigation system.
5. The method of any one of claims 1 to 4, further comprising: collecting pose data of the vehicle by sensors on the vehicle, and wherein constructing the virtual environment comprises: constructing the virtual environment based on the surrounding environment data and the pose data.
6. The method of any one of claims 1 to 4, further comprising: and enabling the vehicle-mounted equipment to execute corresponding actions based on the virtual environment, wherein the actions comprise at least one of seat vibration, audio playing and light turning-on.
7. An apparatus for building a virtual environment, comprising:
the system comprises a first acquisition module, a second acquisition module and a first control module, wherein the first acquisition module is configured to acquire dynamic object attribute data of a dynamic object around a vehicle at the current moment, and the dynamic object attribute data comprises the position, the orientation, the speed and the type of the dynamic object;
a second acquisition module configured to acquire map navigation data of the vehicle at the current time, the map navigation data including at least static object attribute data of static objects around the vehicle, the static object attribute data including a position, a type, a size, and a contour of the static objects;
A fusion module configured to fuse at least the dynamic object attribute data with the static object attribute data to obtain ambient data of the vehicle; and
a build module configured to perform at least one of: constructing the virtual environment based at least in part on the ambient environment data; or transmitting the ambient environment data to an in-vehicle entertainment terminal for the in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data,
wherein the virtual environment includes a dynamic visual element corresponding to the dynamic object and a static visual element corresponding to the static object, the dynamic visual element being rendered in the virtual environment based on the dynamic object attribute data to have an appearance conforming to a scene setting different from a real appearance of the dynamic object, the static visual element being rendered in the virtual environment based on the static object attribute data to have an appearance conforming to the scene setting different from the real appearance of the static object,
wherein the static object attribute data is first static object attribute data, and wherein the apparatus further comprises:
Collecting second static object attribute data of static objects around the vehicle through a sensor on the vehicle; and
and correcting the first static object attribute data by using the second static object attribute data.
8. A computer device, comprising:
at least one processor; and
at least one memory having a computer program stored thereon,
wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform the method of any of claims 1 to 6.
9. A vehicle comprising the apparatus of claim 7 or the computer device of claim 8.
10. The vehicle of claim 9, further comprising an in-vehicle entertainment terminal to construct the virtual environment based at least in part on the ambient environment data.
11. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 6.
CN202210515734.8A 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle Active CN115035239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210515734.8A CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210515734.8A CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Publications (2)

Publication Number Publication Date
CN115035239A CN115035239A (en) 2022-09-09
CN115035239B (en) 2023-05-09

Family

ID=83120258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210515734.8A Active CN115035239B (en) 2022-05-11 2022-05-11 Method and device for building virtual environment, computer equipment and vehicle

Country Status (1)

Country Link
CN (1) CN115035239B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10957103B2 (en) * 2017-11-03 2021-03-23 Adobe Inc. Dynamic mapping of virtual and physical interactions
US20200020143A1 (en) * 2018-07-12 2020-01-16 GM Global Technology Operations LLC Systems and methods for in-vehicle augmented virtual reality system
CN110853393B (en) * 2019-11-26 2020-12-11 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN114461064B (en) * 2022-01-21 2023-09-15 北京字跳网络技术有限公司 Virtual reality interaction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115035239A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
EP3244591B1 (en) System and method for providing augmented virtual reality content in autonomous vehicles
CN109215433B (en) Vision-based driving scenario generator for automated driving simulation
JP6944970B2 (en) Position update method, position and navigation route display method, vehicle and system
EP3338136B1 (en) Augmented reality in vehicle platforms
US10262234B2 (en) Automatically collecting training data for object recognition with 3D lidar and localization
CN107563267B (en) System and method for providing content in unmanned vehicle
US11155268B2 (en) Utilizing passenger attention data captured in vehicles for localization and location-based services
JP7259749B2 (en) Information processing device, information processing method, program, and moving body
US11127373B2 (en) Augmented reality wearable system for vehicle occupants
US20200293041A1 (en) Method and system for executing a composite behavior policy for an autonomous vehicle
WO2022105395A1 (en) Data processing method, apparatus, and system, computer device, and non-transitory storage medium
CN113260430B (en) Scene processing method, device and system and related equipment
CN110007752A (en) The connection of augmented reality vehicle interfaces
CN109213144A (en) Man-machine interface (HMI) framework
JP2014133444A (en) Cruise control device, method of the cruise control, and vehicle identification apparatus
CN114201038A (en) Integrated augmented reality system for sharing augmented reality content between vehicle occupants
CN113483774B (en) Navigation method, navigation device, electronic equipment and readable storage medium
US20220277193A1 (en) Ground truth data generation for deep neural network perception in autonomous driving applications
Miquet New test method for reproducible real-time tests of ADAS ECUs:“Vehicle-in-the-Loop” connects real-world vehicles with the virtual world
Karle et al. EDGAR: An Autonomous Driving Research Platform--From Feature Development to Real-World Application
CN115035239B (en) Method and device for building virtual environment, computer equipment and vehicle
US20220315033A1 (en) Apparatus and method for providing extended function to vehicle
EP4358524A1 (en) Mr service platform for providing mixed reality automotive meta service, and control method therefor
JP2019117435A (en) Image generation device
US11687149B2 (en) Method for operating a mobile, portable output apparatus in a motor vehicle, context processing device, mobile output apparatus and motor vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant