CN110427227B - Virtual scene generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110427227B
CN110427227B (application number CN201910578450.1A)
Authority
CN
China
Prior art keywords
virtual
scene
session
terminal device
virtual object
Prior art date
Legal status
Active
Application number
CN201910578450.1A
Other languages
Chinese (zh)
Other versions
CN110427227A (en)
Inventor
贺杰
戴景文
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910578450.1A
Publication of CN110427227A
Application granted
Publication of CN110427227B

Classifications

    • G06F 9/4482 - Execution paradigms: procedural (arrangements for program control using stored programs)
    • G06T 19/006 - Manipulating 3D models or images for computer graphics: mixed reality
    • G06V 20/20 - Scenes; scene-specific elements in augmented reality scenes
    • H04N 7/157 - Conference systems defining a virtual conference space and using avatars or agents

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application disclose a virtual scene generation method and apparatus, an electronic device, and a storage medium. The virtual scene generation method includes: acquiring participation data of one or more terminal devices in a remote session; arranging, according to the participation data, the position in a virtual session scene of the virtual object corresponding to each terminal device; obtaining the position of each virtual object in the virtual session scene from the arrangement result; and generating, based on those positions, a virtual session scene containing the virtual objects. The method can generate a virtual session scene for a remote session and thereby improve the effect of the remote session.

Description

Virtual scene generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a method and an apparatus for generating a virtual scene, an electronic device, and a storage medium.
Background
In recent years, with the rapid development of network technology, more and more people use electronic devices for remote conversations (e.g., chatting, conferencing, etc.). In a typical remote conversation, the voice input by a user and the images collected by a camera are transmitted, but it is difficult for the user to feel present at the scene, so the conversation effect is poor.
Disclosure of Invention
The embodiments of the present application provide a virtual scene generation method and apparatus, an electronic device, and a storage medium, which can improve the effect of a remote session.
In a first aspect, an embodiment of the present application provides a method for generating a virtual scene, the method including: acquiring participation data of one or more terminal devices in a remote session; arranging, according to the participation data, the position in a virtual session scene of the virtual object corresponding to each terminal device; obtaining the position of each virtual object in the virtual session scene according to the arrangement result; and generating, based on the positions of the virtual objects, a virtual session scene containing the virtual objects.
In a second aspect, an embodiment of the present application provides an apparatus for generating a virtual scene, the apparatus including a data acquisition module, a position arrangement module, a position acquisition module, and a scene generation module. The data acquisition module is configured to acquire participation data of one or more terminal devices in a remote session; the position arrangement module is configured to arrange, according to the participation data, the position in a virtual session scene of the virtual object corresponding to each terminal device; the position acquisition module is configured to obtain the position of each virtual object in the virtual session scene according to the arrangement result; and the scene generation module is configured to generate a virtual session scene containing the virtual objects based on their positions.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; and one or more applications, where the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method for generating a virtual scene provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the method for generating a virtual scene provided in the first aspect.
According to the solution provided by the present application, participation data of one or more terminal devices in a remote session is acquired; the virtual object corresponding to each terminal device is arranged in a virtual session scene according to the participation data; the position of each virtual object in the virtual session scene is obtained from the arrangement result; and a virtual session scene containing the virtual objects is generated based on those positions. In this way, the virtual objects of the terminal devices participating in the remote session can be arranged according to the participation data of the users' terminal devices, so that each virtual object is added at its corresponding position in the session scene. The resulting virtual session scene, when displayed, gives the user a sense of realism and improves the effect of the remote session.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario applicable to the embodiment of the present application.
Fig. 2 shows another schematic diagram of an application scenario applicable to the embodiment of the present application.
Fig. 3 shows a flowchart of a method for generating a virtual scene according to an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating a display effect according to an embodiment of the present application.
Fig. 5 shows a flowchart of a method for generating a virtual scene according to another embodiment of the present application.
Fig. 6 shows a flowchart of step S220 in a method for generating a virtual scene according to another embodiment of the present application.
Fig. 7 is a schematic diagram illustrating a display effect according to an embodiment of the application.
Fig. 8 is a schematic diagram illustrating another display effect provided according to an embodiment of the application.
Fig. 9 is a flowchart illustrating a method for generating a virtual scene according to still another embodiment of the present application.
Fig. 10 is a schematic diagram illustrating a display effect according to an embodiment of the present application.
Fig. 11 shows a flowchart of a method for generating a virtual scene according to still another embodiment of the present application.
Fig. 12 shows a block diagram of a virtual scene generation apparatus according to an embodiment of the present application.
Fig. 13 is a block diagram of a terminal device for executing a virtual scene generation method according to an embodiment of the present application.
Fig. 14 is a block diagram of a server for executing a virtual scene generation method according to an embodiment of the present application.
Fig. 15 shows a block diagram of a storage unit, according to an embodiment of the present application, configured to store or carry program code for implementing the virtual scene generation method of the embodiments of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An application scenario of the virtual scenario generation method provided in the embodiment of the present application is described below.
Referring to fig. 1, a schematic diagram of an application scenario of a virtual scenario generation method provided in an embodiment of the present application is shown, where the application scenario includes an interactive system 10, and the interactive system 10 may be applied to a remote session. The interactive system 10 includes: a plurality of terminal devices 100 and a server 200, wherein the terminal devices 100 are connected with the server 200.
In some embodiments, the terminal device 100 is communicatively connected to the server 200 through a network, so that data interaction between the terminal device 100 and the server 200 is possible. The terminal device 100 may access the network where the router is located, and communicate with the server 200 through the network where the router is located, and of course, the terminal device 100 may also communicate with the server 200 through a data network.
In some embodiments, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (standalone) head-mounted display device. The terminal device 100 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may be plugged into or connected to the external head-mounted display device to serve as its processing and storage unit, with the virtual content displayed on the head-mounted display device. In the remote session, the terminal device 100 may display the virtual session scene of the remote session, realizing AR (Augmented Reality) or VR (Virtual Reality) display of the scene picture and thereby improving the display effect of the scene picture in the remote session. In another embodiment, the terminal device 100 may also be a display device such as a computer, tablet computer, or television, in which case the terminal device 100 may display a 2D (2 Dimensions) picture corresponding to the virtual session scene.
In some embodiments, the terminal device 100 may collect information data in the remote session (e.g., the user's face information, voice data, etc.) to construct a three-dimensional model from that data. In other embodiments, the terminal device 100 may also build the model from pre-stored information data such as face information, voice data, and a body model, or combine pre-stored data with collected data. For example, the terminal device 100 may collect face information in real time to build a face model, where the face information may include expression information and motion information (such as nodding or shaking the head), and then merge the face model with a preset body model. This saves modeling and rendering time while still capturing the user's expressions and movements in real time. In some embodiments, the terminal device 100 may transmit the collected information data to the server 200 or to other terminal devices 100.
In some embodiments, referring to fig. 2, the interactive system 10 may also include an information collecting device 300, which is configured to collect the information data (for example, the user's face information, voice data, etc.) and transmit it to the terminal device 100 or the server 200. In some embodiments, the information collecting device 300 may include a camera, an audio module, and the like, and may also include various sensors such as light sensors and acoustic sensors. As a specific embodiment, the information collecting device 300 may be a photographing device combining a common color camera (RGB) with a depth camera (such as an RGB-D depth camera), so as to acquire depth data of the photographed user and thereby obtain the user's three-dimensional structure. In a specific embodiment, the information collecting device 300 and the terminal device 100 may be in the same field environment so that information about the user of that terminal device 100 can be collected; the information collecting device 300 may or may not be connected to the terminal device 100, which is not limited herein.
In some embodiments, the server 200 may be a local server or a cloud server, and the type of the specific server 200 may not be limited in this embodiment. In the remote session, the server 200 may be configured to implement data interaction between multiple terminal devices 100/information collecting apparatuses 300, so as to ensure data transmission and synchronization between multiple terminal devices 100/information collecting apparatuses 300, and implement virtual session scenes, synchronization of audio and video data, data transmission between terminal devices 100/information collecting apparatuses 300, and the like in the remote session.
In some embodiments, when at least two of the terminal devices 100 in the remote session are in the same field environment (for example, in the same room), those terminal devices 100 may also be connected through a communication method such as Bluetooth, Wi-Fi (Wireless Fidelity), or ZigBee, or through a wired method such as a data line, so as to realize data interaction between them. Of course, the connection mode between terminal devices 100 in the same field environment may not be limited in the embodiments of the present application.
A specific virtual scene generation method is described in the following embodiments with reference to the accompanying drawings.
Referring to fig. 3, an embodiment of the present application provides a method for generating a virtual scene, where the method for generating a virtual scene may include:
step S110: participation data of one or more terminal devices in the remote session is obtained.
A remote session refers to a process of remote interaction and communication among multiple endpoints established through data communication. Participation data is the data acquired about a terminal device for its participation when it joins the remote session. Each terminal device participating in the remote session can send its participation data to the server, so that the server constructs the virtual session scene from the participation data of every device in the session. The participation data may include one or more of: identity information of the user corresponding to the terminal device, the time the terminal device joined the remote session, the spatial position of the terminal device in the real scene, the posture of the terminal device, and the place where the terminal device is located. The identity information of the user may include the user's name, number, job position, gender, and so on. The specific participation data may not be limited in this embodiment; for example, it may also include the device ID (identification number) of the terminal device, the social relationships between users, the user's role in the remote session, and the like.
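As an illustration only, the participation data described above could be modeled as a simple record. The class and field names below are hypothetical, chosen here for the sketch, and are not taken from the patent.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

# Hypothetical sketch of the participation data described above; all names
# are illustrative, not from the patent.
@dataclass
class ParticipationData:
    device_id: str                        # device ID of the terminal device
    user_name: Optional[str] = None       # identity information of the user
    user_role: Optional[str] = None       # role in the session, e.g. "speaker"
    join_time: Optional[datetime] = None  # time the device joined the session
    spatial_position: Optional[Tuple[float, float, float]] = None  # position in the real scene
    location: Optional[str] = None        # place where the device is located
```

Each terminal device would send one such record to the server when joining the session.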
In some embodiments, the terminal device may collect the participation data and then send the participation data to the server. In other embodiments, the participation data may be collected by an information collection device on site, and then the information collection device sends the participation data to the server.
In some embodiments, a server (or any terminal device in a remote session) may obtain participation data of terminal devices participating in the remote session, so as to generate a virtual session scene according to the participation data of each terminal device participating in the session.
Step S120: and according to the participation data, performing position arrangement in a virtual session scene on the virtual object corresponding to each terminal device.
In the embodiment of the present application, after the server acquires the participation data of the terminal devices in the remote session, it can arrange the position of each virtual object in the virtual session scene corresponding to the remote session according to that data. Each virtual object corresponds to one terminal device. The virtual session scene is a 3D (3 Dimensions) scene in a virtual space; it contains at least the virtual objects, and the position of each virtual object in the scene may be fixed relative to the world coordinate origin of the virtual space. Of course, the specific content of the virtual session scene may not be limited; for example, it may also include a virtual conference table, a virtual tablecloth, virtual ornaments, and the like.
In some embodiments, the virtual object may include a virtual character model, a virtual character avatar, and the like corresponding to the user, and the specific virtual object may not be limited in this embodiment.
In some embodiments, the server may arrange the position of the virtual object corresponding to each terminal device according to a pre-stored arrangement rule and the participation data. The arrangement rule may be a rule that arranges positions according to identity information, according to the spatial position of the terminal device in the real scene, according to the posture of the terminal device, or according to the time at which the terminal device joined the remote session. For example, if the remote session scene includes a plurality of positions around a conference table, the virtual objects corresponding to the terminal devices may be placed in those positions in order of increasing user age, starting from a designated position. Of course, the arrangement rule may also combine multiple types of participation data, and the specific rule may not be limited in the embodiments of the present application. In addition, different arrangement rules may be chosen for different remote sessions; for example, the rule used may be determined by the type of the remote session.
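The age-based example rule above can be sketched as a short sort-and-assign step. The function name, dictionary keys, and seat labels here are assumptions made for the sketch, not names from the patent.

```python
def arrange_by_age(participants, seats):
    """Assign seats around the virtual conference table in order of
    increasing user age (one example arrangement rule; other rules could
    sort by join time, identity level, or device posture)."""
    ordered = sorted(participants, key=lambda p: p["age"])
    # Pair each participant, youngest first, with the next seat, starting
    # from the designated first seat in the list.
    return {p["device_id"]: seat for p, seat in zip(ordered, seats)}
```

For example, with a 25-year-old and a 40-year-old user, the younger user's virtual object would be assigned the designated first seat and the older user's object the next one.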
In some embodiments, virtual objects corresponding to different terminal devices in the remote session scene are in different locations. The server arranges the positions of the virtual objects corresponding to the terminal devices according to the participation data corresponding to each terminal device, so that the positions of the virtual objects corresponding to different terminal devices in a virtual session scene can be distinguished, and the virtual object corresponding to each terminal device corresponds to the participation data, thereby improving the sense of reality of the virtual session.
Step S130: and acquiring the position of each virtual object in the virtual session scene according to the position arrangement result.
In some embodiments, after the server arranges the position of the virtual object corresponding to each terminal device in the virtual session scene according to the participation data, it may obtain the position of each virtual object in the virtual session scene from the arrangement result. The position of a virtual object in the virtual session scene may be its position in the world coordinate system of the virtual space (i.e., its position relative to the world coordinate origin of that coordinate system).
Step S140: based on the position of the virtual object, a virtual conversation scene containing the virtual object is generated.
After the server acquires the position of each virtual object in the virtual session scene, the server may generate the virtual session scene according to the position of each virtual object.
In some embodiments, the server may obtain content data of the virtual objects and generate a virtual session scene containing at least the virtual objects according to that content data and the positions of the virtual objects in the virtual space. The content data of a virtual object may be its three-dimensional model data, which may include the colors, model vertex coordinates, model contour data, and so on used to construct the corresponding model.
As an embodiment, the content data of the virtual object may be stored locally in advance, and the server may obtain the content data of the virtual object corresponding to each terminal device from the local. As another embodiment, the content data of the virtual object may be stored in the terminal device in advance, and the server may receive the content data of the virtual object corresponding to the terminal device.
In some embodiments, the server may determine the rendering coordinates of each virtual object in the virtual space according to its position in the virtual session scene, that is, obtain a rendering position for each virtual object. The rendering position serves as the rendering coordinates of the virtual object, so that the virtual object is rendered at that position. Rendering coordinates refer to the three-dimensional space coordinates of the virtual object in the virtual space taking the virtual camera as the origin (which can be regarded as taking the human eye as the origin).
After the server obtains rendering coordinates for rendering each virtual object in the virtual space, the server may construct each virtual object according to content data of each virtual object, and render each virtual object according to the rendering coordinates of each virtual object, where the rendering of the virtual object may obtain vertex coordinates, color values, and the like of each vertex in the virtual object. Since the content data may include three-dimensional model data, the rendered virtual object may be three-dimensional virtual content.
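As a minimal sketch of the coordinate relationship described above, a position in the world coordinate system can be re-expressed relative to the virtual camera origin. This is a translation-only illustration under that simplifying assumption; a full renderer would also apply the virtual camera's rotation.

```python
def to_rendering_coordinates(world_position, camera_position):
    """Convert a position in the virtual space's world coordinate system
    into rendering coordinates relative to the virtual camera origin.
    Translation-only sketch; camera rotation is omitted for brevity."""
    return tuple(w - c for w, c in zip(world_position, camera_position))
```

The virtual object would then be rendered at the resulting coordinates in the camera's frame.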
Therefore, after each virtual object is generated from its position and content data, a virtual session scene containing at least the virtual object corresponding to each terminal device is obtained. Of course, when the virtual session scene includes other virtual content, the position and content data of that other content may also be acquired when the scene is generated, so that it is included as well. For example, when a virtual chair is included in the virtual session scene, in addition to generating the virtual objects from their positions and content data, the virtual chair must be generated from its position in the virtual space and its content data, so that the generated virtual session scene contains both the virtual objects corresponding to the terminal devices and the virtual chair.
In the embodiment of the application, the generated virtual conversation scene may be used to generate a scene picture of the virtual conversation scene, and the terminal device may display the scene picture, so that the user may observe the 3D virtual conversation scene and may observe virtual objects at different positions, so that the user feels a strong sense of reality. For example, referring to fig. 4, fig. 4 shows a scene diagram of a remote conference scene, where the terminal device 100 may be a head-mounted display device, the user 601 is at a position around a physical table body in a real scene, the user 601 may observe a scene picture of a virtual conversation scene through the head-mounted display device, and the scene picture of the virtual conversation scene may include virtual characters 701 of other users participating in the remote conference.
In some embodiments, the operations of acquiring participation data of the terminal device in the remote session, arranging the position of the virtual object corresponding to the terminal device, acquiring the position of the virtual object, generating the virtual session scene, and the like may also be completed by the terminal device.
According to the virtual scene generation method described above, the virtual objects of the terminal devices participating in the remote session are positioned according to the participation data of the users' terminal devices, so that each virtual object occupies its corresponding position in the session scene. The resulting virtual session scene, when displayed, gives the user a sense of realism and improves the effect of the remote session.
Referring to fig. 5, another embodiment of the present application provides a method for generating a virtual scene, where the method for generating a virtual scene may include:
step S210: participation data of one or more terminal devices in the remote session is obtained.
In the embodiment of the present application, step S210 may refer to the contents of the foregoing embodiments, which are not described herein again.
Step S220: and according to the participation data, performing position arrangement in the virtual session scene on the virtual object corresponding to each terminal device.
The participation data of the terminal device in participating in the remote session may include: the remote conversation management method comprises one or more of identity information of a user, time for joining the remote conversation, spatial position of the terminal device in a real scene, posture of the terminal device and location of the terminal device. The specific participation data may not be limited in the embodiments of the present application.
In some embodiments, referring to fig. 6, the performing, by the server according to the participation data, position arrangement in a virtual session scene on a virtual object corresponding to each terminal device may include:
step S221: and determining the priority of the virtual object corresponding to each terminal device in the virtual session scene according to the participation data.
In a virtual session scene of a remote session, the virtual objects corresponding to different terminal devices have different priorities. The priority serves as the precedence used in position sorting: the higher a virtual object's priority, the earlier it is placed when positions are arranged in the virtual session scene.
Further, the priority of the virtual object corresponding to the terminal device in the virtual session scene may be determined by the participation data of the terminal device.
In some embodiments, the participation data includes at least a participation time of the terminal device in the remote session. The participation time may be a time when the terminal device joins the remote session through the network. Determining the priority of the virtual object corresponding to each terminal device in the virtual session scene according to the participation data may include:
sorting the priorities of the virtual objects in the virtual session scene from high to low according to the order of the participation times of the terminal devices, thereby obtaining the priority of the virtual object corresponding to each terminal device.
The server may sort the terminal devices by the participation time in their respective participation data, in order of participation time; the resulting order can be taken as the priority order, from high to low, of the corresponding virtual objects in the virtual session scene, thereby giving the priority of the virtual object corresponding to each terminal device. For example, the correspondence between the participation time of each terminal device and the priority of its virtual object is shown in Table 1.
TABLE 1
Terminal device | Participation time | Priority
Device 4 | Time point 4 | 1
Device 2 | Time point 2 | 2
Device 3 | Time point 3 | 3
Device 1 | Time point 1 | 4
As shown in the above table, the participation times of the terminal devices run from earliest to latest from top to bottom in table 1, i.e., time point 4 is earlier than time point 2, time point 2 is earlier than time point 3, and time point 3 is earlier than time point 1. The priorities of the virtual objects corresponding to the terminal devices decrease from top to bottom in table 1, that is, the priorities of the virtual objects corresponding to device 4, device 2, device 3, and device 1 decrease in that order.
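As a sketch, the time-based ranking above can be expressed in a few lines (the device names and timestamp values below are illustrative assumptions, not from the patent):

```python
def priorities_by_join_time(join_times):
    """Rank devices by join time: the earliest joiner gets priority 1.

    join_times maps a device id to its join timestamp (smaller = earlier).
    Returns a dict mapping device id -> priority.
    """
    ordered = sorted(join_times, key=lambda dev: join_times[dev])
    return {dev: rank for rank, dev in enumerate(ordered, start=1)}

# Reproducing table 1: device 4 joined first, then devices 2, 3 and 1.
times = {"device 1": 40, "device 2": 20, "device 3": 30, "device 4": 10}
print(priorities_by_join_time(times))
# {'device 4': 1, 'device 2': 2, 'device 3': 3, 'device 1': 4}
```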
In some embodiments, the participation data at least includes identity information corresponding to the terminal device, and the identity information corresponding to the terminal device may include identity information of the user, identity information of the terminal, and the like. The identity information of the user may include the user's name (ID), job title, age, gender, the user's role in the remote session (e.g., speaker, listener), and the like. The identity information of the terminal may include the device ID of the terminal device, etc. Of course, the specific identity information is not limited here.
Further, determining the priority of the virtual object corresponding to each terminal device in the virtual session scene according to the participation data may include: acquiring the identity level corresponding to each terminal device according to the identity information of each terminal device; and sequencing the priorities of the virtual objects corresponding to the terminal devices in the virtual session scene from high to low according to the sequence of the identity levels corresponding to the terminal devices from high to low to obtain the priorities of the virtual objects corresponding to the terminal devices.
The server may determine the identity level corresponding to a terminal device according to the job title, age, gender, role in the remote session, and the like in the identity information, with different items of identity information yielding different identity levels. For example, the identity level corresponding to the terminal device may be determined according to the rank of the job title, the order of identity levels following the order of job-title ranks. For another example, the identity level may be determined according to age, and identity levels may be arranged from oldest to youngest or from youngest to oldest. For another example, the identity level may be determined according to the importance of the role in the remote session, the order of identity levels following the order of role importance. As yet another example, the identity level may be determined according to gender: the identity level of a female may be higher than that of a male, or the identity level of a male may be higher than that of a female. Of course, the specific manner of determining the identity level of the terminal device is not limited here.
After obtaining the identity level corresponding to each terminal device in the remote session, the server may sort the identity levels corresponding to the terminal devices from high to low, and the resulting order may be taken as the order of the priorities of the virtual objects in the virtual session scene from high to low, thereby obtaining the priority of the virtual object corresponding to each terminal device. For example, the correspondence between the identity level of each terminal device and the priority of its corresponding virtual object is shown in table 2.
TABLE 2
Terminal device | Identity level | Priority
Device 3 | Level 3 | 1
Device 1 | Level 2 | 2
Device 2 | Level 1 | 3
As shown in the above table, the identity levels corresponding to the terminal devices decrease from top to bottom in table 2, that is, the identity levels corresponding to device 3, device 1, and device 2 decrease in that order. The priorities of the virtual objects corresponding to the terminal devices likewise decrease from top to bottom in table 2, that is, the priorities of the virtual objects corresponding to device 3, device 1, and device 2 decrease in that order.
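The two steps above — mapping identity information to an identity level and then ranking — might be sketched as follows (the job titles and their numeric levels are assumptions for illustration only):

```python
# Hypothetical job-title ranks; a real system would define its own scale.
TITLE_LEVEL = {"director": 3, "manager": 2, "engineer": 1}

def priorities_by_identity(identities):
    """identities maps a device id to a job title.

    Derives an identity level per device, then ranks the devices so that
    the highest identity level receives priority 1.
    """
    levels = {dev: TITLE_LEVEL[title] for dev, title in identities.items()}
    ordered = sorted(levels, key=lambda dev: levels[dev], reverse=True)
    return {dev: rank for rank, dev in enumerate(ordered, start=1)}

# Matches table 2: device 3 (level 3) ranks first, then devices 1 and 2.
ids = {"device 1": "manager", "device 2": "engineer", "device 3": "director"}
print(priorities_by_identity(ids))
# {'device 3': 1, 'device 1': 2, 'device 2': 3}
```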
Step S222: and performing position arrangement in the virtual session scene on each virtual object according to the priority of each virtual object.
After the server obtains the priority of the virtual object corresponding to each terminal device in the remote session, the server may perform position arrangement in the virtual session scene on each virtual object according to the priority of each virtual object.
In some embodiments, there are multiple set positions in the virtual conversation scene, the set positions corresponding to the priorities of the virtual objects. Wherein, if the setting positions are different, the priorities of the virtual objects arranged at the setting positions are different.
In one embodiment, the priority of the virtual object arranged at a first position among the plurality of set positions of the virtual session scene is highest, and the priorities of the virtual objects at the other set positions adjacent to the first position are sequentially lowered, so that the virtual objects can be arranged at the set positions in descending order of priority. For example, referring to fig. 7, the virtual session scene includes a virtual conference table 801, around which a plurality of virtual seats may be distributed: a virtual seat P0, a virtual seat P7, a virtual seat P6, a virtual seat P5, a virtual seat P1, a virtual seat P2, a virtual seat P3, and a virtual seat P4. The priority of the virtual object arranged in the virtual seat P0 (the target position) is highest, the priorities of the virtual objects arranged in the virtual seats P7, P6, P5, P1, P2, P3, and P4 are sequentially lowered, and the virtual objects may be arranged in the virtual seats P0, P7, P6, P5, P1, P2, P3, and P4 in descending order of priority.
In a specific embodiment, the priorities of the virtual objects to be arranged at the plurality of set positions in the virtual session scene may be distributed irregularly; in this case, the positions of the virtual objects may be arranged directly according to the priority of each virtual object and the priority assigned to each set position. For example, in fig. 7, if the priorities of the virtual objects for the virtual seats P0, P7, P6, P5, P1, P2, P3, and P4 are 8, 1, 2, 3, 7, 4, 5, and 6, respectively, the terminal device can directly arrange the virtual objects in the plurality of virtual seats according to the priorities of the virtual objects.
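A minimal sketch of the seat assignment around the table of fig. 7 — filling the set positions in descending priority — could look like this (the object names are hypothetical; the seat order follows the figure):

```python
# Set positions around the virtual conference table, listed in the order in
# which they should receive virtual objects (P0 gets the highest priority).
SEAT_ORDER = ["P0", "P7", "P6", "P5", "P1", "P2", "P3", "P4"]

def assign_seats(priorities):
    """priorities maps a virtual object to its priority (1 = highest).

    Returns seat -> virtual object, filling SEAT_ORDER best seat first.
    """
    ranked = sorted(priorities, key=lambda obj: priorities[obj])
    return dict(zip(SEAT_ORDER, ranked))

seats = assign_seats({"object A": 2, "object B": 1, "object C": 3})
# object B (priority 1) takes P0, object A takes P7, object C takes P6.
```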
In some embodiments, the server may acquire the identity information of each terminal device in the remote session and determine a corresponding identity category according to the identity information, where the identity category may be used to represent the organizational unit to which the terminal device belongs. The positions of the virtual objects corresponding to terminal devices belonging to the same identity category may be set at set positions on the same side in the virtual space. Furthermore, the server may determine the priority of each virtual object placed at the same-side set positions according to the identity level corresponding to each terminal device belonging to that identity category, and arrange the set position of each virtual object on that side according to the priorities.
For example, the users participating in the remote session include employee 1 of company A, employee 2 of company A, employee 3 of company A, employee 4 of company B, employee 5 of company B, and employee 6 of company C, where company B and company C are party A relative to company A, and company A is party B relative to company B and company C. The virtual objects corresponding to employee 1, employee 2, and employee 3 of party B may therefore be arranged at positions on one side, and employee 4, employee 5, and employee 6 of party A may be arranged at positions on the other side; the employees of company B and company C may have different identity levels and may be arranged at different positions on their side according to those levels. For example, referring to fig. 7, employee 1, employee 2, and employee 3 of company A on the party-B side may be located in the virtual seats P7, P6, and P5 on one side, and employee 4 of company B, employee 5 of company B, and employee 6 of company C on the party-A side may be located in the virtual seats P4, P3, and P2 on the opposite side.
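Grouping by identity category and then ordering within each side, as in the example above, might be sketched as follows (the side names, seat lists, and level values are assumptions modeled on fig. 7):

```python
# Assumed seats for each side of the table, best seat first (cf. fig. 7).
SIDE_SEATS = {"party B": ["P7", "P6", "P5"], "party A": ["P4", "P3", "P2"]}

def arrange_by_category(participants):
    """participants: list of (name, identity category, identity level).

    Participants sharing an identity category sit on the same side of the
    table, with the highest identity level taking that side's best seat.
    Returns seat -> participant name.
    """
    placement = {}
    for side, seats in SIDE_SEATS.items():
        group = sorted((p for p in participants if p[1] == side),
                       key=lambda p: p[2], reverse=True)
        for seat, (name, _side, _level) in zip(seats, group):
            placement[seat] = name
    return placement

people = [("employee 1", "party B", 3), ("employee 2", "party B", 2),
          ("employee 3", "party B", 1), ("employee 4", "party A", 3),
          ("employee 5", "party A", 2), ("employee 6", "party A", 1)]
print(arrange_by_category(people))
```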
Step S230: and acquiring the position of each virtual object in the virtual session scene according to the position arrangement result.
In the embodiment of the present application, step S230 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S240: and acquiring attitude information and position information of the target equipment in the remote session.
After the server acquires the position, in the virtual session scene, of the virtual object corresponding to each terminal device, the server may obtain the position of each virtual object in the world coordinate system of the virtual space (i.e., its position relative to the origin of the world coordinate system) according to the position of the virtual object, and generate the virtual session scene according to the position of each virtual object in the virtual space.
In some embodiments, the server may obtain pose information and location information of the target device in the remote session when generating the virtual session scene according to the location of each virtual object. The target device may be a terminal device that is to display a picture of a virtual session scene, for example, the target device may be a terminal device that acquires the participation data, performs position arrangement on the virtual object according to the participation data, and acquires a position of the virtual object. The gesture information of the target device may be the orientation and the rotation angle of the target device, and the position information of the target device may be the position of the target device in the real scene.
Step S250: and acquiring a first relative position relation between other virtual objects and the target equipment according to the position information and the positions of the virtual objects, wherein the other virtual objects are virtual objects corresponding to other terminal equipment except the target equipment.
After the server acquires the position information and the posture information of the target device, a first relative position relation between the target device and the virtual object to be displayed by the target device in the virtual space can be determined according to the position information of the target device and the position of the virtual object. The virtual object that the target device needs to display may be a virtual object corresponding to at least some other terminal devices in the terminal devices of the remote session except the target device.
In some embodiments, the server may determine the first relative position relationship between the other virtual objects and the target device according to the relative position relationship between the position where the other virtual objects need to be superimposed in the real scene and the target device, the position information of the target device, and the position of the virtual object in the virtual session scene.
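In the simplest reading, the first relative positional relationship is just an offset vector between two positions expressed in the same coordinate system; the following sketch illustrates this (the coordinate values are made up for the example):

```python
def relative_position(object_position, device_position):
    """First relative positional relationship: the virtual object's position
    expressed as an offset from the target device.

    Positions are 3-D (x, y, z) tuples in the same coordinate system.
    """
    return tuple(o - d for o, d in zip(object_position, device_position))

# A device at (1, 0, 2) sees a virtual object placed at (4, 0, 6) as an
# offset of (3, 0, 4) relative to itself.
offset = relative_position((4, 0, 6), (1, 0, 2))
```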
Step S260: first content data of other virtual objects is acquired.
In some embodiments, the server may obtain the content data of the virtual object that the target device needs to display, i.e. obtain the first content data of other virtual objects. For a specific manner of acquiring the content data by the server, reference may be made to the contents of the foregoing embodiment, which is not described herein again.
Step S270: and generating a virtual session scene comprising other virtual objects based on the first relative position relation and the first content data.
In some embodiments, after acquiring the first relative positional relationship between the other virtual objects and the target device in the virtual space and the first content data of the other virtual objects, the server may determine the positions of the other virtual objects in the virtual space according to the first relative positional relationship, and generate a virtual session scene including the other virtual objects according to the positions of the other virtual objects in the virtual space and the first content data. The manner in which the terminal device generates the virtual session scene according to the positions of the other virtual objects in the virtual space and the first content data may refer to the contents of the foregoing embodiments, which are not described herein again.
Of course, the virtual session scene may also include other virtual contents, such as a virtual table body, a virtual chair, and the like, and the terminal device may also generate other virtual contents, so that the generated virtual session scene includes a virtual object and other virtual contents.
Step S280: and generating a virtual scene picture for displaying in the target device according to the attitude information and the virtual session scene.
In the embodiment of the present application, after generating the virtual session scene, the server may generate the virtual scene picture of the virtual session scene for display by the target device according to the attitude information of the target device and the virtual session scene. The server can determine the virtual scene picture corresponding to the attitude information from the virtual session scene according to the attitude information of the target device, acquire the data of that picture, and generate the virtual scene picture from the data, thereby obtaining the virtual scene picture for display in the target device, that is, the virtual scene picture to be displayed by the target device. Because the virtual scene picture corresponds to the attitude information of the target device, the user of the target device can view virtual pictures in different view directions by changing the attitude of the target device. Referring to fig. 8, fig. 8 shows a scene diagram of a teleconference scene: through the worn head-mounted display device in its current attitude, user H can view the virtual objects corresponding to user B, user C, user D, and user E, and user H can change the attitude of the head-mounted display device by rotating the head so as to view the virtual objects corresponding to other users; for example, when user H rotates the head to the left, the virtual object corresponding to user A can also be seen.
In some embodiments, the position of the virtual object in the virtual session scene may also be adjusted as required. To this end, the method for generating a virtual scene may further include:
acquiring a position change request sent by a terminal device; and in response to the position change request, adjusting the position, in the virtual session scene, of the virtual object corresponding to the terminal device that sent the position change request.
The position change request can carry a target position to be adjusted, so that the server can adjust the position of the virtual object corresponding to the terminal device sending the position change request to the target position in the virtual session scene according to the position change request, and the user requirements can be met.
Further, in the virtual session scene, the positions of the virtual objects corresponding to the plurality of terminal devices may include the position of the virtual object corresponding to the terminal device of the main speaker in the remote session, and that position may serve as the main speaker position. The main speaker position may be preset, or a user may change to it through a terminal device; in the latter case, the position change request indicates that the position, in the virtual session scene, of the virtual object corresponding to that user's terminal device is to be adjusted to the main speaker position, so that the position of the virtual object corresponding to the terminal device that initiated the position change request is adjusted to the main speaker position in the virtual session scene. For example, in a remote conference scenario, a participant in the remote conference may initiate a position change request for the speaking position through a terminal device, so that the virtual object corresponding to that participant's terminal device is located at the speaking position in the remote conference.
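Handling such a position change request server-side can be sketched as a seat swap, so that no two virtual objects end up at the same set position (the seat and device names below are hypothetical):

```python
def handle_position_change(arrangement, device, target_position):
    """arrangement maps a set position to the device occupying it (or None).

    Moves the requesting device's virtual object to target_position; any
    previous occupant is swapped into the requester's old seat.
    """
    current = next(pos for pos, dev in arrangement.items() if dev == device)
    arrangement[current] = arrangement.get(target_position)
    arrangement[target_position] = device
    return arrangement

# A participant requests the main speaker position P0, which is free.
seats = {"P0": None, "P1": "device 2", "P2": "device 5"}
handle_position_change(seats, "device 5", "P0")
# Now P0 holds device 5 and P2 is free.
```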
In some embodiments, the implementation of the above steps may also be performed by the terminal device.
According to the method for generating a virtual scene described above, the priority of the virtual object corresponding to each terminal device is determined according to the participation data of the users' terminal devices, and the virtual objects of the terminal devices are then arranged according to those priorities, so that each virtual object is located at its corresponding position in the remote session scene and a virtual session scene for display is obtained. The virtual scene picture of the virtual session scene is generated according to the attitude information of the terminal device, so that the picture seen by the user corresponds to that attitude information, which gives the user a sense of reality and improves the effect of the remote session.
Referring to fig. 9, another embodiment of the present application provides a method for generating a virtual scene, where the method for generating a virtual scene may include:
step S310: participation data of one or more terminal devices in the remote session is obtained.
In the embodiment of the present application, the step S310 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S320: the method comprises the steps of obtaining a first terminal device in a first reality scene in a remote session and a second terminal device in a second reality scene in the remote session, wherein the first reality scene is a scene including a physical table body, and the second reality scene is a remote scene relative to the scene.
Among the plurality of terminal devices in the remote session, some first terminal devices may be in the same first reality scene, which includes the physical table body, while the other, second terminal devices are not in the first reality scene but in a second reality scene that does not include the physical table body; the one or more second terminal devices may be in the same reality scene or in a plurality of different reality scenes. When the server arranges the positions of the virtual objects, it may preferentially arrange the positions of the terminal devices in the first reality scene and then arrange the positions of the terminal devices in the second reality scene. The second reality scene is a remote scene relative to the first reality scene, that is, any other reality scene in the remote session that is not the first reality scene; the second reality scene may comprise one reality scene or a plurality of reality scenes.
In some embodiments, the participation data may include a location position of the terminal device. The server may determine, according to the positioning position in the participation data of each terminal device, a first terminal device in a first reality scene and a second terminal device in a second reality scene.
Step S330: and acquiring relative spatial position information between the first terminal equipment and the physical table body from the participation data of the first terminal equipment, and performing position arrangement in the virtual session scene on the virtual object corresponding to the first terminal equipment according to the relative spatial position information.
In some embodiments, the participation data of the first terminal device may further include relative spatial position information of the first terminal device and the physical table. The relative spatial position information may include relative position information between the first terminal device and the physical table body, posture information, and the like, and the posture information may be an orientation, a rotation angle, and the like of the physical table body with respect to the first terminal device. The server can acquire relative spatial position information between the first terminal device and the physical table body from the participation data of the first terminal device according to the participation data of the first terminal device.
Furthermore, the virtual session scene comprises a plurality of setting positions, and the setting positions are respectively in one-to-one correspondence with the seats around the physical table body. As an embodiment, the server may obtain a position relationship between the first terminal device and the physical table body according to the relative spatial position information between the first terminal device and the physical table body, and determine a corresponding set position of each first terminal device in the virtual session scene according to the position relationship between the first terminal device and the physical table body, so as to arrange the virtual objects corresponding to the first terminal devices.
Step S340: and acquiring the residual configuration positions in the virtual session scene according to the position configuration result of the virtual object corresponding to the first terminal equipment.
After arranging the positions of the virtual objects corresponding to the first terminal device, the server may obtain the positions that are not arranged in the virtual session scene, and determine the positions that are not arranged as the remaining arrangement positions in the virtual session scene, where the positions that are not arranged may refer to the set positions that do not correspond to the first terminal device.
Step S350: and according to the participation data of the second terminal equipment, carrying out position arrangement on the virtual object corresponding to the second terminal equipment in the rest arrangement positions.
After the server obtains the remaining arrangement positions in the virtual session scene, the server may perform position arrangement on the virtual objects corresponding to the second terminal devices according to the participation data of the second terminal devices in the second reality scene. The manner in which the server arranges the virtual objects corresponding to the second terminal devices in the remaining arrangement positions according to their participation data may refer to the foregoing embodiments, for example, position arrangement according to participation time, identity information, and the like, which is not described herein again.
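The two-phase arrangement of steps S330 to S350 — first terminals fixed by their seats at the physical table, second terminals filling what is left by priority — can be sketched as follows (all seat and device names are illustrative):

```python
def arrange_session(all_seats, first_placements, second_priorities):
    """all_seats: set positions listed best first.
    first_placements: seat -> first terminal device, already derived from
    each device's spatial position relative to the physical table body.
    second_priorities: second terminal device -> priority (1 = highest).

    Second terminals take the remaining seats in descending priority.
    """
    arrangement = dict(first_placements)
    remaining = [seat for seat in all_seats if seat not in arrangement]
    ranked = sorted(second_priorities, key=lambda dev: second_priorities[dev])
    arrangement.update(zip(remaining, ranked))
    return arrangement

plan = arrange_session(["P0", "P1", "P2", "P3"],
                       {"P1": "first device A"},
                       {"second device B": 2, "second device C": 1})
# P1 stays with the first terminal; device C (priority 1) gets P0, B gets P2.
```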
Step S360: and acquiring the position of each virtual object in the virtual session scene according to the position arrangement result.
Step S370: based on the position of the virtual object, a virtual session scene containing the virtual object is generated.
In the embodiment of the present application, step S360 and step S370 may refer to the contents of the foregoing embodiments, and are not described herein again.
Referring to fig. 10, fig. 10 shows a scene diagram of a teleconference scene, in which user B, user D, and user H are in a first reality scene, while user A, user F, user G, user C, and user E are in a second reality scene. When performing position arrangement, the positions of user B, user D, and user H may be arranged first, and user A, user F, user G, user C, and user E may then be arranged at the remaining arrangement positions. User H can directly see part of the physical table body, user B, and user D in the real scene, and can see the virtual objects corresponding to user C and user E through the worn head-mounted display device in its current attitude.
In some embodiments, the implementation of the above steps may also be performed by the terminal device.
According to the method for generating a virtual scene described above, the positions, in the virtual session scene, of the virtual objects corresponding to the first terminal devices are arranged according to the relative spatial position information of each first terminal device with respect to the physical table body in the first reality scene, and the positions of the virtual objects corresponding to the second terminal devices are then arranged according to the participation data of the second terminal devices in the second reality scene. Each virtual object is thus located at its corresponding position in the remote session scene, and a virtual session scene for display is obtained in which the positions of some virtual objects correspond to positions at the physical table body in the reality scene, giving the user a sense of reality and improving the effect of the remote session.
Referring to fig. 11, a further embodiment of the present application provides a method for generating a virtual scene, where the method for generating a virtual scene may include:
step S410: participation data of one or more terminal devices in the remote session is obtained.
Step S420: and according to the participation data, performing position arrangement in a virtual session scene on the virtual object corresponding to each terminal device.
Step S430: and acquiring the position of each virtual object in the virtual session scene according to the position arrangement result.
In the embodiment of the present application, step S410, step S420, and step S430 may refer to the contents of the foregoing embodiments, and are not described herein again.
Step S440: and determining the virtual table bodies corresponding to the number according to the number of the terminal devices participating in the remote session.
In some implementations, a virtual table may be included in a virtual session scene. The number of terminal devices participating in the remote session can be determined, and a virtual table body corresponding to that number can be selected. For example, when the number of terminal devices participating in the remote session is 6, the number of positions corresponding to the virtual table is 6. For another example, suppose the virtual round table corresponds to at most 4 positions and the virtual square table corresponds to more than 4 positions; then, if the number of terminal devices participating in the remote session is 4 or fewer, the virtual table body is a virtual round table, and if the number is more than 4, the virtual table body is a virtual square table.
In some implementations, when the users participating in the remote session include the roles of party A and party B, the shape of the virtual table may also be determined according to the number of persons on each side. For example, if there are 2 persons on one side, the virtual table may be rectangular; if there are 5 persons, the virtual table may be hexagonal, heptagonal, or the like.
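Picking a virtual table body from the participant count might look like the following sketch (the thresholds and shape names are assumptions; the text leaves the exact rule open):

```python
def choose_virtual_table(num_devices):
    """Assumed rule: a round table seats at most 4 devices; larger sessions
    get a polygonal table with one position per device."""
    if num_devices <= 4:
        return "virtual round table"
    return "virtual polygon table with {} positions".format(num_devices)

print(choose_virtual_table(3))   # a small session fits a round table
print(choose_virtual_table(6))   # six devices need a polygonal table
```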
Step S450: and acquiring attitude information and position information of the target equipment in the remote session.
In the embodiment of the present application, the step S450 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S460: and acquiring a second relative position relation between the other virtual objects and the target device and a third relative position relation between the virtual table body and the target device according to the position information and the positions of the other virtual objects relative to the virtual table body, wherein the other virtual objects are virtual objects corresponding to other terminal devices except the target device.
After the server acquires the position information and the posture information of the target device, a second relative position relationship between the virtual object to be displayed by the target device and the target device in the virtual space and a third relative position relationship between the virtual table body and the target device can be determined according to the position information of the target device and the positions of other virtual objects relative to the virtual table body. The virtual object that the target device needs to display may be a virtual object corresponding to at least some other terminal devices in the terminal devices of the remote session except the target device. That is to say, when the target device displays the screen of the virtual session scene, only the virtual objects corresponding to the other terminal devices need to be displayed, so as to improve the sense of reality when the virtual session scene is displayed.
In some embodiments, the content of the foregoing embodiments may be referred to for obtaining the second relative position relationship between the other virtual objects in the virtual space and the target device, and details are not repeated here.
In some embodiments, the third relative positional relationship between the virtual table body and the target device in the virtual session scene may correspond to the position information and the posture information of the target device. As a specific implementation manner, the server may determine a reference plane in a real scene according to the position information and the posture information of the target device, where the position of the virtual table body that needs to be superimposed and displayed corresponds to the reference plane, and may determine a third relative position relationship between the virtual table body and the target device in the virtual space according to the position of the reference plane in the real scene where the target device is located. The reference plane may be a plane of a real object in a real scene, such as a solid desktop, a plane where a marker is located, and the like. Therefore, the position of the virtual table body in the virtual session scene corresponds to the position information and the posture information of the target device, and the reality sense of the virtual session scene can be improved.
In addition, the attitude angle of the target device can be obtained from its posture information, and the server may obtain the third relative position relationship between the virtual table body and the target device only when the attitude angle of the target device reaches a set angle. That is, the target device displays the virtual table body in the virtual session scene only when its attitude angle reaches the set angle.
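The gating described above can be sketched as a simple threshold check; the specific angle, and the assumption that "attitude angle" means downward pitch, are illustrative choices, not values from the patent:

```python
def should_show_table(pitch_deg, threshold_deg=30.0):
    """Display the virtual table body only once the device's attitude
    angle (here assumed to be downward pitch) reaches the set angle."""
    return pitch_deg >= threshold_deg

nearly_level = should_show_table(10.0)  # device nearly level: table hidden
tilted_down = should_show_table(45.0)   # device tilted down: table shown
```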
Step S470: first content data of other virtual objects and second content data of the virtual table body are obtained.
In some implementations, the server can obtain the first content data of the other virtual objects and the second content data of the virtual table body. The specific manner in which the server acquires the content data may refer to the foregoing embodiments and is not described here again.
Step S480: and generating a virtual session scene comprising the virtual table body and other virtual objects based on the second relative position relationship, the third relative position relationship, the first content data and the second content data.
In this embodiment, the manner in which the server generates the virtual session scene according to the second relative position relationship, the third relative position relationship, the first content data, and the second content data may refer to the contents of the foregoing embodiments, and details are not described herein.
Step S490: and generating a virtual scene picture for displaying in the target equipment according to the attitude information and the virtual conversation scene.
In the embodiment of the present application, step S490 may refer to the contents of the foregoing embodiments, and is not described herein again.
In some embodiments, for a terminal device that joins the remote session later, a position may be added on the basis of the positions of the virtual objects corresponding to the terminal devices already participating in the remote session; at this time, whether to expand the virtual table body may be decided according to the situation. Alternatively, according to the participation data of the newly joining terminal device (e.g., identity information, its role in the remote session, etc.), the new virtual object may be added at an appropriate position, and the positions of the virtual objects corresponding to the terminal devices that joined earlier may be adaptively adjusted.
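One way to sketch the adaptive insertion of a late joiner (a hypothetical illustration; the seat ordering, priority values, and names are assumptions, not the patent's implementation) is to insert the new participant into the seating order by priority and let later seats shift:

```python
def add_seat(order, new_device, priority, priorities):
    """Insert a late joiner so higher-priority participants stay nearer
    the head of the table; later seats shift by one. Expanding the
    virtual table body when capacity is exceeded is left to the caller."""
    idx = 0
    while idx < len(order) and priorities[order[idx]] >= priority:
        idx += 1
    order.insert(idx, new_device)
    priorities[new_device] = priority
    return order

# A "vp" joins mid-session with priority 8, between host (10) and guestA (5).
seats = add_seat(["host", "guestA"], "vp", 8, {"host": 10, "guestA": 5})
```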
In some embodiments, the above steps may also be performed by the terminal device.
According to the method for generating a virtual scene described above, the virtual scene picture displayed in the target device can be obtained, and this picture corresponds to the posture information of the target device, so that the user of the target device can view virtual pictures in different view directions by changing the posture of the target device. In addition, because the virtual table body is included in the virtual scene picture, a more realistic remote session experience can be provided for the user.
In the foregoing embodiment, the terminal device may be an externally connected head-mounted display device that is connected to the server. After generating the virtual session scene, the server transmits the virtual scene picture of the virtual session scene to the head-mounted display device, which then displays the virtual scene picture.
Referring to fig. 12, a block diagram of a virtual scene generation apparatus 400 provided in the present application is shown. The virtual scene generation apparatus 400 includes: a data acquisition module 410, a position arrangement module 420, a position acquisition module 430, and a scene generation module 440. The data obtaining module 410 is configured to obtain participation data of one or more terminal devices in a remote session; the position arrangement module 420 is configured to perform position arrangement in a virtual session scene on the virtual object corresponding to each terminal device according to the participation data; the position obtaining module 430 is configured to obtain a position of each virtual object in the virtual session scene according to the result of the position arrangement; the scene generation module 440 is configured to generate a virtual conversation scene containing the virtual object based on the position of the virtual object.
In some embodiments, the position arrangement module 420 may be specifically configured to: determine, according to the participation data, the priority of the virtual object corresponding to each terminal device in the virtual session scene; and arrange the positions of the virtual objects in the virtual session scene according to their priorities.
In some embodiments, the participation data includes the time at which each terminal device joined the remote session. The position arrangement module 420 determining, according to the participation data, the priority of the virtual object corresponding to each terminal device in the virtual session scene may include: ranking the priorities of the virtual objects corresponding to the terminal devices in the virtual session scene from high to low in order of the terminal devices' participation times, to obtain the priority of the virtual object corresponding to each terminal device.
In some embodiments, the participation data includes identity information corresponding to each terminal device. The position arrangement module 420 determining, according to the participation data, the priority of the virtual object corresponding to each terminal device in the virtual session scene may include: acquiring the identity level corresponding to each terminal device according to its identity information; and ranking the priorities of the virtual objects corresponding to the terminal devices in the virtual session scene from high to low in order of the terminal devices' identity levels from high to low, to obtain the priority of the virtual object corresponding to each terminal device.
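The two priority rules above (identity level, participation time) can be sketched as a single sort key; combining them with join time as a tie-breaker is an illustrative assumption, and all field names are hypothetical:

```python
def rank_devices(devices):
    """Order devices by priority: higher identity level first,
    earlier join time breaking ties."""
    return sorted(devices, key=lambda d: (-d["identity_level"], d["join_time"]))

ranked = rank_devices([
    {"id": "guest", "identity_level": 1, "join_time": 10.0},
    {"id": "host", "identity_level": 3, "join_time": 12.0},
    {"id": "early_guest", "identity_level": 1, "join_time": 5.0},
])
# host (highest level), then early_guest and guest by join time
```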
In some embodiments, the position arrangement module 420 may be specifically configured to: acquire a first terminal device located in a first reality scene in the remote session and a second terminal device located in a second reality scene in the remote session, where the first reality scene is an on-site scene including a physical table body and the second reality scene is a remote scene relative to the on-site scene; acquire relative spatial position information between the first terminal device and the physical table body from the participation data of the first terminal device, and arrange the position in the virtual session scene of the virtual object corresponding to the first terminal device according to that relative spatial position information; acquire the remaining arrangement positions in the virtual session scene according to the position arrangement result of the virtual object corresponding to the first terminal device; and arrange the virtual object corresponding to the second terminal device among the remaining arrangement positions according to the participation data of the second terminal device.
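A sketch of this hybrid arrangement (hypothetical names and seat indices; the patent does not prescribe this data layout): on-site devices are pinned to seats matching their measured positions around the physical table, and remote devices fill whatever seats remain:

```python
def arrange_seats(num_seats, local_seats, remote_devices):
    """local_seats pins on-site devices to seat indices derived from their
    positions relative to the physical table body; remote devices then
    fill the remaining seats in (priority) order."""
    seats = [None] * num_seats
    for dev, idx in local_seats.items():
        seats[idx] = dev
    free = (i for i, s in enumerate(seats) if s is None)
    for dev in remote_devices:
        seats[next(free)] = dev
    return seats

# Two on-site devices at opposite sides of the table; two remote joiners.
layout = arrange_seats(4, {"localA": 0, "localB": 2}, ["remoteX", "remoteY"])
```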
In some embodiments, the scene generation module 440 may be specifically configured to: acquire attitude information and position information of a target device in the remote session; acquire a first relative position relationship between other virtual objects and the target device according to the position information and the positions of the virtual objects, where the other virtual objects are the virtual objects corresponding to terminal devices other than the target device; acquire first content data of the other virtual objects; generate a virtual session scene including the other virtual objects based on the first relative position relationship and the first content data; and generate a virtual scene picture for display in the target device according to the attitude information and the virtual session scene.
In some implementations, the virtual session scene further includes a virtual table body, and the position of the virtual object includes a position of the virtual object relative to the virtual table body. The scene generation module 440 may be specifically configured to: determine virtual table bodies corresponding in number to the number of terminal devices participating in the remote session; acquire attitude information and position information of a target device in the remote session; acquire, according to the position information, the posture information, and the positions of the other virtual objects relative to the virtual table body, second relative position relationships between the other virtual objects and the target device, as well as a third relative position relationship between the virtual table body and the target device, where the other virtual objects are the virtual objects corresponding to terminal devices other than the target device; acquire first content data of the other virtual objects and second content data of the virtual table body; generate a virtual session scene including the virtual table body and the other virtual objects based on the second relative position relationship, the third relative position relationship, the first content data, and the second content data; and generate a virtual scene picture for display in the target device according to the attitude information and the virtual session scene.
In some embodiments, the apparatus 400 for generating a virtual scene may further include a request acquisition module and a scene adjustment module. The request acquisition module can be used to acquire a position change request sent by a terminal device; the scene adjustment module can be configured to adjust, in response to the position change request, the position in the virtual session scene of the virtual object corresponding to the terminal device that sent the request.
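Handling such a position change request could, under the illustrative seat-list representation used above (an assumption, not the patent's data model), amount to swapping the requester's seat with the occupant of the requested seat:

```python
def handle_position_change(seats, requester, target_seat):
    """Swap the requesting device's virtual object with whatever
    currently occupies the requested seat (possibly an empty seat)."""
    cur = seats.index(requester)
    seats[cur], seats[target_seat] = seats[target_seat], seats[cur]
    return seats

# Device "a" requests the seat currently held by "c".
seats = handle_position_change(["a", "b", "c"], "a", 2)
```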
In summary, according to the solution provided by the present application, participation data of one or more terminal devices in a remote session is acquired; the virtual object corresponding to each terminal device is arranged at a position in the virtual session scene according to the participation data; the position of each virtual object in the virtual session scene is obtained from the position arrangement result; and a virtual session scene including the virtual objects is generated based on those positions. In this way, the virtual objects of the terminal devices participating in the remote session can be arranged according to the participation data of the users' terminal devices, so that each virtual object is added at a corresponding position in the remote session scene and a virtual session scene for display is obtained, which feels more real to the user and improves the effect of the remote session.
In this embodiment of the present application, the electronic device that executes the method for generating a virtual scene provided in the foregoing embodiment may be a server, or may be a terminal device.
Referring to fig. 13, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, a head-mounted display device, and the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts of the entire terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 110 may integrate one of, or a combination of, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the terminal device 100 during use, and the like.
In some embodiments, the terminal device 100 may further include an image sensor 130 for capturing images of real objects and scene images of a target scene. The image sensor 130 may be an infrared camera or a visible-light camera; the specific type is not limited in the embodiments of the present application.
In one embodiment, the terminal device is a head-mounted display device, and may further include one or more of the following components in addition to the processor, the memory, and the image sensor described above: display module assembly, optical module assembly, communication module and power.
The display module may include a display control unit. The display control unit is used to receive the display image of the virtual content rendered by the processor and project it onto the optical module, so that the user can view the virtual content through the optical module. The display device may be a display screen, a projection device, or the like, for displaying images.
The optical module can adopt an off-axis optical system or a waveguide optical system; after passing through the optical module, the display image displayed by the display device can be projected to the user's eyes. The user thus sees the display image projected by the display device through the optical module. In some embodiments, the user can also observe the real environment through the optical module and experience an augmented reality effect in which the virtual content is superimposed on the real environment.
The communication module can be a Bluetooth, WiFi (Wireless Fidelity), or ZigBee module, and the head-mounted display device can establish a communication connection with the terminal device through the communication module. A head-mounted display device in communication connection with the terminal device can exchange information and instructions with it. For example, the head-mounted display device may receive image data transmitted from the terminal device via the communication module, and generate and display the virtual content of a virtual world from the received image data.
The power supply supplies power to the entire head-mounted display device and ensures the normal operation of each of its components.
Referring to fig. 14, a block diagram of a server according to an embodiment of the present disclosure is shown. The server 200 may be a cloud server, a local server, or the like, and the server 200 may include one or more of the following components: a processor 210, a memory 220, and one or more applications, wherein the one or more applications may be stored in the memory 220 and configured to be executed by the one or more processors 210, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Referring to fig. 15, a block diagram of a computer-readable storage medium provided in an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 for performing any of the method steps of the methods described above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (9)

1. A method for generating a virtual scene, the method comprising:
acquiring participation data of one or more terminal devices in a remote session, wherein the participation data comprises one or more data of identity information of a user corresponding to the terminal device, time for the terminal device to join the remote session, a spatial position of the terminal device in a real scene, a posture of the terminal device and a place where the terminal device is located;
according to the participation data, position arrangement in a virtual session scene is carried out on the virtual object corresponding to each terminal device;
acquiring the position of each virtual object in the virtual session scene according to the position arrangement result;
generating a virtual session scene containing the virtual object based on the position of the virtual object, wherein the virtual session scene comprises a virtual scene picture displayed in target equipment, the virtual scene picture is generated according to the position of the virtual object, first content data of other virtual objects, posture information and position information of the target equipment, and the other virtual objects are virtual objects corresponding to other terminal equipment except the target equipment;
the performing, according to the participation data, position arrangement in a virtual session scene for the virtual object corresponding to each of the terminal devices includes:
acquiring a first terminal device in a first reality scene in the remote session and a second terminal device in a second reality scene in the remote session, wherein the first reality scene is a scene including a physical table body, and the second reality scene is a remote scene relative to the scene;
acquiring relative spatial position information between the first terminal equipment and the physical table body from participation data of the first terminal equipment, and performing position arrangement in a virtual session scene on a virtual object corresponding to the first terminal equipment according to the relative spatial position information;
acquiring the rest arrangement positions in the virtual session scene according to the position arrangement result of the virtual object corresponding to the first terminal device;
and according to the participation data of the second terminal equipment, carrying out position arrangement on the virtual object corresponding to the second terminal equipment in the residual arrangement position.
2. The method according to claim 1, wherein the arranging the position in the virtual session scene of the virtual object corresponding to each of the terminal devices according to the participation data includes:
determining the priority of the virtual object corresponding to each terminal device in a virtual session scene according to the participation data;
and performing position arrangement on each virtual object in the virtual session scene according to the priority of each virtual object.
3. The method of claim 2, wherein the participation data includes participation time of terminal devices in the remote session, and wherein the determining the priority of the virtual object corresponding to each terminal device in the virtual session scene according to the participation data includes:
and sequencing the priorities of the virtual objects corresponding to the terminal devices in the virtual session scene from high to low according to the sequence of the participation time of the terminal devices to obtain the priorities of the virtual objects corresponding to the terminal devices.
4. The method according to claim 2, wherein the participation data includes identity information corresponding to terminal devices, and the determining the priority of the virtual object corresponding to each of the terminal devices in the virtual session scene according to the participation data includes:
acquiring the identity level corresponding to each terminal device according to the identity information of each terminal device;
and sequencing the priorities of the virtual objects corresponding to the terminal devices in the virtual session scene from high to low according to the sequence of the identity levels corresponding to the terminal devices from high to low to obtain the priorities of the virtual objects corresponding to the terminal devices.
5. The method of claim 1, wherein generating the virtual session scene based on the location of the virtual object comprises:
acquiring attitude information and position information of target equipment in the remote session;
acquiring a first relative position relation between other virtual objects and the target equipment according to the position information and the positions of the virtual objects, wherein the other virtual objects are virtual objects corresponding to other terminal equipment except the target equipment;
acquiring first content data of the other virtual objects;
generating the virtual conversation scene including the other virtual objects based on the first relative positional relationship and the first content data;
and generating a virtual scene picture for displaying in the target equipment according to the attitude information and the virtual session scene.
6. The method of claim 1, wherein the virtual session scene further comprises a virtual table body, and wherein the position of the virtual object comprises a position of the virtual object relative to the virtual table body;
generating a virtual session scene containing the virtual object based on the position of the virtual object, including:
determining the virtual table bodies corresponding to the number according to the number of the terminal devices participating in the remote session;
acquiring attitude information and position information of target equipment in the remote session;
according to the position information, the posture information and the positions of other virtual objects relative to the virtual table body, acquiring second relative position relations between the other virtual objects and the target equipment, and acquiring a third relative position relation between the virtual table body and the target equipment, wherein the other virtual objects are virtual objects corresponding to other terminal equipment except the target equipment;
acquiring first content data of the other virtual objects and second content data of the virtual table body;
generating the virtual session scene including the virtual table body and the other virtual objects based on the second relative positional relationship, the third relative positional relationship, the first content data, and the second content data;
and generating a virtual scene picture for displaying in the target equipment according to the attitude information and the virtual session scene.
7. An apparatus for generating a virtual meeting scene, the apparatus comprising: a data acquisition module, a position arrangement module, a position acquisition module and a scene generation module, wherein,
the data acquisition module is used for acquiring participation data of one or more terminal devices in a remote session, wherein the participation data comprises one or more data of identity information of a user corresponding to the terminal device, time for the terminal device to join the remote session, a spatial position of the terminal device in a real scene, a posture of the terminal device and a place of the terminal device;
the position arrangement module is used for carrying out position arrangement in a virtual session scene on the virtual object corresponding to each terminal device according to the participation data;
the position acquisition module is used for acquiring the position of each virtual object in the virtual session scene according to the position arrangement result;
the scene generation module is configured to generate a virtual session scene including the virtual object based on the position of the virtual object, where the virtual session scene includes a virtual scene picture displayed in a target device, the virtual scene picture is generated according to the position of the virtual object, first content data of other virtual objects, posture information of the target device, and position information, and the other virtual objects are virtual objects corresponding to other terminal devices except the target device;
the position obtaining module is further configured to obtain a first terminal device in a first reality scene in the remote session and a second terminal device in a second reality scene in the remote session, where the first reality scene is a scene including a physical table body, and the second reality scene is a remote scene relative to the scene; acquiring relative spatial position information between the first terminal equipment and the physical table body from participation data of the first terminal equipment, and performing position arrangement in a virtual session scene on a virtual object corresponding to the first terminal equipment according to the relative spatial position information; acquiring the rest arrangement positions in the virtual session scene according to the position arrangement result of the virtual object corresponding to the first terminal device; and performing position arrangement on the virtual object corresponding to the second terminal equipment in the remaining arrangement positions according to the participation data of the second terminal equipment.
8. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to execute the method according to any of claims 1-6.
CN201910578450.1A 2019-06-28 2019-06-28 Virtual scene generation method and device, electronic equipment and storage medium Active CN110427227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578450.1A CN110427227B (en) 2019-06-28 2019-06-28 Virtual scene generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578450.1A CN110427227B (en) 2019-06-28 2019-06-28 Virtual scene generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110427227A CN110427227A (en) 2019-11-08
CN110427227B true CN110427227B (en) 2023-01-06

Family

ID=68408861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578450.1A Active CN110427227B (en) 2019-06-28 2019-06-28 Virtual scene generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110427227B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111835531B (en) * 2020-07-30 2023-08-25 腾讯科技(深圳)有限公司 Session processing method, device, computer equipment and storage medium
CN111966222A (en) * 2020-08-12 2020-11-20 徐雪峰 High-safety VR virtual reality device, system and method
CN112601047B (en) * 2021-02-22 2021-06-22 深圳平安智汇企业信息管理有限公司 Projection method and device based on virtual meeting scene terminal and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303773A (en) * 2008-06-10 2008-11-12 中国科学院计算技术研究所 Method and system for generating virtual scene
CN102170361A (en) * 2011-03-16 2011-08-31 西安电子科技大学 Virtual-reality-based network conference method
CN107071334A (en) * 2016-12-24 2017-08-18 深圳市虚拟现实技术有限公司 3D video-meeting methods and equipment based on virtual reality technology
CN108881784A (en) * 2017-05-12 2018-11-23 腾讯科技(深圳)有限公司 Virtual scene implementation method, device, terminal and server
CN108961421A (en) * 2018-06-27 2018-12-07 深圳中兴网信科技有限公司 Control method, control system and the computer readable storage medium of Virtual Space

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9058693B2 (en) * 2012-12-21 2015-06-16 Dassault Systemes Americas Corp. Location correction of virtual objects
US20170154468A1 (en) * 2015-12-01 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method and electronic apparatus for constructing virtual reality scene model

Non-Patent Citations (1)

Title
Analysis of the Application of Virtual Reality Technology in Environmental Art Design; Wan Guo; Modern Information Technology; 2018-04-25 (Issue 04); pp. 96-98 *

Also Published As

Publication number Publication date
CN110427227A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
TWI650675B (en) Method and system for group video session, terminal, virtual reality device and network device
US10952006B1 (en) Adjusting relative left-right sound to provide sense of an avatar's position in a virtual space, and applications thereof
US11140361B1 (en) Emotes for non-verbal communication in a videoconferencing system
US11765318B2 (en) Placement of virtual content in environments with a plurality of physical participants
EP3954111A1 (en) Multiuser asymmetric immersive teleconferencing
CN110413108B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN111527525A (en) Mixed reality service providing method and system
CN110427227B (en) Virtual scene generation method and device, electronic equipment and storage medium
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
US11457178B2 (en) Three-dimensional modeling inside a virtual video conferencing environment with a navigable avatar, and applications thereof
CN111064919A (en) VR (virtual reality) teleconference method and device
CN110610546B (en) Video picture display method, device, terminal and storage medium
CN110536095A (en) Call method, device, terminal and storage medium
US20230236713A1 (en) Established perspective user interface and user experience for video meetings
CN114549744A (en) Method for constructing virtual three-dimensional conference scene, server and AR (augmented reality) equipment
US20240087236A1 (en) Navigating a virtual camera to a video avatar in a three-dimensional virtual environment, and applications thereof
US11928774B2 (en) Multi-screen presentation in a virtual videoconferencing environment
CN110413109A (en) Generation method, device, system, electronic equipment and the storage medium of virtual content
CN116420351A (en) Providing 3D representations of sending participants in a virtual conference
US11776227B1 (en) Avatar background alteration
US11741652B1 (en) Volumetric avatar rendering
US11748939B1 (en) Selecting a point to navigate video avatars in a three-dimensional environment
US12028651B1 (en) Integrating two-dimensional video conference platforms into a three-dimensional virtual environment
US20240007593A1 (en) Session transfer in a virtual videoconferencing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant