CN108513090B - Method and device for group video session

Info

Publication number
CN108513090B
Authority
CN
China
Prior art keywords
user
interaction model
dimensional interaction
dimensional
operation instruction
Prior art date
Legal status
Active
Application number
CN201710104669.9A
Other languages
Chinese (zh)
Other versions
CN108513090A (en)
Inventor
李凯
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710104669.9A
Priority to PCT/CN2018/075749 (published as WO2018153267A1)
Priority to TW107106428A (published as TWI650675B)
Publication of CN108513090A
Priority to US16/435,733 (published as US10609334B2)
Application granted
Publication of CN108513090B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method and device for group video sessions, belonging to the technical field of virtual reality (VR). The method comprises: during a group video session, acquiring a three-dimensional interaction model of a target object to be displayed; processing the three-dimensional interaction model according to the viewing angle of each of a plurality of users in the group video session to obtain that user's video data, the video data comprising model data obtained by performing a viewing-angle transformation on the three-dimensional interaction model; and sending the video data of the plurality of users to the terminals of those users, respectively. With the invention, multiple users in a group video session can each experience the same three-dimensional interaction model from their own viewing angle and communicate through it, expanding the available communication modes and thereby improving the efficiency of the video session.

Description

Method and device for group video session
Technical Field
The present invention relates to the field of virtual reality (VR) technology, and in particular to a method and apparatus for group video sessions.
Background
VR technology is a technology for creating and experiencing virtual worlds: it can simulate a realistic environment and intelligently sense a user's behavior, giving the user a sense of being physically present. Consequently, applications of VR technology to social interaction have attracted wide attention, and methods for conducting group video based on VR technology have emerged.
In a group video session, the server may provide a virtual environment for the participating users, together with an avatar that expresses each user's own image. The server can then present a given user's virtual character, the virtual environment, and that user's audio data to the other users as video, so that the users can communicate with one another in a virtualized world. However, because such a session displays only the users' virtual characters, communication is confined to conversation; no other communication modes can be added to the group video session, so the actual efficiency of the video session is low.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for group video sessions. The technical solutions are as follows:
in one aspect, a method for group video session is provided, the method comprising:
in the group video session process, acquiring a three-dimensional interaction model of a target object to be displayed;
processing the three-dimensional interaction model of the target object according to the view angle of each user in the plurality of users in the group video session to obtain video data of the user, wherein the video data of the user comprises model data obtained by performing view angle transformation on the three-dimensional interaction model of the target object;
and respectively sending the video data of the plurality of users to the terminals where the plurality of users are located.
In another aspect, an apparatus for group video sessions is provided, the apparatus comprising:
the interactive model acquisition module is used for acquiring a three-dimensional interactive model of a target object to be displayed in the group video session process;
the processing module is used for processing the three-dimensional interaction model of the target object according to the view angle of each user in the plurality of users in the group video session to obtain video data of the user, wherein the video data of the user comprises model data obtained by performing view angle transformation on the three-dimensional interaction model of the target object;
and the sending module is used for respectively sending the video data of the users to the terminals where the users are located.
In the embodiments of the present invention, a three-dimensional interaction model of a target object to be displayed is acquired, the model is processed according to the viewing angle of each user in the group video session to obtain video data in which the model has undergone a viewing-angle transformation, and that video data is sent to the terminals where the plurality of users are located. Multiple users can therefore experience the same three-dimensional interaction model from their own viewing angles during the group video session and communicate through it, which expands the available communication modes and thereby improves the efficiency of the video session.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a group video session according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for group video session according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional interaction model provided by an embodiment of the invention;
FIG. 4 is a flow chart of adjusting a three-dimensional interaction model according to an embodiment of the present invention;
FIG. 5 is a flow chart of an interaction provided by an embodiment of the invention;
fig. 6 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 7 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 8 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 9 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 10 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 11 is a block diagram of an apparatus for group video session according to an embodiment of the present invention;
fig. 12 is a block diagram of an apparatus 1200 for group video session according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a group video session according to an embodiment of the present invention. Referring to fig. 1, the implementation environment includes:
at least one terminal 101 (e.g., a mobile terminal or a desktop computer), at least one VR device 102, and at least one server 103. The server 103 is configured to acquire a three-dimensional interaction model, process the three-dimensional interaction model according to a user's viewing angle, and send the resulting video data to the conventional terminal 101 or the VR device 102. The conventional terminal 101 and the VR device 102 are configured to receive and display the video data sent by the server 103.
In addition, the server 103 may be configured with at least one database, such as an avatar database, a multimedia database, and a user relationship chain database. The avatar database stores configured virtual characters, from which a virtual host may be selected; the multimedia database stores multimedia files, such as video files and audio files; the user relationship chain database stores each user's relationship chain data, which indicates, for example, the users who have a friend or group relationship with that user.
Fig. 2 is a flowchart of a method for group video session according to an embodiment of the present invention. Referring to fig. 2, the method is applied to a server, and specifically includes:
201. During the group video session, the server obtains a three-dimensional interaction model of a target object to be displayed.
A group video session refers to a video session conducted by multiple (two or more) users via a server. The multiple users may be users of the social platform corresponding to the server and may be in a group relationship or a friend relationship with one another. The target object is a real object that some user wants to show in the group video session. The three-dimensional interaction model is a three-dimensional model generated from the target object and displayed in the video data of the plurality of users under the control of any user in the group video session. For example, fig. 3 is a schematic diagram of a three-dimensional interaction model provided by an embodiment of the present invention. Referring to fig. 3, the three-dimensional interaction model may be, for example, a three-dimensional geometric model, a three-dimensional automobile model, or a three-dimensional chart model.
In this step, the server may obtain the three-dimensional interaction model in a variety of ways. For example, the server may acquire a three-dimensional object model uploaded by a fifth user. In this example, the three-dimensional interaction model may be a model the fifth user created with CAD (Computer Aided Design) software, such as a three-dimensional automobile model.
For another example, the server obtains a two-dimensional table uploaded by a sixth user and processes it to obtain a three-dimensional table model. In this example, the server may generate the three-dimensional table model directly from an Excel table. Alternatively, the server may build a three-dimensional coordinate model (x, y, z). For example, when the two-dimensional table has two parameters (e.g., class and number of people), the server may generate a bar-graph three-dimensional table model by representing the different "class" parameter values as different planar areas on the (x, y) plane and using the "number of people" value corresponding to each "class" value as the z-coordinate for that "class" value. Of course, by analogy with this example, the server may generate three-dimensional table models in other forms, such as pie charts. Moreover, when generating the three-dimensional table model, the server may also set its color scheme, e.g., different parameters corresponding to different hues.
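As an illustration of the bar-graph construction described above, the following minimal Python sketch maps a two-parameter table to three-dimensional bar geometry, with each "class" value occupying its own footprint on the (x, y) plane and the "number of people" value supplying the z-coordinate. The Bar3D fields, footprint sizes, and color palette are illustrative assumptions, not details fixed by the embodiment.

```python
# Hypothetical sketch: convert a two-column table (class, number of
# people) into bar-graph geometry for a three-dimensional table model.
from dataclasses import dataclass

@dataclass
class Bar3D:
    x: float          # left edge of this bar's footprint on the (x, y) plane
    y: float
    width: float      # planar area reserved for one "class" value
    depth: float
    height: float     # z-coordinate taken from the "number of people" value
    label: str
    color: str        # different parameters may map to different hues

def table_to_bars(rows, width=1.0, depth=1.0, gap=0.5):
    """rows: [(class_label, headcount), ...] -> list of Bar3D."""
    palette = ["#4e79a7", "#f28e2b", "#59a14f", "#e15759"]
    bars = []
    for i, (label, count) in enumerate(rows):
        bars.append(Bar3D(
            x=i * (width + gap), y=0.0,
            width=width, depth=depth,
            height=float(count),
            label=label,
            color=palette[i % len(palette)],
        ))
    return bars

bars = table_to_bars([("Class 1", 42), ("Class 2", 38), ("Class 3", 45)])
```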
In fact, the server may also perform three-dimensional modeling of an object based on at least one two-dimensional image of the object uploaded by a user, for example using an SFS (Shape from Shading) algorithm, thereby obtaining a three-dimensional interaction model.
The fifth user or the sixth user may be any user in the group video session. Further, the fifth user or the sixth user may be a user having upload rights. The embodiment of the present invention does not limit which users have upload rights; for example, the user with upload rights may be the initiator of the group video session, or a VIP (Very Important Person) user.
202. The server processes the three-dimensional interaction model of the target object according to the viewing angle of each user in the plurality of users in the group video session to obtain the video data of that user, the video data comprising model data obtained by performing a viewing-angle transformation on the three-dimensional interaction model of the target object.
In this step, the server may obtain viewing-angle data for each user in the group video session, determine the user's viewing angle from that data and the display position of the user's virtual character, extract the image data of the three-dimensional interaction model corresponding to that viewing angle, synthesize the extracted image data with the session environment data, and stereoscopically encode the synthesized image data, thereby obtaining one frame of the user's video data. The embodiment of the present invention does not limit the stereoscopic encoding method. For example, following the interlaced display principle, the server may encode the synthesized image data into two fields, an odd field formed by the odd scan lines and an even field formed by the even scan lines, so that when the VR device receives the video data it can display the two fields interlaced on the left-eye and right-eye screens, producing parallax between the user's eyes and achieving a three-dimensional display effect. The session environment data includes, but is not limited to, the virtual environment corresponding to the group video session, the avatar of each of the plurality of users, each user's audio data, and the like.
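The interlacing idea above can be sketched as follows. This is a simplified illustration that splits a synthesized frame into odd and even fields by scan line, assuming frames are numpy arrays; it is not the embodiment's actual encoder.

```python
import numpy as np

def split_into_fields(frame: np.ndarray):
    """Split one synthesized frame into the two fields described above:
    the odd field (scan lines 1, 3, 5, ...) and the even field (scan
    lines 2, 4, 6, ...), which a VR device can display interlaced on
    the left- and right-eye screens to produce parallax."""
    odd_field = frame[0::2]    # 1st, 3rd, 5th, ... scan lines
    even_field = frame[1::2]   # 2nd, 4th, 6th, ... scan lines
    return odd_field, even_field

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder frame
odd, even = split_into_fields(frame)
```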
It should be noted that the embodiment of the present invention does not limit how the viewing-angle data is acquired. For example, the server may obtain viewing-angle data corresponding to head-orientation feature data collected by a sensor of the user's device. For another example, the server may obtain gaze-direction feature data of the user from eye image data captured by the user's camera and determine the user's viewing-angle data from the eyeball position indicated by that gaze-direction feature data.
In fact, to better show the three-dimensional interaction model, the server may also determine the display position of the three-dimensional interaction model in different ways before obtaining the video data. For example, the server may be configured with a default display position, which may be the position facing the virtual characters of the plurality of users. For another example, the server may use the position beside the user who uploaded the three-dimensional interaction model as the display position, making it convenient for that user to give a demonstration of the model.
In the embodiment of the present invention, in order to further expand the communication modes available in the group video session and improve its actual efficiency, when the server receives an operation instruction for the three-dimensional interaction model, it can adjust the model according to the operation mode corresponding to that instruction and then, based on the adjusted model, perform the processing and sending steps according to the viewing angle of each of the plurality of users in the group video session. The operation instruction indicates that the three-dimensional interaction model is to be adjusted in the corresponding operation mode. The embodiment of the present invention does not limit how the operation instruction is obtained; for example, the server may employ the following two acquisition modes:
the acquisition method 1 includes the steps that the server acquires gesture feature data of a first user, and when the gesture feature data are matched with any operation method of the three-dimensional interaction model, an operation instruction corresponding to the operation method is determined to be received.
The gesture feature data represents the first user's gesture and may be acquired in various ways, for example with a camera or a gesture sensor. Taking a gesture sensor on the first user's VR device as an example, the server may obtain the gesture feature data collected by the sensor, determine the first user's gesture from it, and, when the gesture matches a preset gesture (for example, pointing left, right, up, or down), look up the operation mode corresponding to that preset gesture and generate the operation instruction corresponding to that operation mode. The embodiment of the present invention does not limit the specific operation modes. For example, referring to Table 1, an embodiment of the present invention provides a preset correspondence between gestures and operation modes:
TABLE 1
(Table 1 is reproduced only as images in the source publication; it lists preset gestures, such as pointing left, right, up, or down, and the operation mode on the three-dimensional interaction model that each gesture corresponds to.)
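A minimal sketch of acquisition mode 1 follows. Since Table 1 survives only as images, the gesture names and the gesture-to-operation mapping below are assumptions standing in for its contents, and recognize() is a stub for the sensor-side gesture classifier.

```python
# Assumed stand-in for Table 1: preset gestures mapped to operation modes.
PRESET_GESTURES = {
    "point_left":  {"type": "rotate", "direction": "left"},
    "point_right": {"type": "rotate", "direction": "right"},
    "point_up":    {"type": "shift",  "direction": "up"},
    "point_down":  {"type": "shift",  "direction": "down"},
}

def recognize(gesture_features):
    # Stub classifier: assume upstream sensor processing has already
    # reduced the raw gesture feature data to a discrete label.
    return gesture_features.get("label", "unknown")

def instruction_from_gesture(gesture_features):
    """Return the operation instruction matching the recognized gesture,
    or None when no preset gesture matches (no instruction received)."""
    return PRESET_GESTURES.get(recognize(gesture_features))

instr = instruction_from_gesture({"label": "point_left"})  # rotate left
```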
Acquisition mode 2: the server acquires operation information of a second user on an external device and, when the operation information matches any operation mode of the three-dimensional interaction model, determines that an operation instruction corresponding to that operation mode has been received; the external device is bound to the terminal where the second user is located.
The external device may be a mouse or a keyboard. When the server acquires the second user's operation information on the external device, it can judge whether an operation mode corresponding to that operation information exists and, if so, generate the operation instruction corresponding to that operation mode. Referring to Table 2, an embodiment of the present invention provides a preset correspondence between operation information and operation modes:
TABLE 2

Operation information                          Operation mode
Left mouse click                               Enlarge the three-dimensional interaction model
Right mouse click                              Shrink the three-dimensional interaction model
Press and hold the left button while moving    Rotate the three-dimensional interaction model in the direction of mouse movement
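Acquisition mode 2 can be sketched the same way, mirroring Table 2. The event names, the 10% zoom steps, and the 10-degrees-per-centimeter drag rotation are illustrative assumptions, the latter echoing the example given for the rotation adjustment below.

```python
def instruction_from_operation(event):
    """event is assumed to look like {'name': ..., 'direction': ...,
    'distance_cm': ...}; returns an operation instruction or None."""
    if event["name"] == "left_click":
        return {"type": "zoom", "scale": 1.10}    # enlarge the model
    if event["name"] == "right_click":
        return {"type": "zoom", "scale": 0.90}    # shrink the model
    if event["name"] == "left_drag":              # long press + move
        return {"type": "rotate",
                "direction": event["direction"],  # follow mouse direction
                "angle_deg": event["distance_cm"] * 10}
    return None                                   # no matching operation mode
```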
Of course, the first user and the second user may be any user in the group video session, or may also be users having an operation right on the three-dimensional interaction model, which is not limited in the embodiment of the present invention.
In an actual application scenario, in order to provide interactive services intelligently, the user may also be prompted that the three-dimensional interaction model can be operated and how to operate it. The embodiment of the present invention does not limit when the prompt is given. For example, the prompt may be given as soon as it is determined that the user needs to operate the three-dimensional interaction model: when the server detects that a seventh user's gaze has stayed on the three-dimensional interaction model longer than a preset duration, it sends operation prompt information to the terminal where the seventh user is located, the operation prompt information indicating that the seventh user can operate the three-dimensional interaction model.
The seventh user is construed in the same way as the first user. In the above example, the server may monitor the seventh user's gaze direction in real time and start timing once that gaze direction is detected to be aligned with the three-dimensional interaction model. When the timed duration (i.e., the gaze duration) exceeds the preset duration, the seventh user probably needs to operate the three-dimensional interaction model, so the server sends the operation prompt information to the terminal where the seventh user is located. The embodiment of the present invention does not limit the specific content of the operation prompt information. For example, if the server supports mouse operation, the operation prompt information may include the text prompt "The automobile model can be operated with the mouse" together with the specific mouse operations, for example "click the left mouse button to enlarge the automobile model" and "click the right mouse button to shrink the automobile model".
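The gaze-duration trigger might look like the following sketch, assuming a 3-second preset duration and a caller that feeds in one gaze sample at a time; both assumptions are illustrative.

```python
import time

GAZE_THRESHOLD_S = 3.0  # assumed "preset duration"

class GazePromptMonitor:
    """Fires the operation prompt once a user's gaze has stayed on the
    three-dimensional interaction model longer than the threshold."""

    def __init__(self, send_prompt):
        self.send_prompt = send_prompt  # callable: deliver prompt to terminal
        self.gaze_started = None
        self.prompted = False

    def on_gaze_sample(self, looking_at_model: bool):
        if not looking_at_model:
            self.gaze_started = None    # gaze left the model: reset the timer
            return
        if self.gaze_started is None:
            self.gaze_started = time.monotonic()
        elif (not self.prompted
              and time.monotonic() - self.gaze_started > GAZE_THRESHOLD_S):
            self.send_prompt("The automobile model can be operated with the "
                             "mouse: left-click to enlarge it, right-click "
                             "to shrink it.")
            self.prompted = True
```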
Through the user's operation, the server obtains the operation instruction and adjusts the three-dimensional interaction model in the operation mode corresponding to that instruction. The embodiment of the present invention does not limit the specific adjustment process. For example, when the operation instruction is a rotation operation instruction, a zooming operation instruction, or a shift operation instruction, the corresponding adjustment processes may be as follows:
and in the adjusting process 1, when the operation instruction is a rotation operation instruction, the server acquires a rotation angle and a rotation direction corresponding to the rotation operation instruction, and rotates the three-dimensional interaction model according to the rotation angle and the rotation direction.
In this adjusting process, the server can extract the rotation angle and rotation direction carried in the rotation operation instruction and rotate the three-dimensional interaction model based on these two parameters and the model as seen from the current user's viewing angle. The rotation angle and rotation direction are determined when the rotation operation instruction is generated, and the embodiment of the present invention does not limit how. For example, when the instruction is generated from gesture feature data, the rotation direction may be the same as the gesture direction, and the rotation angle may be a default value, e.g., 30 degrees, or determined from the duration of the gesture, e.g., 30 degrees multiplied by the duration in seconds. For another example, when the instruction is generated from operation information, the rotation direction may follow the moving direction of the external device, and the rotation angle may be determined from its moving distance, e.g., 10 degrees per centimeter moved.
Adjusting process 2: when the operation instruction is a zooming operation instruction, the server acquires the reduction ratio or enlargement ratio corresponding to the instruction and reduces or enlarges the three-dimensional interaction model according to that ratio.
In this adjusting process, the server can extract the reduction ratio or enlargement ratio carried in the zooming operation instruction and zoom the three-dimensional interaction model based on that ratio and the model as seen from the current user's viewing angle. The zoom ratio can be determined when the zooming operation instruction is generated, and the embodiment of the present invention does not limit how. For example, when the instruction is generated from operation information, each operation may correspond to a default ratio, e.g., one click of the left mouse button enlarges the three-dimensional interaction model by 10%.
Adjusting process 3: when the operation instruction is a shift operation instruction, the server acquires the shift direction and shift distance corresponding to the instruction and shifts the three-dimensional interaction model according to that shift direction and shift distance.
In this adjusting process, the server can extract the shift direction and shift distance carried in the shift operation instruction and shift the three-dimensional interaction model based on these two parameters and the model as seen from the current user's viewing angle. The shift direction and shift distance may be determined when the shift operation instruction is generated, and the embodiment of the present invention does not limit how. For example, when the instruction is generated from gesture feature data, the shift direction may be the same as the gesture direction, and the shift distance may be determined from the duration of the gesture, e.g., 10% of the model's length per second. For another example, when the instruction is generated from operation information, the shift direction may follow the moving direction of the external device, and the shift distance may be determined from its moving distance, e.g., 5% of the model's length per centimeter moved.
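The three adjusting processes reduce to simple updates of a model transform, as in the sketch below; restricting rotation to the vertical axis and defaulting to 30 degrees are simplifying assumptions, not requirements of the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class ModelTransform:
    rotation_deg: float = 0.0                     # about the vertical axis
    scale: float = 1.0
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def apply_instruction(t: ModelTransform, instr: dict) -> ModelTransform:
    if instr["type"] == "rotate":                 # adjusting process 1
        sign = -1 if instr["direction"] == "left" else 1
        t.rotation_deg += sign * instr.get("angle_deg", 30.0)  # default 30 deg
    elif instr["type"] == "zoom":                 # adjusting process 2
        t.scale *= instr["scale"]                 # reduce or enlarge
    elif instr["type"] == "shift":                # adjusting process 3
        dx, dy = instr["offset"]
        t.position[0] += dx
        t.position[1] += dy
    return t

t = apply_instruction(ModelTransform(),
                      {"type": "rotate", "direction": "left"})
```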
Of course, the server may receive two or more operation instructions simultaneously, in which case it may carry out the corresponding adjustment processes either in series or in parallel. For example, when the server receives a rotation operation instruction and a shift operation instruction at the same time, it may first rotate and then shift the three-dimensional interaction model in order to show the change process more clearly; alternatively, it may rotate and shift the model simultaneously so that the adjustment tracks the user's operation.
It should be noted that during the adjustment the server may generate video data frame by frame in real time: for each frame, it synthesizes the currently adjusted three-dimensional interaction model with the session environment data according to the user's current viewing angle and encodes the result, thereby showing the user the dynamic adjustment process of the model.
In addition, it should be noted that in the above adjustment process the server may serve each user independently, processing the three-dimensional interaction model according to the operation instruction triggered by each user to obtain that user's video data. When operating the three-dimensional interaction model requires an operation right, the server may instead process the model according to the operation instruction triggered by the user holding that right and the viewing angle of each user, thereby obtaining each user's video data. To illustrate the flow clearly, referring to fig. 4, an embodiment of the present invention provides a flowchart for adjusting a three-dimensional interaction model: the server obtains the model, monitors the user's gaze direction, obtains operation information, and then adjusts the model in the operation mode corresponding to that operation information.
During the group video session, in order to keep the video session of multiple users orderly and to highlight a given user's speech, the server may, upon receiving a speaking request from a third user, generate specified video data showing the virtual microphone being passed from the virtual host to the third user's virtual character, and then perform the processing and sending steps according to the viewing angle of each of the plurality of users in the group video session based on that specified video data.
The third user may be any user in the group video session. The embodiment of the present invention does not limit how the speaking request is triggered; for example, it may be triggered automatically when the server receives the third user's audio data, or when designated operation information of the third user is detected, such as a double click of the left mouse button. The virtual host may be a virtual character obtained by the server from the avatar database, or the virtual character of one of the users in the group video session, and the embodiment of the present invention does not limit how the server obtains it. For example, the server may select a virtual host matching the group attribute of the group corresponding to the session: when the group attribute is a class, the matched virtual host wears a school uniform; when the group attribute is a company, it wears a suit. For another example, the server may randomly designate one user's virtual character as the virtual host; or, when the group video session starts, the server may send voting information to the VR devices, the voting information including at least the user information of the plurality of users. Each VR device displays a voting interface according to the voting information; when a user A selects user information B on that interface, the server determines that user A has voted for the user B corresponding to that information. The server then counts the votes and takes the virtual character of the user with the most votes as the virtual host.
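The vote-counting path for choosing the virtual host is straightforward; this sketch tallies each user's selection and returns the user with the most votes, with first-seen tie-breaking as an assumption the embodiment leaves open.

```python
from collections import Counter

def elect_virtual_host(votes: dict) -> str:
    """votes maps voter_id -> selected user_id; the virtual character of
    the returned user becomes the virtual host."""
    tally = Counter(votes.values())
    winner, _count = tally.most_common(1)[0]
    return winner

host = elect_virtual_host({"a": "b", "c": "b", "d": "e"})  # -> "b"
```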
Based on the above description, when the server receives the third user's speaking request, it may determine the moving path of the virtual microphone from the third user's display position C in the virtual environment and the virtual microphone's current display position D; the moving path may run from D to C, or, taking the virtual host's display position E into account, from D to E to C. The server may then generate the specified video data frame by frame along this moving path so as to dynamically depict the hand-off of the virtual microphone, and then process and send the video data according to each user's viewing angle. Of course, to display the virtual microphone more naturally, the server may determine a lifting path for the arm model of the third user's virtual character when the microphone reaches the third user's display position, so that at least one generated frame of specified video data shows the arm model being raised to hold the virtual microphone. In addition, during the hand-off the server may synthesize specified audio data of the virtual host into the specified video data, the specified audio data indicating that the third user is about to speak, e.g., a sentence such as "The third user will now speak".
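The hand-off animation can be sketched as waypoint generation along the moving path, one waypoint per frame of specified video data; linear interpolation and 30 frames per leg are illustrative assumptions.

```python
def lerp(p, q, t):
    """Linear interpolation between two 3-D points."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def microphone_waypoints(d, c, e=None, frames_per_leg=30):
    """Yield positions along D -> C, or D -> E -> C when the path passes
    the virtual host's display position E."""
    legs = [(d, e), (e, c)] if e is not None else [(d, c)]
    for start, end in legs:
        for i in range(frames_per_leg):
            yield lerp(start, end, i / (frames_per_leg - 1))

# One waypoint is consumed per generated frame of specified video data.
path = list(microphone_waypoints(d=(0, 0, 0), c=(4, 0, 2), e=(2, 1, 1)))
```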
In fact, besides passing the virtual microphone as described above, a user's speech may be highlighted in other ways. For example, when the server receives a speaking request from a third user, it may reduce the volume of the audio data of a fourth user, the fourth user being any user in the group video session other than the third user, and then perform the processing and sending steps according to the viewing angle of each of the plurality of users based on the adjusted audio data. In this example, the server may adjust the volume V2 of the fourth user's audio data to be less than the volume V1 of the third user's audio data.
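The volume rule in this example, V2 < V1, can be sketched as simple ducking; the 0.5 factor is an illustrative assumption.

```python
def duck_other_users(volumes: dict, speaker: str, factor: float = 0.5) -> dict:
    """Scale every non-speaker's volume so it ends up below the
    speaker's volume V1 (assumes 0 < factor < 1)."""
    v1 = volumes[speaker]
    return {user: vol if user == speaker else min(vol, v1) * factor
            for user, vol in volumes.items()}

mixed = duck_other_users({"third_user": 1.0, "fourth_user": 0.9},
                         speaker="third_user")
```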
It should be noted that the above two methods of highlighting a user's speech may also be combined: when the server receives the third user's speaking request, it may generate the specified video data showing the virtual microphone being passed from the virtual host to the third user's virtual character while also reducing the volume of the fourth user's audio data in that specified video data.
In an actual application scenario, the server may receive the fourth user's speaking request while the third user is speaking. The embodiment of the present invention does not limit how the server handles this. For example, the server may hold the fourth user's speaking request until it detects that the third user's audio data has ended, and then process it, in the order the requests were received, in the same way as the third user's request. Of course, while the fourth user is waiting to speak, the server may send speaking prompt information to the terminal where the fourth user is located; this prompt indicates when the fourth user will speak and may include text such as "You are next to speak".
In the embodiment of the present invention, in order to further improve the efficiency of the group video session and expand the interaction modes available during it, when the server receives a multimedia file playing request it can synthesize the corresponding multimedia file into the video data of the plurality of users. The multimedia file may be, for example, an audio file, a video file, or a text file. The playing request may carry the multimedia file directly, or carry its file identifier so that the server fetches the file from a multimedia database or from the network. The embodiment of the present invention does not limit the synthesis method in this extended interaction mode. For example, when the multimedia file is an audio file, the server may mix it into the video data as background audio; when it is a video file, the server may synthesize it into the virtual environment opposite each user according to that user's viewing angle, so that the video file is embedded in the virtual environment like a playing screen.
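A minimal sketch of this synthesis step, assuming media and session state are plain dictionaries: an audio file becomes shared background audio, while a video file is attached as a virtual "screen" facing each user's viewing angle.

```python
def synthesize_multimedia(media, users, session):
    """media: {'kind': 'audio' | 'video', 'payload': ...}."""
    if media["kind"] == "audio":
        # background audio is mixed into every user's video data
        session["background_audio"] = media["payload"]
    elif media["kind"] == "video":
        for user in users:
            # embed the video as a screen in the virtual environment
            # opposite this user's viewing angle
            user["virtual_screen"] = {"content": media["payload"],
                                      "facing": user["view_angle"]}
```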
Based on these extended interaction modes, referring to fig. 5, an embodiment of the present invention provides an interaction flowchart: the server may grant user 1 the right to operate the three-dimensional interaction model and grant user 2 the right to play multimedia files. The server can then adjust the three-dimensional interaction model based on user 1's operation information, providing a model-operation service, and synthesize a multimedia file into the video data based on user 2's playing request, providing a multimedia-sharing service.
203. The server sends the video data of the plurality of users to the terminals where those users are located, respectively.
In this step, a terminal that receives the video data can display it, and because the video data has been processed according to each user's viewing angle, every user sees the three-dimensional interaction model from his or her own perspective.
It should be noted that when a user uses a VR device, the server may send the video data directly to that VR device, whereas when a user uses a conventional terminal, the server may, while processing the three-dimensional interaction model, extract two-dimensional video data at a given viewing angle and send that to the conventional terminal, so that multiple users can communicate freely regardless of device type.
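Device-aware delivery then reduces to a dispatch like the sketch below, where project_2d stands in for the server's (unspecified) single-view extraction.

```python
def deliver(user, stereo_frames, project_2d):
    """Send stereoscopic frames to VR devices; flatten to 2-D video at a
    fixed viewing angle for conventional terminals."""
    if user["device"] == "vr":
        return stereo_frames                       # interlaced-field stream
    return [project_2d(frame, angle=user["view_angle"])
            for frame in stereo_frames]
```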
In the embodiments of the present invention, a three-dimensional interaction model of a target object to be displayed is acquired, the model is processed according to the viewing angle of each user in the group video session to obtain video data in which the model has undergone a viewing-angle transformation, and that video data is sent to the terminals where the plurality of users are located. Multiple users can therefore experience the same three-dimensional interaction model from their own viewing angles during the group video session and communicate through it, which expands the available communication modes and thereby improves the efficiency of the video session.
In addition, when an operation instruction for the three-dimensional interaction model is received, the model can be adjusted in the operation mode corresponding to that instruction, providing users with a service for operating the model; video data based on the adjusted model can then be sent to the plurality of users, so that they interact around the same three-dimensional interaction model, further improving the efficiency of the video session.
In addition, at least two modes of acquiring the operation instruction are provided: from the gesture feature data of a first user, determining that the operation instruction corresponding to an operation mode of the three-dimensional interaction model has been received when the gesture feature data matches that operation mode; and from a second user's operation information on an external device, determining likewise when the operation information matches an operation mode. An operation instruction can thus be triggered intelligently from the user's gesture or from the user's operation information, providing diverse acquisition modes and stronger operability.
In addition, at least three processes of adjusting the three-dimensional interaction model according to the operation instruction are provided: rotating the model according to a rotation operation instruction, reducing or enlarging it according to a zooming operation instruction, and shifting it according to a shift operation instruction. These diverse adjustment modes increase the interactivity of the video session and further improve its efficiency.
In addition, in order to keep the group video session orderly and highlight a given user's speech, at least two methods of handling a speaking request are provided, such as generating specified video data showing the virtual microphone being passed from the virtual host to the third user's virtual character, or reducing the volume of the fourth user's audio data.
In addition, at least two ways of obtaining the three-dimensional interaction model are provided, for example, a three-dimensional object model uploaded by a fifth user is obtained, or a two-dimensional table uploaded by a sixth user is obtained and processed to obtain a three-dimensional table model, so that diversified three-dimensional interaction models can be provided.
In addition, the communication mode during the video session is further expanded, for example, when a multimedia file playing request is received, the multimedia file can be synthesized into the video data of a plurality of users, so that the plurality of users can share the multimedia file.
In addition, in order to provide intelligent interactive services and prompt the user that the three-dimensional interaction model can be operated and how, when it is detected that the seventh user's gaze has stayed on the three-dimensional interaction model longer than the preset duration, the seventh user probably needs to operate the model; operation prompt information can therefore be sent to the terminal where the seventh user is located, promptly informing the seventh user that the model can be operated.
Fig. 6 is a block diagram of an apparatus for group video session according to an embodiment of the present invention. Referring to fig. 6, the apparatus specifically includes:
the interactive model acquisition module 601 is used for acquiring a three-dimensional interactive model of a target object to be displayed in the group video session process;
the processing module 602 is configured to process the three-dimensional interaction model of the target object according to a view angle of each user in the plurality of users in the group video session to obtain video data of the user, where the video data of the user includes model data obtained by performing view angle transformation on the three-dimensional interaction model of the target object;
a sending module 603, configured to send video data of multiple users to terminals where the multiple users are located respectively.
In the embodiments of the present invention, a three-dimensional interaction model of a target object to be displayed is acquired, the model is processed according to the viewing angle of each user in the group video session to obtain video data in which the model has undergone a viewing-angle transformation, and that video data is sent to the terminals where the plurality of users are located. Multiple users can therefore experience the same three-dimensional interaction model from their own viewing angles during the group video session and communicate through it, which expands the available communication modes and thereby improves the efficiency of the video session.
In one possible implementation, based on the apparatus composition of fig. 6, see fig. 7, the apparatus further comprises: an adjustment module 604;
the adjusting module 604 is configured to, when an operation instruction for the three-dimensional interaction model is received, adjust the three-dimensional interaction model according to an operation mode corresponding to the operation instruction;
a processing module 602, configured to perform, based on the adjusted three-dimensional interaction model, a step of processing according to a viewing angle of each user of the plurality of users in the group video session;
a sending module 603, configured to send the video data processed by the processing module according to the view angle of each user in the multiple users in the group video session.
In one possible implementation, based on the apparatus composition of fig. 6, see fig. 8, the apparatus further comprises:
the gesture obtaining module 605 is configured to obtain gesture feature data of the first user and, when the gesture feature data matches any operation mode of the three-dimensional interaction model, determine that an operation instruction corresponding to the operation mode has been received; or,
the operation information obtaining module 606 is configured to obtain operation information of the second user on an external device and, when the operation information matches any operation mode of the three-dimensional interaction model, determine that an operation instruction corresponding to the operation mode has been received, the external device being bound to the terminal where the second user is located.
In one possible implementation, the adjusting module 604 is configured to: when the operation instruction is a rotation operation instruction, acquire the rotation angle and rotation direction corresponding to the instruction and rotate the three-dimensional interaction model accordingly; and/or: when the operation instruction is a zooming operation instruction, acquire the reduction ratio or enlargement ratio corresponding to the instruction and reduce or enlarge the three-dimensional interaction model according to that ratio; and/or: when the operation instruction is a shift operation instruction, acquire the shift direction and shift distance corresponding to the instruction and shift the three-dimensional interaction model accordingly.
In one possible implementation, based on the apparatus composition of fig. 6, referring to fig. 9, the apparatus further comprises:
a generating module 607, configured to generate specified video data when receiving a speaking request of a third user, where the specified video data is used to show a process in which a virtual microphone is transferred from a virtual host to a virtual character of the third user;
a processing module 602, configured to perform, based on the specified video data, a step of processing according to a view angle of each of a plurality of users in the group video session;
a sending module 603, configured to send the specified video data processed by the processing module according to the view angle of each user in the multiple users in the group video session.
In one possible implementation, based on the apparatus composition of fig. 6, referring to fig. 10, the apparatus further includes:
a decreasing module 608, configured to decrease, when a speech request of a third user is received, a volume of audio data of a fourth user, where the fourth user is a user other than the third user in the group video session;
a processing module 602, configured to perform, based on the adjusted audio data, a step of processing according to a viewing angle of each of a plurality of users in the group video session;
a sending module 603, configured to send the video data processed by the processing module according to the view angle of each user in the multiple users in the group video session.
In one possible implementation, the interaction model obtaining module 601 is configured to: acquire a three-dimensional object model uploaded by a fifth user; or, acquire a two-dimensional table uploaded by a sixth user and process the two-dimensional table to obtain a three-dimensional table model.
In one possible implementation, based on the apparatus composition of fig. 6, referring to fig. 11, the apparatus further includes: the synthesizing module 609 is configured to synthesize, when receiving a multimedia file playing request, a multimedia file corresponding to the multimedia file playing request to video data of multiple users.
In one possible implementation, the sending module 603 is further configured to: when it is detected that a seventh user's gaze has stayed on the three-dimensional interaction model longer than a preset duration, send operation prompt information to the terminal where the seventh user is located, the operation prompt information prompting the seventh user to operate the three-dimensional interaction model.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
It should be noted that the apparatus for group video sessions provided in the foregoing embodiments is illustrated only by the division of functional modules described above; in practical applications, these functions may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments of the group video session provided above belong to the same concept; their specific implementation is detailed in the method embodiments and is not repeated here.
Fig. 12 is a block diagram of an apparatus 1200 for group video session according to an embodiment of the present invention. Referring to fig. 12, the apparatus 1200 may be provided as a server including a processing component 1222 that further includes one or more processors, and memory resources, represented by memory 1232, for storing instructions, such as applications, executable by the processing component 1222. The application programs stored in memory 1232 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1222 is configured to execute instructions to perform the method of group video sessions in the embodiment of fig. 2.
The apparatus 1200 may also include a power supply component 1226 configured to perform power management of the apparatus 1200, a wired or wireless network interface 1250 configured to connect the apparatus 1200 to a network, and an input/output (I/O) interface 1258. The apparatus 1200 may operate based on an operating system stored in the memory 1232, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (20)

1. A method for a group video session, applied to a virtual reality environment including avatars of a plurality of users conducting the group video session, the method comprising:
in the group video session process, acquiring a three-dimensional interaction model of a target object to be displayed;
processing the three-dimensional interaction model of the target object according to the view angle of each user in the plurality of users in the group video session to obtain video data of the user, wherein the video data of the user comprises model data obtained by performing view angle transformation on the three-dimensional interaction model of the target object;
and respectively sending the video data of the plurality of users to the terminals where the plurality of users are located.
2. The method of claim 1, wherein after obtaining the three-dimensional interaction model of the object to be displayed during the group video session, the method further comprises:
and when an operation instruction for the three-dimensional interaction model is received, adjusting the three-dimensional interaction model according to an operation mode corresponding to the operation instruction, and executing the steps of processing and sending according to the view angle of each user in the plurality of users in the group video session based on the adjusted three-dimensional interaction model.
3. The method of claim 1, further comprising:
acquiring gesture feature data of a first user, and when the gesture feature data matches any operation mode of the three-dimensional interaction model, determining that an operation instruction corresponding to the operation mode has been received; or,
acquiring operation information of a second user on an external device, and when the operation information matches any operation mode of the three-dimensional interaction model, determining that an operation instruction corresponding to the operation mode has been received, wherein the external device is bound to a terminal where the second user is located.
4. The method according to claim 2, wherein the adjusting the three-dimensional interaction model according to the operation mode corresponding to the operation instruction comprises:
when the operation instruction is a rotation operation instruction, acquiring a rotation angle and a rotation direction corresponding to the rotation operation instruction, and rotating the three-dimensional interaction model according to the rotation angle and the rotation direction; and/or,
when the operation instruction is a zooming operation instruction, acquiring a reduction ratio or an enlargement ratio corresponding to the zooming operation instruction, and reducing or enlarging the three-dimensional interaction model according to the reduction ratio or the enlargement ratio; and/or,
and when the operation instruction is a shift operation instruction, acquiring a shift direction and a shift distance corresponding to the shift operation instruction, and performing shift operation on the three-dimensional interaction model according to the shift direction and the shift distance.
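The three branches of claim 4 map naturally onto affine operations on the model's vertices. A sketch, assuming the operation instruction arrives as a small dict and the model as an N×3 vertex array (both assumptions, not the patent's format):

```python
import numpy as np

def adjust_model(vertices, instruction):
    kind = instruction["type"]
    if kind == "rotate":
        # Rotate about the z-axis; the sign of "direction" picks the rotation sense.
        theta = np.radians(instruction["angle"]) * instruction["direction"]
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return vertices @ rot.T
    if kind == "zoom":
        # A ratio above 1 zooms the model in (enlarges it), below 1 zooms it out.
        return vertices * instruction["ratio"]
    if kind == "shift":
        # Translate along the requested direction by the requested distance.
        return vertices + np.asarray(instruction["direction"]) * instruction["distance"]
    return vertices  # unknown instruction: leave the model unchanged
```

For example, adjust_model(v, {"type": "rotate", "angle": 90, "direction": 1}) would rotate the model a quarter turn counterclockwise about the z-axis.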
5. The method of claim 1, wherein, after obtaining the three-dimensional interaction model of the target object to be displayed during the group video session, the method further comprises:
when a speaking request of a third user is received, generating specified video data, wherein the specified video data is used for showing the process of a virtual microphone being transferred from a virtual host to a virtual character of the third user;
and performing, based on the specified video data, the processing and sending steps according to the view angle of each user in the plurality of users in the group video session.
6. The method of claim 1, wherein, after obtaining the three-dimensional interaction model of the target object to be displayed during the group video session, the method further comprises:
when a speaking request of a third user is received, reducing the volume of audio data of a fourth user, wherein the fourth user is a user other than the third user in the group video session;
and performing, based on the adjusted audio data, the processing and sending steps according to the view angle of each of the plurality of users in the group video session.
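Claim 6's volume reduction is essentially audio "ducking". A minimal sketch, assuming per-user audio as NumPy float-sample arrays and an arbitrary attenuation factor (the patent fixes neither):

```python
def duck_other_users(audio_streams, speaking_user_id, gain=0.2):
    # Attenuate every participant's audio except the user who requested to speak.
    # audio_streams: {user_id: NumPy array of float samples}; gain is an assumption.
    return {
        user_id: samples if user_id == speaking_user_id else samples * gain
        for user_id, samples in audio_streams.items()
    }
```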
7. The method of claim 1, wherein the obtaining of the three-dimensional interaction model of the target object to be displayed comprises:
acquiring a three-dimensional object model uploaded by a fifth user; or,
acquiring a two-dimensional table uploaded by a sixth user, and processing the two-dimensional table to obtain a three-dimensional table model.
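Claim 7 does not say how the two-dimensional table becomes a three-dimensional table model. One plausible reading, sketched here purely for illustration, extrudes each numeric cell into a box whose height encodes the cell's value (a 3D bar-chart-style model):

```python
def table_to_3d_model(table):
    # table: list of rows of numbers -> list of boxes (x, y, width, depth, height).
    boxes = []
    for row_idx, row in enumerate(table):
        for col_idx, value in enumerate(row):
            # Each cell becomes a unit-footprint box extruded to the cell's value.
            boxes.append((float(col_idx), float(row_idx), 1.0, 1.0, float(value)))
    return boxes

# e.g. a 2x2 table yields four bars with heights 3, 5, 2 and 8
model = table_to_3d_model([[3, 5], [2, 8]])
```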
8. The method of claim 1, further comprising:
when a multimedia file playing request is received, synthesizing the multimedia file corresponding to the multimedia file playing request into the video data of the plurality of users.
9. The method of claim 1, wherein, after obtaining the three-dimensional interaction model of the target object to be displayed during the group video session, the method further comprises:
when detecting that the gaze duration of a seventh user on the three-dimensional interaction model exceeds a preset duration, sending operation prompt information to a terminal where the seventh user is located, wherein the operation prompt information is used for prompting the seventh user that the three-dimensional interaction model can be operated.
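Claim 9's gaze trigger amounts to timing a continuous gaze and firing a one-shot prompt once a threshold is crossed. A sketch, assuming a polling gaze detector; the threshold, polling rate, and message text are all placeholders:

```python
import time

PRESET_DURATION = 3.0  # seconds; the claim leaves the preset duration open

def watch_gaze(is_gazing_at_model, send_prompt, terminal_id):
    gaze_start, prompted = None, False
    while True:
        if is_gazing_at_model():
            gaze_start = gaze_start or time.monotonic()
            if not prompted and time.monotonic() - gaze_start > PRESET_DURATION:
                # One-shot operation prompt to the terminal where the user is located.
                send_prompt(terminal_id, "You can operate the 3D interaction model.")
                prompted = True
        else:
            gaze_start, prompted = None, False  # gaze broken: reset the timer
        time.sleep(0.1)  # assumed polling interval
```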
10. An apparatus for a group video session, applied to a virtual reality environment including avatars of a plurality of users conducting the group video session, the apparatus comprising:
the interaction model acquisition module is used for acquiring a three-dimensional interaction model of a target object to be displayed during the group video session;
the processing module is used for processing the three-dimensional interaction model of the target object according to the view angle of each user in the plurality of users in the group video session to obtain video data of the user, wherein the video data of the user comprises model data obtained by performing view angle transformation on the three-dimensional interaction model of the target object;
and the sending module is used for respectively sending the video data of the users to the terminals where the users are located.
11. The apparatus of claim 10, further comprising: an adjustment module;
the adjusting module is used for adjusting the three-dimensional interaction model according to an operation mode corresponding to an operation instruction when the operation instruction for the three-dimensional interaction model is received;
the processing module is used for executing the step of processing according to the view angle of each user in the plurality of users in the group video session based on the adjusted three-dimensional interaction model;
the sending module is configured to send the video data processed by the processing module according to the view angle of each user in the plurality of users in the group video session.
12. The apparatus of claim 10, further comprising:
the gesture obtaining module is used for obtaining gesture feature data of a first user, and when the gesture feature data matches any operation mode of the three-dimensional interaction model, determining that an operation instruction corresponding to the operation mode is received; or,
the operation information acquisition module is used for acquiring operation information of a second user on an external device, and when the operation information matches any operation mode of the three-dimensional interaction model, determining that an operation instruction corresponding to the operation mode is received, wherein the external device is bound to a terminal where the second user is located.
13. The apparatus of claim 11,
the adjustment module is configured to: when the operation instruction is a rotation operation instruction, acquire a rotation angle and a rotation direction corresponding to the rotation operation instruction, and rotate the three-dimensional interaction model according to the rotation angle and the rotation direction; and/or,
the adjustment module is configured to: when the operation instruction is a zoom operation instruction, acquire a zoom-out ratio or a zoom-in ratio corresponding to the zoom operation instruction, and zoom the three-dimensional interaction model out or in according to the zoom-out ratio or the zoom-in ratio; and/or,
the adjustment module is configured to: when the operation instruction is a shift operation instruction, acquire a shift direction and a shift distance corresponding to the shift operation instruction, and shift the three-dimensional interaction model according to the shift direction and the shift distance.
14. The apparatus of claim 10, further comprising:
the generation module is used for generating specified video data when a speaking request of a third user is received, wherein the specified video data is used for showing the process of a virtual microphone being transferred from a virtual host to a virtual character of the third user;
the processing module is used for executing the step of processing according to the view angle of each user in the plurality of users in the group video session based on the specified video data;
the sending module is configured to send the specified video data processed by the processing module according to the view angle of each of the plurality of users in the group video session.
15. The apparatus of claim 10, further comprising:
a volume reducing module, configured to reduce, when a speaking request of a third user is received, the volume of audio data of a fourth user, wherein the fourth user is a user other than the third user in the group video session;
the processing module is used for executing the step of processing according to the visual angle of each user in the plurality of users in the group video session based on the adjusted audio data;
the sending module is configured to send the video data processed by the processing module according to the view angle of each user in the plurality of users in the group video session.
16. The apparatus of claim 10,
the interaction model acquisition module is used for: acquiring a three-dimensional object model uploaded by a fifth user; or,
the interaction model acquisition module is used for: acquiring a two-dimensional table uploaded by a sixth user, and processing the two-dimensional table to obtain a three-dimensional table model.
17. The apparatus of claim 10, further comprising:
the synthesis module is used for synthesizing, when a multimedia file playing request is received, the multimedia file corresponding to the multimedia file playing request into the video data of the plurality of users.
18. The apparatus of claim 10, wherein the sending module is further configured to:
when detecting that the gaze duration of a seventh user on the three-dimensional interaction model exceeds a preset duration, send operation prompt information to a terminal where the seventh user is located, wherein the operation prompt information is used for prompting the seventh user that the three-dimensional interaction model can be operated.
19. A server, characterized in that the server comprises a processor and a memory, the memory stores instructions, and the processor is configured to execute the instructions to implement the method for a group video session according to any one of claims 1 to 9.
20. A computer-readable storage medium having instructions stored therein, wherein the instructions, when executed by a processor, implement the method for a group video session according to any one of claims 1 to 9.
CN201710104669.9A 2017-02-24 2017-02-24 Method and device for group video session Active CN108513090B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201710104669.9A CN108513090B (en) 2017-02-24 2017-02-24 Method and device for group video session
PCT/CN2018/075749 WO2018153267A1 (en) 2017-02-24 2018-02-08 Group video session method and network device
TW107106428A TWI650675B (en) 2017-02-24 2018-02-26 Method and system for group video session, terminal, virtual reality device and network device
US16/435,733 US10609334B2 (en) 2017-02-24 2019-06-10 Group video communication method and network device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710104669.9A CN108513090B (en) 2017-02-24 2017-02-24 Method and device for group video session

Publications (2)

Publication Number Publication Date
CN108513090A CN108513090A (en) 2018-09-07
CN108513090B (en) 2021-01-01

Family

ID=63373903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710104669.9A Active CN108513090B (en) 2017-02-24 2017-02-24 Method and device for group video session

Country Status (1)

Country Link
CN (1) CN108513090B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709537B (en) * 2020-05-21 2023-06-13 云米互联科技(广东)有限公司 User interaction method based on 5G television, 5G television and readable storage medium
CN115514729B (en) * 2022-08-31 2024-04-05 同炎数智科技(重庆)有限公司 Instant discussion method and system based on three-dimensional model
CN115454258A (en) * 2022-11-10 2022-12-09 北京圜晖科技有限公司 Three-dimensional model collaborative interaction method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263772A (en) * 2010-05-28 2011-11-30 经典时空科技(北京)有限公司 Virtual conference system based on three-dimensional technology
US9007422B1 (en) * 2014-09-03 2015-04-14 Center Of Human-Centered Interaction For Coexistence Method and system for mutual interaction using space based augmentation
CN105611215A (en) * 2015-12-30 2016-05-25 掌赢信息科技(上海)有限公司 Video call method and device
CN105872444A (en) * 2016-04-22 2016-08-17 广东小天才科技有限公司 Video call method, device and system
CN106303690A (en) * 2015-05-27 2017-01-04 腾讯科技(深圳)有限公司 A kind of method for processing video frequency and device

Similar Documents

Publication Publication Date Title
TWI650675B (en) Method and system for group video session, terminal, virtual reality device and network device
CN109874021B (en) Live broadcast interaction method, device and system
CN104170318B (en) Use the communication of interaction incarnation
CN111527525A (en) Mixed reality service providing method and system
WO2018014766A1 (en) Generation method and apparatus and generation system for augmented reality module, and storage medium
CN107463248A (en) A kind of remote interaction method caught based on dynamic with line holographic projections
CN110413108B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN113240782A (en) Streaming media generation method and device based on virtual role
CN105493501A (en) Virtual video camera
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN108960889B (en) Method and device for controlling voice speaking room progress in virtual three-dimensional space of house
CN108513090B (en) Method and device for group video session
JP2021144706A (en) Generating method and generating apparatus for virtual avatar
CN108513088B (en) Method and device for group video session
WO2022100680A1 (en) Mixed-race face image generation method, mixed-race face image generation model training method and apparatus, and device
EP3671653A1 (en) Generating and signaling transition between panoramic images
CN108810049A (en) Control method, device, system and the Virtual Reality equipment of equipment
CN110536095A (en) Call method, device, terminal and storage medium
CN113206993A (en) Method for adjusting display screen and display device
US11996113B2 (en) Voice notes with changing effects
US20240073373A1 (en) Sharing social augmented reality experiences in video calls
CN107204026B (en) Method and device for displaying animation
CN106445282B (en) A kind of exchange method based on augmented reality
CN111580658A (en) AR-based conference method and device and electronic equipment
CN116962848A (en) Video generation method, device, terminal, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant