CN114143494A - Video communication method, electronic equipment and communication system - Google Patents

Video communication method, electronic equipment and communication system

Info

Publication number
CN114143494A
Authority
CN
China
Prior art keywords
user
electronic device
dimensional virtual
image
information
Prior art date
Legal status
Pending
Application number
CN202111441224.2A
Other languages
Chinese (zh)
Inventor
刘任重
邓玉
肖敏
何铠锋
许启胜
Current Assignee
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp
Priority to CN202111441224.2A
Publication of CN114143494A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to the technical field of video communication and discloses a video communication method applied to a first electronic device, comprising: creating and displaying a three-dimensional virtual scene; determining a first holographic image of a first user and a second holographic image of a second user, where the first user is the user of the first electronic device, the second user is the user of a second electronic device, and the second electronic device is the electronic device performing video communication with the first electronic device; and displaying the first holographic image and the second holographic image in the three-dimensional virtual scene. Compared with displaying the user's real scene, displaying a three-dimensional virtual scene effectively reduces the communication-quality requirements of video communication. In addition, displaying the user's holographic image in the three-dimensional virtual scene preserves the user's real image, guarantees the user's authenticity, and effectively improves user experience. The application also discloses an electronic device and a communication system.

Description

Video communication method, electronic equipment and communication system
Technical Field
The present application relates to the field of video communication technologies, and in particular, to a video communication method, an electronic device, and a communication system.
Background
Video communication technologies such as video conferences and video calls are multimedia communication technologies which utilize transmission media to realize interactive, visual and real-time communication.
Currently, during video communication such as a video conference or a video call, each electronic device transmits the real scene image of the local user's environment and the real dynamic image of the local user to the peer electronic device. Each device then simultaneously displays the real scene image and real dynamic image of the local user alongside those of the peer user sent by the peer device, thereby realizing video communication.
Since the real scene image of the user's environment and the real dynamic image of the user must be transmitted simultaneously, this video communication method generally places high demands on communication quality, such as network bandwidth and network rate. When the communication quality is poor, the video picture and sound are prone to stuttering, which degrades the video communication effect and the user experience.
Disclosure of Invention
The application provides a video communication method, an electronic device, a communication system, and a computer-readable storage medium, which can improve the video communication effect and the user experience.
In order to solve the above technical problem, in a first aspect, an embodiment of the present application provides a video communication method applied to a first electronic device, the method including: creating a three-dimensional virtual scene and displaying the three-dimensional virtual scene; determining a first holographic image of a first user and a second holographic image of a second user, where the first user is a user of the first electronic device, the second user is a user of a second electronic device, and the second electronic device is an electronic device performing video communication with the first electronic device; and displaying the first holographic image and the second holographic image in the three-dimensional virtual scene.
In this implementation, during video communication, having the first electronic device display a three-dimensional virtual scene, rather than a real scene image of the user's environment, effectively reduces the communication-quality requirements of video communication. In addition, the first electronic device displays the user's holographic image in the three-dimensional virtual scene, which preserves the user's real dynamic image and guarantees the user's authenticity, effectively improving the video communication effect and the user experience.
In one possible implementation of the first aspect, determining the first holographic image includes: acquiring first image information and second image information of a first user, wherein the first image information and the second image information are image information of the first user in different physical space angles, the first image information is image information obtained by shooting the first user by first electronic equipment through a first camera of the first electronic equipment, and the second image information is image information obtained by shooting the first user by the first electronic equipment through a second camera of the first electronic equipment; and obtaining a first holographic image according to the first image information and the second image information.
In this implementation, the first electronic device captures image information of the first user from different physical space angles through its first and second cameras and can conveniently obtain the first holographic image of the first user from that image information. The user's real image is thus preserved during video communication, ensuring the user's authenticity, effectively improving the video communication effect, and improving user experience.
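The two-view capture described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `HologramFrame` structure and `build_hologram_frame` function are hypothetical names, and the actual hologram reconstruction from the two views is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class HologramFrame:
    """One hologram frame built from two views of the same user.

    The patent only requires that the two views be taken from
    different physical space angles; the pairing here is illustrative.
    """
    first_view: bytes    # raw frame from the first camera
    second_view: bytes   # raw frame from the second camera
    timestamp_ms: int

def build_hologram_frame(first_cam_frame, second_cam_frame, timestamp_ms):
    # Both views must be present: a hologram cannot be reconstructed
    # from a single angle.
    if not first_cam_frame or not second_cam_frame:
        raise ValueError("both camera views are required")
    return HologramFrame(first_cam_frame, second_cam_frame, timestamp_ms)

frame = build_hologram_frame(b"view-a", b"view-b", 1000)
```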
In one possible implementation of the first aspect, determining the second holographic image includes: receiving holographic image data of a second holographic image of a second user, which is sent by a second electronic device, wherein the second holographic image is obtained by the second electronic device according to first image information and second image information of the second user, the first image information and the second image information of the second user are image information of the second user in different physical space angles, the first image information of the second user is image information obtained by the second electronic device shooting the second user through a first camera of the second electronic device, and the second image information of the second user is image information obtained by the second electronic device shooting the second user through a second camera of the second electronic device; and obtaining a second holographic image according to the holographic image data.
In this implementation, the second electronic device captures image information of the second user from different physical space angles through its first and second cameras and can conveniently obtain the second holographic image of the second user from that image information. The first electronic device receives the second user's second holographic image from the second electronic device and displays it in the three-dimensional virtual scene, so the second user's real image is preserved during video communication, ensuring the user's authenticity, effectively improving the video communication effect, and improving user experience.
In a possible implementation of the first aspect, the method further includes: and sending the first holographic image of the first user to the second electronic equipment so as to enable the second electronic equipment to display the first holographic image.
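The exchange of holographic image data between the two devices could look like the sketch below. The wire format is purely hypothetical, since the patent does not specify how hologram data is serialized for transmission.

```python
import base64
import json

def encode_hologram(first_view, second_view, timestamp_ms):
    # Hypothetical wire format: two base64-encoded views plus a timestamp.
    return json.dumps({
        "first": base64.b64encode(first_view).decode("ascii"),
        "second": base64.b64encode(second_view).decode("ascii"),
        "ts": timestamp_ms,
    }).encode("utf-8")

def decode_hologram(payload):
    # The peer device reverses the encoding to recover both views.
    msg = json.loads(payload)
    return (base64.b64decode(msg["first"]),
            base64.b64decode(msg["second"]),
            msg["ts"])
```

A sending device would pass the encoded payload to its communication module; the receiving device decodes it and hands the views to its display pipeline.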
In a possible implementation of the first aspect, the method further includes: displaying the three-dimensional virtual scene, the first holographic image, and the second holographic image through a curved display screen of the first electronic device.
In this implementation, displaying the three-dimensional virtual scene and the two holographic images through a curved display screen renders the holographic images more stereoscopically, effectively improving their display effect, the video communication effect, and the user experience.
In one possible implementation of the first aspect, creating a three-dimensional virtual scene includes: determining three-dimensional virtual scene data, wherein the three-dimensional virtual scene data comprises three-dimensional space data, three-dimensional article data and multimedia data; and creating a three-dimensional virtual scene according to the three-dimensional virtual scene data.
In the implementation mode, the three-dimensional virtual scene can be conveniently and quickly constructed through the three-dimensional virtual scene data.
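The scene-data categories named above (three-dimensional space data, three-dimensional article data, and multimedia data) can be sketched as a simple structure. The field names and the default scene contents are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SceneData:
    # The three categories named in the patent: 3-D space data,
    # 3-D article (object) data, and multimedia data.
    space: dict
    articles: list = field(default_factory=list)
    multimedia: list = field(default_factory=list)

# A hypothetical preset used when a call is first established.
DEFAULT_SCENE = SceneData(
    space={"room": "conference", "width_m": 8, "depth_m": 6},
    articles=[{"type": "table"}, {"type": "whiteboard"}],
    multimedia=[{"type": "background_music", "enabled": False}],
)

def create_scene(data):
    """Assemble a displayable scene description from scene data."""
    return {"space": data.space,
            "objects": list(data.articles),
            "media": list(data.multimedia)}

scene = create_scene(DEFAULT_SCENE)
```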
In one possible implementation of the first aspect, determining three-dimensional virtual scene data includes: if the first electronic device and the second electronic device are determined to establish video communication connection, determining three-dimensional virtual scene data according to preset scene initialization data; and/or in the process of video communication with the second electronic device, if the setting operation of the user on the three-dimensional virtual scene is received, responding to the setting operation, and determining the three-dimensional virtual scene data corresponding to the setting operation.
In this implementation, when the first electronic device determines to establish a video communication connection with the second electronic device, it can conveniently and quickly construct the three-dimensional virtual scene from the three-dimensional virtual scene data, and the second electronic device can construct its scene from the same or similar data.
In addition, the first electronic device provides a setting function for the three-dimensional virtual scene: during the video communication connection with the second electronic device, the first electronic device can re-create the corresponding three-dimensional virtual scene according to the user's setting operation. The three-dimensional virtual scene can thus better match the user's expectations, effectively improving user experience. The second electronic device may likewise reconstruct its three-dimensional virtual scene in the same or a similar manner.
In one possible implementation of the first aspect, displaying the first holographic image and the second holographic image in the three-dimensional virtual scene includes: and displaying the first holographic image and the second holographic image in the three-dimensional virtual scene according to the three-dimensional virtual scene and the posture information of the first user and the second user.
Therefore, the first holographic image and the second holographic image can be better blended into the three-dimensional virtual scene, the video communication effect can be improved, and the user experience can be improved.
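Placing the holographic images according to the scene and the users' posture information might be sketched as below. The seat positions and pose labels are illustrative assumptions; the patent does not define a placement algorithm.

```python
def place_avatars(scene_seats, poses):
    """Assign each user's hologram a seat in the virtual scene and
    carry that user's posture information through to the renderer.

    scene_seats: ordered (x, y) seat positions defined by the scene.
    poses: mapping of user id -> pose label (e.g. "sitting").
    """
    placements = {}
    # Deterministic seating order: sort users by id, pair with seats.
    for seat, (user, pose) in zip(scene_seats, sorted(poses.items())):
        placements[user] = {"position": seat, "pose": pose}
    return placements

layout = place_avatars([(0, 1), (2, 1)],
                       {"user_a": "sitting", "user_b": "standing"})
```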
In a possible implementation of the first aspect, the method further includes: receiving user input information, wherein the user input information comprises user input information input by a first user and/or user input information input by a second user; in the three-dimensional virtual scene, user input information is displayed.
In this implementation, the first electronic device also provides an information input function and can display user input information in the three-dimensional virtual scene, for example on a three-dimensional virtual writing board displayed in the scene, which improves the video communication effect and effectively improves user experience.
In a possible implementation of the first aspect, the method further includes: determining voice information of the first user and/or the second user; processing the voice information into text information and displaying the text information; or determining the language type information of the first user and/or the second user, processing the voice information into simultaneous interpretation text information according to the language type information, and displaying the simultaneous interpretation text information.
In this implementation, the first electronic device also provides a function that converts speech into displayed text, or a function that translates the speech and displays the translated text, which improves the video communication effect and effectively improves user experience.
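The caption step could be sketched as follows. The upstream speech recognizer and the translation service are deliberately left as stand-ins, since the patent names neither; `render_speech` and the toy dictionary are hypothetical.

```python
def render_speech(text_of_speech, target_language=None, translate=None):
    """Return the caption to overlay on the three-dimensional virtual scene.

    `text_of_speech` is assumed to come from an upstream speech
    recognizer; `translate` is a pluggable translation callable.
    With no target language, the recognized text is shown as-is.
    """
    if target_language and translate:
        return translate(text_of_speech, target_language)
    return text_of_speech

# A toy "translator" standing in for a real simultaneous-interpretation service.
toy_dictionary = {"hello": "你好"}
caption = render_speech("hello", "zh",
                        lambda text, lang: toy_dictionary.get(text, text))
```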
In a possible implementation of the first aspect, determining the voice information of the first user and the second user includes: acquiring communication quality information of the video communication; and, if it is determined from the communication quality information that the current communication quality does not meet the communication requirement of the video communication, determining the voice information.
In this implementation, when the communication quality is poor, the above function of converting speech into displayed text, or of translating the speech and displaying the translated text, is performed. This avoids the problem of users being unable to hear each other, improves the video communication effect, and effectively improves user experience.
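The quality check that triggers the text fallback might look like the sketch below. The metric names and thresholds are illustrative; the patent only states that the fallback runs when the quality no longer meets the call's requirement.

```python
def should_fall_back_to_text(quality, min_bandwidth_kbps=256, max_loss_pct=5.0):
    """Decide whether to switch from audio to on-screen text.

    quality: measured metrics for the current call. Missing metrics
    are treated pessimistically (no bandwidth, total loss), so an
    unmeasured call falls back to text.
    """
    return (quality.get("bandwidth_kbps", 0) < min_bandwidth_kbps
            or quality.get("packet_loss_pct", 100.0) > max_loss_pct)
```

A device would poll this periodically and, on `True`, route recognized speech through the caption display instead of (or in addition to) the audio channel.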
In a second aspect, an embodiment of the present application provides an electronic device including a processing module and a display module. The processing module is configured to create a three-dimensional virtual scene; the processing module is further configured to determine a first holographic image of a first user and a second holographic image of a second user, where the first user is a user of the electronic device, the second user is a user of another electronic device, and the other electronic device is an electronic device performing video communication with the electronic device; and the display module is configured to display the three-dimensional virtual scene and to display the first holographic image and the second holographic image in the three-dimensional virtual scene.
It should be noted that the electronic device may be the aforementioned first electronic device, and the another electronic device may be the aforementioned second electronic device.
In a possible implementation of the second aspect, the electronic device further includes a first camera module and a second camera module, where the first camera module and the second camera module are respectively used for shooting a first user to obtain first image information and second image information of the first user at different physical space angles; and the processing module is further used for obtaining a first holographic image according to the first image information and the second image information.
In a possible implementation of the second aspect, the electronic device further includes a communication module, and the communication module is configured to receive a second holographic image of a second user transmitted by another electronic device.
In a possible implementation of the second aspect, the electronic device further includes an input module, and the input module is configured to receive user input information of the first user.
The electronic device provided in this implementation manner is configured to execute the video communication method provided in the first aspect and/or any one of the possible implementation manners of the first aspect, so that the beneficial effects (or advantages) of the video communication method provided in the first aspect can also be achieved.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a display screen, and the processor and the display screen are electrically connected, where the processor is configured to create a three-dimensional virtual scene, and determine a first holographic image of a first user and a second holographic image of a second user, where the first user is a user of the electronic device, the second user is a user of another electronic device, and the another electronic device is an electronic device in video communication with the electronic device; and the display screen is used for displaying the three-dimensional virtual scene and displaying the first holographic image and the second holographic image in the three-dimensional virtual scene.
In a possible implementation of the third aspect, the electronic device further includes a first camera and a second camera, where the first camera and the second camera are respectively disposed at different positions of the display screen, and the first camera and the second camera are respectively configured to shoot a first user, so as to obtain first image information and second image information of the first user at different physical space angles.
In a possible implementation of the third aspect, the electronic device further includes a first light beam processor, a second light beam processor, a first plane mirror, a second plane mirror, a first beam expander, a second beam expander, and a hologram plate, where a first image light beam obtained by the first camera is projected to the hologram plate sequentially through the first light beam processor, the first plane mirror, and the first beam expander, and a second image light beam obtained by the second camera is projected to the hologram plate sequentially through the second light beam processor, the second plane mirror, and the second beam expander.
In a possible implementation of the third aspect, the display screen is a curved display screen, that is, the first camera and the second camera are respectively disposed at different positions of the curved display screen, and the curved display screen is configured to display a three-dimensional virtual scene, a first holographic image, and a second holographic image.
In a possible implementation of the third aspect, the electronic device further includes an input device, where the input device is configured to receive user input information input by a first user; and the display screen is further configured to display the user input information.
The electronic device provided in this implementation manner is configured to execute the video communication method provided in the first aspect and/or any one of the possible implementation manners of the first aspect, so that the beneficial effects (or advantages) of the video communication method provided in the first aspect can also be achieved.
In a fourth aspect, embodiments of the present application provide a communication system, where the communication system includes at least two electronic devices provided in any one of the above third aspects and/or possible implementation manners of the third aspect, and a server, where each electronic device establishes a communication connection with the server to perform video communication.
The communication system provided by the implementation manner of the present application includes the electronic device provided by any possible implementation manner of the third aspect and/or the third aspect, and therefore, the beneficial effects (or advantages) of the electronic device provided by the third aspect can also be achieved.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a memory for storing a computer program, the computer program comprising program instructions; a processor configured to execute program instructions to cause an electronic device to perform the video communication method as provided by the first aspect and/or any one of the possible implementations of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, where a computer program is stored, where the computer program includes program instructions that are executed by a computer to make the computer perform the video communication method provided in the first aspect and/or any one of the possible implementation manners of the first aspect.
It is understood that the beneficial effects of the second to sixth aspects can also be referred to the related description of the first aspect, and are not repeated herein.
Drawings
In order to more clearly explain the technical solution of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
FIG. 1 is a schematic diagram illustrating an architecture of a video conferencing system, according to some embodiments of the present application;
FIG. 2 is a flow diagram illustrating a video communication method, according to some embodiments of the present application;
FIGS. 3A-3C are schematic diagrams illustrating one configuration of a video conferencing device 100, according to some embodiments of the present application;
FIG. 4 is a schematic diagram illustrating a process by which video conferencing device 100 and video conferencing device 200 display holographic images, according to some embodiments of the present application;
FIGS. 5A-5D are schematic diagrams illustrating some display interfaces of video conferencing device 100, according to some implementations of the present application;
FIG. 6 is a schematic diagram illustrating another configuration of a video conferencing device 100, according to some embodiments of the present application;
FIG. 7 is a schematic flow chart illustrating the display of user input information by video conferencing device 100 and video conferencing device 200, according to some embodiments of the present application;
FIGS. 8A and 8B are schematic diagrams illustrating some display interfaces of video conferencing device 100 and video conferencing device 200, according to some implementations of the present application;
FIG. 9 is a schematic diagram illustrating a structure of an electronic device, according to some implementations of the present application;
FIG. 10 is a schematic diagram illustrating a structure of another electronic device, according to some implementations of the present application.
Detailed Description
The technical solution of the present application will be described in further detail with reference to the accompanying drawings. It should be noted that, in the technical solution of the present application, the acquisition, storage, use, processing, etc. of data all conform to the relevant regulations of the national laws and regulations.
Currently, with the development of science and technology, remote office technology has become increasingly mature. Remote office technology establishes a temporary, secure communication connection over a public network (usually the Internet) through a virtual private network, forming a safe and stable channel across an otherwise untrusted public network. It helps remote users establish trusted, secure communication connections and realize remote collaboration, effectively improving working efficiency. In general, remote office can support all-around office needs, such as remote video conferences, telephone conferences, internal company mail processing, and remote development.
The video conference is a multimedia communication technology and a solution for realizing 'interaction, visualization and real-time' communication by using a transmission medium in order to meet the communication requirements of people in different physical and geographic regions. The video conference technology can synchronously transmit information such as dynamic images, voice, shared files, pictures and the like of participants in a conference site to terminal equipment of other conference sites through various communication equipment, an internet, a television telephone network, transfer equipment, terminal equipment and the like, so that geographically dispersed participants can communicate in various ways such as sound, images, posture actions, expressions and the like in real time, and an in-person conference atmosphere is created.
As described above, when a video conference is currently conducted, the multiple video conference devices in the conference display real scene images of the participants' environments and real dynamic images of the participants, so the video conference places high demands on communication quality. When the communication quality is poor, the video picture and voice are prone to stuttering, which degrades the video communication effect and the user experience.
Thus, the present implementation provides a video conference system. Referring to FIG. 1, the system includes at least two video conference devices, here a video conference device 100 and a video conference device 200, and a server 300. A video conference application (APP) is installed in each of the video conference device 100 and the video conference device 200; the two devices each establish a communication connection with the server 300 over an access network, and a video conference can be conducted based on the video conference application.
Further, in this implementation manner, the participant corresponding to the video conference device 100 is the user a, and the participant corresponding to the video conference device 200 is the user B.
The implementation manner of the present application further provides a video communication method applied to the video conference system, please refer to fig. 2, where the video communication method includes the following steps:
s110, the video conference device 100 and the video conference device 200 establish a video conference communication connection through the video conference application and the server 300, and the video conference device 100 and the video conference device 200 respectively create a three-dimensional virtual scene corresponding to the video conference and display the three-dimensional virtual scene.
S120, the video conference device 100 and the video conference device 200 determine a first hologram of the user a and a second hologram of the user B, respectively.
S130, the video conference device 100 and the video conference device 200 respectively display the first holographic image and the second holographic image in their three-dimensional virtual scenes.
Specifically, when the user a of the video conference device 100 operates the video conference device 100 to open the video conference application, the video conference device 100 creates a three-dimensional virtual scene corresponding to the video conference and displays the three-dimensional virtual scene. In addition, the video conference device 100 may acquire the first hologram image of the user a and display the first hologram image of the user a in a three-dimensional virtual scene thereof.
In addition, when the user B of the video conference device 200 operates the video conference device 200 to open the video conference application, the video conference device 200 creates a three-dimensional virtual scene corresponding to the video conference and displays the three-dimensional virtual scene. In addition, the video conference device 200 may acquire a second holographic image of the user B and display the second holographic image of the user B in the three-dimensional virtual scene.
Further, the video conference device 100 may transmit the first holographic image of the user A to the video conference device 200, and the video conference device 200 may simultaneously display the first holographic image of the user A in the three-dimensional virtual scene displayed by the video conference device 200. Likewise, the video conference device 200 may send the second holographic image of the user B to the video conference device 100, and the video conference device 100 may simultaneously display the second holographic image of the user B in the three-dimensional virtual scene displayed by the video conference device 100.
Thus, during the video conference between the video conference device 100 and the video conference device 200, each device can create and display its own three-dimensional virtual scene, and display in that scene the holographic images of the user A and the user B who are participating in the video conference, so as to carry out video communication.
In this implementation, during a video conference between the video conference device 100 and the video conference device 200, displaying a three-dimensional virtual scene, as compared with displaying a real scene image of the environment where a user is located, reduces the video conference's demands on communication quality, and avoids problems such as poor communication quality and stuttering video pictures that degrade the video conference effect. That is, in this implementation, a three-dimensional virtual scene replaces the real scene, so the transmission of video signal information places lower demands on communication quality, and the impact on the video conference is reduced under poor communication conditions such as a weak network signal.
Further, the video conference device 100 and the video conference device 200 display the holographic images (also referred to as holographic projection images) of the participants in the three-dimensional virtual scene, so that the posture (i.e., limb movement), facial expression, emotion, and other information of the participants can be presented vividly. In other words, the real dynamic image of each user can be displayed, preserving the authenticity of the user and the realism of the video conference, giving the conference an on-site sense of visual interaction and presence, and effectively improving the user experience.
Further, displaying the holographic images of the participants in the three-dimensional virtual scene can reflect, in real time, whether a participant is currently within the effective holographic image acquisition area (i.e., the shooting area of the cameras). That is, if a participant is currently located in the effective holographic image capture area, the video conference device 100 and the video conference device 200 may display the holographic image of the participant; if not, they do not display it. Therefore, when a participant temporarily leaves the conference site (i.e., leaves the holographic image capture area) or enters it, the video conference device 100 and the video conference device 200 can dynamically reflect this in the three-dimensional virtual scene in real time by displaying or hiding the participant's holographic image. This preserves the realism of the video conference, enhances the visual sense and telepresence of the conference, and effectively improves the user experience.
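The show/hide decision above reduces to a point-in-region test. The following is a minimal sketch assuming a rectangular capture area; the coordinates and bounds are hypothetical simplifications of the cameras' actual shooting region.

```python
def hologram_visible(position, capture_area):
    """Return True if the participant is inside the effective capture area.

    `position` is an (x, y) point; `capture_area` is (x_min, y_min, x_max, y_max).
    Both are illustrative simplifications of the camera's shooting region.
    """
    x, y = position
    x_min, y_min, x_max, y_max = capture_area
    return x_min <= x <= x_max and y_min <= y <= y_max


area = (0.0, 0.0, 4.0, 3.0)  # hypothetical 4 m x 3 m shooting region
print(hologram_visible((2.0, 1.5), area))  # participant in the area -> hologram shown
print(hologram_visible((5.0, 1.5), area))  # participant left the area -> hologram hidden
```

Each device would evaluate this test per video frame and display or hide the participant's hologram accordingly.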
Further, in the manner of displaying the holographic images of the participants in the three-dimensional Virtual scene, the user can visually see the three-dimensional video conference effect without the aid of glasses and other devices based on a Virtual Reality (VR) technology or an Augmented Reality (AR) technology, and the user experience is effectively improved.
The structure of the video conference device provided by the implementation mode of the application will be explained below.
Referring to fig. 3A, taking the video conference apparatus 100 as an example, the video conference apparatus 100 includes a processor 110, a curved display screen 120, and two cameras, namely a camera 131 and a camera 132, where the curved display screen 120, the camera 131 and the camera 132 are respectively electrically connected to the processor 110.
In this implementation, the processor 110 may include one or more processing units, such as: an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The processor 110 may generate operation control signals according to the instruction operation code and timing signals, so as to control instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data; of course, the memory may also be a stand-alone device. In some implementations, the memory in the processor 110 is a cache memory, which may hold instructions or data that the processor has just used or reuses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, avoiding repeated accesses, reducing the latency of the processor 110, and thus increasing the efficiency of the system. By executing the stored instructions, the processor 110 carries out the various video communication processes of the video conference device 100, such as controlling the camera 131 and the camera 132 to photograph the user A, and controlling the curved display screen 120 to display the three-dimensional virtual scene, the first holographic image, and the second holographic image.
Further, referring to fig. 3B, in this implementation, the camera 131 and the camera 132 are disposed at different positions of the curved display screen 120; for example, the camera 131 is disposed at the upper left corner of the curved display screen 120, and the camera 132 at the upper right corner. Because the camera 131 and the camera 132 are disposed at different positions of the curved display screen 120, photographing the user A with both cameras yields first image information and second image information of the user A from different physical space angles.
Further, referring to fig. 3C, in this implementation, the video conference apparatus 100 further includes a beam processor 141, a beam processor 142, a plane mirror 151, a plane mirror 152, a beam expanding mirror 161, a beam expanding mirror 162, and a hologram plate 170. The first image beam obtained by the camera 131 passes through the beam processor 141, the plane mirror 151, and the beam expander 161 in sequence and is projected onto the hologram plate 170, and the second image beam obtained by the camera 132 passes through the beam processor 142, the plane mirror 152, and the beam expander 162 in sequence and is projected onto the hologram plate 170.
The video conference device 100 provided by this implementation may photograph the user A, obtain the first holographic image of the user A, and display the first holographic image of the user A. The principle and process by which the video conference device 100 photographs the user A to obtain and display the first holographic image are described below.
The process by which the video conference device 100 photographs the user A to obtain the first holographic image includes: the video conference device 100 photographs the user A through the camera 131 and the camera 132 respectively, forms diffused image beams in the manner described above, and records the beam images onto the hologram plate 170. Since the camera 131 and the camera 132 are disposed at different positions of the curved display screen 120, a phase difference and an amplitude difference exist between the light beam captured by the camera 131 and the light beam captured by the camera 132. After the light beams captured from different physical space angles are projected onto the hologram plate 170, the two beams interfere on the hologram plate 170, and the deviations in phase and amplitude of the image beams of the user A are converted into variations in physical space. Using the differences and deviations between the interference fringes of the light, the hologram plate 170 stores all the information of the light waves reflected by the participant. The hologram plate 170 then forms the first holographic image of the user A by performing image processing, such as image development, image enhancement, and image fixing, on the base image bearing the interference fringes.
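The interference recording described above can be illustrated numerically: two coherent beams with a phase difference superpose into an intensity pattern whose bright and dark fringes encode that difference. This is a minimal textbook sketch of two-beam interference, not a model of the device's actual optics.

```python
import cmath

def interference_intensity(amp1, phase1, amp2, phase2):
    """Intensity of two superposed coherent waves: |A1*e^{i*p1} + A2*e^{i*p2}|^2."""
    wave = amp1 * cmath.exp(1j * phase1) + amp2 * cmath.exp(1j * phase2)
    return abs(wave) ** 2

# Equal-amplitude beams: constructive interference (phase difference 0) gives a
# bright fringe; destructive interference (phase difference pi) gives a dark one.
bright = interference_intensity(1.0, 0.0, 1.0, 0.0)
dark = interference_intensity(1.0, 0.0, 1.0, cmath.pi)
print(round(bright, 6), round(dark, 6))
```

Because the fringe intensity varies with the phase difference between the two beams, a plate that records the fringe pattern effectively records the phase information that an ordinary photograph discards.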
The process by which the video conference device 100 displays the first holographic image of the user A includes: after the video conference device 100 generates the first holographic image of the user A, the light wave information is reproduced using the principle of light diffraction. The first holographic image carrying the information of the user A resembles a complex grating; under illumination by coherent light, the diffracted light waves of the first holographic image recording the information of the user A produce two images, an initial image and a conjugate image. The video conference device 100 can display the first holographic image of the user A, as illustrated in fig. 3C, by presenting the initial image and the conjugate image in different areas of space, i.e., in the space of the three-dimensional virtual scene.
In this implementation, the video conference device 100 displays the first holographic image of the user A at 1:1 scale through an enhanced holographic projection technology, and the displayed first holographic image of the user A has a realistic visual effect and a strong stereoscopic impression.
Further, referring to fig. 4, in this implementation, after obtaining the first holographic image of the user A, the video conference device 100 may determine holographic image data of the first holographic image. The holographic image data of the first holographic image of the user A is then 64-bit encoded, and the encoded holographic image data is transmitted to the server 300. The server 300 transmits the holographic image data of the first holographic image of the user A to the video conference device 200. The video conference device 200 may obtain the first holographic image of the user A by 64-bit decoding the holographic image data, and then perform imaging, for example, displaying the first holographic image of the user A at 1:1 scale using the enhanced holographic projection technology. The display process is the same as that of the video conference device 100 displaying the first holographic image of the user A, and is not repeated here.
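The "64-bit encoding" of the holographic image data reads like a reference to Base64 text encoding of binary data for transport; under that assumption (the text itself notes the encoding is interchangeable), the encode-transmit-decode round trip through the server can be sketched as:

```python
import base64

def encode_for_transport(hologram_bytes):
    # Encode binary hologram data as ASCII-safe text before sending to the server
    return base64.b64encode(hologram_bytes).decode("ascii")

def decode_from_transport(encoded_text):
    # Decode on the receiving device before re-imaging the hologram
    return base64.b64decode(encoded_text)

original = b"\x00\x01hologram-frame\xff"   # stand-in for real hologram data
wire = encode_for_transport(original)      # what device 100 sends to server 300
restored = decode_from_transport(wire)     # what device 200 recovers for display
print(restored == original)                # the round trip is lossless
```

Base64 inflates the payload by about one third, but guarantees the binary hologram data survives any text-safe transport channel unchanged.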
Further, the video conference device 200 may likewise obtain the first image information and the second image information of the user B through its two cameras, namely a first camera and a second camera, and obtain the second holographic image of the user B from that image information. Then, the video conference device 200 may 64-bit encode the holographic image data of the second holographic image of the user B and transmit it to the video conference device 100. After receiving the 64-bit encoded holographic image data of the second holographic image of the user B sent by the video conference device 200, the video conference device 100 may display the second holographic image of the user B according to that data; the display process of the second holographic image of the user B is the same as that of the first holographic image of the user A, and is not repeated here.
The 64-bit encoding may also be replaced by another encoding method or data processing method, selected and set as needed.
It is to be understood that the illustrated structure of this implementation does not constitute a specific limitation on the video conference device 100. In other implementations of the present application, the video conference device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Further, in the implementation of the present application, the structure of the video conference device 200 is the same as or similar to that of the video conference device 100, and will not be described herein.
This implementation differs from the traditional face-to-face video mode, that is, the mode of transmitting the real scene image and the real dynamic image of the user, which suffers from distorted and unsharp images of participants, a poor visual sense of the conference, a small proportion of the picture occupied by people, and the like, resulting in a poor user experience. The video conference device provided by this implementation adopts a curved display screen and two groups of cameras, which can further enhance the amplitude and phase distribution of the light waves recorded on photographic media such as the hologram plate, realizing a technology for displaying a three-dimensional projected picture. By recording the information in the light waves transmitted by the photographed object, a three-dimensional visual effect is achieved, attaining enhanced naked-eye 3D holographic projection. Moreover, creatively combining the naked-eye 3D holographic projection technology with video conference technology realizes a 1:1 display effect of enhanced naked-eye 3D holographic projection, giving the video conference a strong visual sense and providing participants with a brand-new video conference visual experience.
In addition, in this implementation, projecting the participants with the enhanced naked-eye 3D holographic projection technology addresses the problem of a poor visual sense in video conferences. Without wearing 3D glasses, a participant can experience the visual effect of VR imaging combined with the scene model and sound. Furthermore, the development of enhanced naked-eye 3D holographic projection frees people from the limitation of 3D glasses, allowing the 3D conference effect to be seen with the naked eye.
In the following, a process of displaying a three-dimensional virtual scene by the video conference device 100 and the video conference device 200 in the implementation manner of the present application is described.
Taking the video conference apparatus 100 as an example, in an implementation manner of the present application, the displaying of the three-dimensional virtual scene by the video conference apparatus 100 includes the following processes:
referring to fig. 5A, after receiving an operation of opening the video conference application by the user A, the video conference device 100 displays the video conference application login interface shown in fig. 5A, which includes controls such as "conference number", "user name", "join conference", and "cancel". The user can input information such as a conference number and a user name through the login interface, and log in to the video conference application through the "join conference" control.
After a user inputs information such as a conference number and a user name through the login interface, if the video conference device 100 receives a click on the "join conference" control and determines that video conference initialization is required, the video conference device 100 uses locally stored scene initialization data as three-dimensional virtual scene data and creates a three-dimensional virtual scene from it. The three-dimensional virtual scene data includes data such as 3D space data, 3D article data, and multimedia data. The 3D space data refers to information such as the space size required for constructing the three-dimensional virtual scene; the 3D article data refers to the objects in the three-dimensional virtual scene; and the multimedia data may refer to data such as slides (PPT) and music needed during the conference. Of course, the three-dimensional virtual scene data may also include other information, selected and set as needed.
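The three kinds of scene data described above can be sketched as a small data structure. The field names, types, and example values are illustrative assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class SceneData:
    """Three-dimensional virtual scene data: space, articles, and multimedia.

    Field names are illustrative assumptions, not the actual data format.
    """
    space: dict                                       # 3D space data, e.g. required room size
    items: list                                       # 3D article data: objects in the scene
    multimedia: list = field(default_factory=list)    # e.g. slides (PPT), music

def create_scene(init_data: SceneData) -> dict:
    # Build a displayable scene directly from locally stored initialization data
    return {"room": init_data.space,
            "objects": list(init_data.items),
            "media": list(init_data.multimedia)}

# Hypothetical locally stored scene initialization data
default_scene = SceneData(space={"width_m": 8, "depth_m": 6, "height_m": 3},
                          items=["virtual_desk", "virtual_chairs"])
scene = create_scene(default_scene)
print(scene["objects"])
```

Because the scene is built entirely from this local data rather than from transmitted video, initialization works even when network quality is poor, which is the point the text makes next.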
It should be noted that, in the implementation manner of the present application, the three-dimensional virtual scene data is preset scene data, which may be made by a user using Unity3D or other software, or may be imported, combined, and so on from existing 3D scene data, and may be selected and set as needed.
Further, referring to fig. 5B, the video conference apparatus 100 may display a three-dimensional virtual scene shown in fig. 5B, where the three-dimensional virtual scene includes a virtual conference room, a virtual desk located in the virtual conference room, and the like.
In this implementation, the three-dimensional virtual scene can be created conveniently and quickly from the preset scene initialization data, realizing the video conference initialization function. In addition, because the video conference device uses three-dimensional virtual scene data to realize initialization, its requirements on communication quality such as network bandwidth are low; under poor communication conditions, the video conference's dependence on communication quality is reduced, the three-dimensional virtual scene can still be displayed normally, the video communication effect is effectively improved, and the user experience is improved.
Further, please refer to fig. 5C, in some implementations of the present application, the video conference application further provides a scene switching control, the scene switching control corresponds to a plurality of three-dimensional virtual scenes, and the user can select a suitable three-dimensional virtual scene according to the need. Furthermore, the video conference device 100 may determine three-dimensional virtual scene data corresponding to the three-dimensional virtual scene according to a preset correspondence between the three-dimensional virtual scene and the three-dimensional virtual scene data, and create and display a new three-dimensional virtual scene according to the three-dimensional virtual scene data.
In some implementation manners of the application, the video conference application further provides a scene combination control; the scene combination control also corresponds to various three-dimensional virtual scenes, and a user can select the three-dimensional virtual scenes to be combined as needed. Furthermore, the video conference device 100 may determine the three-dimensional virtual scene data corresponding to the combined three-dimensional virtual scene according to a preset correspondence between three-dimensional virtual scenes and three-dimensional virtual scene data, and create and display a new three-dimensional virtual scene according to that data.
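The "preset correspondence between three-dimensional virtual scenes and three-dimensional virtual scene data" used by both the switching and combination controls amounts to a lookup table. The following sketch uses a hypothetical table; the scene names and fields are assumptions.

```python
# Hypothetical preset correspondence between scene names and scene data
SCENE_TABLE = {
    "meeting_room": {"space": "room_8x6", "items": ["virtual_desk"]},
    "auditorium": {"space": "hall_20x15", "items": ["virtual_stage"]},
}

def switch_scene(selection):
    """Scene switching control: look up the selected scene's data and rebuild."""
    data = SCENE_TABLE[selection]
    return {"displayed": True, **data}

def combine_scenes(selections):
    """Scene combination control: merge the item lists of several scenes."""
    items = []
    for name in selections:
        items.extend(SCENE_TABLE[name]["items"])
    return {"displayed": True, "items": items}

print(switch_scene("auditorium")["space"])
print(combine_scenes(["meeting_room", "auditorium"])["items"])
```

A dictionary keyed by scene name keeps the correspondence preset and local, so switching or combining scenes never requires transmitting scene geometry over the network.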
In some implementations of the present application, the video conference application further provides a 3D article synchronous display control, the 3D article synchronous display control corresponds to a plurality of 3D articles, and the user can select a suitable 3D article as needed. Also, the video conference apparatus 100 may display the 3D item newly selected by the user in the three-dimensional virtual scene according to the selection operation of the 3D item by the user.
In some implementation manners of the present application, the video conference application further provides a conference adjustment control, and the conference adjustment control corresponds to a conference volume adjustment control, a picture resolution adjustment control, a participant list, a conference ending control, and the like, and is used for a user to adjust the conference.
In some implementations of the present application, the video conferencing application also provides more or fewer functionality controls, and other types of functionality controls may be provided for use by the user, which may be specifically selected and set as desired.
In summary, the participants can combine different three-dimensional virtual scenes according to their preferences, or switch between different three-dimensional virtual scenes according to actual needs. For example, if the video conference device 100 receives a setting operation by the user A on a control corresponding to one of the aforementioned three-dimensional virtual scenes during a video conference with the video conference device 200, the video conference device 100, in response to the setting operation, determines the three-dimensional virtual scene data corresponding to the operation and recreates the three-dimensional virtual scene. In this way, the user experience can be effectively improved.
The video conference device 100 may display controls such as a scene switching control, a scene combining control, a 3D article synchronous display control, and a conference adjusting control shown in fig. 5C, so that the user may perform corresponding operations.
Of course, the video conference device 100 may not display the scene switching control, the scene combination control, the 3D object synchronous display control, the conference adjustment control, and other controls, and then display the controls according to the use operation of the user, which may be selected and set as needed.
Further, please refer to fig. 5D: in an implementation manner of the present application, after the video conference device 100 displays the three-dimensional virtual scene, the method further includes displaying the first holographic image of the user A and the second holographic image of the user B in the three-dimensional virtual scene.
It should be noted that, when the video conference device 100 displays the first holographic image of the user A and the second holographic image of the user B in the three-dimensional virtual scene, information such as the posture of the user A and the user B may be determined from the two holographic images, so that the holographic images are better fused into and displayed in the three-dimensional virtual scene. For example, if both the user A and the user B are in a standing posture, their holographic images are displayed at positions away from the desk; if the user A is in a sitting posture, the first holographic image of the user A may be displayed on a seat.
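The posture-to-placement rule in the example above can be sketched as a small decision function. The rule set and the coordinates are illustrative assumptions matching the text's two cases.

```python
def hologram_position(posture, seat, standing_spot):
    """Choose where to place a participant's hologram in the virtual scene.

    Mirrors the text: a sitting participant's hologram goes on a seat, a
    standing one is placed away from the desk. Coordinates are assumptions.
    """
    if posture == "sitting":
        return seat
    return standing_spot

# Hypothetical scene coordinates: a seat near the desk, a spot away from it
print(hologram_position("sitting", seat=(1.0, 0.5), standing_spot=(3.0, 2.0)))
print(hologram_position("standing", seat=(1.0, 0.5), standing_spot=(3.0, 2.0)))
```

In practice such a function would be one input among several, since the text notes that entry order and other information can also determine display positions.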
In addition, in this implementation manner, the video conference device 100 may further determine the display positions of the first holographic image of the user A and the second holographic image of the user B in the three-dimensional virtual scene according to information such as the order in which the user A and the user B joined the conference. Alternatively, the display positions, display modes, and the like of the two holographic images in the three-dimensional virtual scene may be determined from other information, selected and set as needed.
The process of acquiring the holographic image and displaying the holographic image by the video conference device 100 is as described above, and will not be described herein.
It should be noted that in this implementation manner, the video conference device 100 may send the aforementioned three-dimensional virtual scene data to the video conference device 200, so that the video conference device 200 creates a three-dimensional virtual scene according to the same three-dimensional virtual scene data. Of course, the video conference apparatus 200 may also create a three-dimensional virtual scene from locally stored three-dimensional virtual scene data.
In addition, the process of creating and displaying a three-dimensional virtual scene by the video conference device 200 is the same as or similar to that of the video conference device 100, and is not described here. Likewise, the process by which the video conference device 200 displays the first holographic image of the user A and the second holographic image of the user B in the three-dimensional virtual scene is the same as or similar to that of the video conference device 100, and is not described here.
Further, there are scenes in a video conference where a participant writes out information such as text by hand; in order to show such handwritten information clearly to the other participants, in other implementations of the present application, each video conference device further includes an electronic writing board.
Referring to fig. 6, taking the video conference device 100 as an example, in an implementation manner of the present application, the video conference device 100 further includes an electronic writing board 180 as an accessory device, and the user A can write or draw a flowchart on the electronic writing board 180 using a sensing pen (or stylus, etc.) corresponding to the electronic writing board 180. The video conference device 100 and the video conference device 200 may then display the user input information entered by the user A through the electronic writing board 180 on a three-dimensional virtual writing board 181 in the three-dimensional virtual scene. It should be noted that, when the three-dimensional virtual scene is created, the video conference device 100 and the video conference device 200 may create and display the three-dimensional virtual writing board 181 by default; alternatively, if the video conference device 100 and the video conference device 200 detect that the electronic writing board 180 is connected, they may create and display the three-dimensional virtual writing board 181 in the three-dimensional virtual scene. This can be selected and set as needed.
Referring to fig. 7, the process by which the video conference device 100 and the video conference device 200 display the user input information entered by the user through the electronic writing board 180 in the three-dimensional virtual scene includes the following steps:
S201, the user A writes on the electronic writing board 180 using the sensing pen.
S202, the sensor on the electronic writing board 180 senses the pressure of the sensing pen, converts it into an electrical signal, and sends the electrical signal to the video conference device 100.
S203, the video conference device 100 synchronously maps the electrical signal, i.e., displays the user input information on the three-dimensional virtual writing board 181 according to the electrical signal. This information can also be stored in an array at the same time. In addition, the video conference device 100 transmits the user input information (i.e., the electrical signal) to the server 300.
S204, the server 300 transmits the user input information to the video conference device 200.
S205, the video conference device 200 displays the corresponding user input information.
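Steps S201 to S205 can be sketched end to end as follows. The list of (x, y, pressure) tuples standing in for the "electrical signal", and the dictionary standing in for the virtual writing board, are illustrative assumptions.

```python
def sense_pen_pressure(stroke_points):
    # S201/S202: the sensor converts pen pressure at each point into an
    # "electrical signal"; a list of (x, y, pressure) tuples stands in for it.
    # Points with zero pressure (pen lifted) carry no input.
    return [(x, y, p) for (x, y, p) in stroke_points if p > 0.0]

def map_to_virtual_board(signal):
    # S203/S205: the device maps the signal onto the three-dimensional virtual
    # writing board 181 and keeps the points in an array, as the text mentions
    return {"strokes": list(signal)}

def relay_via_server(signal):
    # S203/S204: device 100 sends the input to server 300, which forwards it
    return list(signal)  # stand-in for network transmission

stroke = [(0.1, 0.2, 0.8), (0.2, 0.2, 0.9), (0.3, 0.2, 0.0)]  # pen lifted at end
signal = sense_pen_pressure(stroke)
local_board = map_to_virtual_board(signal)                      # shown on device 100
remote_board = map_to_virtual_board(relay_via_server(signal))   # shown on device 200
print(local_board == remote_board)  # both sites render the same strokes
```

The key property is that both boards are rendered from the same signal, so the handwriting the user A sees locally is exactly what the user B sees remotely.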
It should be noted that the video conference device 100 and the electronic writing board 180 may be provided integrally or separately, as selected according to need.
In this implementation, the video conference device 100 maps user input information, such as text entered by the user, using a mixed reality technology, and displays the mapped information in the three-dimensional virtual scene. The video conference device 100 also transmits the mapped user input information to the video conference device 200 in real time, so that the video conference device 200 presents the information to the user B in virtual reality through the mixed reality technology. Combining the video conference device with the electronic writing board allows a participant who needs to describe text or a flowchart to convey the handwritten information to the other participants more effectively. This enhances the functions and information interaction of video communication, effectively improves the video communication effect, and improves the user experience.
Further, in this implementation, the electronic writing board 180 is similar to a whiteboard, and its size can be selected and set as required; for example, its length may be 30cm and its thickness 30cm.
Further, in a video conference, there may be cases where the participants do not share a common language, or where speech is unclear; moreover, because the video signal and the voice signal carry a large amount of information, the video conference depends heavily on communication quality such as network bandwidth. In other implementation manners of the present application, the video communication method provided by the present application further includes acquiring communication quality information corresponding to the video conference while the video conference is in progress; and if it is determined from the communication quality information that the communication quality does not meet the communication requirement of the video conference, determining the voice information of the user A and/or the user B. The voice information is either processed directly into corresponding text information and displayed, or the language type information of the user A and/or the user B is determined, the voice information is processed into simultaneous interpretation text information according to the language type information, and the simultaneous interpretation text information is displayed.
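The decision logic above (check communication quality, then transcribe, then translate only when the listener's language differs) can be sketched as follows. The bandwidth threshold is hypothetical, and the transcription and interpretation functions are stubs standing in for real speech-to-text and translation services.

```python
MIN_BANDWIDTH_KBPS = 512  # hypothetical communication-quality requirement

def transcribe(speech):
    # Stand-in for a real speech-to-text engine
    return speech["text"]

def translate(text, target_language):
    # Stand-in for a real simultaneous-interpretation service
    glossary = {("你好", "English"): "Hello"}
    return glossary.get((text, target_language), text)

def subtitles_for(speech, listener_language, bandwidth_kbps):
    """Return subtitle text only when quality falls below the requirement."""
    if bandwidth_kbps >= MIN_BANDWIDTH_KBPS:
        return None  # communication quality is adequate: no subtitles needed
    text = transcribe(speech)
    if speech["language"] != listener_language:
        return translate(text, listener_language)  # simultaneous interpretation
    return text  # same language: plain voice-to-text subtitles

speech_a = {"text": "你好", "language": "Chinese"}
print(subtitles_for(speech_a, "English", bandwidth_kbps=128))   # translated
print(subtitles_for(speech_a, "English", bandwidth_kbps=2048))  # no subtitles
```

Gating the subtitle pipeline on measured quality matches the text's point: text is only substituted for voice when the network cannot carry the voice signal well.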
For example, referring to fig. 8A, if the language type of the user A is Chinese and the language type of the user B is English, the video conference device 100 displays corresponding Chinese subtitles according to the voice information of the user B, and the video conference device 200 displays corresponding English subtitles according to the voice information of the user A.
Of course, referring to fig. 8B, the video conference device 100 and the video conference device 200 may also display Chinese subtitles and English subtitles at the same time, which can be selected and set as needed.
In another implementation manner of the present application, if the video conference device 100 and the video conference device 200 determine from the communication quality information that the communication quality does not meet the requirements of the video conference, a prompt message may also be displayed so that a user can choose whether to start the voice-to-text function. If the user chooses to activate the function, the video conference device 100 and the video conference device 200 determine the corresponding voice information, convert it into text information, and display the text.
That is, many video conferences suffer from frozen video, stuttering audio, garbled images, or language barriers caused by communication quality factors such as network bandwidth. When such a situation occurs, the video conference device provided in this implementation detects whether the image and voice signals are being transmitted smoothly, reminds the participants to use its simultaneous interpretation and subtitle functions, and enables those functions according to the user's operation. The device then converts the participants' voice information into text information and synchronizes that text to the display screens of the video conference devices at the other meeting sites.
This implementation of the application creatively combines the functions of synchronous reminding, simultaneous interpretation, and subtitles. It ensures that communication can proceed normally even when a participant's speech is not fluent. At the same time, when the communication quality of the network is poor, voice signals are converted into text and shown on the display screen, which reduces the impact of unclear audio on the video conference, lowers the conference's dependence on network bandwidth and similar communication-quality factors, and keeps the conference running normally. The participants can thus still obtain the information of the video conference, the effect of video communication is effectively improved, and the user experience is enhanced.
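The quality-triggered fallback described above can be sketched as a simple decision flow: measure communication quality, and when it is insufficient, convert speech to text and translate it only when the speaker's and viewer's language types differ. This is a minimal illustrative sketch, not the patent's implementation; the threshold values, function names, and the stub translator are all assumptions.

```python
# Illustrative sketch: decide when to fall back to subtitles based on
# measured communication quality, then convert speech to subtitle text,
# translating only when the two users' language types differ.

MIN_BANDWIDTH_MBPS = 4.0  # assumed threshold for acceptable video quality

def quality_ok(bandwidth_mbps, packet_loss):
    """Return True when the link meets the conference's requirements."""
    return bandwidth_mbps >= MIN_BANDWIDTH_MBPS and packet_loss < 0.05

def speech_to_subtitle(speech_text, speaker_lang, viewer_lang, translate):
    """Return subtitle text for the viewer; translate only if needed."""
    if speaker_lang != viewer_lang:
        return translate(speech_text, speaker_lang, viewer_lang)
    return speech_text

# Stub standing in for a real simultaneous-interpretation engine.
def fake_translate(text, src, dst):
    return f"[{src}->{dst}] {text}"

if not quality_ok(bandwidth_mbps=2.0, packet_loss=0.01):
    subtitle = speech_to_subtitle("hello", "en", "zh", fake_translate)
    # subtitle would now be rendered on the display instead of relying
    # on the degraded audio channel
```

A real system would obtain `speech_text` from a speech-recognition engine and measure bandwidth and packet loss from the transport layer.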
Of course, in other implementation manners of the present application, the video conference device 100 and the video conference device 200 may also start the foregoing subtitle function according to other conditions or user requirements such as user settings, which may be selected and set as needed.
It should be noted that, in the implementation manners of the present application, the video conference device 100, the video conference device 200, and the server 300 may communicate with one another based on 5G network communication technology. The 5G network is the fifth-generation mobile communication network; its theoretical peak transmission rate can reach 20 Gbps, i.e., about 2.5 GB of data per second, roughly ten times faster than a 4G network. 5G communication adopts high-frequency transmission, offers high transmission speed and high network stability, and is therefore well suited to a video conference system. Of course, in other implementations of the present application, the video conference device 100, the video conference device 200, and the server 300 may communicate in other wireless or wired manners, which can be selected and set as needed.
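The bandwidth figure quoted above follows from simple unit conversion: 8 bits per byte, so a 20 Gbps peak rate is 2.5 gigabytes per second.

```python
# Unit-conversion check of the quoted 5G peak rate: 20 Gbps / 8 bits-per-byte
# = 2.5 gigabytes per second.
peak_gbps = 20
peak_gigabytes_per_s = peak_gbps / 8
assert peak_gigabytes_per_s == 2.5
```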
In this implementation, the three-dimensional virtual scene and the holographic images of the participants are displayed, and a better video communication effect can be achieved based on virtual reality (VR) and augmented reality (AR) technology. VR is a computer simulation system that can create and let users experience a virtual world: a computer generates a simulated environment that fuses multi-source information into an interactive three-dimensional dynamic view with simulated entity behavior, immersing the user in the environment. AR is a technology that computes the position and angle of the camera image in real time and overlays corresponding images, video, and 3D models, with the goal of fitting the virtual world onto the real world on the screen and allowing interaction between the two. As the CPU computing power of portable electronic products improves, augmented reality applications are becoming increasingly widespread. Based on AR and VR technology, the technical solution of the present application can achieve a good display effect for the three-dimensional virtual scene and the holographic images; that is, combining virtual-reality holographic projection technology with video conference technology yields a better conference effect.
In an implementation manner of the present application, user A, used as an example of the first user, may refer to a single user at the video conference device 100 end or to multiple users at that end; likewise, user B may refer to a single user or to multiple users at the video conference device 200 end, which can be selected and set as needed.
For example, in some implementations of the application, if both the camera 131 and the camera 132 of the video conference device 100 capture images of only one user, it may be determined that user A is a single user. If the first camera and the second camera of the video conference device 100 capture images of two or more users at the same time, it may be determined that user A comprises multiple users. Alternatively, the video conference device 100 may select one or more of the captured users as user A through preset filtering conditions, such as the user's distance or whether the user's facial features were captured, which can be selected and set as needed. The determination of user B is the same as or similar to that of user A and is not repeated here.
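A filtering rule of the kind mentioned above could look like the sketch below, keeping only detections that are within a distance limit and whose facial features were captured. This is a hypothetical illustration; the field names, the distance threshold, and the detection format are all assumptions, not taken from the patent.

```python
# Hypothetical filtering rule for choosing "user A" when several people
# are in frame, following the criteria mentioned above (distance and
# whether the user's facial features were captured).

def select_primary_users(detections, max_distance_m=2.5):
    """Keep detections that are close enough and have a visible face."""
    return [d for d in detections
            if d["distance_m"] <= max_distance_m and d["face_visible"]]

people = [
    {"id": 1, "distance_m": 1.2, "face_visible": True},
    {"id": 2, "distance_m": 4.0, "face_visible": True},   # too far away
    {"id": 3, "distance_m": 1.8, "face_visible": False},  # face not captured
]
print(select_primary_users(people))  # only detection 1 passes both filters
```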
In this implementation, the video conference device 100 and the video conference device 200 have the same structure, or can realize the same functions, so that video communication can be realized between them.
In some implementations of the present application, the first electronic device may be the video conference device 100, the second electronic device may be the video conference device 200, and a process of implementing the video communication method between the first electronic device and the second electronic device is the same as a process of implementing the video communication method between the video conference device 100 and the video conference device 200, and is not described herein again.
The curved display screen 120 is an example of a display screen, and the display screen may be any other display screen. The camera 131 is an example of a first camera, the camera 132 is an example of a second camera, and the first camera and the second camera may be other types of cameras. Beam processor 141 is an example of a first beam processor and beam processor 142 is an example of a second beam processor. The flat mirror 151 is an example of a first flat mirror, and the flat mirror 152 is an example of a second flat mirror. The beam expander 161 is an example of a first beam expander, and the beam expander 162 is an example of a second beam expander. The aforementioned components can be selected and arranged as desired.
The electronic tablet 180 is an example of an input device, and the input device may be a keyboard or other type of input device, which can be selected and set as desired.
In some implementations of the present application, the video conference device may be a large-screen device, or may be other electronic devices besides the large-screen device, such as a mobile phone, a tablet computer, a notebook computer, a palm computer, a television, a projector, a vehicle-mounted display, and the like.
That is, the aforementioned video conference device is an example of an electronic device provided in implementations of the present application, and in some implementations of the present application, the electronic device may be any other type of electronic device having a display function, which can be selected and set as needed.
The video communication method provided by the application can also be applied to a multi-party video conference scene in which more than two electronic devices carry out video conferences. In addition, the method can also be applied to video communication scenes such as multi-party video call, and the implementation process of the method is the same as or similar to the process of implementing the video conference, and is not repeated here.
The foregoing video conference system is an example of a communication system provided in the implementation of the present application, and the communication system may also include other electronic devices, which may be selected and set as needed.
Referring to fig. 9, fig. 9 is a block diagram illustrating functional module structures of an electronic device according to some implementations of the present application. The electronic device includes a processing module 902, a display module 904, a first camera module 906, a second camera module 908, an input module 910, a voice processing module 912, and a communication module 914. Further, the voice processing module 912 includes a detection transmission signal module 9121, a simultaneous interpretation module 9122, a voice-to-text function module 9123, and a text display module 9124.
The processing module 902 is configured to implement the same function as the processor 110, the display module 904 is configured to implement the same function as the curved display screen 120, the first camera module 906 is configured to implement the same function as the camera 131, the second camera module 908 is configured to implement the same function as the camera 132, and the input module 910 is configured to implement the same function as the electronic tablet 180.
The voice processing module 912 is used to implement the aforementioned voice-to-text function. Specifically, the transmission signal detection module 9121 may periodically detect communication quality information such as the network bandwidth and the network transmission signal and send it to the processing module 902, which determines from this information whether the current communication quality meets the communication requirements. If not, the processing module 902 may notify the display module 904 to display a reminder asking the user whether the text subtitle function should be turned on. The simultaneous interpretation module 9122 enables the text subtitle function according to the participants' selection and needs. After the simultaneous interpretation module 9122 is started and the participant's language type is determined, the voice-to-text function module 9123 converts the participant's voice information into text information of the corresponding language type, and the text display module 9124 displays the corresponding text through the display module 904.
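The pipeline just described (signal detection → reminder → interpretation switch → voice-to-text → display) can be sketched as a single class. This is an illustrative sketch only; the class and method names, the bandwidth threshold, and the stand-in recognizer are assumptions rather than the patent's implementation.

```python
# Sketch of the voice-processing pipeline described above:
# signal check -> reminder -> subtitle switch -> speech-to-text -> display.

class VoiceProcessingModule:
    def __init__(self, min_bandwidth_mbps=4.0):
        self.min_bandwidth_mbps = min_bandwidth_mbps
        self.subtitles_enabled = False

    def check_signal(self, bandwidth_mbps):
        """Detection module 9121: does quality meet the requirements?"""
        return bandwidth_mbps >= self.min_bandwidth_mbps

    def remind_user(self):
        """Text the display module would show as the reminder prompt."""
        return "Poor connection detected. Enable subtitles?"

    def enable_subtitles(self, user_accepted):
        """Interpretation module 9122: enable per the user's selection."""
        self.subtitles_enabled = user_accepted
        return self.subtitles_enabled

    def speech_to_text(self, speech, lang):
        """Stand-in for module 9123's real speech-recognition engine."""
        if not self.subtitles_enabled:
            return None
        return f"({lang}) {speech}"

vp = VoiceProcessingModule()
if not vp.check_signal(bandwidth_mbps=1.5):
    prompt = vp.remind_user()          # shown via the display module
    vp.enable_subtitles(user_accepted=True)
subtitle = vp.speech_to_text("good morning", "zh")
```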
The communication module 914 is used for implementing wireless or wired communication between the electronic device and other electronic devices such as a server, for example, for receiving information such as holographic image data of a participant transmitted by another electronic device.
It should be noted that the electronic device provided in this implementation manner may further include a scene switching unit for implementing the scene switching function, a scene combining unit for implementing the scene combining function, a 3D article synchronous display unit for implementing the 3D article synchronous display function, and a conference adjusting unit for implementing the conference adjusting function, which may be specifically selected and set as needed.
Referring to fig. 10, fig. 10 is a block diagram illustrating an electronic device according to further implementations of the present application. The electronic device may include one or more processors 1002, system control logic 1008 coupled to at least one of the processors 1002, system memory 1004 coupled to the system control logic 1008, non-volatile memory (NVM) 1006 coupled to the system control logic 1008, and a network interface 1010 coupled to the system control logic 1008.
The processor 1002 may include one or more single-core or multi-core processors. The processor 1002 may include any combination of general-purpose processors and dedicated processors (e.g., graphics processors, application processors, baseband processors, etc.). In implementations herein, the processor 1002 may be configured to perform the aforementioned video communication methods.
In some implementations, system control logic 1008 may include any suitable interface controllers to provide any suitable interface to at least one of processors 1002 and/or any suitable device or component in communication with system control logic 1008.
In some implementations, system control logic 1008 may include one or more memory controllers to provide an interface to system memory 1004. System memory 1004 may be used to load and store data and/or instructions. In some implementations, the memory 1004 of the electronic device can include any suitable volatile memory, such as suitable Dynamic Random Access Memory (DRAM).
NVM/memory 1006 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some implementations, the NVM/memory 1006 can include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as at least one of a HDD (Hard Disk Drive), CD (Compact Disc) Drive, DVD (Digital Versatile Disc) Drive.
The NVM/storage 1006 may include a portion of the storage resources installed on the electronic device, or it may be accessible by the device without necessarily being part of it. For example, the NVM/storage 1006 may be accessed over a network via the network interface 1010.
In particular, the system memory 1004 and the NVM/storage 1006 may each include: a temporary copy and a permanent copy of the instructions 1020. The instructions 1020 may include: instructions that, when executed by at least one of the processors 1002, cause the electronic device to implement the aforementioned video communication method. In some implementations, the instructions 1020, hardware, firmware, and/or software components thereof may additionally/alternatively be disposed in the system control logic 1008, the network interface 1010, and/or the processor 1002.
The network interface 1010 may include a transceiver to provide a radio interface for the electronic device to communicate with any other suitable device (e.g., front end module, antenna, etc.) over one or more networks. In some implementations, the network interface 1010 may be integrated with other components of the electronic device. For example, the network interface 1010 may be integrated with at least one of the processor 1002, the system memory 1004, the NVM/storage 1006, and a firmware device (not shown) having instructions that, when executed by at least one of the processors 1002, cause the electronic device to implement the video communication method described previously.
The network interface 1010 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 1010 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one implementation, at least one of the processors 1002 may be packaged together with logic for one or more controllers of system control logic 1008 to form a System In Package (SiP). In one implementation, at least one of the processors 1002 may be integrated on the same die with logic for one or more controllers of system control logic 1008 to form a system on a chip (SoC).
The electronic device may further include input/output (I/O) devices 1012. The I/O devices 1012 may include a user interface that enables a user to interact with the electronic device, and a peripheral component interface that enables peripheral components to interact with the electronic device as well. In some implementations, the electronic device further includes sensors for determining at least one of environmental conditions and location information associated with the electronic device.
In some implementations, the user interface can include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still image cameras and/or video cameras), a flashlight (e.g., a light emitting diode flash), and a keyboard.
In some implementations, the peripheral component interfaces can include, but are not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some implementations, the sensors may include, but are not limited to, a gyroscope sensor, an accelerometer, a proximity sensor, an ambient light sensor, and a positioning unit. The positioning unit may also be part of the network interface 1010 or interact with the network interface 1010 to communicate with components of a positioning network, such as Global Positioning System (GPS) satellites.
It is to be understood that the illustrated structure of the implementation of the invention does not constitute a specific limitation for the electronic device. In other implementations of the present application, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application implementation, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one implementation may be implemented by representative instructions stored on a computer-readable storage medium, which represent various logic in a processor and which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. These representations, known as "IP cores", may be stored on a tangible computer-readable storage medium and supplied to customers or manufacturing facilities to be loaded into the machines that actually make the logic or processor.
It should be noted that the terms "first," "second," and the like are used merely to distinguish one description from another, and are not intended to indicate or imply relative importance.
It should be noted that in the accompanying drawings, some structural or methodical features may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some implementations, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. Additionally, the inclusion of structural or methodical features in a particular figure is not meant to imply that such features are required in all implementations, and in some implementations, these features may not be included or may be combined with other features.
While the present application has been shown and described with reference to certain preferred implementations thereof, it will be understood by those skilled in the art that the foregoing is a more detailed description of the application, and the specific implementations of the application are not to be considered limited to these descriptions. Various changes in form and detail, including simple deductions or substitutions, may be made by those skilled in the art without departing from the spirit and scope of the present application.

Claims (18)

1. A video communication method is applied to a first electronic device, and is characterized by comprising the following steps:
creating a three-dimensional virtual scene and displaying the three-dimensional virtual scene;
determining a first holographic image of a first user and a second holographic image of a second user, wherein the first user is a user of the first electronic device, the second user is a user of a second electronic device, and the second electronic device is an electronic device performing video communication with the first electronic device;
displaying the first holographic image and the second holographic image in the three-dimensional virtual scene.
2. The video communication method of claim 1, wherein determining the first holographic image comprises:
acquiring first image information and second image information of the first user, wherein the first image information and the second image information are image information of the first user in different physical space angles, the first image information is image information obtained by the first electronic device through a first camera of the first electronic device shooting the first user, and the second image information is image information obtained by the first electronic device through a second camera of the first electronic device shooting the first user;
and obtaining the first holographic image according to the first image information and the second image information.
3. The video communication method according to claim 1 or 2, wherein determining the second holographic image comprises:
receiving holographic image data of the second holographic image of the second user, which is sent by the second electronic device; the second holographic image is obtained by the second electronic device according to first image information and second image information of the second user, the first image information and the second image information of the second user are image information of the second user in different physical space angles, the first image information of the second user is image information obtained by the second electronic device through a first camera of the second electronic device shooting the second user, and the second image information of the second user is image information obtained by the second electronic device through a second camera of the second electronic device shooting the second user;
and obtaining the second holographic image according to the holographic image data.
4. The video communication method according to any of claims 1-3, wherein the method further comprises:
and displaying the three-dimensional virtual scene, and displaying the first holographic image and the second holographic image through a curved surface display screen of the first electronic device.
5. The video communication method according to any one of claims 1 to 4, wherein creating the three-dimensional virtual scene comprises:
determining three-dimensional virtual scene data, wherein the three-dimensional virtual scene data comprises three-dimensional space data, three-dimensional article data and multimedia data;
and creating the three-dimensional virtual scene according to the three-dimensional virtual scene data.
6. The video communication method according to claim 5, wherein determining the three-dimensional virtual scene data comprises:
if the first electronic device and the second electronic device are determined to establish video communication connection, determining the three-dimensional virtual scene data according to preset scene initialization data; and/or
and in the process of carrying out video communication with the second electronic device, if a setting operation on the three-dimensional virtual scene by a user is received, responding to the setting operation and determining the three-dimensional virtual scene data corresponding to the setting operation.
7. The video communication method according to any of claims 1-6, wherein the method further comprises:
receiving user input information, wherein the user input information comprises user input information input by the first user and/or user input information input by the second user;
and displaying the user input information in the three-dimensional virtual scene.
8. The video communication method according to any of claims 1-7, wherein the method further comprises:
determining voice information of the first user and/or the second user;
processing the voice information into text information and displaying the text information; or
determining language type information of the first user and/or the second user, processing the voice information into simultaneous interpretation text information according to the language type information, and displaying the simultaneous interpretation text information.
9. The video communication method according to claim 8, wherein determining the voice information of the first user and/or the second user comprises:
acquiring communication quality information of the video communication;
and if the current communication quality is determined to not meet the communication requirement of the video communication according to the communication quality information, determining the voice information.
10. An electronic device, characterized in that the electronic device comprises:
the processing module is used for creating a three-dimensional virtual scene;
the display module is used for displaying the three-dimensional virtual scene;
the processing module is further configured to determine a first holographic image of a first user and a second holographic image of a second user, where the first user is a user of the electronic device, the second user is a user of another electronic device, and the another electronic device is an electronic device performing video communication with the electronic device;
the display module is further configured to display the first holographic image and the second holographic image in the three-dimensional virtual scene.
11. An electronic device comprising a processor and a display screen, the processor and the display screen being electrically connected, wherein,
the processor is configured to create a three-dimensional virtual scene, determine a first holographic image of a first user, and determine a second holographic image of a second user, where the first user is a user of the electronic device, the second user is a user of another electronic device, and the another electronic device is an electronic device performing video communication with the electronic device;
the display screen is used for displaying a three-dimensional virtual scene and displaying the first holographic image and the second holographic image in the three-dimensional virtual scene.
12. The electronic device according to claim 11, further comprising a first camera and a second camera, wherein the first camera and the second camera are respectively disposed at different positions of the display screen, and the first camera and the second camera are respectively configured to shoot the first user, so as to obtain first image information and second image information of the first user at different physical space angles.
13. The electronic device according to claim 12, further comprising a first beam processor, a second beam processor, a first plane mirror, a second plane mirror, a first beam expander, a second beam expander, and a hologram plate, wherein the first image beam obtained by the first camera is projected to the hologram plate sequentially through the first beam processor, the first plane mirror, and the first beam expander, and the second image beam obtained by the second camera is projected to the hologram plate sequentially through the second beam processor, the second plane mirror, and the second beam expander.
14. The electronic device of any of claims 11-13, wherein the display screen is a curved display screen.
15. The electronic device of any of claims 11-14, further comprising an input device configured to receive user input information entered by the first user; the display screen is also used for displaying the user input information.
16. A communication system, comprising at least two electronic devices according to any one of claims 11-15, and a server, wherein each of the electronic devices establishes a communication connection with the server for video communication.
17. An electronic device, characterized in that the electronic device comprises:
a memory for storing a computer program, the computer program comprising program instructions;
a processor for executing the program instructions to cause the electronic device to perform the video communication method of any of claims 1-9.
18. A computer-readable storage medium storing a computer program comprising program instructions that are executed by an electronic device to cause the electronic device to perform the video communication method according to any one of claims 1 to 9.
CN202111441224.2A 2021-11-30 2021-11-30 Video communication method, electronic equipment and communication system Pending CN114143494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111441224.2A CN114143494A (en) 2021-11-30 2021-11-30 Video communication method, electronic equipment and communication system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111441224.2A CN114143494A (en) 2021-11-30 2021-11-30 Video communication method, electronic equipment and communication system

Publications (1)

Publication Number Publication Date
CN114143494A true CN114143494A (en) 2022-03-04

Family

ID=80389645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111441224.2A Pending CN114143494A (en) 2021-11-30 2021-11-30 Video communication method, electronic equipment and communication system

Country Status (1)

Country Link
CN (1) CN114143494A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115190289A (en) * 2022-05-30 2022-10-14 李鹏 3D holographic view screen communication method, cloud server, storage medium and electronic device
CN115578541A (en) * 2022-09-29 2023-01-06 北京百度网讯科技有限公司 Virtual object driving method, device, system, medium and product

Similar Documents

Publication Publication Date Title
US11403595B2 (en) Devices and methods for creating a collaborative virtual session
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
JP4059513B2 (en) Method and system for communicating gaze in an immersive virtual environment
Apostolopoulos et al. The road to immersive communication
JP5208810B2 (en) Information processing apparatus, information processing method, information processing program, and network conference system
US6466250B1 (en) System for electronically-mediated collaboration including eye-contact collaboratory
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
Baker et al. Understanding performance in coliseum, an immersive videoconferencing system
EP2352290B1 (en) Method and apparatus for matching audio and video signals during a videoconference
CN114143494A (en) Video communication method, electronic equipment and communication system
CN1732687A (en) Method, system and apparatus for telepresence communications
KR102612529B1 (en) Neural blending for new view synthesis
CN111989914A (en) Remote presentation device operating method
US20230283888A1 (en) Processing method and electronic device
Fechteler et al. A framework for realistic 3D tele-immersion
Baker et al. Computation and performance issues in coliseum: an immersive videoconferencing system
CN114327055A (en) 3D real-time scene interaction system based on meta-universe VR/AR and AI technologies
KR20170127354A (en) Apparatus and method for providing video conversation using face conversion based on facial motion capture
CN113676690A (en) Method, device and storage medium for realizing video conference
CN111367598B (en) Method and device for processing action instruction, electronic equipment and computer readable storage medium
Xu et al. Ar mobile video calling system based on webrtc api
CN114124911A (en) Live broadcast echo cancellation method, computer-readable storage medium and electronic device
Kuchelmeister et al. Immersive mixed media augmented reality applications and technology
TWI807504B (en) Method, device and storage medium for audio processing of virtual meeting room
KR102546532B1 (en) Method for providing speech video and computing device for executing the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination