CN110661760A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN110661760A
Authority
CN
China
Prior art keywords
data
video
frequency
audio data
frequency domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810712074.6A
Other languages
Chinese (zh)
Inventor
焦克新
安君超
韩杰
王艳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201810712074.6A priority Critical patent/CN110661760A/en
Publication of CN110661760A publication Critical patent/CN110661760A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/75: Media network packet handling
    • H04L65/764: Media network packet handling at the destination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiment of the invention provides a data processing method and a video networking system. The method comprises the following steps: an opposite-end video networking terminal collects first video data, wherein the first video data comprise first audio data and first image data; the first video data are encoded and the encoded first video data are sent to a video networking server; the video networking server forwards the encoded first video data to the local video networking terminal; the local video networking terminal decodes the encoded first video data to obtain the first video data; performs frequency-domain transformation on the first audio data to obtain frequency-domain data and adjusts the frequency of the frequency-domain data according to a preset frequency threshold; performs time-domain transformation on the frequency-adjusted frequency-domain data to obtain second audio data; and composes second video data from the second audio data and the first image data and plays the second video data. The tone of the sound corresponding to the video data can thus be changed, making the call more interesting.

Description

Data processing method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and a data processing apparatus.
Background
Video networking is an important milestone in network development. It is a higher-level form of the Internet and a real-time network that can achieve whole-network real-time transmission of high-definition video, which the existing Internet cannot, and it pushes numerous Internet applications toward high definition. Large-scale high-definition video services such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, and emergency command can therefore be realized on one platform.
During a call, the video networking terminal provides rich functions for the images in the video, such as adding stickers to the images and beautifying faces, thereby presenting interesting pictures to the opposite-end user. The sound heard by the opposite-end user, however, has only a single, unaltered tone, namely the original voice of the local-end user, which reduces the interest of the call.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a data processing method to improve the interest of a call by changing the tone of a sound.
Correspondingly, the embodiment of the invention also provides a video networking system, which is used for ensuring the realization and the application of the method.
In order to solve the above problems, the present invention discloses a data processing method, which is applied to a video networking system, wherein the video networking system comprises a video networking terminal and a video networking server, and the method comprises the following steps: in the process of carrying out a call between two video networking terminals, an opposite video networking terminal collects first video data, wherein the first video data comprises first audio data and first image data; the first video data are coded, and the coded first video data are sent to the video networking server according to a video networking protocol; the video networking server forwards the coded first video data to the local video networking terminal according to a video networking protocol; the local video network terminal decodes the coded first video data to obtain first video data; performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data; performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data; and adopting the second audio data and the first image data to form second video data, and playing the second video data.
The invention also discloses a video networking system, which comprises a video networking terminal and a video networking server, wherein in the process of carrying out conversation between the two video networking terminals, one video networking terminal is a local video networking terminal, and the other video networking terminal is an opposite video networking terminal, wherein the opposite video networking terminal is used for acquiring first video data, and the first video data comprises first audio data and first image data; coding the first video data, and sending the coded first video data to the video networking server according to a video networking protocol; the video networking server is used for forwarding the coded first video data to the local video networking terminal according to a video networking protocol; the local video network terminal is used for decoding the coded first video data to obtain first video data; performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data; performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data; and adopting the second audio data and the first image data to form second video data, and playing the second video data.
Compared with the prior art, the embodiment of the invention has the following advantages:
in the process of carrying out a call between two video networking terminals, the opposite video networking terminal collects first video data, wherein the first video data comprises first audio data and first image data; coding the first video data, and sending the coded first video data to the video networking server according to a video networking protocol; the video networking server forwards the coded first video data to the local video networking terminal according to a video networking protocol; the local video network terminal decodes the coded first video data to obtain first video data; performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data; performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data; and adopting the second audio data and the first image data to form second video data, and playing the second video data. And then through the tone changing processing of audio data in the video data, change the tone of the corresponding sound of video data, improve the interest of conversation.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flow chart of the steps of a data processing method embodiment of the present invention;
FIG. 6 is a flow chart of steps in another data processing method embodiment of the present invention;
fig. 7 is a block diagram of an embodiment of a video networking system of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Video networking adopts real-time high-definition video switching technology and can integrate dozens of required services (video, voice, pictures, text, communication, data, etc.) on one system platform on a network platform, such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, video on demand (VOD), television mail, personal video recorder (PVR), intranet (self-office) channels, intelligent video playout control, and information distribution, and realizes high-definition-quality video playback through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking improves on traditional Ethernet to face the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking technology adopts Packet Switching while satisfying the demands of streaming. Video networking technology has the flexibility, simplicity, and low cost of packet switching, together with the quality and security guarantees of circuit switching, realizing seamless whole-network switched virtual circuits and a unified data format.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It has end-to-end seamless connection across the whole network, communicates directly with the user terminal, and directly carries IP data packets. User data requires no format conversion anywhere on the network. Video networking is a higher-level form of Ethernet and a real-time exchange platform; it can achieve whole-network large-scale real-time transmission of high-definition video, which the existing Internet cannot, and pushes many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network completely eliminates the network security problem troubling the internet structurally by the modes of independent service permission control each time, complete isolation of equipment and user data and the like, generally does not need antivirus programs and firewalls, avoids the attack of hackers and viruses, and provides a structural carefree security network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, only a single automatic connection is needed. The user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform replaces traditional complex application programming with menu-style configuration tables, so complex applications can be realized with very little code, enabling endless new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet, and stores the packet in the queue of the corresponding packet buffer 206 based on that direction information; if the queue of the packet buffer 206 is nearly full, the packet is discarded; the switching engine module 202 polls all packet buffer queues and forwards if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations; the CPU module 203 is mainly responsible for protocol processing with the access switch and terminals (not shown in the figure), configuring the address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein, the packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), the Source Address (SA), the packet type, and the packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303, otherwise the packet is discarded; the packet (downstream data) coming from the upstream network interface module 302 enters the switching engine module 303; the data packet coming from the CPU module 304 enters the switching engine module 303; the switching engine module 303 performs an operation of looking up the address table 306 on the incoming packet, thereby obtaining the direction information of the packet; if the packet entering the switching engine module 303 is going from the downstream network interface to the upstream network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if the queue of the packet buffer 307 is nearly full, the packet is discarded; if the packet entering the switching engine module 303 is not going from the downlink network interface to the uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the direction information of the packet; if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for all packet buffer queues going from downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
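The forwarding conditions above can be sketched as follows; the class and method names are illustrative, since the embodiment only states the three conditions and that tokens are issued at a programmable interval:

```python
class UplinkQueue:
    """Sketch of a downlink-to-uplink packet buffer queue (conditions 1-3)."""

    def __init__(self):
        self.packets = []   # queued packets (the queue packet counter is len(self.packets))
        self.tokens = 0     # tokens granted by the rate control module

    def grant_token(self):
        # called by the rate control module at each programmable interval
        self.tokens += 1

    def try_forward(self, send_buffer_full):
        # forward only if: 1) the port send buffer is not full,
        # 2) the queue packet counter is greater than zero,
        # 3) a token generated by the rate control module is available
        if send_buffer_full or not self.packets or self.tokens == 0:
            return None
        self.tokens -= 1
        return self.packets.pop(0)
```

Queues that are not downlink-to-uplink would use the same check without the token condition, matching the two-condition case above.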
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein, the data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the Ethernet MAC DA, the Ethernet MAC SA, the Ethernet length or frame type, the video networking destination address DA, the video networking source address SA, the video networking packet type, and the packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
As shown in table 1, the data packet of the access network mainly comprises the following parts: Destination Address (DA), Source Address (SA), reserved bytes, payload (PDU), and CRC:
DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | PDU (payload) | CRC (4 bytes)
TABLE 1
Wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths for different types of datagrams: it is 64 bytes for the various types of protocol packets and 32 + 1024 = 1056 bytes for a unicast packet; of course, the length is not limited to the above 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, or between a node switch and a node switch. However, the metropolitan area network address of a metropolitan area network device is unique; therefore, in order to accurately describe the connection relationship between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of a label in MPLS (Multi-Protocol Label Switching). Assuming that there are two connections between device A and device B, a packet going from device A to device B has 2 available labels, and a packet going from device B to device A likewise has 2 labels. Labels are classified into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is a process under centralized control, that is, both address allocation and label allocation for the metropolitan area network are dominated by the metropolitan area server, with the node switches and node servers executing passively. This differs from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
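The in-label/out-label behavior can be illustrated with a hypothetical forwarding table for device A (the entries are assigned by the metropolitan area server rather than negotiated, per the above; values other than the 0x0000 to 0x0001 example from the text are invented):

```python
# hypothetical in-label -> out-label table for device A, populated by the
# metropolitan area server during the centrally controlled network access
label_table_a = {0x0000: 0x0001, 0x0100: 0x0101}

def swap_label(in_label):
    # replace the incoming label with the outgoing label as the packet leaves
    return label_table_a[in_label]
```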
As shown in table 2, the data packet of the metropolitan area network mainly includes the following parts:
DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | Label (4 bytes) | PDU (payload) | CRC (4 bytes)
TABLE 2
Namely Destination Address (DA), Source Address (SA), Reserved byte (Reserved), tag, payload (pdu), CRC. The format of the tag may be defined by reference to the following: the tag is 32 bits with the upper 16 bits reserved and only the lower 16 bits used, and its position is between the reserved bytes and payload of the packet.
The data processing method provided by the embodiment of the invention is applied to a video networking system, wherein the video networking system comprises video networking terminals and a video networking server, any two video networking terminals can carry out conversation, and the conversation can comprise video conversation. Among the two terminals performing a call, one of the terminals may be referred to as a local terminal and the other terminal performing a call may be referred to as a peer terminal.
Referring to fig. 5, a flowchart illustrating steps of an embodiment of a data processing method of the present invention specifically includes:
step 501, in the process of a call between two video networking terminals, an opposite video networking terminal collects first video data, wherein the first video data comprises first audio data and first image data.
In the embodiment of the invention, in the process of carrying out the call by the two video network terminals, one video network terminal carries out tone modulation processing on the sound of the video uploaded by the other video network terminal, and then the video after tone modulation is played, so that the interest of the call is increased. Therefore, in the process of the conversation between the local video networking terminal and the opposite video networking terminal, the opposite video networking terminal can collect first video data and then send the first video data to the local video networking terminal through the video networking server; the sampling rate of acquiring the first video data may be set as required, the first video data may include first audio data and first image data, and the audio sampling rate of the first audio data may be different from the image sampling rate of the first image data.
Step 502, encoding the first video data, and sending the encoded first video data to the video networking server according to a video networking protocol.
After the opposite-end video networking terminal collects the first video data, it can encode the first video data and then send the encoded first video data to the video networking server according to the video networking protocol; the first audio data and the first image data in the first video data may be encoded separately, for example, the first audio data by an audio encoding module and the first image data by an image encoding module.
Step 503, the video network server forwards the encoded first video data to the local video network terminal according to the video network protocol.
After receiving the coded first video data, the video networking server can adopt a video networking protocol to forward the coded first video data to the video networking terminal of the local terminal, and the video networking terminal of the local terminal performs tone-changing processing and playing on the first video data.
And step 504, the home terminal video network terminal decodes the encoded first video data to obtain first video data.
Step 505, performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting a frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust a tone corresponding to the first audio data.
In the embodiment of the invention, after the local-end video networking terminal acquires the first video data, it can directly perform tone-changing processing on the first audio data corresponding to the first video data, or perform the tone-changing processing only after receiving a tone-changing instruction. In the tone-changing process, the local-end video networking terminal parses the received data according to the video networking protocol to obtain the encoded first video data; the encoded first video data can then be decoded, wherein the encoded first audio data and the encoded first image data may be decoded separately, for example, the encoded first audio data by an audio decoding module and the encoded first image data by an image decoding module, yielding the corresponding first video data. Sounds of different frequencies have different tones, so the tone of the first audio data can be changed by changing the frequencies corresponding to the first audio data; the first audio data may be subjected to frequency-domain transformation to obtain corresponding frequency-domain data, and the tone of the first audio data can then be changed by adjusting the frequency of the frequency-domain data. The frequency corresponding to the frequency-domain data can be adjusted according to a preset frequency threshold, which can be set as required.
Step 506, performing time domain transformation on the frequency-adjusted frequency domain data to obtain second audio data.
In the embodiment of the present invention, after the frequency of the frequency domain data is adjusted, the frequency domain data may be subjected to time domain transformation to obtain time domain data, where the time domain data corresponding to the frequency domain data after the frequency adjustment may be referred to as second audio data.
Step 507, composing second video data from the second audio data and the first image data, and playing the second video data.
Then, the second audio data and the first image data may be combined into second video data, and the second video data may be played. Depending on the scenario, only the second audio data may be played, only the first image data may be played, or, of course, the second audio data and the first image data may be played simultaneously.
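The tone-changing pipeline of steps 505 to 507 can be sketched in a few lines of Python. This is a minimal illustration that assumes the audio has already been decoded to raw PCM samples; the function name `pitch_shift_frame` and its scaling `factor` are illustrative names, not from the patent, and a production implementation would use a phase vocoder or overlap-add processing to avoid artifacts at frame boundaries:

```python
import numpy as np

def pitch_shift_frame(samples: np.ndarray, factor: float) -> np.ndarray:
    """Crude tone change for one audio frame: FFT to the frequency
    domain, move each bin to a frequency scaled by `factor`, then
    inverse FFT back to the time domain (steps 505 and 506).
    factor > 1 sharpens the tone; factor < 1 lowers it."""
    spectrum = np.fft.rfft(samples)
    shifted = np.zeros_like(spectrum)
    for k, value in enumerate(spectrum):
        j = int(round(k * factor))   # bin k now represents frequency k * factor
        if j < len(shifted):
            shifted[j] += value
    # Time domain transformation of the frequency-adjusted data (step 506).
    return np.fft.irfft(shifted, n=len(samples))
```

For example, a one-second 440 Hz sine sampled at 8000 Hz and shifted with factor 1.5 comes back with its spectral energy concentrated at 660 Hz.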
The embodiment of the invention can be applied to a video-call chat scenario between friends. In one example, the opposite-end video networking terminal encodes the collected first video data and transmits it to the video networking server; the video networking server forwards the encoded first video data to the home-end video networking terminal, which decodes and plays the first video data. After the opposite-end user performs a tone-changing operation on the opposite-end video networking terminal, that terminal generates a tone-changing instruction and sends it to the video networking server, which forwards the instruction to the home-end video networking terminal; upon receiving the instruction, the home-end video networking terminal can perform tone-changing processing on the first video data to obtain second video data and play it. In this way, during a video-phone chat, either end can change the tone of the sound in the video, making the chat more entertaining.
In addition, the embodiment of the invention can also be applied to a witness-interview scenario to hide the identity of the user of the opposite-end video networking terminal. In one example, the interview end plays video data collected by the witness end: the witness end collects first video data corresponding to a witness, where the first image data of the first video data does not contain the witness's face, and sends the first video data to the home-end video networking terminal through the video networking server. The home-end video networking terminal can then actively perform tone-changing processing on the first audio data corresponding to the first video data, obtaining and playing the second video data. Of course, if the first image data does contain the witness's face, one way to play the second video data is for the interview end to play only the second audio data in the second video data and not the first image data. Another way is to hide the face before playing: if a face is detected in the first image data, it may be obscured, for example by applying a mosaic or covering it with a sticker, after which the first image data and the second audio data may be played simultaneously. The interview-end user then cannot identify the witness from either the played sound or the played image, so the witness's identity is hidden and the witness's safety is protected.
In summary, during a call between two video networking terminals, the opposite-end video networking terminal collects first video data, where the first video data includes first audio data and first image data; encodes the first video data and sends the encoded first video data to the video networking server according to the video networking protocol; the video networking server forwards the encoded first video data to the home-end video networking terminal according to the video networking protocol; the home-end video networking terminal decodes the encoded first video data to obtain the first video data; performs frequency domain transformation on the first audio data to obtain frequency domain data, and adjusts the frequencies corresponding to the frequency domain data according to a preset frequency threshold to adjust the tone corresponding to the first audio data; performs time domain transformation on the frequency-adjusted frequency domain data to obtain second audio data; and composes second video data from the second audio data and the first image data and plays the second video data. By tone-changing the audio data in the video data, the tone of the sound corresponding to the video data is changed, making the call more engaging.
In another embodiment of the present invention, when the home-end video networking terminal plays the second video data, the sound and the image of the played second video data may be out of sync. To improve the user experience, when the sound and the image of the second video data are out of sync, the playing speeds of the sound and the image of the second video data may be adjusted so that they are played synchronously.
Referring to fig. 6, a flowchart illustrating the steps of another data processing method according to an embodiment of the present invention is shown; the method specifically includes the following steps:
step 601, in the process of making a call between two video networking terminals, an opposite video networking terminal collects first video data, wherein the first video data comprises first audio data and first image data.
In the process of calling between the two video networking terminals, the opposite video networking terminal can call the audio acquisition equipment to acquire first audio data according to the audio sampling rate, call the image acquisition equipment to acquire first image data according to the image sampling rate, and further obtain first video data.
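As a concrete and purely illustrative numeric example of how the two sampling rates pair the streams, assume a 48 kHz audio sampling rate and a 25 fps image sampling rate (neither rate is fixed by the patent):

```python
AUDIO_SAMPLE_RATE = 48_000   # audio sampling rate in Hz (assumed for illustration)
IMAGE_FRAME_RATE = 25        # image sampling rate in frames per second (assumed)

# Audio samples collected while one image frame is captured; keeping the
# two streams paired this way is what lets the first audio data and the
# first image data be combined into first video data.
samples_per_frame = AUDIO_SAMPLE_RATE // IMAGE_FRAME_RATE
print(samples_per_frame)  # prints 1920
```

That is, under these assumed rates each captured image frame travels with 1920 audio samples.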
Step 602, encoding the first video data, and sending the encoded first video data to the video networking server according to a video networking protocol.
Then, a preset coding algorithm may be used to encode the first video data, where the preset coding algorithm includes a set audio coding algorithm and a set image coding algorithm, both of which can be set as required. The opposite-end video networking terminal may call an audio coding module to encode the first audio data with the set audio coding algorithm, and call an image coding module to encode the first image data with the set image coding algorithm. The encoded first video data may then be encapsulated using the video networking protocol, and the encapsulated data sent to the video networking server using the video networking protocol.
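The patent does not disclose the wire format of the video networking protocol, so the encapsulation step can only be sketched with a hypothetical header; the field layout below (payload type, length, timestamp) and the names `encapsulate`/`parse` are invented for illustration:

```python
import struct

PKT_AUDIO, PKT_IMAGE = 0, 1   # hypothetical payload type tags

def encapsulate(payload: bytes, kind: int, timestamp_ms: int) -> bytes:
    """Wrap one encoded audio/image payload in a simple header before
    sending it to the video networking server (a stand-in for the real,
    unspecified video networking protocol encapsulation)."""
    return struct.pack(">BIQ", kind, len(payload), timestamp_ms) + payload

def parse(packet: bytes):
    """Reverse of encapsulate(), as the home-end terminal would run it
    when analyzing received data according to the protocol."""
    kind, length, timestamp_ms = struct.unpack(">BIQ", packet[:13])
    return kind, timestamp_ms, packet[13:13 + length]
```

The timestamp field is carried here because the synchronization logic described later compares audio and image timestamps frame by frame.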
Step 603, the video networking server forwards the encoded first video data to the home-end video networking terminal according to the video networking protocol.
The video network server can adopt a video network protocol to forward the encapsulated data (namely the coded first video data) to the video network terminal of the local terminal, and the video network terminal of the local terminal performs tone-changing processing and playing on the first video data.
Step 604, the home-end video networking terminal decodes the encoded first video data to obtain the first video data.
After receiving the encapsulated data, the home-end video networking terminal can parse it according to the video networking protocol to obtain the encoded first video data, and then decode the encoded first video data with a decoding algorithm corresponding to the preset coding algorithm to obtain the first video data. Specifically, the encoded first audio data may be decoded with a decoding algorithm corresponding to the set audio coding algorithm, and the encoded first image data with a decoding algorithm corresponding to the set image coding algorithm.
Then, the first video data may be tone-changed, which may be implemented by tone-changing the first audio data in the first video data, specifically as follows:
Step 605, transforming the first audio data by using a fast Fourier transform algorithm to obtain frequency domain data corresponding to the first audio data.
In the embodiment of the invention, the tone of the first audio data can be changed by changing the frequencies corresponding to the first audio data. The first audio data may be transformed using a fast Fourier transform algorithm to obtain frequency domain data corresponding to the first audio data, where the frequency domain data describes the frequency characteristics of the first audio data and may include multiple frequency data, each frequency data corresponding to one frequency. The frequencies corresponding to the frequency domain data may then be adjusted according to a preset frequency threshold to adjust the tone corresponding to the first audio data, specifically as in steps 606 to 607.
Step 606, determining the frequency corresponding to each frequency data in the frequency domain data.
Step 607, amplifying the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold, or reducing the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold.
In the embodiment of the present invention, the frequency corresponding to each frequency data in the frequency domain data may be determined and then adjusted according to the preset frequency threshold. Amplifying the frequency of each frequency data by a multiple corresponding to the preset frequency threshold sharpens the tone of the first audio data, with the degree of sharpening related to the size of the preset frequency threshold; reducing the frequency of each frequency data by a multiple corresponding to the preset frequency threshold lowers the tone of the first audio data, with the degree of lowering likewise related to the size of the preset frequency threshold. The specific adjustment can be chosen according to requirements.
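Steps 606 and 607 amount to reading off each bin's frequency and multiplying or dividing it by a fixed factor. A minimal sketch follows, in which the `factor` argument stands in for the multiple corresponding to the preset frequency threshold (the function names are illustrative):

```python
import numpy as np

def bin_frequencies(n_samples: int, sample_rate: int) -> np.ndarray:
    """Step 606: the frequency in Hz represented by each rFFT bin."""
    return np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)

def scale_frequencies(freqs: np.ndarray, factor: float, raise_tone: bool) -> np.ndarray:
    """Step 607: amplify every bin frequency by `factor` to sharpen
    the tone, or reduce it by `factor` to lower the tone."""
    return freqs * factor if raise_tone else freqs / factor
```

For instance, with 8 samples at an 8000 Hz sampling rate the bins sit at 0, 1000, 2000, 3000 and 4000 Hz; scaling by a factor of 2 doubles each bin frequency when raising the tone and halves it when lowering.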
Step 608, transforming the frequency domain data by using an inverse fast Fourier transform algorithm to obtain second audio data corresponding to the frequency domain data.
Then, the frequency domain data can be transformed using an inverse fast Fourier transform algorithm, that is, transformed back into time domain data, so that the second audio data corresponding to the frequency domain data can be obtained.
Step 609, composing second video data from the second audio data and the first image data, and playing the second video data.
In this embodiment of the present invention, the second audio data and the first image data may be combined into the second video data, and then the second video data may be played.
In an optional embodiment of the present invention, the audio and the image of the second video data played by the home-end video networking terminal may be out of sync. To improve the user experience, whether the first image data and the second audio data are played synchronously may be determined while the second video data is playing; if they are not played synchronously, the time interval at which the image decoding module acquires two adjacent frames of images is adjusted so that the first image data is played synchronously with the second audio data. When each audio frame and each image frame are played, whether the second audio data and the first image data are synchronized can be judged from the timestamp of that audio frame and the timestamp of that image frame. If the time corresponding to the audio frame's timestamp is less than the time corresponding to the image frame's timestamp, the playing speed of the first image data is greater than the playing speed of the second audio data; if it is greater, the playing speed of the first image data is less than that of the second audio data; and if the two times are equal, the playing speeds of the first image data and the second audio data are equal.
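The three timestamp comparisons just described can be written as one small decision function; the function and label names are illustrative, since the patent only specifies the comparison logic:

```python
def compare_playback(audio_ts_ms: float, image_ts_ms: float) -> str:
    """Decide which stream is running ahead from the timestamps of the
    audio frame and the image frame currently being played."""
    if audio_ts_ms < image_ts_ms:
        return "image faster"   # image playback is ahead of the audio
    if audio_ts_ms > image_ts_ms:
        return "image slower"   # audio playback is ahead of the image
    return "in sync"            # equal timestamps: speeds are equal
```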
In an optional embodiment of the present invention, when adjusting the time interval at which the image decoding module acquires two adjacent frames of images: if the playing speed of the first image data is less than the playing speed of the second audio data, the time interval at which the image decoding module acquires two adjacent frames of images is shortened; if the playing speed of the first image data is greater than that of the second audio data, the time interval is lengthened.
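The interval adjustment described above can be sketched as follows; the adjustment granularity `step_ms` is an illustrative parameter that the patent does not specify:

```python
def adjust_frame_interval(interval_ms: float, audio_ts_ms: float,
                          image_ts_ms: float, step_ms: float = 1.0) -> float:
    """Nudge the interval at which the image decoding module acquires
    two adjacent frames of images: shrink it when the images lag the
    audio, grow it when the images run ahead."""
    if image_ts_ms < audio_ts_ms:        # images lagging: speed them up
        return max(0.0, interval_ms - step_ms)
    if image_ts_ms > audio_ts_ms:        # images ahead: slow them down
        return interval_ms + step_ms
    return interval_ms                   # already synchronized
```

Repeating this small correction on every frame pair gradually pulls the first image data back into sync with the second audio data instead of causing a visible jump.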
During a call between two video networking terminals, the opposite-end video networking terminal collects first video data, where the first video data includes first audio data and first image data; encodes the first video data and sends the encoded first video data to the video networking server according to the video networking protocol; the video networking server forwards the encoded first video data to the home-end video networking terminal according to the video networking protocol; the home-end video networking terminal decodes the encoded first video data to obtain the first video data; performs frequency domain transformation on the first audio data to obtain frequency domain data, and adjusts the frequencies corresponding to the frequency domain data according to a preset frequency threshold to adjust the tone corresponding to the first audio data; performs time domain transformation on the frequency-adjusted frequency domain data to obtain second audio data; and composes second video data from the second audio data and the first image data and plays the second video data. By tone-changing the audio data in the video data, the tone of the sound corresponding to the video data is changed, making the call more engaging.
Secondly, in the process of playing the second video data, the embodiment of the invention judges whether the first image data and the second audio data are played synchronously; if the first image data and the second audio data are not played synchronously, adjusting the time interval of two adjacent frames of images acquired by the image decoding module so as to adjust the first image data to be played synchronously with the second audio data; thereby improving the user experience.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
The embodiment of the invention also provides a video networking system to ensure the implementation of the method.
Referring to fig. 7, a block diagram of an embodiment of a video networking system 700 according to the present invention is shown, where the video networking system 700 may specifically include: a video network terminal 701 and a video network server 702, wherein, in the process of a call between two video network terminals, one video network terminal is a home terminal video network terminal 7011, and the other video network terminal is an opposite terminal video network terminal 7012, wherein,
the opposite-end video networking terminal 7012 is configured to acquire first video data, where the first video data includes first audio data and first image data; encode the first video data, and send the encoded first video data to the video networking server 702 according to a video networking protocol;
the video networking server 702 is configured to forward the encoded first video data to the local video networking terminal 7011 according to a video networking protocol;
the home terminal video network terminal 7011 is configured to decode the encoded first video data to obtain first video data; performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data; performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data; and adopting the second audio data and the first image data to form second video data, and playing the second video data.
In an optional embodiment of the present invention, the home-end video networking terminal 7011 is configured to transform the first audio data by using a fast Fourier transform algorithm to obtain frequency domain data corresponding to the first audio data; and transform the frequency domain data by using an inverse fast Fourier transform algorithm to obtain second audio data corresponding to the frequency domain data.
In an optional embodiment of the present invention, the frequency domain data includes a plurality of frequency data, and the home-end video networking terminal 7011 is configured to determine the frequency corresponding to each frequency data in the frequency domain data; and amplify the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold, or reduce the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold.
In an optional embodiment of the present invention, the home-end video networking terminal 7011 is further configured to determine whether the first image data and the second audio data are played synchronously; and if they are not, adjust the time interval at which the image decoding module acquires two adjacent frames of images, so that the first image data is played synchronously with the second audio data.
In an optional embodiment of the present invention, the home-end video networking terminal 7011 is configured to shorten the time interval at which the image decoding module acquires two adjacent frames of images if the playing speed of the first image data is less than the playing speed of the second audio data; and to lengthen that time interval if the playing speed of the first image data is greater than the playing speed of the second audio data.
During a call between two video networking terminals, the opposite-end video networking terminal collects first video data, where the first video data includes first audio data and first image data; encodes the first video data and sends the encoded first video data to the video networking server according to the video networking protocol; the video networking server forwards the encoded first video data to the home-end video networking terminal according to the video networking protocol; the home-end video networking terminal decodes the encoded first video data to obtain the first video data; performs frequency domain transformation on the first audio data to obtain frequency domain data, and adjusts the frequencies corresponding to the frequency domain data according to a preset frequency threshold to adjust the tone corresponding to the first audio data; performs time domain transformation on the frequency-adjusted frequency domain data to obtain second audio data; and composes second video data from the second audio data and the first image data and plays the second video data. By tone-changing the audio data in the video data, the tone of the sound corresponding to the video data is changed, making the call more engaging.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The data processing method and the video networking system provided by the invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A data processing method is applied to a video networking system, the video networking system comprises a video networking terminal and a video networking server, and the method comprises the following steps:
in the process of carrying out a call between two video networking terminals, an opposite video networking terminal collects first video data, wherein the first video data comprises first audio data and first image data;
the first video data are coded, and the coded first video data are sent to the video networking server according to a video networking protocol;
the video networking server forwards the coded first video data to the local video networking terminal according to a video networking protocol;
the local video network terminal decodes the coded first video data to obtain first video data;
performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data;
performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data;
and adopting the second audio data and the first image data to form second video data, and playing the second video data.
2. The method of claim 1,
the performing frequency domain transformation on the first audio data to obtain frequency domain data comprises:
transforming the first audio data by adopting a fast Fourier transform algorithm to obtain frequency domain data corresponding to the first audio data;
the time domain transformation of the frequency domain data after the frequency adjustment to obtain second audio data comprises:
and transforming the frequency domain data by adopting an inverse fast Fourier transform algorithm to obtain second audio data corresponding to the frequency domain data.
3. The method of claim 1, wherein the frequency-domain data comprises a plurality of frequency data, and the adjusting the frequency corresponding to the frequency-domain data according to a preset frequency threshold comprises:
determining the frequency corresponding to each frequency data in the frequency domain data;
and amplifying the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold, or reducing the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold.
4. The method of claim 1, further comprising:
judging whether the first image data and the second audio data are played synchronously;
if the first image data and the second audio data are not played synchronously, adjusting the time interval at which the image decoding module acquires two adjacent frames of images, so as to adjust the first image data to be played synchronously with the second audio data.
5. The method as claimed in claim 4, wherein the adjusting the time interval at which the image decoding module acquires two adjacent frames of images if the first image data and the second audio data are not played synchronously comprises:
if the playing speed of the first image data is less than that of the second audio data, shortening the time interval at which the image decoding module acquires two adjacent frames of images;
and if the playing speed of the first image data is greater than that of the second audio data, lengthening the time interval at which the image decoding module acquires the two adjacent frames of images.
6. A video network system is characterized in that the system comprises video network terminals and a video network server, wherein one video network terminal is a local video network terminal and the other video network terminal is an opposite video network terminal in the process of communication between the two video network terminals, wherein,
the opposite-end video networking terminal is used for acquiring first video data, wherein the first video data comprises first audio data and first image data; coding the first video data, and sending the coded first video data to the video networking server according to a video networking protocol;
the video networking server is used for forwarding the coded first video data to the local video networking terminal according to a video networking protocol;
the local video network terminal is used for decoding the coded first video data to obtain first video data; performing frequency domain transformation on the first audio data to obtain frequency domain data, and adjusting the frequency corresponding to the frequency domain data according to a preset frequency threshold value to adjust the tone corresponding to the first audio data; performing time domain transformation on the frequency domain data after frequency adjustment to obtain second audio data; and adopting the second audio data and the first image data to form second video data, and playing the second video data.
7. The system of claim 6,
the local video network terminal is used for transforming the first audio data by adopting a fast Fourier transform algorithm to obtain frequency domain data corresponding to the first audio data; and transforming the frequency domain data by adopting an inverse fast Fourier transform algorithm to obtain second audio data corresponding to the frequency domain data.
8. The system of claim 6, wherein the frequency domain data comprises a plurality of frequency data,
the local video network terminal is used for determining the frequency corresponding to each frequency data in the frequency domain data; and amplifying the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold, or reducing the frequency corresponding to each frequency data by a multiple corresponding to the preset frequency threshold.
9. The system of claim 6,
the local video networking terminal is further used for judging whether the first image data and the second audio data are played synchronously; and if the first image data and the second audio data are not played synchronously, adjusting the time interval at which the video decoding module acquires two adjacent image frames, so as to bring the playing of the first image data into synchronization with the second audio data.
10. The system of claim 9,
the local video networking terminal is used for shortening the time interval at which the video decoding module acquires two adjacent image frames if the playing speed of the first image data is less than that of the second audio data; and increasing the time interval at which the video decoding module acquires two adjacent image frames if the playing speed of the first image data is greater than that of the second audio data.
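The interval adjustment in claims 9 and 10 amounts to a simple feedback rule on the decoder's frame-fetch timer. A sketch in Python, where the playback positions, step size, and clamping bounds are assumed for illustration and do not appear in the claims:

```python
def adjust_frame_interval(video_pos_ms, audio_pos_ms, interval_ms,
                          step_ms=1, min_ms=10, max_ms=100):
    """Nudge the video decoding module's frame-fetch interval toward A/V sync.

    If video lags audio (plays slower), shorten the interval so frames are
    fetched sooner; if video runs ahead, lengthen it. Bounds keep the
    interval within a sane range.
    """
    if video_pos_ms < audio_pos_ms:       # video slower than audio
        interval_ms = max(min_ms, interval_ms - step_ms)
    elif video_pos_ms > audio_pos_ms:     # video faster than audio
        interval_ms = min(max_ms, interval_ms + step_ms)
    return interval_ms                    # unchanged when already in sync
```

Calling this once per decoded frame converges the two streams gradually instead of dropping or duplicating frames outright.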
CN201810712074.6A 2018-06-29 2018-06-29 Data processing method and device Pending CN110661760A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810712074.6A CN110661760A (en) 2018-06-29 2018-06-29 Data processing method and device


Publications (1)

Publication Number Publication Date
CN110661760A true CN110661760A (en) 2020-01-07

Family

ID=69027005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810712074.6A Pending CN110661760A (en) 2018-06-29 2018-06-29 Data processing method and device

Country Status (1)

Country Link
CN (1) CN110661760A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395577A (en) * 2020-09-10 2021-09-14 腾讯科技(深圳)有限公司 Sound changing playing method and device, storage medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010001613A1 (en) * 1997-02-24 2001-05-24 Masahiro Hashimoto Video-data encoder and recording media wherein a video-data encode program is recorded
CN1650618A (en) * 2002-03-01 2005-08-03 汤姆森许可公司 Audio frequency scaling during video trick modes utilizing digital signal processing
CN1719514A (en) * 2004-07-06 2006-01-11 中国科学院自动化研究所 Based on speech analysis and synthetic high-quality real-time change of voice method
CN104618786A (en) * 2014-12-22 2015-05-13 深圳市腾讯计算机系统有限公司 Audio/video synchronization method and device
CN106341563A (en) * 2015-07-06 2017-01-18 北京视联动力国际信息技术有限公司 Terminal communication based echo suppression method and device
CN107958672A (en) * 2017-12-12 2018-04-24 广州酷狗计算机科技有限公司 The method and apparatus for obtaining pitch waveform data



Similar Documents

Publication Publication Date Title
CN108737768B (en) Monitoring method and monitoring device based on monitoring system
CN108965224B (en) Video-on-demand method and device
CN109302576B (en) Conference processing method and device
CN111193788A (en) Audio and video stream load balancing method and device
CN110475090B (en) Conference control method and system
CN110022295B (en) Data transmission method and video networking system
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109788235B (en) Video networking-based conference recording information processing method and system
CN108630215B (en) Echo suppression method and device based on video networking
CN108965930B (en) Video data processing method and device
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN110149305B (en) Video network-based multi-party audio and video playing method and transfer server
CN110769179B (en) Audio and video data stream processing method and system
CN109743284B (en) Video processing method and system based on video network
CN111131743A (en) Video call method and device based on browser, electronic equipment and storage medium
CN109302384B (en) Data processing method and system
CN110769297A (en) Audio and video data processing method and system
CN110611639A (en) Audio data processing method and device for streaming media conference
CN110446058B (en) Video acquisition method, system, device and computer readable storage medium
CN110072154B (en) Video networking-based clustering method and transfer server
CN108965914B (en) Video data processing method and device based on video network
CN111246153A (en) Video conference establishing method and device, electronic equipment and readable storage medium
CN110049069B (en) Data acquisition method and device
CN110661749A (en) Video signal processing method and video networking terminal
CN110661760A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200107