CN112565655A - Video data yellow identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112565655A
CN112565655A (application CN202011359802.3A)
Authority
CN
China
Prior art keywords
video data
user equipment
webrtc
yellow
webrtc server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011359802.3A
Other languages
Chinese (zh)
Inventor
杨昊 (Yang Hao)
刘飞 (Liu Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011359802.3A priority Critical patent/CN112565655A/en
Publication of CN112565655A publication Critical patent/CN112565655A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/14 - Systems for two-way working
    • H04N7/141 - Systems for two-way working between two video terminals, e.g. videophone
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10 - Architectures or entities
    • H04L65/1016 - IP multimedia subsystem [IMS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 - Support for services or applications
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 - Network streaming of media packets
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 - Server components or server architectures
    • H04N21/218 - Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 - Live feed
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/14 - Systems for two-way working
    • H04N7/15 - Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a yellow identification method and apparatus for video data, an electronic device, and a storage medium, and relates to the technical field of multimedia.

Description

Video data yellow identification method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular to a method and an apparatus for yellow identification (that is, detection of pornographic content) of video data, an electronic device, and a storage medium.
Background
Activities such as live news events, sports events, artistic performances, knowledge competitions and conference content can be shared with remote participants in various forms, such as video conferences, live video broadcasts and video calls. If a yellow-related scene appears in a video conference, live broadcast or video call, it not only spoils the experience of viewers or participants but can also cause adverse effects. To create a good viewing environment, yellow identification in video conferences, live video broadcasts, video calls and the like is therefore very necessary.
Disclosure of Invention
In view of the above problems, the present application provides a yellow identification method and apparatus for video data, an electronic device and a storage medium that address these problems.
In a first aspect, an embodiment of the present application provides a yellow identification method for video data, which is applied to a cloud of a yellow identification system for video data, where the cloud includes a first WebRTC server, the yellow identification system further includes a first user equipment and a second user equipment, the first user equipment is provided with a first WebRTC client, the second user equipment is provided with a second WebRTC client, and the first user equipment is connected to the second user equipment and the first WebRTC server, respectively. The method includes: the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client; when the first WebRTC server determines that the video data is yellow-related, it sends a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment; and when the first WebRTC server determines that the video data is not yellow-related, it sends a second instruction to the first user equipment to instruct the first user equipment to send the video data to the second user equipment for playing at the second WebRTC client of the second user equipment.
In a second aspect, an embodiment of the present application provides a yellow identification device for video data, which is applied to a cloud of a yellow identification system for video data, where the cloud includes a first WebRTC server, the yellow identification system further includes a first user equipment and a second user equipment, the first user equipment is provided with a first WebRTC client, the second user equipment is provided with a second WebRTC client, and the first user equipment is connected to the second user equipment and the first WebRTC server, respectively. The device includes: a video data receiving module, configured for the first WebRTC server to receive the video data sent by the first user equipment through the first WebRTC client; a first yellow-identification module, configured for the first WebRTC server to send a first instruction to the first user equipment when determining that the video data is yellow-related, so as to instruct the first user equipment to stop sending the video data to the second user equipment; and a second yellow-identification module, configured for the first WebRTC server to send a second instruction to the first user equipment when determining that the video data is not yellow-related, so as to instruct the first user equipment to send the video data to the second user equipment for playing on the second WebRTC client of the second user equipment.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The application provides a yellow identification method and apparatus for video data, an electronic device and a storage medium. The first WebRTC server receives video data sent by the first user equipment through the first WebRTC client. When the first WebRTC server determines that the video data is yellow-related, it sends a first instruction to the first user equipment, instructing it to stop sending the video data to the second user equipment, which reduces the adverse effects caused by yellow-related video data. When the first WebRTC server determines that the video data is not yellow-related, it sends a second instruction to the first user equipment, instructing it to send the video data to the second user equipment for playing on the second WebRTC client, which ensures that the video data played on the second WebRTC client is not yellow-related and creates a good viewing environment. Because an independent first WebRTC server is dedicated to identifying whether video data is yellow-related, the identification result can be obtained efficiently and accurately.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram illustrating a yellow identification system for video data according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a method for yellow identification of video data according to an embodiment of the present application;
fig. 3 shows an architecture diagram of WebRTC of a first user equipment provided in an embodiment of the present application;
fig. 4 is a flowchart illustrating a yellow-identifying method for video data according to another embodiment of the present application;
fig. 5 is a flowchart illustrating a yellow-identifying method for video data according to another embodiment of the present application;
fig. 6 is a flow chart illustrating a method for yellow identification of video data according to still another embodiment of the present application;
FIG. 7 is a flow chart illustrating the step S410 of the method for identifying yellow in video data shown in FIG. 6 according to the present application;
fig. 8 is a flowchart illustrating a yellow-identifying method for video data according to yet another embodiment of the present application;
fig. 9 is a flowchart illustrating a step S540 of the yellow-identifying method for video data illustrated in fig. 8 according to the present application;
fig. 10 shows a logic block diagram of a yellow identification device for video data according to an embodiment of the present application;
fig. 11 is a block diagram of an electronic device for executing a yellow-identifying method of video data according to an embodiment of the present application;
fig. 12 shows a storage unit for storing or carrying program code that implements a yellow identification method for video data according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Users share activities such as live news events, sports events, artistic performances, knowledge competitions and conference content with remote sites in various forms, such as video conferencing, live video broadcasting and video calls. If a yellow-related scene appears in a video conference, live broadcast or video call, it not only spoils the experience of viewers or participants but can also cause adverse effects. To create a good viewing environment, yellow identification in video conferences, live video broadcasts, video calls and the like is therefore very necessary.
In view of the above technical problems, the inventors have proposed the yellow identification method, apparatus, electronic device and storage medium for video data provided by the present application, in which a first WebRTC server performs yellow identification on the video data of a first user equipment so as to create a good viewing environment. The specific yellow identification method for video data is described in detail in the following embodiments.
For convenience of description, this embodiment first shows a video call system. Fig. 1 shows a schematic diagram of the yellow identification system for video data provided in this embodiment of the present application. Referring to fig. 1, the video call system 100 includes a cloud 110, a first user device 120 and a second user device 130, and the cloud 110 includes a first WebRTC server 111, a second WebRTC server 112 and a third WebRTC server 113. The first user device 120 is connected to the first WebRTC server 111 and the second WebRTC server 112, respectively, and the first user device 120 is connected to the second user device 130 through the third WebRTC server 113.
After the handshake negotiation with the first user equipment 120 succeeds, the first user equipment 120 sends its video data, and the third WebRTC server 113 forwards the video data to the first WebRTC server 111 for yellow identification. When the first WebRTC server 111 determines that the video data is not yellow-related, the first user equipment 120 sends the video data to the second user equipment 130 through the third WebRTC server 113 for playing. When the first WebRTC server 111 determines that the video data is yellow-related, the third WebRTC server 113 suspends forwarding the video data of the first user device 120.
The third WebRTC server 113 determines, according to the video type of the video data, the server path corresponding to that video type and forwards the video data along it. For example, the server paths include TURN (Traversal Using Relays around NAT), SFU (Selective Forwarding Unit) and MCU (Multipoint Control Unit), and the video types include a live type, a video conference type and a video call type. When the video type of the video data is the live type, TURN is used to forward the video data; when the video type is the video conference type, the SFU is used to forward the video data; and when the video type is the video call type, the MCU is used to forward the video data.
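The type-to-path mapping described above can be sketched as follows. This is an illustrative sketch only; the names `VideoType` and `select_path` are not from the patent.

```python
from enum import Enum

class VideoType(Enum):
    LIVE = "live"              # live broadcast
    CONFERENCE = "conference"  # video conference
    CALL = "call"              # video call

def select_path(video_type: VideoType) -> str:
    """Return the server path the third WebRTC server would use to
    forward video data of the given type, per the mapping above."""
    routes = {
        VideoType.LIVE: "TURN",       # live type: relay via TURN
        VideoType.CONFERENCE: "SFU",  # conference type: selective forwarding
        VideoType.CALL: "MCU",        # call type: multipoint control
    }
    return routes[video_type]
```

A lookup table keeps the routing policy in one place, so adding a new video type only requires one new entry.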
Optionally, a communication connection for web real-time communication may be established between the first user device 120 and the first WebRTC server 111, the second WebRTC server 112, and the third WebRTC server 113.
Web Real-Time Communication (WebRTC) is an open-source project launched by Google that aims to provide simple JavaScript interfaces for web applications in browsers and on mobile phones or computers, giving those applications real-time communication (RTC) capability. This means that when developing a web application on a WebRTC-capable browser, a developer can implement complex multimedia RTC functions with only a few JavaScript statements, greatly reducing development difficulty and cost; organizations such as the W3C are drafting the standard WebRTC JavaScript API. Within the overall WebRTC technical architecture, the WebRTC lower layers provide the core audio and video technologies, including capture, encoding and decoding, network transmission, and display rendering.
Alternatively, the first user device 120 may be, but is not limited to, a cell phone, a laptop, a tablet, a desktop, and the like. The second user device 130 may be, but is not limited to, a cell phone, a laptop, a tablet, a desktop, etc.
Based on fig. 1, the present embodiment provides a yellow identification method for video data, which is applied to the cloud 110 of the yellow identification system 100 shown in fig. 1; in this embodiment, the cloud 110 includes the first WebRTC server 111. Fig. 2 shows a schematic flow diagram of the yellow identification method for video data provided in an embodiment of the present application. Referring to fig. 2, the method may specifically include the following steps:
step S110, the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client.
The first user equipment collects video data of the first user during a live video broadcast or a video conference; it can be understood that the video data is the sound or images of the first user during the live broadcast or video conference. The first user equipment may collect video data of a preset duration and send it to the first WebRTC server, for example collecting a segment of the preset duration every 0.1 seconds, so that the live broadcast or video conference is sent to the first WebRTC server as multiple segments of video data for yellow identification. Alternatively, the first user equipment collects the video data and sends it to the first WebRTC server in real time.
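The segmented-sending scheme above can be sketched as a simple grouping of captured frames into fixed-length segments. This is a hypothetical illustration; `segment_stream` is not a name from the patent, and real segmentation would be by duration rather than frame count.

```python
def segment_stream(frames, segment_len):
    """Group a stream of captured frames into fixed-length segments,
    so each segment can be sent to the review server independently."""
    segment = []
    for frame in frames:
        segment.append(frame)
        if len(segment) == segment_len:
            yield segment
            segment = []
    if segment:  # flush a final partial segment
        yield segment
```

Each yielded segment corresponds to one unit of video data submitted for yellow identification.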
The first user equipment is provided with the first WebRTC client, so that the first user equipment sends video data to the first WebRTC server through the first WebRTC client. Or, installing the first application program on the first user equipment, and sending the video data to the first WebRTC server through the first application program by the first user equipment.
In some embodiments, the first user equipment in this embodiment may be an embedded system, fig. 3 illustrates an architecture schematic diagram of WebRTC of the first user equipment provided in this embodiment, please refer to fig. 3, where the architecture of the WebRTC mainly includes: hardware layer, system layer and WebRTC core library. The WebRTC core library comprises a video engine, a real-time transport control protocol (RTCP/SRTCP), a data channel, an audio engine, a real-time transport protocol (RTP/SRTP) and a yellow identification module.
With reference to fig. 3, the first user equipment includes a microphone and a camera, and acquires audio data of the first user during a video call or live broadcast through the microphone and acquires image data of the first user during the video call or live broadcast through the camera; video data in a video call or live broadcast is obtained based on the audio data and the image data.
It should be noted that the microphone and the camera may be selectively turned on according to the application scene, for example, when the application scene of the video call is live webcast, video conference, video phone, etc., it is necessary to simultaneously acquire audio data and image data, and therefore both the microphone and the camera are turned on; when the application scene of the video call is voice call, only the microphone can be turned on.
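The scene-dependent device gating described above can be sketched as follows. The function and scene names are illustrative assumptions, not identifiers from the patent.

```python
def devices_to_enable(scene: str) -> set:
    """Return which capture devices to turn on for a given application
    scene: audio and image scenes need both; voice calls need audio only."""
    if scene in {"live", "conference", "call"}:
        return {"microphone", "camera"}  # audio + image both required
    if scene == "voice_call":
        return {"microphone"}            # audio only
    return set()                          # unknown scene: enable nothing
```

Centralizing this decision keeps the capture pipeline from opening the camera in audio-only scenes.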
Optionally, after the video data is compressed and encoded by the codec in fig. 3, it is sent to the first WebRTC server sequentially through the yellow identification module and the data channel, which saves the traffic the first user equipment uses to transmit the video data.
Step S120, when it is determined that the video data is yellow, the first WebRTC server sends a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment.
Optionally, the first WebRTC server determines whether the video data is yellow-related. A preset yellow-related database is pre-established in the first WebRTC server and includes a plurality of yellow-related data. The first WebRTC server compares the video data with the plurality of yellow-related data in the preset database, and if the video data matches at least one of them, the video data is determined to be yellow-related. Since the video data includes audio data and image data, the video data is determined to be yellow-related when the audio data matches a yellow-related audio in the preset database or the image data matches a yellow-related image in the preset database.
Under normal conditions (that is, when the video data is not yellow-related), the video information of the first user is sent to the second user equipment for playing. When the video data is yellow-related, the first WebRTC server sends a first instruction to the first user equipment to instruct it to stop sending the video data, thereby reducing the adverse effects caused by yellow-related video data.
Step S130, when it is determined that the video data is not yellow-related, the first WebRTC server sends a second instruction to the first user equipment to instruct the first user equipment to send the video data to the second user equipment, so as to play the video data on the second WebRTC client of the second user equipment.
Optionally, the first WebRTC server determines whether the video data is yellow-related. A preset yellow-related database is pre-established in the first WebRTC server and includes a plurality of yellow-related data; if the video data does not match any of the plurality of yellow-related data, it is determined that the video data is not yellow-related. Since the video data includes audio data and image data, the video data is determined not to be yellow-related only when the audio data does not match any yellow-related audio in the preset database and the image data does not match any yellow-related image in that database.
When the first WebRTC server determines that the video data is not yellow-related, it sends a second instruction to the first user equipment, and the first user equipment sends the video data to the second user equipment based on the second instruction. Optionally, when the second user equipment is provided with the second WebRTC client, the second WebRTC client plays the video data; or, when the second user equipment has the second application program installed, the second application program plays the video data.
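The comparison-and-instruction logic of steps S120 and S130 can be sketched as follows. This is a minimal sketch under the assumption that database matching is a simple membership test; the function and return-value names are illustrative, not from the patent, and a real system would use perceptual matching or a classifier rather than exact lookup.

```python
def is_yellow_related(audio, images, audio_db, image_db) -> bool:
    """A video segment is yellow-related if its audio matches any entry in
    the preset audio database OR any of its images matches the image database."""
    return audio in audio_db or any(img in image_db for img in images)

def instruct(audio, images, audio_db, image_db) -> str:
    """Return the instruction the first WebRTC server sends to the first
    user equipment: the first instruction stops sending, the second permits it."""
    if is_yellow_related(audio, images, audio_db, image_db):
        return "FIRST_INSTRUCTION_STOP"
    return "SECOND_INSTRUCTION_SEND"
```

Note the asymmetry: one match in either modality flags the segment, while clearing it requires that neither modality matches.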
In some embodiments, the connection between the first user equipment and the second user equipment may be a P2P direct connection, that is, the two devices are connected directly, for example by near field communication or Bluetooth. In this case, the first user equipment sends the audio and video data directly to the second user equipment.
In other embodiments, the connection between the first user equipment and the second user equipment may also be a non-P2P connection, for example, the first user equipment is connected to the second user equipment through a media server (e.g., the third WebRTC server in fig. 1). In this case, the first user equipment sends the audio and video data to the second user equipment through the media server.
In the yellow identification method for video data provided in this embodiment, the first WebRTC server receives video data sent by the first user device through the first WebRTC client. When it determines that the video data is yellow-related, the first WebRTC server sends a first instruction to the first user device, instructing it to stop sending the video data to the second user device, which reduces the adverse effects caused by yellow-related video data. When it determines that the video data is not yellow-related, the first WebRTC server sends a second instruction to the first user device, instructing it to send the video data to the second user device for playing on the second WebRTC client, which ensures that the video data played on the second WebRTC client is not yellow-related, creates a good viewing environment, and reduces the risk of disseminating yellow-related content. Because an independent first WebRTC server is dedicated to identifying whether the video data is yellow-related, the identification result can be obtained efficiently and accurately.
On the basis of the previous embodiment, this embodiment provides a yellow identification method for video data in which the cloud further includes a second WebRTC server that determines whether the audio and video data need yellow identification. Fig. 4 shows a schematic flow chart of the yellow identification method for video data according to another embodiment of the present application. Referring to fig. 4, the method specifically includes the following steps:
step S210, the second WebRTC server sends a video type request to the first user equipment.
And the second WebRTC server sends a video type request to the first user equipment so as to request the type of the video data collected by the first user equipment.
Optionally, the second WebRTC server is a signaling server.
Step S220, the second WebRTC server receives the video type of the video data, which the first user equipment feeds back based on the video type request.
The first user equipment feeds back the video type of the video data to the second WebRTC server based on the video type request. The video type includes a live type, a video conference type, a video call type and the like, and is determined by the first user and/or the second user; for example, when the first user starts a live video broadcast at the first WebRTC client, the video type is the live type, and when the first user and the second user are in a video conference together, the video type is the video conference type.
Step S230, when determining that the video type is the preset type, the second WebRTC server sends a third instruction to the first user equipment to instruct the first user equipment to establish a connection with the first WebRTC server.
The preset type is a video type for which the yellow identification function needs to be enabled. When the second WebRTC server determines that the video type is the preset type, it determines that the video information collected by the first user equipment needs yellow identification, generates a third instruction and sends it to the first user equipment, and the first user equipment establishes a connection with the first WebRTC server according to the third instruction so as to send video data to the first WebRTC server for yellow identification. When the video type is not the preset type, the video data of the first user equipment does not need yellow identification, which saves the yellow identification resources of the first WebRTC server; in that case the first user equipment sends the video data to the second user equipment directly, or indirectly through the third WebRTC server.
In some embodiments, the preset types may include a live type, a video conference type, and a video phone type, and each type is subjected to yellow identification to prevent yellow-related video information from being transmitted between the first user equipment and the second user equipment.
In other embodiments, when the video type is the live type and the first user equipment is broadcasting live, the live room is open to all viewers, so if yellow-related video data appears in the live room, the viewers can report it, which itself helps reduce the adverse effects of the yellow-related video data. The preset types may therefore be limited to the video conference type and the video call type, which involve only a small number of participants.
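The preset-type gating of step S230 can be sketched as a membership test against a configurable set. The names below are illustrative assumptions; the patent does not name these sets.

```python
def needs_yellow_identification(video_type: str, preset_types: set) -> bool:
    """Decide whether the second WebRTC server should instruct the first
    user equipment to connect to the first (review) WebRTC server."""
    return video_type in preset_types

# Broad embodiment: every scenario is reviewed.
BROAD_PRESET = {"live", "conference", "call"}
# Narrow embodiment: only few-participant scenarios are reviewed,
# relying on viewer reports for live broadcasts.
NARROW_PRESET = {"conference", "call"}
```

Making the preset set a parameter lets the same gate implement either embodiment.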
Step S240, the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client.
Step S250, when it is determined that the video data is yellow, the first WebRTC server sends a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment.
Step S260, when it is determined that the video data is not yellow-related, the first WebRTC server sends a second instruction to the first user equipment to instruct the first user equipment to send the video data to the second user equipment, so as to play the video data on the second WebRTC client of the second user equipment.
For the detailed description of steps S240 to S260, please refer to steps S110 to S130, which are not described herein again.
Optionally, the second WebRTC server is further configured to perform handshake negotiation with the first user equipment before the first user equipment collects the video data. Specifically, the second WebRTC server sends a handshake request to the first user equipment; the first user equipment generates handshake signaling based on the request and sends it to the second WebRTC server for network negotiation, negotiating the working parameters of the first user equipment, for example the encoding manner used for the video data, the yellow identification manner (for example, identification by video data or by images), whether the yellow identification function needs to be enabled, and the like.
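The parameter negotiation above can be sketched as merging the client's requested working parameters over server defaults. Everything here is an assumption for illustration: the patent names neither the parameters nor their defaults, and the "VP8" codec value is a placeholder.

```python
# Assumed server-side defaults; not specified by the patent.
DEFAULT_PARAMS = {
    "codec": "VP8",          # placeholder default encoding
    "review_mode": "image",  # identify by image frames
    "review_enabled": True,
}

def negotiate(client_params: dict) -> dict:
    """Merge the client's requested working parameters over the server
    defaults, mimicking the handshake negotiation with the signaling server."""
    params = dict(DEFAULT_PARAMS)
    params.update(client_params)  # client's explicit choices win
    return params
```

A real negotiation would also validate that the client's choices are ones the server supports.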
In this embodiment, when the video type of the video data is the preset type, the yellow identification function for the video data needs to be started, and the first WebRTC server performs yellow identification on the video data, so that adverse effects caused by yellow-related video data are reduced. When the video type of the video data is not the preset type, the yellow identification function of the video data does not need to be started, and yellow identification resources of the first WebRTC server are saved.
On the basis of the above embodiment, the cloud further includes a third WebRTC server, through which the first user device is connected to the second user device; that is, the connection between the first user device and the second user device is a non-P2P connection. The third WebRTC server is configured to forward the video data of the first user device: for example, it forwards the video data to the second user device for playing, or forwards it to the first WebRTC server for yellow identification. Fig. 5 is a schematic flow chart of a yellow identification method for video data according to another embodiment of the present application. Referring to fig. 5, the method specifically includes the following steps:
step S310, the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client.
For detailed description of step S310, please refer to step S110, which is not described herein again.
Step S320, when it is determined that the video data is not yellow-related, the first WebRTC server sends the second instruction to the first user equipment.
In some embodiments, when the video data is not yellow-related, the first WebRTC server sends a second instruction to the first user equipment, and the first user equipment sends the video data to the third WebRTC server upon receiving it. It will be appreciated that, in this embodiment, the second instruction acts as an explicit go-ahead rather than an interrupt instruction.
In other embodiments, when the video data is not yellow-related, the first WebRTC server does not send any interrupt instruction to the first user equipment, and the first user equipment continues sending the video data to the third WebRTC server precisely because no interrupt instruction has been received.
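The two embodiments above differ only in their default behavior, which can be captured in a few lines. This is an illustrative sketch; the instruction names and the `mode` parameter are assumptions introduced here, not terminology from the patent.

```python
# Illustrative sketch of the two embodiments above; names are assumptions.

def client_forwards(received_instruction, mode):
    """Decide whether the first user equipment forwards video data to the
    third WebRTC server, given what (if anything) the first WebRTC server sent.

    mode="explicit": forward only after the second (go-ahead) instruction.
    mode="implicit": keep forwarding unless an interrupt instruction arrives.
    """
    if mode == "explicit":
        return received_instruction == "second_instruction"
    # implicit mode: the absence of an interrupt means "continue sending"
    return received_instruction != "interrupt"
```

The implicit variant saves one signaling round trip in the common (non-yellow) case, at the cost of the client briefly streaming before a verdict arrives.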
Step S330, the third WebRTC server receives the video data sent by the first user equipment through the first WebRTC client in response to the second instruction, and sends the video data to the second user equipment, so as to be played at the second WebRTC client of the second user equipment.
The first user equipment receives and responds to the second instruction by sending the video data to the third WebRTC server through its first WebRTC client; the third WebRTC server sends the video data to the second user equipment, where it is played on the second WebRTC client or a second application program.
In this embodiment, the third WebRTC server receives the video data sent by the first user equipment and sends it to the second user equipment. Using the third WebRTC server as a separate forwarding server reduces the delay with which the second user equipment receives the video data.
In order to facilitate the identification of whether the video data is yellow, in this embodiment of the application, the first WebRTC server may identify one frame of image in the video data.
In some embodiments, the first user equipment collects the video data and selects one frame from its multiple frames of video images as the video image to be identified. Optionally, the video image to be identified may be a key frame image or any one of the frames. The first user equipment then sends the video image to be identified to the first WebRTC server for yellow identification.
When the first user equipment and the second user equipment adopt the P2P direct connection mode, the first WebRTC server identifies whether the video data is yellow-related; to guarantee its yellow-identification efficiency, the first user equipment extracts the video image to be identified and sends only that image to the first WebRTC server. Alternatively, when the first user equipment and the second user equipment adopt a non-P2P direct connection mode, that is, they are connected through the third WebRTC server, the first user equipment may send the full video data to the second user equipment through the third WebRTC server. In this embodiment, the key frame is acquired on the first user equipment side and sent to the first WebRTC server through the first WebRTC client, which saves the data traffic of the first user equipment.
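The routing decision described above (P2P mode: extract one frame client-side; non-P2P mode: send the whole stream and let the server extract) can be sketched as follows. The function and destination names are illustrative assumptions, and `pick_key_frame` is a hypothetical helper standing in for whatever key-frame extraction the client uses.

```python
# Sketch of the P2P vs. non-P2P routing decision; names are assumptions.

def route_for_review(connection_mode, video_frames, pick_key_frame):
    """In P2P direct mode the first user equipment extracts the video image
    to be identified itself and sends only that frame to the first WebRTC
    server; in non-P2P (relay) mode it sends the whole stream to the third
    WebRTC server, which extracts the frame server-side."""
    if connection_mode == "p2p":
        frame = pick_key_frame(video_frames)
        return ("first_webrtc_server", [frame])   # one frame saves client traffic
    return ("third_webrtc_server", video_frames)  # server side will extract
```

Sending a single frame in P2P mode trades review granularity for client uplink traffic, which matches the traffic-saving rationale stated above.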
In other embodiments, in the non-P2P direct connection mode the third WebRTC server may itself extract the video image to be identified. The first user equipment therefore sends the full video data to the third WebRTC server, and the third WebRTC server extracts the video image to be identified. Fig. 6 shows a schematic flowchart of a yellow-identification method for video data provided in another embodiment of the present application. Referring to fig. 6, the method specifically includes the following steps:
step S410, the third WebRTC server receives the video data sent by the first user equipment through the first WebRTC client, where the video data includes multiple frames of video images.
The first user equipment collects video data containing multi-frame video images and sends the video data to the third WebRTC server through the first WebRTC client.
For public video data, which does not involve confidential content (for example, video data in a live broadcast), the first user equipment may send the video data to the third WebRTC server through the first WebRTC client directly, without encryption, in order to reduce latency.
For non-public video data, which may involve copyrighted, confidential, or private content (for example, a video conference, a video call, or a movie or TV resource that requires payment to play), the video data may be encrypted to ensure its security. Referring to fig. 7, step S410 includes the following steps:
step S411, the third WebRTC server receives SRTP-encrypted video data sent by the first user equipment through the first WebRTC client.
The first user equipment encrypts the video data based on the Secure Real-time Transport Protocol (SRTP) and sends the SRTP-encrypted video data to the third WebRTC server through the first WebRTC client.
Step S412, the third WebRTC server decrypts the encrypted video data based on the SRTP to obtain the video data.
And the third WebRTC server decrypts the encrypted video data based on the same protocol (SRTP) as the first user equipment to obtain the video data.
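The shared-key encrypt/decrypt roundtrip between steps S411 and S412 can be illustrated with a toy stand-in. Note this is emphatically not real SRTP: RFC 3711 specifies AES in counter mode with HMAC-SHA1 authentication and a key-derivation function; the sketch below only shows the structural point that both ends must hold the same key material and that the receiver authenticates before decrypting.

```python
import hashlib
import hmac

# Toy stand-in for the SRTP protect/unprotect roundtrip described above.
# Real SRTP (RFC 3711) uses AES-CTR plus HMAC-SHA1; this sketch only
# illustrates encrypt-then-MAC with a shared key, not the actual protocol.

def _keystream(key, length):
    """Derive a pseudo-random keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(key, payload):
    """Encrypt-then-MAC, as the first user equipment would before sending."""
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, len(payload))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()[:10]  # truncated auth tag
    return ct + tag

def unprotect(key, packet):
    """Verify the auth tag, then decrypt, as the third WebRTC server does."""
    ct, tag = packet[:-10], packet[-10:]
    expected = hmac.new(key, ct, hashlib.sha256).digest()[:10]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

In WebRTC deployments the SRTP keys themselves are established via DTLS-SRTP during session setup, so the server terminating the media leg (here, the third WebRTC server) is the party holding the decryption keys.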
Step S420, the third WebRTC server determines one frame of video image among the multiple frames of video images as a video image to be authenticated, and sends the video image to be authenticated to the first WebRTC server.
In order to reduce the workload of the first WebRTC server for yellow identification, one frame of video image is determined in the multi-frame video images of the video data by the third WebRTC server to serve as a video image to be identified, and the video image to be identified is sent to the first WebRTC server for yellow identification.
In some embodiments, the third WebRTC server determines any one of the plurality of video images as the video image to be authenticated.
In other embodiments, the third WebRTC server determines a key frame video image from the multiple frames of video images of each piece of video data as the video image to be identified. A key frame video image is the frame in which a key action occurs and can represent a complete picture of the video data. For example, the third WebRTC server determines one key frame video image among the multiple frames by using a target clustering algorithm and uses it as the video image to be identified. Alternatively, the third WebRTC server takes the video image corresponding to an I-frame (an intra-coded frame) in the video data as the key frame image and uses it as the video image to be identified.
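The I-frame variant above amounts to scanning the stream's frame metadata for the first intra-coded frame. A minimal sketch, assuming frames arrive as `(frame_type, payload)` pairs (that representation is an assumption for illustration; a real implementation would read the codec's NAL unit or GOP headers):

```python
# Sketch of I-frame-based key frame selection; the (frame_type, payload)
# tuple representation is an assumption made for this illustration.

def pick_frame_to_identify(frames):
    """Return the payload of the first intra-coded (I) frame as the video
    image to be identified; fall back to the first frame if no I-frame
    marker is present in the metadata."""
    for frame_type, payload in frames:
        if frame_type == "I":
            return payload
    return frames[0][1]
```

I-frames are a natural choice here because they are decodable in isolation, so the yellow-identification server never needs the surrounding P/B frames to reconstruct the image.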
Step S430, the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client.
Step S440, when it is determined that the video data is not yellow-related, the first WebRTC server sends the second instruction to the first user equipment.
Step S450, the third WebRTC server receives the video data sent by the first user equipment through the first WebRTC client in response to the second instruction, and sends the video data to the second user equipment, so as to be played at the second WebRTC client of the second user equipment.
For the detailed description of steps S430 to S450, refer to steps S310 to S330, which are not described herein again.
In this embodiment, for fast yellow identification, the third WebRTC server determines one frame among the multiple frames of video images of the video data as the video image to be identified and sends it to the first WebRTC server, which can then quickly determine from that single image whether the video data is yellow-related.
On the basis of the foregoing embodiment, the present embodiment provides a method for identifying yellow color of video data, fig. 8 shows a schematic flow chart of the method for identifying yellow color of video data according to yet another embodiment of the present application, please refer to fig. 8, where the method for identifying yellow color of video data specifically includes the following steps:
step S510, the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client.
Step S520, when it is determined that the video data is yellow, the first WebRTC server sends a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment.
For the detailed description of steps S510 to S520, refer to steps S110 to S120, which are not described herein again.
When the second user equipment is playing the live broadcast of the first user equipment and the played video data turns out to be yellow-related, the third WebRTC server stops forwarding the video data of the first user equipment to the second user equipment. To preserve the experience of the user watching the live broadcast, target video data can instead be forwarded to the second user equipment for playing. The specific steps are as follows:
step S530, the third WebRTC server obtains the play record of the second user equipment.
Optionally, the third WebRTC server obtains the play record of the second user equipment within a preset time period, for example, one week or one month. The play record may include the video data of third users that the second user has watched (for example, anchor A, anchor B, and so on), or the video tags of content the second user has watched, for example comedy, mukbang (eating broadcast), beachcombing, mountain foraging, beauty/makeup, outfit/fashion, and the like.
Step S540, the third WebRTC server determines target video data according to the play record, and sends the target video data to the second user equipment, so as to play the target video data on the second WebRTC client of the second user equipment.
In some embodiments, the yellow-identification system for video data further includes a plurality of third user devices connected to the third WebRTC server. The third WebRTC server determines, according to the play record, which third users' video data the second user has watched, selects one of those third users as the target third user, and treats the corresponding third user device as the target third user device; the target video data of that device is then sent to the second WebRTC client of the second user equipment for playing. For example, if the third WebRTC server determines from the play record that the second user has watched the video data of anchor A, it sends the video data of anchor A, as the target third user, to the second user equipment.
In other embodiments, the yellow-identification system for video data further includes a plurality of third user devices, where the plurality of third user devices are connected to the third WebRTC server, and the third WebRTC server determines that video data that is interested by a user is sent to the second user device, please refer to fig. 9, and step S540 further includes:
step S541, the third WebRTC server determines a video tag according to the play record, and determines a third user device corresponding to the video tag from the plurality of third user devices as a target third user device.
The video tags include, for example, comedy, mukbang, beachcombing, mountain foraging, beauty/makeup, and outfit/fashion.
In one embodiment, the number of plays of each of the multiple video tags is counted, and the third user device corresponding to the most-played video tag is determined as the target third user device.
In another embodiment, in the multiple types of video tags, a third user device corresponding to any one of the multiple types of video tags is determined as a target third user device.
Step S542, the third WebRTC server sends the target video data of the target third user device to the second user equipment, so as to be played at the second WebRTC client of the second user equipment.
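The most-played-tag selection in the first embodiment above is a frequency count over the play record. A minimal sketch, assuming the play record is a list of watched tags and `devices_by_tag` is a hypothetical lookup from tag to third user device:

```python
from collections import Counter

# Sketch of step S541's most-played-tag selection; the play-record list
# and devices_by_tag mapping are assumptions made for this illustration.

def pick_target_device(play_record, devices_by_tag):
    """Choose the video tag with the most plays in the record, then return
    the third user device registered under that tag."""
    tag, _count = Counter(play_record).most_common(1)[0]
    return devices_by_tag[tag]
```

The second embodiment (picking any tag's device) degenerates to an arbitrary lookup in `devices_by_tag`, so only the frequency-based variant needs real logic.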
In still other embodiments, a video tag of the video data of the first user device may be determined from that video data. Among the video tags of the video data of the plurality of third user devices, the third user device whose video tag is the same as that of the first user device is determined as the target third user device, and its video data is determined as the target video data; the third WebRTC server then sends the target video data to the second WebRTC client of the second user equipment for playing.
For example, the video tag of the video data of the first user equipment is beauty/makeup, and the video tags of the video data of the plurality of third user devices include beauty/makeup, beachcombing, and mukbang. The third user device corresponding to beauty/makeup is determined as the target third user device, its video data (namely, the beauty/makeup video) is determined as the target video data, and the third WebRTC server sends the target video data to the second WebRTC client of the second user equipment for playing.
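The tag-matching embodiment above is a simple lookup by equality. A minimal sketch, assuming third devices are exposed as a hypothetical mapping from device id to video tag (that mapping, and the `None` fallback when no tag matches, are assumptions for illustration):

```python
# Sketch of the tag-matching target selection above; the device-id -> tag
# mapping and the None fallback are assumptions for this illustration.

def pick_by_matching_tag(first_device_tag, third_devices):
    """Among the candidate third user devices, return the one whose video
    tag equals the tag of the first user equipment's interrupted stream."""
    for device_id, tag in third_devices.items():
        if tag == first_device_tag:
            return device_id
    return None  # no same-tag substitute available
```

Compared with the most-played-tag variant, this chooses a substitute similar to what the viewer was just watching rather than what they historically watch most.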
In this embodiment, when the second user equipment is playing the live broadcast of the first user equipment and the played video data is yellow-related, the third WebRTC server stops forwarding the video data of the first user equipment to the second user equipment and instead forwards the target video data of the target third user device for playing, which preserves the experience of the user watching the live broadcast.
To implement the foregoing method embodiments, this embodiment provides a yellow-identification apparatus for video data, applied to the cloud of a yellow-identification system for video data. The cloud includes a first WebRTC server; the system further includes a first user device provided with a first WebRTC client and a second user device provided with a second WebRTC client, the first user device being connected to the second user device and to the first WebRTC server respectively. Fig. 10 shows a logic block diagram of the yellow-identification apparatus for video data provided in this embodiment. Referring to fig. 10, the yellow-identification apparatus 200 for video data includes: a video data receiving module 210, a first yellow-identification module 220, and a second yellow-identification module 230.
A video data receiving module 210, configured to receive, by the first WebRTC server, the video data sent by the first user equipment through the first WebRTC client;
a first yellow-identification module 220, configured to send, by the first WebRTC server, a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment when it is determined that the video data is yellow-associated;
the second yellow-identifying module 230 is configured to, when it is determined that the video data is not yellow-associated, send a second instruction to the first user equipment to instruct the first user equipment to send the video data to the second user equipment, so as to play the video data on the second WebRTC client of the second user equipment.
Optionally, the cloud further includes a second WebRTC server, the second WebRTC server is connected to the first user equipment and the first WebRTC server respectively, and the yellow-identifying device 200 for video data further includes: the device comprises a video type request module, a video type receiving module and a connection establishing module.
A video type request module, configured to send a video type request to the first user equipment by the second WebRTC server;
a video type receiving module, configured to receive, by the second WebRTC server, the video type of the video data fed back by the first user equipment based on the video type request;
and the connection establishing module is used for sending a third instruction to the first user equipment when the second WebRTC server determines that the video type is the preset type so as to indicate the first user equipment to establish the connection with the first WebRTC server.
Optionally, the cloud further includes a third WebRTC server, the first user equipment and the second user equipment are connected through the third WebRTC server, and the second yellow-identifying module 230 includes: a second instruction sending module and a second instruction response module.
A second instruction sending module, configured to send, by the first WebRTC server, the second instruction to the first user equipment when it is determined that the video data is not yellow-involved;
a second instruction response module, configured to receive, by the third WebRTC server, the video data sent by the first user equipment through the first WebRTC client in response to the second instruction, and send the video data to the second user equipment, so as to be played at the second WebRTC client of the second user equipment.
Optionally, the video data receiving module 210 includes: the video data receiving submodule and the video image to be identified determining submodule.
A video data receiving sub-module, configured to receive, by the third WebRTC server, the video data sent by the first user equipment through the first WebRTC client, where the video data includes multiple frames of video images;
and the to-be-identified video image determining submodule is used for determining one frame of video image in the multi-frame video image by the third WebRTC server to serve as the to-be-identified video image and sending the to-be-identified video image to the first WebRTC server.
Optionally, the video data receiving sub-module includes: a first video data receiving sub-module and a second video data receiving sub-module.
A first video data receiving sub-module, configured to receive, by the third WebRTC server, SRTP-based encrypted video data sent by the first user equipment through the first WebRTC client;
and the second video data receiving submodule is used for decrypting the encrypted video data by the third WebRTC server based on the SRTP to obtain the video data.
Optionally, the yellow-identifying apparatus 200 for video data further comprises: the device comprises a playing record acquisition module and a target video data playing module.
A playing record obtaining module, configured to obtain, by the third WebRTC server, a playing record of the second user equipment;
and the target video data playing module is used for determining target video data according to the playing record by the third WebRTC server and sending the target video data to the second user equipment so as to play the target video data on the second WebRTC client of the second user equipment.
Optionally, the yellow-identifying system for video data further includes a plurality of third user devices, the plurality of third user devices are connected to the third WebRTC server, and the target video data playing module includes: and the target third user equipment determination submodule and the target video data playing submodule.
A target third user equipment determining submodule, configured to determine, by the third WebRTC server, a video tag according to the play record, and determine, from the plurality of third user equipment, a third user equipment corresponding to the video tag as a target third user equipment;
and the target video data playing sub-module is used for sending the target video data of the target third user equipment to the second user equipment by the third WebRTC server so as to play the target video data at the second WebRTC client of the second user equipment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Fig. 11 shows a block diagram of an electronic device 300 for executing the yellow-identification method for video data according to an embodiment of the present application; please refer to fig. 11. The electronic device 300 may be a smart phone, a tablet computer, an electronic book reader, or another electronic device capable of running applications. The electronic device 300 in the present application may include one or more of the following components: a processor 310, a memory 320, and one or more applications, where the one or more applications may be stored in the memory 320 and configured to be executed by the one or more processors 310, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
Processor 310 may include one or more processing cores, among other things. The processor 310 connects various parts throughout the electronic device 300 using various interfaces and lines, and performs various functions of the electronic device 300 and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 320 and calling data stored in the memory 320. Alternatively, the processor 310 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 310 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing the content to be displayed; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 310, but may be implemented by a communication chip.
The memory 320 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 320 may be used to store instructions, programs, code sets, or instruction sets. The memory 320 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may store data created by the electronic device 300 in use, such as a phone book, audio and video data, and chat log data.
Fig. 12 shows a block diagram of a computer-readable storage medium provided by an embodiment of the present application, a storage unit storing or carrying program code for implementing the yellow-identification method for video data; please refer to fig. 12. The computer-readable medium 400 stores program code that can be called by a processor to execute the methods described in the above method embodiments.
The computer-readable storage medium 400 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 400 includes a non-volatile computer-readable storage medium. The computer readable storage medium 400 has storage space for program code 410 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. Program code 410 may be compressed, for example, in a suitable form.
To sum up, the present application provides a yellow-identification method and apparatus for video data, an electronic device, and a storage medium. The first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client. When the first WebRTC server determines that the video data is yellow-related, it sends a first instruction to the first user equipment to instruct it to stop sending the video data to the second user equipment, which reduces the adverse effects caused by yellow-related video data. When the first WebRTC server determines that the video data is not yellow-related, it sends a second instruction to the first user equipment to instruct it to send the video data to the second user equipment for playing on the second WebRTC client, which ensures that the video data played on the second WebRTC client is not yellow-related and creates a good video viewing environment. Identifying whether the video data is yellow-related through a separate first WebRTC server allows the yellow-identification result to be obtained efficiently and accurately.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. The method for identifying the yellow of the video data is applied to a cloud of a yellow identification system of the video data, the cloud comprises a first WebRTC server, the yellow identification system of the video data further comprises a first user device and a second user device, the first user device is provided with a first WebRTC client, the second user device is provided with a second WebRTC client, and the first user device is respectively connected with the second user device and the first WebRTC server, and the method comprises the following steps:
the first WebRTC server receives the video data sent by the first user equipment through the first WebRTC client;
when the first WebRTC server determines that the video data is yellow-related, sending a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment;
when the first WebRTC server determines that the video data is not yellow-involved, a second instruction is sent to the first user equipment to instruct the first user equipment to send the video data to the second user equipment so as to be played at a second WebRTC client of the second user equipment.
2. The method of claim 1, wherein the cloud further includes a second WebRTC server, the second WebRTC server is connected to the first user device and the first WebRTC server, respectively, and before the first WebRTC server receives the video data sent by the first user device through the first WebRTC client, the method further includes:
the second WebRTC server sends a video type request to the first user equipment;
the second WebRTC receives the video type of the video data fed back by the first user equipment based on the video type request;
and when the video type is determined to be a preset type, the second WebRTC server sends a third instruction to the first user equipment to indicate the first user equipment to establish connection with the first WebRTC server.
3. The method of claim 1, wherein the cloud further comprises a third WebRTC server, the first user device and the second user device are connected through the third WebRTC server, and the first WebRTC server determines that when the video data is not yellow-involved, sending a second instruction to the first user device to instruct the first user device to send the video data to the second user device for playing on the second WebRTC client of the second user device comprises:
when the first WebRTC server determines that the video data is not yellow-related, the first WebRTC server sends the second instruction to the first user equipment;
and the third WebRTC server receives the video data sent by the first user equipment through the first WebRTC client in response to the second instruction, and sends the video data to the second user equipment so as to be played at the second WebRTC client of the second user equipment.
4. The method of claim 3, wherein the receiving, by the first WebRTC server, the video data sent by the first user device via the first WebRTC client comprises:
the third WebRTC server receives the video data sent by the first user equipment through the first WebRTC client, wherein the video data comprises a plurality of frames of video images;
and the third WebRTC server determines one frame of video image in the multiple frames of video images as a video image to be identified and sends the video image to be identified to the first WebRTC server.
5. The method of claim 4, wherein the third WebRTC server receiving the video data sent by the first user device via the first WebRTC client comprises:
the third WebRTC server receives video data which is sent by the first user equipment through the first WebRTC client and encrypted based on SRTP;
and the third WebRTC server decrypts the encrypted video data based on the SRTP to obtain the video data.
6. The method of claim 3, wherein when the first WebRTC server determines that the video data is yellow, sending a first instruction to the first user device to instruct the first user device to stop sending the video data to the second user device, further comprising:
the third WebRTC server acquires the play record of the second user equipment;
and the third WebRTC server determines target video data according to the playing record and sends the target video data to the second user equipment so as to play the target video data on the second WebRTC client of the second user equipment.
7. The method of claim 6, wherein the yellow-identification system of the video data further comprises a plurality of third user devices connected to the third WebRTC server, and wherein the third WebRTC server determines target video data according to the playing record and sends the target video data to the second user device for playing on the second WebRTC client of the second user device, including:
the third WebRTC server determines a video tag according to the playing record, and determines a third user device corresponding to the video tag from the plurality of third user devices as a target third user device;
and the third WebRTC server sends the target video data of the target third user equipment to the second user equipment so as to be played at the second WebRTC client of the second user equipment.
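Claims 6 and 7 describe a fallback: after a stream is cut off, the third WebRTC server derives a video tag from the second device's play record and switches it to a third user device publishing under that tag. A small sketch of that selection (the record/source data shapes and the most-frequent-tag heuristic are assumptions, not specified by the claims):

```python
from collections import Counter

def pick_target_source(play_record, sources_by_tag):
    """Derive the dominant video tag from the second device's play
    record, then pick the third user device publishing that tag.
    Returns None when there is no record to go on."""
    if not play_record:
        return None
    tag = Counter(entry["tag"] for entry in play_record).most_common(1)[0][0]
    return sources_by_tag.get(tag)

record = [{"tag": "music"}, {"tag": "music"}, {"tag": "sports"}]
sources = {"music": "device-A", "sports": "device-B"}
target = pick_target_source(record, sources)  # "device-A"
```

The third server would then relay `target`'s stream to the second user equipment in place of the stopped one.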
8. A yellow-identification apparatus for video data, applied to a cloud of a video data yellow-identification system, wherein the cloud comprises a first WebRTC server, the video data yellow-identification system further comprises a first user equipment and a second user equipment, the first user equipment is provided with a first WebRTC client, the second user equipment is provided with a second WebRTC client, and the first user equipment and the second user equipment are each connected to the first WebRTC server, the apparatus comprising:
a video data receiving module, configured to receive, by the first WebRTC server, the video data sent by the first user equipment through the first WebRTC client;
a first yellow-identification module, configured to send, when the first WebRTC server determines that the video data is yellow-related, a first instruction to the first user equipment to instruct the first user equipment to stop sending the video data to the second user equipment;
and a second yellow-identification module, configured to send, when the first WebRTC server determines that the video data is not yellow-related, a second instruction to the first user equipment to instruct the first user equipment to send the video data to the second user equipment for playback on the second WebRTC client of the second user equipment.
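The two modules of the apparatus claim implement one branch each of the same decision: a positive yellow-identification result yields the first ("stop") instruction, a negative one yields the second ("forward") instruction. A compact sketch, with `is_yellow` standing in for the classifier the patent leaves unspecified:

```python
def moderate(frame, is_yellow):
    """Decision taken by the first WebRTC server on an identified
    frame. `is_yellow` is a placeholder predicate; the patent does
    not specify the detection model."""
    if is_yellow(frame):
        # First yellow-identification module: cut the stream off.
        return {"instruction": "first", "action": "stop_sending"}
    # Second yellow-identification module: let the stream through.
    return {"instruction": "second", "action": "send_to_second_device"}

verdict = moderate(b"\x00\x01frame-bytes", lambda f: False)
```

Either instruction goes to the *first* user equipment; the second user equipment only ever receives video that has passed the check.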
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
CN202011359802.3A 2020-11-27 2020-11-27 Video data yellow identification method and device, electronic equipment and storage medium Pending CN112565655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011359802.3A CN112565655A (en) 2020-11-27 2020-11-27 Video data yellow identification method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112565655A (en) 2021-03-26

Family

ID=75046090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011359802.3A Pending CN112565655A (en) 2020-11-27 2020-11-27 Video data yellow identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112565655A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113163153A (en) * 2021-04-06 2021-07-23 游密科技(深圳)有限公司 Method, device, medium and electronic equipment for processing violation information in video conference

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1643701A1 (en) * 2004-09-30 2006-04-05 Microsoft Corporation Enforcing rights management through edge email servers
CN106454492A (en) * 2016-10-12 2017-02-22 武汉斗鱼网络科技有限公司 Live pornographic content audit system and method based on delayed transmission
CN107809368A (en) * 2016-09-09 2018-03-16 腾讯科技(深圳)有限公司 Information filtering method and device
CN108040262A (en) * 2018-01-25 2018-05-15 湖南机友科技有限公司 Live audio and video are reflected yellow method and device in real time
CN108881938A (en) * 2018-08-02 2018-11-23 佛山龙眼传媒科技有限公司 Live video intelligently cuts broadcasting method and device
CN108966234A (en) * 2018-05-31 2018-12-07 北京五八信息技术有限公司 The treating method and apparatus of fallacious message
CN109660869A (en) * 2017-10-10 2019-04-19 武汉斗鱼网络科技有限公司 Barrage message screening method, storage medium, equipment and the system of multiterminal cooperation
CN109726312A (en) * 2018-12-25 2019-05-07 广州虎牙信息科技有限公司 A kind of regular expression detection method, device, equipment and storage medium



Similar Documents

Publication Publication Date Title
US10764623B2 (en) Method and system for media adaption
US10187668B2 (en) Method, system and server for live streaming audio-video file
US11336941B2 (en) Apparatus and method for presentation of holographic content
US9300705B2 (en) Methods and systems for interfacing heterogeneous endpoints and web-based media sources in a video conference
US8255552B2 (en) Interactive video collaboration framework
CN111935443B (en) Method and device for sharing instant messaging tool in real-time live broadcast of video conference
CN111836074B (en) Live wheat-connecting method and device, electronic equipment and storage medium
US20050007965A1 (en) Conferencing system
US11051050B2 (en) Live streaming with live video production and commentary
CN111880865A (en) Multimedia data pushing method and device, electronic equipment and storage medium
CN112565802A (en) Live broadcast interaction method, system, server and storage medium
KR20140103156A (en) System, apparatus and method for utilizing a multimedia service
WO2015035934A1 (en) Methods and systems for facilitating video preview sessions
CN112565655A (en) Video data yellow identification method and device, electronic equipment and storage medium
US20220321945A1 (en) Server-side digital content insertion in audiovisual streams broadcasted through an interactive live streaming network
CN113141352B (en) Multimedia data transmission method and device, computer equipment and storage medium
CN112073727B (en) Transcoding method and device, electronic equipment and storage medium
US10904590B2 (en) Method and system for real time switching of multimedia content
CN116264619A (en) Resource processing method, device, server, terminal, system and storage medium
CN113747181A (en) Network live broadcast method, live broadcast system and electronic equipment based on remote desktop
KR20090040107A (en) Method for real-time personal broadcasting
US20220201372A1 (en) Live video streaming architecture with real-time frame and subframe level live watermarking
WO2024114489A1 (en) Playing method and apparatus based on data stream, and device, medium and program product
KR20090040106A (en) Method for real-time personal broadcasting
Dewi et al. Utilization of the Agora video broadcasting library to support remote live streaming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination