CN114786038A - Vulgar live broadcast behavior monitoring method based on deep learning - Google Patents

Vulgar live broadcast behavior monitoring method based on deep learning

Info

Publication number
CN114786038A
CN114786038A (application CN202210318524.XA)
Authority
CN
China
Prior art keywords
live broadcast
live
video
behavior
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210318524.XA
Other languages
Chinese (zh)
Inventor
余丹
于艺春
兰雨晴
王丹星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd
Priority to CN202210318524.XA
Publication of CN114786038A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/23103 Content storage operation using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H04N21/23109 Content storage operation by placing content in organized collections, e.g. EPG data repository
    • H04N21/233 Processing of audio elementary streams
    • H04N21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/2541 Rights Management
    • H04N21/25866 Management of end-user data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a deep-learning-based method for monitoring vulgar live broadcast behavior. Video picture data and video sound data contained in the historical live broadcast video data of different live broadcast platforms are analyzed and processed to obtain a vulgar live broadcast behavior database. Live broadcast picture data and live broadcast sound data are then extracted from real-time live broadcast video data and compared against the vulgar live broadcast behavior database to obtain evidence of vulgar live broadcast behavior on a target live broadcast platform. That evidence is uploaded to a live broadcast management platform for authentication, and the live broadcast state and live broadcast permission of the target live broadcast platform are changed according to the authentication result. By constructing a vulgar live broadcast behavior database, vulgar behaviors that may occur during live broadcasting are collected and catalogued, providing a reliable and comprehensive judgment reference for subsequent monitoring of target live broadcast platforms. This effectively prevents vulgar live broadcast behavior from escaping monitoring and improves the accuracy and timeliness of monitoring.

Description

Vulgar live broadcast behavior monitoring method based on deep learning
Technical Field
The invention relates to the technical field of online video monitoring, and in particular to a vulgar live broadcast behavior monitoring method based on deep learning.
Background
Live broadcasting has become an important way for people to communicate over the Internet. With the development of the live broadcast industry, broadcasters may engage in vulgar behavior during live broadcasts in order to attract traffic. In the prior art, such vulgar behavior is handled through manual supervision, such as audience reports or patrols by supervisory staff of the live broadcast platform. However, the number of live broadcast rooms active at the same time is huge, and manual supervision alone cannot provide comprehensive monitoring coverage of all of them. This not only consumes a great deal of manpower and time for whole-process monitoring of live broadcast rooms, but also cannot prevent vulgar live broadcast behavior from escaping monitoring, reducing the accuracy and timeliness of vulgar live broadcast behavior monitoring.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a deep-learning-based vulgar live broadcast behavior monitoring method. The method analyzes and processes the video picture data and video sound data contained in the historical live broadcast video data of different live broadcast platforms to obtain a vulgar live broadcast behavior database; extracts live broadcast picture data and live broadcast sound data from real-time live broadcast video data and compares them with the vulgar live broadcast behavior database to obtain vulgar live broadcast behavior evidence of a target live broadcast platform; and uploads that evidence to a live broadcast management platform for authentication, changing the live broadcast state and live broadcast permission of the target live broadcast platform according to the authentication result. By constructing a vulgar live broadcast behavior database, vulgar behaviors that may occur during live broadcasting are collected and catalogued, providing a reliable and comprehensive judgment reference for subsequent monitoring of target live broadcast platforms, effectively preventing vulgar live broadcast behavior from escaping monitoring and improving the accuracy and timeliness of vulgar live broadcast behavior monitoring.
The invention provides a deep-learning-based vulgar live broadcast behavior monitoring method, which comprises the following steps:
Step S1, acquiring historical live broadcast video data of different live broadcast platforms, and performing picture and sound separation processing on the historical live broadcast video data to obtain corresponding video picture data and video sound data; storing the video picture data and the video sound data into a blockchain;
Step S2, analyzing and processing the video picture data and the video sound data respectively to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in the historical live broadcast process, thereby forming a historical live broadcast information set; generating a vulgar live broadcast behavior database according to the historical live broadcast information set;
Step S3, acquiring real-time live broadcast video data of a target live broadcast platform, and extracting live broadcast picture data and live broadcast sound data from the real-time live broadcast video data; comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain vulgar live broadcast behavior evidence of the target live broadcast platform;
Step S4, uploading the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication, and changing the live broadcast state and the live broadcast permission of the target live broadcast platform according to the authentication result.
Further, in step S1, acquiring historical live broadcast video data of different live broadcast platforms, and performing picture and sound separation processing on the historical live broadcast video data to obtain corresponding video picture data and video sound data specifically includes:
sending a historical live broadcast video data acquisition request to a live broadcast management platform, and extracting live broadcast port information corresponding to the historical live broadcast video data to be acquired from the historical live broadcast video data acquisition request by the live broadcast management platform;
selecting matched historical live video data from a historical live video screen recording database according to the live port information;
and then, carrying out picture and sound separation processing on the selected historical live video data to obtain corresponding video picture data and video sound data.
Further, in step S1, storing the video picture data and the video sound data into a blockchain specifically includes:
and after the video picture data and the video sound data are bound with the live broadcast port information, respectively storing the video picture data and the video sound data in a picture data storage interval and a sound data storage interval of a blockchain.
Further, in step S2, analyzing and processing the video picture data and the video sound data, respectively, to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in a historical live broadcast process, so as to form a historical live broadcast information set specifically including:
carrying out identification processing on the video picture data to obtain facial feature information, limb action information and body exposure area information of live broadcast personnel in a historical live broadcast process;
identifying and processing the video sound data to obtain voice semantic information of live broadcast personnel in a historical live broadcast process;
and combining the facial feature information, the limb action information, the body exposure area information and the voice semantic information together to form a historical live broadcast information set.
Further, in step S2, generating a vulgar live broadcast behavior database according to the historical live broadcast information set specifically includes:
extracting illegal mouth action behaviors of live broadcast personnel from the facial feature information;
extracting illegal limb action behaviors of live broadcast personnel from the limb action information;
extracting and obtaining illegal body exposure behaviors of live broadcast personnel from the body exposure area information;
extracting illegal speaking behaviors of live broadcast personnel from the voice semantic information;
and combining the illegal mouth action behavior, the illegal limb action behavior, the illegal body exposure behavior and the illegal speaking behavior together to form a vulgar live broadcast behavior database.
Further, in step S3, acquiring real-time live broadcast video data of a target live broadcast platform and extracting live broadcast picture data and live broadcast sound data from the real-time live broadcast video data specifically includes:
performing real-time screen recording monitoring on a target live broadcast platform to obtain real-time live broadcast video data; and carrying out picture and sound separation processing on the real-time live video data to obtain live picture data and live sound data.
Further, in step S3, comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain the vulgar live broadcast behavior evidence of the target live broadcast platform specifically includes:
extracting body action behaviors and speaking semantic contents of live personnel in a target live broadcast platform from the live broadcast picture data and the live broadcast sound data;
comparing the body action behavior and the speaking semantic content with the vulgar live broadcast behavior database; when the body action behavior matches an illegal mouth action behavior, an illegal limb action behavior or an illegal body exposure behavior, or the speaking semantic content matches an illegal speaking behavior, taking a screenshot of the body action behavior or recording the speaking semantic content, thereby obtaining vulgar live broadcast behavior screenshot evidence or vulgar live broadcast behavior recording evidence.
Further, in step S4, uploading the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication and changing the live broadcast state and the live broadcast permission of the target live broadcast platform according to the authentication result specifically includes:
uploading the vulgar live broadcast behavior screenshot evidence or the vulgar live broadcast behavior recording evidence to the live broadcast management platform for authentication; if the authentication is passed, immediately stopping the current live broadcast of the target live broadcast platform and marking the live broadcast personnel of the target live broadcast platform as prohibited from obtaining live broadcast permission.
Compared with the prior art, the deep-learning-based vulgar live broadcast behavior monitoring method of the invention analyzes and processes the video picture data and video sound data contained in the historical live broadcast video data of different live broadcast platforms to obtain a vulgar live broadcast behavior database; extracts live broadcast picture data and live broadcast sound data from real-time live broadcast video data and compares them with the vulgar live broadcast behavior database to obtain vulgar live broadcast behavior evidence of a target live broadcast platform; and uploads that evidence to a live broadcast management platform for authentication, changing the live broadcast state and live broadcast permission of the target live broadcast platform according to the authentication result. By constructing a vulgar live broadcast behavior database, vulgar behaviors that may occur during live broadcasting are collected and catalogued, providing a reliable and comprehensive judgment reference for subsequent monitoring of target live broadcast platforms, effectively preventing vulgar live broadcast behavior from escaping monitoring and improving the accuracy and timeliness of vulgar live broadcast behavior monitoring.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of the deep-learning-based vulgar live broadcast behavior monitoring method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, a schematic flow chart of the deep-learning-based vulgar live broadcast behavior monitoring method according to an embodiment of the present invention is shown. The method comprises the following steps:
Step S1, acquiring historical live broadcast video data of different live broadcast platforms, and performing picture and sound separation processing on the historical live broadcast video data to obtain corresponding video picture data and video sound data; storing the video picture data and the video sound data into a blockchain;
Step S2, analyzing and processing the video picture data and the video sound data respectively to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in the historical live broadcast process, thereby forming a historical live broadcast information set; generating a vulgar live broadcast behavior database according to the historical live broadcast information set;
Step S3, acquiring real-time live broadcast video data of a target live broadcast platform, and extracting live broadcast picture data and live broadcast sound data from the real-time live broadcast video data; comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain vulgar live broadcast behavior evidence of the target live broadcast platform;
Step S4, uploading the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication, and changing the live broadcast state and the live broadcast permission of the target live broadcast platform according to the authentication result.
The beneficial effects of the above technical scheme are: the deep-learning-based vulgar live broadcast behavior monitoring method analyzes and processes the video picture data and video sound data contained in the historical live broadcast video data of different live broadcast platforms to obtain a vulgar live broadcast behavior database; extracts live broadcast picture data and live broadcast sound data from real-time live broadcast video data and compares them with the vulgar live broadcast behavior database to obtain vulgar live broadcast behavior evidence of a target live broadcast platform; and uploads that evidence to a live broadcast management platform for authentication, changing the live broadcast state and live broadcast permission of the target live broadcast platform according to the authentication result. By constructing a vulgar live broadcast behavior database, vulgar behaviors that may occur during live broadcasting are collected and catalogued, providing a reliable and comprehensive judgment reference for subsequent monitoring of target live broadcast platforms, effectively preventing vulgar live broadcast behavior from escaping monitoring and improving the accuracy and timeliness of vulgar live broadcast behavior monitoring.
Preferably, in step S1, obtaining historical live broadcast video data of different live broadcast platforms and performing picture and sound separation processing on the historical live broadcast video data to obtain corresponding video picture data and video sound data specifically includes:
sending a historical live broadcast video data acquisition request to a live broadcast management platform, and extracting live broadcast port information corresponding to the historical live broadcast video data to be acquired from the historical live broadcast video data acquisition request by the live broadcast management platform;
selecting matched historical live broadcast video data from a historical live broadcast video screen recording database according to the live broadcast port information;
and then, carrying out picture and sound separation processing on the selected historical live video data to obtain corresponding video picture data and video sound data.
The beneficial effects of the above technical scheme are: the live broadcast management platform serves as the comprehensive management center of the different live broadcast rooms and records the live broadcast process of every subordinate live broadcast room, thereby obtaining the corresponding historical live broadcast video data. Each item of historical live broadcast video data corresponds to the live broadcast port information of its live broadcast room, namely the room number information. Picture and sound separation processing is then carried out on each item of historical live broadcast video data, so that the video picture part and the video sound part can be processed and stored in separate regions.
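As an illustration, the retrieval half of this step might be sketched in Python as follows. The request and record structures and the helper name are hypothetical, not from the patent, and the actual picture/sound separation (typically done with a tool such as ffmpeg) is not shown:

```python
# Hypothetical sketch of the retrieval part of step S1: the management
# platform extracts the live port (room number) from an acquisition
# request and selects matching recordings from its screen-recording
# database. All record fields here are illustrative.

def select_historical_videos(request, screen_recording_db):
    """Return recordings whose live port matches the request."""
    port = request["live_port"]  # room number information
    return [rec for rec in screen_recording_db if rec["live_port"] == port]

# Toy screen-recording database keyed by room number.
db = [
    {"live_port": "room-101", "video": "r101_a.mp4"},
    {"live_port": "room-202", "video": "r202_a.mp4"},
    {"live_port": "room-101", "video": "r101_b.mp4"},
]

matched = select_historical_videos({"live_port": "room-101"}, db)
```

Keying retrieval on the room number keeps the later evidence traceable to a specific live broadcast room.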
Preferably, in step S1, storing the video picture data and the video sound data into the blockchain specifically includes:
and after the live broadcast port information is bound to the video picture data and the video sound data, respectively storing the video picture data and the video sound data in a picture data storage interval and a sound data storage interval of a blockchain.
The beneficial effects of the above technical scheme are: storing the video picture data and the video sound data respectively in the picture data storage interval and the sound data storage interval of the blockchain prevents the two kinds of data from becoming mixed up during storage.
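The partitioned storage described above can be sketched as follows. The greatly simplified chain, the block layout, and the field names are assumptions for illustration only, not the patent's actual blockchain design:

```python
# Illustrative sketch of the storage step in S1: picture data and sound
# data are bound to the live port information and appended to separate
# storage intervals of a toy block chain (each block hashes its payload
# together with the previous block's hash).
import hashlib
import json

class SimpleChain:
    def __init__(self):
        self.picture_interval = []  # blocks holding video picture data
        self.sound_interval = []    # blocks holding video sound data

    def _block(self, payload, prev_hash):
        body = json.dumps(payload, sort_keys=True) + prev_hash
        return {"payload": payload,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def store(self, live_port, picture_data, sound_data):
        # Bind the port info to each data item before storing it.
        for interval, data in ((self.picture_interval, picture_data),
                               (self.sound_interval, sound_data)):
            prev = interval[-1]["hash"] if interval else "0" * 64
            interval.append(self._block({"live_port": live_port,
                                         "data": data}, prev))

chain = SimpleChain()
chain.store("room-101", "frames_0.bin", "audio_0.bin")
```

Keeping the two intervals separate mirrors the patent's point: picture and sound data never share a storage region, so they cannot be confused later.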
Preferably, in step S2, analyzing and processing the video picture data and the video sound data respectively to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in the historical live broadcast process, so as to form a historical live broadcast information set, specifically includes:
carrying out identification processing on the video picture data to obtain facial feature information, limb action information and body exposure area information of live broadcast personnel in a historical live broadcast process;
identifying and processing the video sound data to obtain voice semantic information of live broadcast personnel in the historical live broadcast process;
and combining the facial feature information, the limb action information, the body exposure area information and the voice semantic information together to form a historical live broadcast information set.
The beneficial effects of the above technical scheme are: the video picture data and the video sound data are recognized separately to obtain the facial feature information, limb action information, body exposure area information and voice semantic information of the live broadcast personnel, so that the behavior and language of the live broadcast personnel during live broadcasting can be identified from multiple aspects.
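The merging of the four recognition outputs into one historical record might be sketched like this; the recognizer callables stand in for deep-learning models the patent does not specify, and the field names are assumptions:

```python
# Hedged sketch of step S2's information set: the four recognition
# outputs (facial features, limb actions, body exposure area, speech
# semantics) are merged per live port into one historical record.

def build_history_record(live_port, picture_data, sound_data,
                         face_model, limb_model, exposure_model, asr_model):
    """Combine per-modality recognition results into one record."""
    return {
        "live_port": live_port,
        "facial_features": face_model(picture_data),
        "limb_actions": limb_model(picture_data),
        "exposure_area": exposure_model(picture_data),
        "speech_semantics": asr_model(sound_data),
    }

# Stand-in recognizers returning fixed toy outputs.
record = build_history_record(
    "room-101", "frames", "audio",
    face_model=lambda d: ["smile"],
    limb_model=lambda d: ["wave"],
    exposure_model=lambda d: 0.05,   # fraction of exposed body area
    asr_model=lambda d: ["hello"],
)
```

Collecting all four modalities in one record is what lets the later database cover both action-based and speech-based vulgar behavior.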
Preferably, in step S2, generating a vulgar live broadcast behavior database according to the historical live broadcast information set specifically includes:
extracting the illegal mouth action behaviors of the live personnel from the facial feature information;
extracting the illegal limb action behaviors of the live personnel from the limb action information;
extracting and obtaining the illegal body exposure behaviors existing in the live broadcast personnel from the body exposure area information;
extracting illegal speaking behaviors existing in the live broadcast personnel from the voice semantic information;
and combining the illegal mouth action behavior, the illegal limb action behavior, the illegal body exposure behavior and the illegal speaking behavior together to form a vulgar live broadcast behavior database.
The beneficial effects of the above technical scheme are: the extracted illegal mouth action behaviors, illegal limb action behaviors, illegal body exposure behaviors and illegal speaking behaviors are combined to form the vulgar live broadcast behavior database, which ensures that the database covers the vulgar behaviors of broadcasters across multiple dimensions such as actions and speech, providing comprehensive and reliable data support for subsequently judging whether a broadcaster has engaged in vulgar behavior.
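A hedged sketch of how the four behavior categories could be merged into such a database follows; the extractor functions, category names, and sample entries are illustrative stand-ins for classifiers the patent leaves unspecified:

```python
# Illustrative construction of the vulgar live broadcast behavior
# database in step S2: illegal behaviors extracted from each historical
# record are merged per category into one set-based database.

def build_vulgar_behavior_db(history_records, extractors):
    """Merge illegal behaviors extracted from each historical record."""
    db = {"mouth": set(), "limb": set(), "exposure": set(), "speech": set()}
    for record in history_records:
        for category, extract in extractors.items():
            db[category] |= extract(record)
    return db

records = [{"facial_features": ["smirk"], "limb_actions": ["rude_gesture"],
            "exposure_area": 0.4, "speech_semantics": ["banned_word"]}]

db = build_vulgar_behavior_db(records, {
    "mouth": lambda r: set(),  # nothing illegal found in this toy record
    "limb": lambda r: {a for a in r["limb_actions"] if a == "rude_gesture"},
    "exposure": lambda r: {"over_threshold"} if r["exposure_area"] > 0.3 else set(),
    "speech": lambda r: {w for w in r["speech_semantics"] if w == "banned_word"},
})
```

Because the database is a union over many historical records, it grows to cover more vulgar behaviors as more platforms' history is processed.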
Preferably, in step S3, the acquiring real-time live video data of the target live broadcast platform, and the extracting live broadcast picture data and live broadcast sound data from the real-time live broadcast video data specifically includes:
performing real-time screen recording monitoring on a target live broadcast platform to obtain real-time live broadcast video data; and carrying out picture and sound separation processing on the real-time live video data to obtain live picture data and live sound data.
The beneficial effects of the above technical scheme are: the method comprises the steps of carrying out real-time screen recording monitoring on a target live broadcast platform, and carrying out picture and sound separation processing on real-time live broadcast video data obtained through monitoring, so that the real-time performance and the comprehensiveness of the monitoring on the target live broadcast platform can be ensured.
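The real-time capture loop described above can be sketched as follows; the capture callbacks are placeholders for an actual screen recorder and audio grabber:

```python
# Minimal sketch of step S3's real-time screen-recording monitor: paired
# picture and sound samples are collected from the target platform's
# stream for later comparison against the behavior database.

def monitor(capture_frame, capture_audio, n_samples):
    """Collect synchronized live picture and live sound samples."""
    live_picture, live_sound = [], []
    for i in range(n_samples):
        live_picture.append(capture_frame(i))  # live picture data
        live_sound.append(capture_audio(i))    # live sound data
    return live_picture, live_sound

frames, audio = monitor(lambda i: f"frame-{i}", lambda i: f"chunk-{i}", 3)
```

Sampling picture and sound in the same loop keeps the two streams aligned, which matters when a screenshot or recording must later serve as evidence for a specific moment.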
Preferably, in step S3, comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain the vulgar live broadcast behavior evidence of the target live broadcast platform specifically includes:
extracting the body action behaviors and the speaking semantic contents of live broadcast personnel in the target live broadcast platform from the live broadcast picture data and the live broadcast sound data;
comparing the body action behavior and the speaking semantic content with the vulgar live broadcast behavior database; when the body action behavior matches an illegal mouth action behavior, an illegal limb action behavior or an illegal body exposure behavior, or the speaking semantic content matches an illegal speaking behavior, taking a screenshot of the body action behavior or recording the speaking semantic content, thereby obtaining vulgar live broadcast behavior screenshot evidence or vulgar live broadcast behavior recording evidence.
The beneficial effects of the above technical scheme are: comparing the body action behaviors and the speech semantic content with the vulgar live broadcast behavior database yields corresponding screenshot evidence or recording evidence of the vulgar live broadcast behavior, which provides reliable real-time evidence for the live broadcast management platform to subsequently authenticate the vulgar behavior.
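The comparison step above might look like the following sketch; the database layout and all behavior labels are hypothetical assumptions for illustration only:

```python
# Illustrative sketch of the comparison in step S3: extracted body action
# behaviors and speech content are matched against the behavior database;
# an action match yields screenshot evidence, a speech match yields
# recording evidence.

def collect_evidence(actions, speech, database):
    action_violations = (database["illegal_mouth_action"]
                         | database["illegal_limb_action"]
                         | database["illegal_body_exposure"])
    evidence = []
    for act in actions:
        if act in action_violations:
            evidence.append(("screenshot", act))
    for phrase in speech:
        if phrase in database["illegal_speaking"]:
            evidence.append(("recording", phrase))
    return evidence

sample_db = {
    "illegal_mouth_action": {"lip_profanity"},
    "illegal_limb_action": {"obscene_gesture"},
    "illegal_body_exposure": {"excessive_exposure"},
    "illegal_speaking": {"banned_phrase"},
}
evidence = collect_evidence(["wave", "obscene_gesture"],
                            ["hello", "banned_phrase"], sample_db)
```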
Preferably, in step S4, uploading the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication and changing the live broadcast state and the live broadcast authority of the target live broadcast platform according to the authentication result specifically includes:
uploading the screenshot evidence or the recording evidence of the vulgar live broadcast behavior to the live broadcast management platform for authentication; and if the authentication is passed, immediately stopping the current live broadcast of the target live broadcast platform and marking the live broadcast personnel of the target live broadcast platform as persons prohibited from obtaining live broadcast authority.
The beneficial effects of the above technical scheme are: the target live broadcast platform corresponding to the confirmed screenshot evidence or recording evidence of vulgar live broadcast behavior immediately stops its live broadcast, and the related live broadcast personnel are prohibited from obtaining live broadcast authority, which quickly stops the further diffusion and propagation of vulgar live broadcast behavior.
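The enforcement flow of step S4 can be sketched as a small state object; the class, attribute and identifier names below are hypothetical, not part of the claimed method:

```python
# Illustrative sketch of step S4's enforcement: once the management
# platform confirms the uploaded evidence, the target platform's live
# broadcast is stopped and its broadcaster loses live authority.

class LivePlatformState:
    def __init__(self, broadcaster):
        self.broadcaster = broadcaster
        self.live = True          # currently broadcasting
        self.banned = set()       # persons prohibited from live authority

    def apply_authentication_result(self, passed):
        if passed:
            self.live = False                   # stop the current broadcast
            self.banned.add(self.broadcaster)   # revoke live authority

state = LivePlatformState("host_01")
state.apply_authentication_result(True)
```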
According to the content of the above embodiment, the deep-learning-based vulgar live broadcast behavior monitoring method analyzes and processes the video picture data and video sound data contained in historical live broadcast video data of different live broadcast platforms to obtain a vulgar live broadcast behavior database; extracts live broadcast picture data and live broadcast sound data from real-time live broadcast video data and compares them with the vulgar live broadcast behavior database to obtain the vulgar live broadcast behavior evidence of the target live broadcast platform; and uploads the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication, changing the live broadcast state and the live broadcast authority of the target live broadcast platform according to the authentication result. By constructing a vulgar live broadcast behavior database, the vulgar behaviors that may occur during live broadcasting are counted and collected, which provides a reliable and comprehensive judgment reference for subsequent monitoring of the target live broadcast platform, effectively avoids missed detection of vulgar live broadcast behaviors, and improves the accuracy and timeliness of vulgar live broadcast behavior monitoring.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A method for monitoring vulgar live broadcast behaviors based on deep learning, characterized by comprising the following steps:
step S1, acquiring historical live broadcast video data of different live broadcast platforms, and performing picture and sound separation processing on the historical live broadcast video data to obtain corresponding video picture data and video sound data; and storing the video picture data and the video sound data into a blockchain;
step S2, analyzing and processing the video picture data and the video sound data respectively to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in the historical live broadcast process, thereby forming a historical live broadcast information set; and generating a vulgar live broadcast behavior database according to the historical live broadcast information set;
step S3, acquiring real-time live broadcast video data of a target live broadcast platform, and extracting live broadcast picture data and live broadcast sound data from the real-time live broadcast video data; comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain a vulgar live broadcast behavior evidence of a target live broadcast platform;
step S4, uploading the vulgar live broadcast behavior evidence to a live broadcast management platform for authentication, and changing the live broadcast state and the live broadcast authority of the target live broadcast platform according to the authentication result.
2. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 1, characterized in that:
in step S1, acquiring historical live video data of different live platforms, and performing picture and sound separation processing on the historical live video data to obtain corresponding video picture data and video sound data specifically includes:
sending a historical live broadcast video data acquisition request to a live broadcast management platform, and extracting live broadcast port information corresponding to the historical live broadcast video data to be acquired from the historical live broadcast video data acquisition request by the live broadcast management platform;
selecting matched historical live video data from a historical live video screen recording database according to the live port information;
and performing picture and sound separation processing on the selected historical live broadcast video data to obtain the corresponding video picture data and video sound data.
3. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 2, characterized in that:
in step S1, storing the video picture data and the video sound data into a blockchain specifically includes:
binding the live broadcast port information to the video picture data and the video sound data, and then storing the video picture data and the video sound data respectively in a picture data storage interval and a sound data storage interval of the blockchain.
4. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 3, characterized in that:
in step S2, the video picture data and the video sound data are respectively analyzed and processed to obtain body attribute information and voice attribute information of live broadcast personnel of the live broadcast platform in the historical live broadcast process, so as to form a historical live broadcast information set, which specifically includes:
identifying the video picture data to obtain facial feature information, limb action information and body exposure area information of live broadcast personnel in a historical live broadcast process;
identifying and processing the video sound data to obtain voice semantic information of live broadcast personnel in a historical live broadcast process;
combining the facial feature information, the limb action information, the body exposure area information and the voice semantic information to form the historical live broadcast information set.
5. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 4, characterized in that:
in step S2, generating the vulgar live broadcast behavior database according to the historical live broadcast information set specifically includes:
extracting the illegal mouth action behaviors of the live broadcast personnel from the facial feature information;
extracting the illegal limb action behaviors of the live broadcast personnel from the limb action information;
extracting the illegal body exposure behaviors of the live broadcast personnel from the body exposure area information;
extracting the illegal speaking behaviors of the live broadcast personnel from the voice semantic information;
and combining the illegal mouth action behaviors, the illegal limb action behaviors, the illegal body exposure behaviors and the illegal speaking behaviors to form the vulgar live broadcast behavior database.
6. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 5, characterized in that:
in step S3, acquiring the real-time live broadcast video data of the target live broadcast platform and extracting the live broadcast picture data and the live broadcast sound data from the real-time live broadcast video data specifically includes: performing real-time screen recording monitoring on the target live broadcast platform to obtain the real-time live broadcast video data; and performing picture and sound separation processing on the real-time live broadcast video data to obtain the live broadcast picture data and the live broadcast sound data.
7. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 6, characterized in that:
in step S3, comparing the live broadcast picture data and the live broadcast sound data with the vulgar live broadcast behavior database to obtain the vulgar live broadcast behavior evidence of the target live broadcast platform specifically includes:
extracting the body action behaviors and the speech semantic content of the live broadcast personnel in the target live broadcast platform from the live broadcast picture data and the live broadcast sound data;
comparing the body action behaviors and the speech semantic content with the vulgar live broadcast behavior database; when a body action behavior matches an illegal mouth action behavior, an illegal limb action behavior or an illegal body exposure behavior, taking a screenshot of the body action behavior, or when the speech semantic content matches an illegal speaking behavior, recording the speech semantic content, thereby obtaining screenshot evidence or recording evidence of the vulgar live broadcast behavior.
8. The method for monitoring vulgar live broadcast behaviors based on deep learning according to claim 7, characterized in that:
in step S4, uploading the vulgar live broadcast behavior evidence to the live broadcast management platform for authentication and changing the live broadcast state and the live broadcast authority of the target live broadcast platform according to the authentication result specifically includes:
uploading the screenshot evidence or the recording evidence of the vulgar live broadcast behavior to the live broadcast management platform for authentication; and if the authentication is passed, immediately stopping the current live broadcast of the target live broadcast platform and marking the live broadcast personnel of the target live broadcast platform as persons prohibited from obtaining live broadcast authority.
CN202210318524.XA 2022-03-29 2022-03-29 Low-custom live broadcast behavior monitoring method based on deep learning Pending CN114786038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210318524.XA CN114786038A (en) 2022-03-29 2022-03-29 Low-custom live broadcast behavior monitoring method based on deep learning

Publications (1)

Publication Number Publication Date
CN114786038A true CN114786038A (en) 2022-07-22

Family

ID=82424772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210318524.XA Pending CN114786038A (en) 2022-03-29 2022-03-29 Low-custom live broadcast behavior monitoring method based on deep learning

Country Status (1)

Country Link
CN (1) CN114786038A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331695A (en) * 2016-08-24 2017-01-11 合肥数酷信息技术有限公司 Video and audio-based detection and data analysis system
CN110365996A (en) * 2019-07-25 2019-10-22 深圳市元征科技股份有限公司 Management method, live streaming management platform, electronic equipment and storage medium is broadcast live
CN112995696A (en) * 2021-04-20 2021-06-18 共道网络科技有限公司 Live broadcast room violation detection method and device
WO2021164333A1 (en) * 2020-02-18 2021-08-26 腾讯科技(深圳)有限公司 Voice information communication management method and apparatus, storage medium, and device
CN114245205A (en) * 2022-02-23 2022-03-25 达维信息技术(深圳)有限公司 Video data processing method and system based on digital asset management

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002511A (en) * 2022-08-01 2022-09-02 珠海金智维信息科技有限公司 Live video evidence obtaining method and device, electronic equipment and readable storage medium
CN115002511B (en) * 2022-08-01 2023-02-28 珠海金智维信息科技有限公司 Live video evidence obtaining method and device, electronic equipment and readable storage medium
CN116822805A (en) * 2023-08-29 2023-09-29 深圳市纬亚森科技有限公司 Education video quality monitoring method based on big data
CN116822805B (en) * 2023-08-29 2023-12-15 北京菜鸟无忧教育科技有限公司 Education video quality monitoring method based on big data

Similar Documents

Publication Publication Date Title
CN111382623B (en) Live broadcast auditing method, device, server and storage medium
CN114786038A (en) Low-custom live broadcast behavior monitoring method based on deep learning
CN110837615A (en) Artificial intelligent checking system for advertisement content information filtering
CN100563335C (en) Classified content auditing terminal system
CN111414873B (en) Alarm prompting method, device and alarm system based on wearing state of safety helmet
CN110796098B (en) Method, device, equipment and storage medium for training and auditing content auditing model
CN111090813B (en) Content processing method and device and computer readable storage medium
CN111464819B (en) Live image detection method, device, equipment and storage medium
CN109729376B (en) Life cycle processing method, life cycle processing device, life cycle processing equipment and life cycle processing storage medium
CN114245205B (en) Video data processing method and system based on digital asset management
CN111401447B (en) Artificial intelligence-based flow cheating identification method and device and electronic equipment
CN111586432B (en) Method and device for determining air-broadcast live broadcast room, server and storage medium
CN111970471A (en) Participant scoring method, device, equipment and medium based on video conference
CN111914648A (en) Vehicle detection and identification method and device, electronic equipment and storage medium
CN113869115A (en) Method and system for processing face image
CN114065090A (en) Method and system for updating classification database, storage medium and computer equipment
CN112019875B (en) Learning behavior monitoring method and device for online live broadcast and live broadcast platform
CN116634093A (en) Method, system and storage medium for multifunctional online communication and conference
CN113365100B (en) Video processing method and device
CN109688439A (en) Playback method, electronic device and storage medium
CN115761614A (en) Abnormal behavior target detection system based on multi-view deep learning algorithm
CN111372197B (en) Early warning method and related device
WO2021114985A1 (en) Companionship object identification method and apparatus, server and system
CN109889916B (en) Application system of recorded broadcast data
CN107563788A (en) The stage division and server of shared billboard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination