CN110557671A - Method and system for automatically processing unhealthy content of video - Google Patents


Info

Publication number
CN110557671A
CN110557671A (application CN201910852735.XA)
Authority
CN
China
Prior art keywords
video
unhealthy
frame
content
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910852735.XA
Other languages
Chinese (zh)
Inventor
单志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Original Assignee
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Happly Sunshine Interactive Entertainment Media Co Ltd filed Critical Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority to CN201910852735.XA priority Critical patent/CN110557671A/en
Publication of CN110557671A publication Critical patent/CN110557671A/en
Pending legal-status Critical Current


Classifications

    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/4542 Blocking scenes or portions of the received content, e.g. censoring scenes
    • H04N21/4545 Input to filtering algorithms, e.g. filtering a region of the image
    • H04N5/278 Subtitling

Abstract

The invention discloses a method and a system for automatically processing unhealthy content in video, wherein the method comprises the following steps: 1) play the video content and judge whether the user has set video filtering; if so, jump to step 3), otherwise enter the next step; 2) acquire the viewers' face information through a camera and judge whether any viewer is a minor; if so, enter the next step, otherwise skip to step 4); 3) analyze the video streaming media data, filter the unhealthy content of the video according to the preset settings, and return to step 1); 4) continue playing the video content and return to step 1). The method can distinguish viewers, automatically filters unhealthy content in the video when minors are present, and provides a healthy viewing environment for minors.

Description

Method and system for automatically processing unhealthy content of video
Technical Field
The invention relates to a video processing method, in particular to a method and a system for automatically processing unhealthy contents of a video.
Background
At present, video is mainly transmitted in the form of streaming media data, a transmission mode deeply favored by users because it allows a video to be played while it is still downloading. China's video classification system and auditing mechanisms are not yet complete, and some videos, together with the audio and subtitle content within them, carry unhealthy information that can easily affect minor viewers. Existing electronic devices such as mobile phones, tablet computers, televisions and set-top boxes all provide a child mode that forbids access to some applications and limits some permissions so that minors do not come into contact with unhealthy content; however, this mode cannot distinguish between users, so minors can still be exposed to unhealthy content when the child mode has not been enabled. Moreover, the existing child mode cannot detect and process unhealthy content in video, audio and subtitles in real time.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the technical problems in the prior art, the invention provides a method and a system for automatically processing unhealthy content in video, which can distinguish viewers and automatically filter the unhealthy content of the video when minors are present.
In order to solve the technical problems, the technical scheme provided by the invention is as follows: a method for automatically processing unhealthy video content, characterized by comprising the following steps:
1) Play the video content and judge whether the user has set video filtering; if so, jump to step 3), otherwise enter the next step;
2) Acquire the viewers' face information through a camera and judge whether any viewer is a minor; if so, enter the next step, otherwise skip to step 4);
3) Analyze the video streaming media data, filter the unhealthy content of the video according to the preset settings, and return to step 1);
4) Continue playing the video content and return to step 1).
Preferably, the step 2) of determining whether any viewer is a minor includes the following steps:
2.1) Query the viewer's face information in a database, which also stores age information corresponding one-to-one to the face information, and judge whether the viewer's face information is already stored there; if so, skip to step 2.3), otherwise mark the face information not stored in the database as new face information and enter the next step;
2.2) Perform feature analysis on the new face information, judge the viewer's age, and store the new face information and the age information in the database;
2.3) Read the age information corresponding to the viewer's face information in the database; when the age information is less than 18 years, a minor is among the viewers.
Preferably, step 2.2) is preceded by a step 2.1'), specifically: judge whether the viewers' face information contains administrator face information; if so, notify the administrator to enter new face information and the corresponding age information, save the information entered by the administrator, and then skip to step 2.3); otherwise, enter step 2.2).
Preferably, step 3) specifically comprises the following steps:
3.1) Receive the video streaming media data, read the video attribute information in it, and judge whether a grading label exists; if so, enter the next step, otherwise skip to step 3.3);
3.2) Judge whether the grading label is adult-grade; if so, perform the preset processing on the video and go to step 3.4), otherwise play the video and go to step 3.4);
3.3) Read the video frames, audio frames and subtitle frames in the video streaming media data, and perform the preset processing on any video frame, audio frame or subtitle frame with unhealthy content;
3.4) Return to step 1).
Preferably, step 3.3) comprises the following steps:
3.3.1) Extract a video frame and its corresponding audio frame and subtitle frame from the video streaming media data as the current video frame, current audio frame and current subtitle frame, and judge whether the current video frame has unhealthy content; if so, perform the preset processing on the current video frame and enter the next step, otherwise play the current video frame and enter the next step;
3.3.2) Judge whether the current audio frame has unhealthy content; if so, perform the preset processing on the current audio frame and enter the next step, otherwise play the current audio frame and enter the next step;
3.3.3) Judge whether the current subtitle frame has unhealthy content; if so, perform the preset processing on the current subtitle frame and enter the next step, otherwise play the current subtitle frame and enter the next step;
3.3.4) Return to step 3.3.1) until all the video frames, audio frames and subtitle frames in the video streaming media data have been extracted.
Preferably, the preset processing is specifically to intercept the related data, that is, to intercept one or more of a video frame, an audio frame or a subtitle frame in the video streaming media data so that its content is not played.
Preferably, the step 3.3.1) of judging whether unhealthy content exists in the current video frame includes the following steps:
3.3.1 a) Perform image processing on the current video frame and extract the sensitive image area;
3.3.1 b) Calculate the matching degree of the sensitive image area information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current video frame.
Preferably, the step 3.3.2) of judging whether the current audio frame has unhealthy content includes the following steps:
3.3.2 a) Filter the current audio frame to extract the characteristic audio signal;
3.3.2 b) Calculate the matching degree of the characteristic audio signal information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current audio frame.
Preferably, the step 3.3.3) of judging whether unhealthy content exists in the current subtitle frame includes the following steps:
3.3.3 a) Perform character recognition on the current subtitle and extract its text content;
3.3.3 b) Calculate the matching degree of the text content information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current subtitle.
The invention also provides a system for automatically processing unhealthy video content, comprising a display screen unit with a camera, characterized in that the display screen unit is programmed or configured to execute the steps of the above method for automatically processing unhealthy video content.
Compared with the prior art, the invention has the following advantages:
The method first checks the video filtering setting: once the video filtering function is enabled, unhealthy content is filtered from the video according to the preset settings regardless of whether a minor is present. When the function is not enabled, the viewers' face information is acquired through a camera to judge whether any viewer is a minor; when no minor is watching, the video is left unprocessed and the viewing experience is unaffected, and when a minor is present, unhealthy content is filtered from the video according to the preset settings so that unhealthy information in the video and in the corresponding audio and subtitles cannot harm the minor's body and mind. Meanwhile, the viewers' face information is acquired again after each round of processing, so the presence of minors is detected in real time: the unhealthy content of the video is processed when a minor is present and the video content is left untouched when none is, realizing automatic processing of unhealthy video information.
Drawings
Fig. 1 is a flowchart of the method of the present embodiment.
Fig. 2 is a flowchart of determining whether a minor is among the viewers in the method of the present embodiment.
Fig. 3 is a flowchart of the unhealthy content filtering of the video according to the preset settings in the method of the present embodiment.
Fig. 4 is a flowchart of step 3.3) in the method of this embodiment.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
As shown in fig. 1, the method for automatically processing unhealthy video content of the present invention includes the following steps:
1) Play the video content and judge whether the user has set video filtering; if so, jump to step 3), otherwise enter the next step;
2) Acquire the viewers' face information through a camera and judge whether any viewer is a minor; if so, enter the next step, otherwise skip to step 4);
3) Analyze the video streaming media data, filter the unhealthy content of the video according to the preset settings, and return to step 1);
4) Continue playing the video content and return to step 1).
According to the method, the video filtering setting is checked first: after the video filtering function is enabled, unhealthy content is filtered from the video according to the preset settings regardless of whether a minor is present. When the function is not enabled, the viewers' face information is acquired through a camera to judge whether any viewer is a minor; when no minor is watching, the video is left unprocessed and the viewing experience is unaffected, and when a minor is present, unhealthy content is filtered from the video according to the preset settings to prevent unhealthy information in the video and the corresponding audio and subtitles from harming the minor's body and mind. Meanwhile, the viewers' face information is acquired again after each round of processing, so the presence of minors is detected in real time: the unhealthy content of the video is processed when a minor is present and the video content is left untouched when none is, realizing automatic processing of unhealthy video information.
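The control flow of steps 1)-4) above can be sketched as a single playback iteration. This is a minimal illustration, not the patent's implementation; every name here (`process_playback_tick`, `user_enabled_filtering`, and the callbacks) is a hypothetical placeholder.

```python
# Sketch of one iteration of the top-level loop (steps 1-4).
# All identifiers are illustrative assumptions, not from the patent text.

def process_playback_tick(user_enabled_filtering, minor_detected, play, filter_video):
    """Run one pass of steps 1)-4); returns which branch was taken."""
    # Step 1): if the user has set video filtering, jump straight to step 3).
    if user_enabled_filtering:
        filter_video()          # step 3): filter per the preset settings
        return "filtered"
    # Step 2): otherwise check the camera result for minors among the viewers.
    if minor_detected:
        filter_video()          # step 3)
        return "filtered"
    play()                      # step 4): continue normal playback
    return "played"
```

In the patent's flow, this function would be called repeatedly, re-acquiring the viewers' face information after every round of processing.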
As shown in fig. 2, the step 2) of this embodiment of determining whether any viewer is a minor includes the following steps:
2.1) Query the viewer's face information in a database, which also stores age information corresponding one-to-one to the face information, and judge whether the viewer's face information is already stored there; if so, skip to step 2.3), otherwise mark the face information not stored in the database as new face information and enter the next step;
2.2) Perform feature analysis on the new face information and judge the viewer's age, either through a preset facial feature model (an anthropometric model, a flexible model or an appearance model) or through a deep learning model; after the viewer's age is obtained, store the new face information and the age information in the database;
2.3) Read the age information corresponding to the viewer's face information in the database; when the age information is less than 18 years, a minor is among the viewers.
The above steps combine information matching with model comparison: after the viewers' face information is collected, the facial feature model or the deep learning model compares the features to judge the viewers' ages, while the collected face information is also matched against the pre-stored viewer face information and corresponding ages, which shortens the processing time and improves the accuracy of the judgment.
To facilitate real-time management of the face information, the method of this embodiment assigns administrator authority to users; if an administrator is among the viewers, new face information and ages can be entered promptly, making the management of viewer ages more accurate. Before step 2.2), the method therefore further comprises a step 2.1'), specifically: judge whether the viewers' face information contains administrator face information; if so, notify the administrator to enter new face information and the corresponding age information, save the information entered by the administrator, and then skip to step 2.3); otherwise, enter step 2.2). After storing the new face information and age information in the database, step 2.2) also displays a prompt on the screen reminding the administrator to complete the stored new face and age information.
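The lookup-then-estimate logic of steps 2.1)-2.3) can be sketched as follows. The `estimate_age` callback stands in for the facial feature model or deep learning model; the function name, the dict-based database, and the face identifiers are all hypothetical illustrations.

```python
# Minimal sketch of steps 2.1)-2.3): look each viewer's face up in a
# database of (face, age) records; unknown faces get an age estimate
# and are stored. `estimate_age` is a hypothetical stub for the model.

ADULT_AGE = 18

def has_minor(viewer_faces, face_db, estimate_age):
    """face_db maps a face identifier to an age; True if any viewer is under 18."""
    for face in viewer_faces:
        if face not in face_db:                 # step 2.1): new face information
            face_db[face] = estimate_age(face)  # step 2.2): analyze and store
        if face_db[face] < ADULT_AGE:           # step 2.3): compare against 18
            return True
    return False
```

The mutation of `face_db` mirrors the patent's point that matching stored faces is faster than re-running the age model on every frame.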
As shown in fig. 3, the unhealthy content filtering of the video according to the preset settings in step 3) of this embodiment includes the following steps:
3.1) Receive the video streaming media data, read the video attribute information in it, and judge whether a grading label exists; if so, enter the next step, otherwise skip to step 3.3);
3.2) Judge whether the grading label is adult-grade; if so, perform the preset processing on the video and go to step 3.4), otherwise play the video and go to step 3.4). In this step the preset processing is to intercept the video streaming media data so that the video content is not played, that is, to intercept the video frames, audio frames and subtitle frames in the video streaming media data so that their content is not played until the received video streaming media data has been completely transmitted. The preset processing can also take other forms, such as a preset video playlist: videos that are not adult-grade are stored in the playlist, and a video from the playlist is selected and played until the received video streaming media data has been completely transmitted;
3.3) Read the video frames, audio frames and subtitle frames in the video streaming media data; the subtitle frames of this embodiment include commentary subtitles, dialog subtitles and real-time comment subtitles such as bullet screens. The preset processing is performed on any video frame, audio frame or subtitle frame with unhealthy content, while frames without unhealthy content are played.
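The dispatch in steps 3.1)-3.3) can be sketched as below. The attribute key `"rating"`, the label value `"adult"`, and the return strings are illustrative assumptions; the patent only specifies that a grading label, when present, decides whether the whole video is intercepted.

```python
# Sketch of steps 3.1)-3.3): a rating label, if present, decides the
# whole video's fate; without one, control falls through to per-frame
# inspection. Key and label names are hypothetical.

def filter_by_rating(attributes, filter_frames):
    """attributes: dict of video attribute information read from the stream."""
    rating = attributes.get("rating")           # step 3.1): read the grading label
    if rating is not None:
        if rating == "adult":                   # step 3.2): adult-grade -> intercept
            return "intercepted"
        return "played"                         # non-adult grade plays as-is
    filter_frames()                             # step 3.3): no label -> inspect frames
    return "frame-filtered"
```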
As shown in fig. 4, step 3.3) of this embodiment includes the following steps:
3.3.1) Extract a video frame and its corresponding audio frame and subtitle frame from the video streaming media data as the current video frame, current audio frame and current subtitle frame, and judge whether the current video frame has unhealthy content; if so, perform the preset processing on the current video frame and enter the next step, otherwise play the current video frame and enter the next step. In this step the preset processing is to intercept the video frame in the video streaming media data so that its picture is not displayed. The preset processing of the video frame can also take other forms, such as presetting a storage space that holds pictures or photos and displaying one of them in place of the video frame's picture, thereby skipping the video frame with unhealthy content;
3.3.2) Judge whether the current audio frame has unhealthy content; if so, perform the preset processing on the current audio frame and enter the next step, otherwise play the current audio frame and enter the next step. In this step the preset processing is to intercept the audio frame in the video streaming media data so that the audio is not played. The preset processing of the audio frame can also take other forms, such as presetting a storage space that holds other audio or music and playing it in place of the audio frame with unhealthy content, thereby skipping that audio frame;
3.3.3) Judge whether the current subtitle frame has unhealthy content; if so, perform the preset processing on the current subtitle frame and enter the next step, otherwise play the current subtitle frame and enter the next step. In this step the preset processing is to intercept the subtitle frame in the video streaming media data so that the subtitle is not displayed. The preset processing of the subtitle frame can also take other forms, such as presetting a storage space that holds other subtitles and displaying one of them in place of the subtitle frame with unhealthy content, thereby skipping that subtitle frame;
3.3.4) Return to step 3.3.1) until all the video frames, audio frames and subtitle frames in the video streaming media data have been extracted.
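The per-frame loop of steps 3.3.1)-3.3.4) can be sketched as below: each (video, audio, subtitle) triple is checked component by component, and an offending component is either intercepted (dropped) or swapped for preset replacement material. The detector callback and the dict-based frame representation are hypothetical illustrations.

```python
# Sketch of steps 3.3.1)-3.3.4): check the video, audio and subtitle
# component of each frame triple independently; intercept or replace
# any component flagged as unhealthy. `is_unhealthy` is a stub.

def filter_stream(frames, is_unhealthy, replacement=None):
    """frames: list of dicts with 'video', 'audio', 'subtitle' components.
    Returns the frames as they would be presented to the viewer."""
    shown = []
    for frame in frames:                        # 3.3.4): loop until all extracted
        out = {}
        for part in ("video", "audio", "subtitle"):  # 3.3.1)-3.3.3)
            if is_unhealthy(part, frame[part]):
                # Preset processing: intercept (None) or substitute preset content.
                out[part] = replacement
            else:
                out[part] = frame[part]
        shown.append(out)
    return shown
```

Passing a non-None `replacement` models the alternative preset processing (a stored picture, audio clip, or substitute subtitle) described above.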
In the method of this embodiment, unhealthy content is judged on the basis of deep learning: unhealthy content recognition is performed by a pre-trained unhealthy content identification model with a multi-hidden-layer structure. A deep learning model based on a multi-layer neural network is well suited to the analysis and recognition of big data (massive numbers of images), so unhealthy information in video frames, audio frames and subtitle frames can be identified promptly, which speeds up the processing of video frames, audio frames or subtitle frames with unhealthy content and reduces the video delay seen by the user.
The step 3.3.1) of this embodiment of judging whether unhealthy content exists in the current video frame includes the following steps:
3.3.1 a) Perform image processing on the current video frame and extract the sensitive image area;
3.3.1 b) Calculate the matching degree of the sensitive image area information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current video frame.
The step 3.3.2) of this embodiment of judging whether unhealthy content exists in the current audio frame includes the following steps:
3.3.2 a) Filter the current audio frame to extract the characteristic audio signal;
3.3.2 b) Calculate the matching degree of the characteristic audio signal information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current audio frame.
The step 3.3.3) of this embodiment of judging whether unhealthy content exists in the current subtitle frame includes the following steps:
3.3.3 a) Perform character recognition on the current subtitle and extract its text content;
3.3.3 b) Calculate the matching degree of the text content information using a pre-constructed unhealthy content identification model; when the matching degree of unhealthy content is greater than a preset threshold, unhealthy content exists in the current subtitle.
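The three detectors above share one pattern: extract a feature (sensitive image area, characteristic audio signal, or recognized subtitle text), score it with the recognition model, and flag the frame when the matching degree exceeds a preset threshold. The sketch below captures only that thresholding step; `score_fn` is a hypothetical stand-in for the pre-trained multi-hidden-layer model, and the default threshold of 0.8 is an arbitrary illustration.

```python
# Shared thresholding step of 3.3.1 b), 3.3.2 b) and 3.3.3 b).
# `score_fn` stands in for the unhealthy content identification model;
# the 0.8 default threshold is an assumed value, not from the patent.

def exceeds_threshold(feature, score_fn, threshold=0.8):
    """Flag content when the model's matching degree exceeds the threshold."""
    matching_degree = score_fn(feature)  # model output, assumed in [0, 1]
    return matching_degree > threshold
```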
The invention also provides a system for automatically processing unhealthy video content, comprising a display screen unit with a camera, wherein the display screen unit is programmed or configured to execute the steps of the above method for automatically processing unhealthy video content.
The foregoing is merely a description of the preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the invention has been described with reference to the preferred embodiments, it is not limited thereto. Any simple modification, equivalent change or refinement made to the above embodiments in accordance with the technical spirit of the invention, without departing from the content of the technical scheme of the invention, shall fall within the protection scope of the technical scheme of the invention.

Claims (10)

1. A method for automatically processing unhealthy video content, characterized by comprising the following steps:
1) Play the video content and judge whether the user has set video filtering; if so, jump to step 3), otherwise enter the next step;
2) Acquire the viewers' face information through a camera and judge whether any viewer is a minor; if so, enter the next step, otherwise skip to step 4);
3) Analyze the video streaming media data, filter the unhealthy content of the video according to the preset settings, and return to step 1);
4) Continue playing the video content and return to step 1).
2. The method for automatically processing unhealthy video content of claim 1, wherein the step 2) of determining whether any viewer is a minor comprises the following steps:
2.1) Query the viewer's face information in a database, which also stores age information corresponding one-to-one to the face information, and judge whether the viewer's face information is already stored there; if so, skip to step 2.3), otherwise mark the face information not stored in the database as new face information and enter the next step;
2.2) Perform feature analysis on the new face information, judge the viewer's age, and store the new face information and the age information in the database;
2.3) Read the age information corresponding to the viewer's face information in the database; when the age information is less than 18 years, a minor is among the viewers, and when the age information is not less than 18 years, no minor is among the viewers.
3. The method for automatically processing the unhealthy video content according to claim 2, further comprising a step 2.1') before the step 2.2), specifically: judging whether the face information of the viewer has the administrator face information, if so, informing the administrator to enter new face information and corresponding age information, saving the new information and the corresponding age information which are entered by the administrator, and then, skipping to the step 2.3), otherwise, entering the step 2.2).
4. The method for automatically processing unhealthy video content according to claim 1, wherein step 3) specifically comprises the following steps:
3.1) receiving video streaming media data, reading the video attribute information in the video streaming media data, and judging whether a rating label exists; if so, entering the next step, otherwise skipping to step 3.3);
3.2) judging whether the rating label is the adult rating; if so, performing preset processing on the video and going to step 3.4), otherwise playing the video and going to step 3.4);
3.3) reading the video frames, audio frames and subtitle frames in the video streaming media data, and performing preset processing on any video frame, audio frame or subtitle frame that contains unhealthy content;
3.4) returning to step 1).
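The dispatch of steps 3.1) to 3.3) reduces to a three-way branch on an optional rating label. A minimal sketch, with hypothetical attribute and label names since the claim leaves the label format open:

```python
# Sketch of steps 3.1)-3.3): decide one pass of the playback loop from
# the video attribute information.

def dispatch(attributes):
    rating = attributes.get("rating")   # step 3.1): is a label present?
    if rating is not None:
        if rating == "adult":           # step 3.2): adult rating
            return "preset-processing"
        return "play"                   # non-adult rating: play directly
    return "frame-inspection"           # step 3.3): no label, inspect frames

print(dispatch({"rating": "adult"}))    # preset-processing
print(dispatch({"rating": "general"}))  # play
print(dispatch({}))                     # frame-inspection
```

The design point is that a trusted rating label short-circuits the expensive per-frame inspection of step 3.3), which only runs for unlabeled content.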
5. The method for automatically processing unhealthy video content according to claim 4, wherein step 3.3) specifically comprises the following steps:
3.3.1) extracting a video frame and the corresponding audio frame and subtitle frame from the video streaming media data as the current video frame, current audio frame and current subtitle frame, and judging whether the current video frame contains unhealthy content; if so, performing preset processing on the current video frame and entering the next step, otherwise playing the current video frame and entering the next step;
3.3.2) judging whether the current audio frame contains unhealthy content; if so, performing preset processing on the current audio frame and entering the next step, otherwise playing the current audio frame and entering the next step;
3.3.3) judging whether the current subtitle frame contains unhealthy content; if so, performing preset processing on the current subtitle frame and entering the next step, otherwise playing the current subtitle frame and entering the next step;
3.3.4) returning to step 3.3.1) until all video frames, audio frames and subtitle frames in the video streaming media data have been extracted.
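The per-frame loop of steps 3.3.1) to 3.3.4) checks the three components of each extracted triple independently. A sketch under the assumption that the identification models of claims 7 to 9 are stubbed behind a single `is_unhealthy` predicate:

```python
# Sketch of steps 3.3.1)-3.3.4): inspect each (video, audio, subtitle)
# triple and decide per component whether to play it or suppress it.

def is_unhealthy(component):
    # Stand-in for the identification models of claims 7-9.
    return component.get("unhealthy", False)

def process_stream(triples):
    actions = []
    for video, audio, subtitle in triples:          # steps 3.3.1)-3.3.3)
        actions.append(tuple(
            "suppress" if is_unhealthy(c) else "play"
            for c in (video, audio, subtitle)
        ))
    return actions                                  # 3.3.4): until exhausted

frames = [({"unhealthy": True}, {}, {}), ({}, {}, {"unhealthy": True})]
print(process_stream(frames))
# [('suppress', 'play', 'play'), ('play', 'play', 'suppress')]
```

Because each component is judged separately, an unhealthy subtitle can be suppressed while its video and audio still play, which is what distinguishes this claim from blocking the whole video.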
6. The method according to claim 4 or 5, wherein the preset processing specifically comprises intercepting the related data, that is, intercepting one or more of the video frames, audio frames and subtitle frames in the video streaming media data so that the intercepted content is not played.
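Claim 6's interception amounts to dropping flagged components before they reach the renderer. A minimal sketch (the `id` field and set-of-flagged-ids interface are illustrative assumptions, not part of the claim):

```python
# Sketch of claim 6: remove flagged components from the outgoing stream
# so their content is never handed to the decoder/renderer.

def intercept(components, flagged_ids):
    """Return only the components whose ids were not flagged."""
    return [c for c in components if c["id"] not in flagged_ids]

stream = [{"id": 1}, {"id": 2}, {"id": 3}]
print(intercept(stream, {2}))  # [{'id': 1}, {'id': 3}]
```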
7. The method for automatically processing unhealthy video content according to claim 5, wherein judging whether the current video frame contains unhealthy content in step 3.3.1) comprises the following steps:
3.3.1a) performing image processing on the current video frame and extracting a sensitive image area;
3.3.1b) calculating the matching degree of the sensitive image area information by using a pre-constructed unhealthy content identification model, wherein the current video frame contains unhealthy content when the matching degree is greater than a preset threshold value.
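The score-and-threshold structure of steps 3.3.1a) and 3.3.1b) can be sketched as follows. The "model" here is a hypothetical lookup returning a fixed score per extracted region; a real system would run inference on the image data.

```python
# Sketch of steps 3.3.1a)-3.3.1b): score each sensitive region against a
# recognition model and compare the matching degree with a threshold.

def match_score(region, model):
    # Placeholder for model inference on the extracted region.
    return model.get(region, 0.0)

def frame_is_unhealthy(regions, model, threshold=0.8):
    """True if any sensitive region's matching degree exceeds threshold."""
    return any(match_score(r, model) > threshold for r in regions)

model = {"region_a": 0.95, "region_b": 0.10}
print(frame_is_unhealthy(["region_a"], model))  # True  (0.95 > 0.8)
print(frame_is_unhealthy(["region_b"], model))  # False (0.10 <= 0.8)
```

The threshold value 0.8 is an arbitrary illustration of the claim's "preset threshold value".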
8. The method for automatically processing unhealthy video content according to claim 5, wherein judging whether the current audio frame contains unhealthy content in step 3.3.2) comprises the following steps:
3.3.2a) filtering the current audio frame to extract a characteristic audio signal;
3.3.2b) calculating the matching degree of the characteristic audio signal information by using a pre-constructed unhealthy content identification model, wherein the current audio frame contains unhealthy content when the matching degree is greater than a preset threshold value.
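For the audio path of steps 3.3.2a) and 3.3.2b), the filter and the identification model are both unspecified by the claim; the sketch below substitutes a crude noise-floor filter and a mean-amplitude "matching degree" purely to show the extract-then-threshold shape.

```python
# Sketch of steps 3.3.2a)-3.3.2b): "filter" an audio frame by keeping
# samples above a noise floor, then threshold a score of the surviving
# characteristic signal.

def extract_feature_signal(samples, noise_floor=0.1):
    # Step 3.3.2a): crude stand-in for the claimed filtering stage.
    return [abs(s) for s in samples if abs(s) > noise_floor]

def audio_is_unhealthy(samples, threshold=0.5):
    feature = extract_feature_signal(samples)
    if not feature:
        return False                    # nothing survived the filter
    degree = sum(feature) / len(feature)  # 3.3.2b): matching degree
    return degree > threshold

print(audio_is_unhealthy([0.05, 0.9, 0.8]))  # True  (mean 0.85 > 0.5)
print(audio_is_unhealthy([0.05, 0.02]))      # False (empty feature signal)
```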
9. The method for automatically processing unhealthy video content according to claim 5, wherein judging whether the current subtitle frame contains unhealthy content in step 3.3.3) comprises the following steps:
3.3.3a) performing character recognition on the current subtitle frame and extracting the text content of the subtitle;
3.3.3b) calculating the matching degree of the text content information by using a pre-constructed unhealthy content identification model, wherein the current subtitle frame contains unhealthy content when the matching degree is greater than a preset threshold value.
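Once step 3.3.3a)'s character recognition has produced the subtitle text, step 3.3.3b) reduces to scoring that text. The sketch below assumes a blocked-word list as a stand-in for the claimed identification model, with the hit ratio serving as the matching degree; a real system would use a trained text classifier.

```python
# Sketch of step 3.3.3b): score OCR-extracted subtitle text against a
# crude keyword model and compare the hit ratio with a threshold.

def text_match_degree(text, blocked_words):
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in blocked_words)
    return hits / len(words)

def subtitle_is_unhealthy(text, blocked_words, threshold=0.2):
    return text_match_degree(text, blocked_words) > threshold

blocked = {"badword"}
print(subtitle_is_unhealthy("badword everywhere badword", blocked))
# True: 2 of 3 words hit, degree 0.67 > 0.2
print(subtitle_is_unhealthy("an ordinary harmless sentence", blocked))
# False: no hits, degree 0.0
```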
10. An automatic processing system for unhealthy video content, comprising a display screen unit with a camera, wherein the display screen unit is programmed or configured to perform the steps of the method for automatically processing unhealthy video content according to any one of claims 1 to 9.
CN201910852735.XA 2019-09-10 2019-09-10 Method and system for automatically processing unhealthy content of video Pending CN110557671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910852735.XA CN110557671A (en) 2019-09-10 2019-09-10 Method and system for automatically processing unhealthy content of video

Publications (1)

Publication Number Publication Date
CN110557671A true CN110557671A (en) 2019-12-10

Family

ID=68739767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910852735.XA Pending CN110557671A (en) 2019-09-10 2019-09-10 Method and system for automatically processing unhealthy content of video

Country Status (1)

Country Link
CN (1) CN110557671A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102547271A (en) * 2010-12-24 2012-07-04 瀚斯宝丽股份有限公司 Video content output device and method capable of filtering video contents according to looker age
US20130332462A1 (en) * 2012-06-12 2013-12-12 David Paul Billmaier Generating content recommendations
CN106507168A (en) * 2016-10-09 2017-03-15 乐视控股(北京)有限公司 A kind of video broadcasting method and device
CN107529068A (en) * 2016-06-21 2017-12-29 北京新岸线网络技术有限公司 Video content discrimination method and system
CN108063979A (en) * 2017-12-26 2018-05-22 深圳Tcl新技术有限公司 Video playing control method, device and computer readable storage medium
CN109040782A (en) * 2018-08-29 2018-12-18 百度在线网络技术(北京)有限公司 Video playing processing method, device and electronic equipment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347302A (en) * 2020-02-17 2021-09-03 林意胜 Broadcast system of hand-free device for vehicle
CN111654748A (en) * 2020-06-11 2020-09-11 深圳创维-Rgb电子有限公司 Limit level picture detection method and device, display equipment and readable storage medium
CN111835739A (en) * 2020-06-30 2020-10-27 北京小米松果电子有限公司 Video playing method and device and computer readable storage medium
CN114245205A (en) * 2022-02-23 2022-03-25 达维信息技术(深圳)有限公司 Video data processing method and system based on digital asset management
CN114245205B (en) * 2022-02-23 2022-05-24 达维信息技术(深圳)有限公司 Video data processing method and system based on digital asset management
CN114760523A (en) * 2022-03-30 2022-07-15 咪咕数字传媒有限公司 Audio and video processing method, device, equipment and storage medium
CN114745592A (en) * 2022-04-06 2022-07-12 展讯半导体(南京)有限公司 Bullet screen message display method, system, device and medium based on face recognition
CN116208818A (en) * 2022-11-11 2023-06-02 中国第一汽车股份有限公司 Vehicle-mounted multimedia playing content filtering method and device

Similar Documents

Publication Publication Date Title
CN110557671A (en) Method and system for automatically processing unhealthy content of video
US20210105535A1 (en) Control method of playing content and content playing apparatus performing the same
US10643074B1 (en) Automated video ratings
US9100701B2 (en) Enhanced video systems and methods
US20030147624A1 (en) Method and apparatus for controlling a media player based on a non-user event
US8214368B2 (en) Device, method, and computer-readable recording medium for notifying content scene appearance
CN103327407B (en) Audio-visual content is set to watch level method for distinguishing
CN106507168A (en) A kind of video broadcasting method and device
CN103167361A (en) Method for processing an audiovisual content and corresponding device
WO2017181969A1 (en) Playback control method and device
JP2013109537A (en) Interest degree estimation device and program thereof
US20080256576A1 (en) Method and Apparatus for Detecting Content Item Boundaries
CN110856013A (en) Method, system and storage medium for identifying key segments in video
CN111654748A (en) Limit level picture detection method and device, display equipment and readable storage medium
EP2621180A2 (en) Electronic device and audio output method
US20180210906A1 (en) Method, apparatus and system for indexing content based on time information
KR20050026965A (en) Method of and system for controlling the operation of a video system
KR102185700B1 (en) Image display apparatus and information providing method thereof
US10349093B2 (en) System and method for deriving timeline metadata for video content
JP5458163B2 (en) Image processing apparatus and image processing apparatus control method
WO2023069047A1 (en) A face recognition system to identify the person on the screen
CN110139134B (en) Intelligent personalized bullet screen pushing method and system
KR101436908B1 (en) Image processing apparatus and method thereof
US20150179228A1 (en) Synchronized movie summary
EP3471100B1 (en) Method and system for synchronising between an item of reference audiovisual content and an altered television broadcast version thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191210