KR20160104826A - Method and apparatus for detecting obscene video - Google Patents
Method and apparatus for detecting obscene video Download PDFInfo
- Publication number
- KR20160104826A (application KR1020150027427A)
- Authority
- KR
- South Korea
- Prior art keywords
- harmful
- harmfulness
- shot
- video
- moving picture
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
- H04N21/4542—Blocking scenes or portions of the received content, e.g. censoring scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
- H04N21/4545—Input to filtering algorithms, e.g. filtering a region of the image
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method and an apparatus for determining a harmful video. A method for determining a harmful video according to an embodiment of the present invention includes: detecting shot changes of a video; dividing the time axis of the video into processing sections for harmfulness analysis based on the detected shot changes; extracting spatiotemporal features in each processing section and calculating the harmfulness probability of the section; and judging the harmfulness of the video on the basis of the harmfulness probabilities calculated for the respective processing sections.
Description
[0001] The present invention relates to a method and an apparatus for determining a harmful moving picture, and more particularly, to a method and an apparatus for determining a harmful moving picture that improve classification accuracy by using shot length information.
Recently, owing to the development of communication network technology and the popularization of PCs and mobile devices, downloading and viewing video content without restriction of time or place has become part of daily life. With this convenience, however, the risk that children and adolescents are exposed to harmful content such as obscene video is also growing. Accordingly, there is an increasing demand for technology that automatically analyzes the contents of video, determines whether it is harmful, and blocks harmful content.
Recent approaches to identifying and blocking harmful content can be categorized into several types. In the following description, the term 'harmful video' means video content that shows exposed body parts such as women's breasts, or sexually explicit acts such as intercourse and caressing.
The first type identifies harmfulness by using feature information that a person can perceive intuitively in harmful images. As an example, a specific color-distribution region such as a skin region is extracted from a frame image of the moving image, and a feature vector representing the center of gravity and the distribution pattern of the region is calculated from the set of pixels included in the skin region. A recognizer such as a Multi-Layer Perceptron (MLP) or a Support Vector Machine (SVM) takes the calculated feature vector as input and is trained to determine whether the input image is harmful. This approach exploits the prior knowledge that harmful content basically depicts sexual behavior in a state of high exposure, and thus relies on characteristics a person can perceive intuitively.
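As an illustration only (the patent cites this first type as prior art and does not specify an implementation), a minimal skin-region feature extractor of this kind might look as follows; the RGB skin rule, its thresholds, and the function name are textbook heuristics assumed for the sketch, not values from the patent.

```python
import numpy as np

def skin_features(frame):
    """Extract a simple skin-region feature vector from an RGB frame.

    Returns (skin_ratio, centroid_y, centroid_x) with the centroid
    normalized to [0, 1]. A crude fixed RGB skin rule stands in for the
    trained color-distribution model such a system would actually use.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # Classic heuristic skin rule: red-dominant, sufficiently bright pixels.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    if len(ys) == 0:
        return np.array([0.0, 0.5, 0.5])
    return np.array([mask.mean(), ys.mean() / h, xs.mean() / w])

# Example: a frame whose left half is a skin-like color.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[:, :32] = (200, 140, 110)   # skin-toned region
feat = skin_features(frame)
print(feat)  # ~[0.5, 0.49, 0.24]: half the pixels, centroid in left half
```

In a full system this feature vector would be fed to an MLP or SVM classifier trained on labeled harmful/harmless frames.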
The second type determines whether a frame image extracted from a moving image is harmful by using statistical characteristics automatically extracted by a machine. The BOVW (Bag of Visual Words) model, originally studied for the automatic classification of document contents, is applied to the image recognition problem: a visual vocabulary is constructed from natural features automatically detected in images, and the category of an input image is then identified automatically, in the same way that the topic of a document is recognized from its words. This method has been reported in the academic literature to classify pornographic images in which genitalia are exposed.
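A minimal sketch of the BOVW quantization step described above, assuming a visual vocabulary has already been learned (e.g. by clustering local descriptors); the toy two-dimensional descriptors and vocabulary here are invented purely for illustration.

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary and
    return a normalized bag-of-visual-words histogram."""
    # Distance of every descriptor to every visual word.
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)                      # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy vocabulary of 3 visual words in a 2-D descriptor space.
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9], [0.05, 0.05]])
h = bovw_histogram(desc, vocab)
print(h)  # [0.5, 0.25, 0.25]
```

The resulting histogram would then be classified (e.g. by an SVM) exactly as in document-topic classification.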
The third type discriminates the harmfulness of the input moving image by using image change information on the time axis instead of frame-level image characteristics. The image change information on the time axis may be a difference image between frames, the tendency of feature-point trajectories detected in the moving image, or the distribution characteristics of motion vectors. Compared with the first and second types above, which use only features extracted from frame-level still images, this approach can reduce the errors that occur when determining the harmfulness of the entire input video, and can thus improve the accuracy of the discrimination result.
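A simple stand-in for the time-axis change information mentioned above is the mean absolute inter-frame difference; this is an illustrative sketch of one such measure, not the actual implementation of any prior-art system.

```python
import numpy as np

def temporal_activity(frames):
    """Mean absolute inter-frame difference for each consecutive pair of
    frames, a simple proxy for time-axis change information (difference
    images, trajectories, motion vectors)."""
    return np.array([
        np.abs(frames[i + 1].astype(int) - frames[i].astype(int)).mean()
        for i in range(len(frames) - 1)
    ])

static = [np.full((8, 8), 100, dtype=np.uint8)] * 3
moving = [np.full((8, 8), v, dtype=np.uint8) for v in (0, 120, 30)]
print(temporal_activity(static))  # [0. 0.]
print(temporal_activity(moving))  # [120. 90.]
```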
SUMMARY OF THE INVENTION It is an object of the present invention to provide a method and an apparatus for determining a harmful video that improve classification accuracy by using shot length information.
According to another aspect of the present invention, there is provided a method for determining a harmful moving picture, the method comprising: detecting a shot change of a moving picture; dividing the time axis of the moving picture into processing sections for analyzing its harmfulness based on the detected shot change; extracting spatiotemporal features in each processing section and calculating the harmfulness probability of the section; and judging the harmfulness of the moving picture on the basis of the harmfulness probabilities calculated for the respective processing sections.
According to the present invention, classification accuracy can be improved when a harmful moving picture is discriminated.
1 is a flowchart illustrating a method for determining a harmful moving picture according to an embodiment of the present invention.
2 is a flowchart showing a harmfulness calculation method according to an embodiment of the present invention.
3 is a diagram illustrating shot length information according to an embodiment of the present invention.
4 is a diagram showing the possibility of harmfulness in the harmful moving image group and the harmless moving image group acquired in the learning step.
5 is a block diagram showing the internal structure of a harmful moving picture determining apparatus according to an embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, in the drawings, the same components are denoted by the same reference symbols wherever possible. Detailed descriptions of well-known functions and constructions that could obscure the gist of the present invention are omitted.
Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "indirectly connected" with another device in between. Also, when an element is said to "comprise" a component, this means that it may include other components as well, rather than excluding them, unless specifically stated otherwise.
1 is a flowchart illustrating a method for determining a harmful moving picture according to an embodiment of the present invention.
Referring to FIG. 1, shot changes of the input moving picture are first detected.
Then, based on the detected shot changes, the time axis of the moving picture is divided into processing sections for analyzing its harmfulness.
Next, spatiotemporal features are extracted in each processing section and the harmfulness probability of the section is calculated.
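Although the patent discloses no source code, the shot-change detection and section-division steps just described can be sketched as follows; the histogram-intersection measure, its threshold, and the function names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def detect_shot_changes(frames, bins=16, threshold=0.5):
    """Report shot boundaries as indices of frames whose gray-level
    histogram intersects weakly with the previous frame's histogram."""
    boundaries = []
    prev = None
    for i, f in enumerate(frames):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        h = h / h.sum()
        # Histogram intersection near 1 means similar frames; a small
        # value signals an abrupt shot change.
        if prev is not None and np.minimum(prev, h).sum() < threshold:
            boundaries.append(i)
        prev = h
    return boundaries

def split_sections(n_frames, boundaries):
    """Divide the frame-index axis into processing sections at the
    detected shot boundaries."""
    cuts = [0] + boundaries + [n_frames]
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

# Two constant 'shots': dark frames then bright frames.
frames = [np.full((8, 8), 10, dtype=np.uint8)] * 4 + \
         [np.full((8, 8), 200, dtype=np.uint8)] * 4
b = detect_shot_changes(frames)
print(b)                               # [4]
print(split_sections(len(frames), b))  # [(0, 4), (4, 8)]
```

Each resulting section would then be analyzed for its harmfulness probability and shot length.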
FIG. 2 shows a more detailed process for calculating the hazard probability.
2 is a flowchart showing a harmfulness calculation method according to an embodiment of the present invention.
Referring to FIG. 2, in calculating the harmfulness probability, the harmfulness probability m of the current processing section is first computed from the extracted spatiotemporal features, and the shot length information t of the section is obtained.
3 is a diagram illustrating shot length information according to an embodiment of the present invention.
FIG. 3 shows the shot length information defined for the current processing section.
Using this shot length information, the harmfulness of the current processing section is determined from the harmfulness probability m and the shot length t by means of Equations (1) and (2) below.
Equation (1)
P(Obj|m, t) = P(m|Obj) · P(t|Obj) · P(Obj) / P(m, t)
Equation (2)
P(Non|m, t) = P(m|Non) · P(t|Non) · P(Non) / P(m, t)
Assuming that the prior probabilities P(Obj) and P(Non) for the harmfulness of the processing section are equal, the posterior probabilities P(Obj|m, t) and P(Non|m, t) in Equations (1) and (2) can be interpreted as values containing the same proportional constant k, as in Equations (3) and (4) below (m and t being treated as conditionally independent given the class). As a result, when P(m|Obj) · P(t|Obj) is greater than P(m|Non) · P(t|Non), the current processing section can be determined to be a harmful video section.
Equation (3)
P(Obj|m, t) = k · P(m|Obj) · P(t|Obj)
Equation (4)
P(Non|m, t) = k · P(m|Non) · P(t|Non)
Here P(m|Obj), P(t|Obj), P(m|Non), and P(t|Non) are the likelihood values of m and t for harmful video and harmless video, respectively; they are read from density functions for the distributions of harmfulness probability and shot length in harmful and harmless videos. The density function can be estimated as a continuous function, but it can also be replaced with an approximated density function in the form of a histogram obtained in the learning step, as shown in FIG. 4.
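The decision rule of Equations (3) and (4), with the likelihoods read from histogram-approximated density functions as described, can be sketched as follows; the synthetic training distributions and all function names are invented purely to exercise the sketch and are not data from the patent.

```python
import numpy as np

def histogram_density(samples, bins, rng):
    """Approximate a density function as a normalized histogram,
    as in the learning step."""
    hist, edges = np.histogram(samples, bins=bins, range=rng, density=True)
    return hist, edges

def likelihood(value, hist, edges):
    """Read the approximated density value for a given observation."""
    i = np.clip(np.searchsorted(edges, value, side='right') - 1, 0, len(hist) - 1)
    return hist[i]

def is_harmful_section(m, t, h_m, h_t, n_m, n_t):
    """Equal-prior decision per Equations (3)/(4): harmful when
    P(m|Obj)·P(t|Obj) > P(m|Non)·P(t|Non)."""
    p_obj = likelihood(m, *h_m) * likelihood(t, *h_t)
    p_non = likelihood(m, *n_m) * likelihood(t, *n_t)
    return bool(p_obj > p_non)

rng = np.random.default_rng(0)
# Synthetic training data: harmful sections have high harmfulness
# probability and long shots; harmless ones the opposite.
h_m = histogram_density(rng.uniform(0.6, 1.0, 1000), 10, (0, 1))
h_t = histogram_density(rng.uniform(5.0, 30.0, 1000), 10, (0, 30))
n_m = histogram_density(rng.uniform(0.0, 0.4, 1000), 10, (0, 1))
n_t = histogram_density(rng.uniform(0.5, 5.0, 1000), 10, (0, 30))
print(is_harmful_section(0.8, 20.0, h_m, h_t, n_m, n_t))  # True
print(is_harmful_section(0.1, 2.0, h_m, h_t, n_m, n_t))   # False
```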
4 is a diagram showing the possibility of harmfulness in the harmful moving image group and the harmless moving image group acquired in the learning step.
FIG. 4 shows histograms of the harmfulness probability obtained from the harmful moving image group and the harmless moving image group in the learning step; these histograms serve as the approximated density functions from which the likelihood values are read.
Referring back to FIG. 1, the harmfulness of the entire moving picture is finally judged on the basis of the harmfulness probabilities calculated for the respective processing sections.
According to the present invention, harmful videos tend to contain frequent long-take shots, in which a specific region is displayed for a long time or a specific action is shown continuously in order to give sexual satisfaction to the viewer. The accuracy of harmfulness discrimination can therefore be improved by using the shot length information of the moving image as feature information.
The embodiments of the invention described thus far may be embodied as instructions that are stored in a computer-readable storage medium and executed by a processor. When executed by a processor, these instructions may generate means for implementing the functions/operations specified in the flowcharts and/or block diagrams described above. Each block in the flowcharts/block diagrams may represent a hardware and/or software module or logic that implements an embodiment of the present invention. The functions noted in the blocks may also occur out of the order shown in the drawings, or may occur simultaneously.
The computer-readable medium may include, for example, but is not limited to, nonvolatile memory such as a floppy disk, ROM, flash memory, disk drive memory, CD-ROM, or other persistent storage.
5 is a block diagram showing the internal structure of a harmful moving picture determining apparatus according to an embodiment of the present invention.
Referring to FIG. 5, the apparatus 500 for determining a harmful moving picture according to an embodiment of the present invention may include a video input unit 510 together with the further processing components 520 and 530 shown in the figure.
The video input unit 510 receives the moving picture whose harmfulness is to be analyzed.
The apparatus detects shot changes in the input moving picture and, based on the detected shot changes, divides the time axis of the moving picture into processing sections for harmfulness analysis.
For each processing section, spatiotemporal features are extracted and the harmfulness probability of the section is calculated; the harmfulness of the moving picture is then judged on the basis of the harmfulness probabilities calculated for the respective processing sections.
A section shot length calculation unit 536 can estimate the shot length information, defined in the same manner as in FIG. 3, for the current processing section.
The harmfulness of the current processing section can then be determined from the harmfulness probability m and the shot length t by comparing the posterior probabilities of Equations (1) and (2) below.
Equation (1)
P(Obj|m, t) = P(m|Obj) · P(t|Obj) · P(Obj) / P(m, t)
Equation (2)
P(Non|m, t) = P(m|Non) · P(t|Non) · P(Non) / P(m, t)
Assuming that the prior probabilities P(Obj) and P(Non) for the harmfulness of the processing section are equal, the posterior probabilities P(Obj|m, t) and P(Non|m, t) in Equations (1) and (2) can be interpreted as values containing the same proportional constant k, as in Equations (3) and (4) below (m and t being treated as conditionally independent given the class). As a result, when P(m|Obj) · P(t|Obj) is greater than P(m|Non) · P(t|Non), the current processing section can be determined to be a harmful video section.
Equation (3)
P(Obj|m, t) = k · P(m|Obj) · P(t|Obj)
Equation (4)
P(Non|m, t) = k · P(m|Non) · P(t|Non)
Here P(m|Obj), P(t|Obj), P(m|Non), and P(t|Non) are the likelihood values of m and t for harmful video and harmless video, respectively; they are computed from the harmfulness probability of each processing section together with the density functions acquired in the learning step, approximated as histograms as shown in FIG. 4.
4 is a diagram showing the possibility of harmfulness in the harmful moving image group and the harmless moving image group acquired in the learning step.
FIG. 4 is a histogram of the hazard probability, and the
In the above description, the shot change detection and the harmfulness determination have been presented as separate functional components; this division is functional, however, and the components may be combined into a single module or further subdivided in an actual implementation.
The embodiments of the present invention disclosed in this specification and the drawings are merely illustrative examples presented to facilitate understanding of the present invention, and are not intended to limit its scope. It will be apparent to those skilled in the art that other modifications based on the technical idea of the present invention can be carried out in addition to the embodiments disclosed herein.
510: Video input unit
520:
530:
Claims (1)
A method for determining a harmful moving picture, the method comprising:
detecting a shot change of a moving picture;
dividing a processing period on a time axis for analyzing a harmfulness of the moving picture based on the detected shot change;
extracting a spatiotemporal feature in each processing section and calculating a harmfulness probability of the processing section; and
judging the harmfulness of the moving picture on the basis of the harmfulness probability calculated for each of the processing sections.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150027427A KR20160104826A (en) | 2015-02-26 | 2015-02-26 | Method and apparatus for detecting obscene video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150027427A KR20160104826A (en) | 2015-02-26 | 2015-02-26 | Method and apparatus for detecting obscene video |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160104826A true KR20160104826A (en) | 2016-09-06 |
Family
ID=56945766
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150027427A KR20160104826A (en) | 2015-02-26 | 2015-02-26 | Method and apparatus for detecting obscene video |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160104826A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180071156A (en) * | 2016-12-19 | 2018-06-27 | 삼성전자주식회사 | Method and apparatus for filtering video |
US11470385B2 (en) | 2016-12-19 | 2022-10-11 | Samsung Electronics Co., Ltd. | Method and apparatus for filtering video |
- 2015-02-26: application KR1020150027427A filed in KR; published as KR20160104826A (legal status unknown)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200058075A1 (en) | Method and apparatus for obtaining vehicle loss assessment image, server and terminal device | |
CN105404884B (en) | Image analysis method | |
JP4819912B2 (en) | Multi-mode region of interest video object segmentation | |
EP3651055A1 (en) | Gesture recognition method, apparatus, and device | |
EP2923306B1 (en) | Method and apparatus for facial image processing | |
JP2012155727A (en) | Intra-mode region-of-interest image object segmentation | |
US9600898B2 (en) | Method and apparatus for separating foreground image, and computer-readable recording medium | |
RU2607774C2 (en) | Control method in image capture system, control apparatus and computer-readable storage medium | |
US8706663B2 (en) | Detection of people in real world videos and images | |
JP2016095849A (en) | Method and device for dividing foreground image, program, and recording medium | |
JP6024658B2 (en) | Object detection apparatus, object detection method, and program | |
US11748904B2 (en) | Gaze point estimation processing apparatus, gaze point estimation model generation apparatus, gaze point estimation processing system, and gaze point estimation processing method | |
JP2016085487A (en) | Information processing device, information processing method and computer program | |
KR20150051711A (en) | Apparatus and method for extracting skin area for blocking harmful content image | |
JPWO2016021147A1 (en) | Image processing system, image processing method, and recording medium for detecting stay of moving object from image | |
KR20160104826A (en) | Method and apparatus for detecting obscene video | |
CN108875488B (en) | Object tracking method, object tracking apparatus, and computer-readable storage medium | |
US20130076792A1 (en) | Image processing device, image processing method, and computer readable medium | |
US11527091B2 (en) | Analyzing apparatus, control method, and program | |
KR102065362B1 (en) | Apparatus and Method for extracting peak image in continuously photographed image | |
CN111353330A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110659537B (en) | Driver abnormal driving behavior detection method, computer device, and storage medium | |
KR20150047937A (en) | Method, apparatus and computer-readable recording medium for seperating the human and the background from the video | |
US20150139541A1 (en) | Apparatus and method for detecting harmful videos | |
KR101187481B1 (en) | Method and apparatus for extracting hand color to use image recognition based interface and image recognition based interface device |