CN1332357C - Sensitive video frequency detection based on kinematic skin division - Google Patents

Sensitive video frequency detection based on kinematic skin division

Info

Publication number
CN1332357C
CN1332357C CNB2004100335406A CN200410033540A
Authority
CN
China
Prior art keywords
video
skin
frame
cube
closed curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100335406A
Other languages
Chinese (zh)
Other versions
CN1680977A (en)
Inventor
谭铁牛
胡卫明
王谦
杨金峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CNB2004100335406A priority Critical patent/CN1332357C/en
Publication of CN1680977A publication Critical patent/CN1680977A/en
Application granted granted Critical
Publication of CN1332357C publication Critical patent/CN1332357C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a sensitive video detection method based on moving skin-color segmentation, which comprises the steps of: segmenting the moving objects in a video and extracting their boundaries; detecting skin color on the segmented objects; and computing the degree of skin exposure of each moving object. On the basis of these per-frame computations, the sensitivity of the whole video is evaluated comprehensively. The invention applies computer vision technology to the Internet to identify and filter objectionable content, protecting users from such material. Tested on an international standard library, the invention achieves a high recognition rate.

Description

Sensitive video detection method based on motion skin-color segmentation
Technical field
The present invention relates to the intersection of computer networking technology and computer vision technology, and in particular to a sensitive video detection method based on motion skin-color segmentation.
Background technology
The rapid spread and wide use of the Internet have had a profound influence on the development of computing. Networked software applications impose new requirements on software engineering, among which network information security is a particularly important problem, and the filtering of sensitive network content is a concrete problem within information security. Some research has been done on such filtering, and several web-page filtering and detection products have appeared. Anti-pornography software such as SmartFilter [http://www.smartfilter.de/] and NoPorn [http://www.noporn.com.tw/] can prevent ordinary computer users from reaching pornographic sites through a browser. SmartFilter manages and supervises Internet access through its control-list database: the company's staff collect information on newly appearing or vanished websites from servers around the world every day, and the database is updated once a week; every client using SmartFilter products downloads the latest control-list database weekly. To let administrators configure the product according to the differing needs, interests, and policies of each organization, the control-list database is divided into 27 categories, such as chat, online dating, gambling, extremist speech, rumor, and pornography. However, this product involves too much manual work and cannot process information automatically. The eefind multimedia search series produced by VisionNEXT [http://www.eefind.com/] includes a filtering component that can perform simple image detection, search, and filtering, but its accuracy in detecting, filtering, and searching sensitive images is too low.
In sensitive-content detection, some universities abroad (Berkeley, Iowa) have partially explored the analysis of sensitive pictures on the network. Fleck and Forsyth [Margaret Fleck, David Forsyth, and Chris Bregler, "Finding Naked People", European Conference on Computer Vision, Volume II, 1996, pp. 592-602] detect human skin, link the skin areas of the body parts into groups, and thereby recognize whether a picture contains nudity. Their system uses combined color and texture properties to mark skin-like pixels and then passes these skin regions to a dedicated grouper, which uses geometric constraints from human anatomy to assemble the regions into a human outline. If the grouper finds a sufficiently complex structure, it concludes that the picture contains a person. The method is effective where large areas of shading and skin color are present. Ian Craw of Aberdeen learns a probability model of skin color with a SOM network for skin detection: a test sample fed into the network yields the probability that it is skin, and a threshold then decides whether it is skin [David Brown, Ian Craw, and Julian Lewthwaite, "A SOM based approach to skin detection with application in real time systems", PDF preprint, Department of Mathematical Sciences, University of Aberdeen, 2001]. In addition, there are general content-based image retrieval (CBIR) systems, such as IBM's QBIC, Attrasoft's ImageFinder, and MWLabs' IMatch; these systems all support matching on features such as color, shape, and texture [Colin C. Venters and Matthew Cooper, "A Review of Content-Based Image Retrieval Systems", University of Manchester, 2000]. But such general image-retrieval systems are not specifically designed for sensitive pictures, and their efficiency is low when searching for them.
Domestic network-security products include PC Bodyguard. Version 1.0 screens pornographic information by two means: packet filtering based on site addresses, and intelligent information filtering. The basis of the intelligent filtering is feature extraction from intercepted upper-layer network packets and packets from known bad sites; the product has no capability for automatic recognition and understanding of sensitive images.
Video filtering is built on the foundation of image filtering. Filtering of dynamic pornographic information is at present still a blank area: work on filtering network images is scarce at home and abroad, and filtering of network video is almost nonexistent. This is mainly because video-filtering technology is still immature; filtering video is harder than filtering still images and has higher real-time requirements. Yet there is a strong social demand, because dynamic pornographic content is more harmful. Moreover, the complete set of methods distilled from our research on filtering dynamic pornographic video has important reference value for human behavior analysis and semantic understanding in computer vision.
Summary of the invention
The purpose of this invention is to provide a sensitive video detection method based on motion skin-color segmentation.
To achieve the above object, a sensitive video detection method based on motion skin-color segmentation comprises the steps of:
segmenting the moving objects in the video and extracting their boundaries by a level set method that evolves a partial differential equation;
performing skin-color detection on the segmented objects with a cube skin-color model based on a relational database, obtaining the degree of skin exposure relative to the moving objects;
computing the single-frame sensitivity f(t) for each frame and, on that basis, making a comprehensive evaluation of the sensitivity of the whole video.
The present invention applies computer vision technology to the Internet to identify and filter objectionable content, protecting users from its harm. Tested on an international standard library, the invention achieves a high recognition rate.
Description of drawings
Fig. 1 shows an example of moving-region segmentation and boundary extraction;
Fig. 2 shows the cube skin-color model;
Fig. 3 is the overall framework of sensitive video detection;
Fig. 4 shows the influence of different δ values on sensitive video monitoring;
Fig. 5 is a schematic diagram of the distribution of sensitive frames in a video.
Embodiment
Extraction of moving-object boundaries in video:
Segmenting the moving objects in a video is one of the hardest and most important problems in video processing, video compression, and computer vision. The traditional approach first estimates motion parameters and then segments, so if the motion estimation is not accurate enough, the segmentation quality is poor. Here, we use a level set method, evolving a partial differential equation, to determine and segment the moving boundaries. The level set method is a numerical technique for solving a class of partial differential equations, proposed by Osher and Sethian, that has attracted wide attention in computer vision and graphics in recent years. Unlike the traditional partial differential equations set up for segmenting still images, the equation we set up for video sequences exploits motion information.
Let r(x, y, t) denote the family of curves generated from the initial curve r₀, and suppose the speed along the normal direction N̄ is F. The rate of change of the curve is then:

r_t(x, y, t) = F·N̄

Express the closed curve r(t) in implicit form:

Φ(r(x, y, t), t) = 0, with initial condition Φ(x, y, t = 0) = r₀

Differentiating both sides with respect to t yields the level set evolution equation Φ_t + F·|∇Φ| = 0, which we solve on a fixed grid by finite differences:

Φ_{i,j}^{n+1} = Φ_{i,j}^n − Δt·h·( max(F_{i,j}, 0)·∇⁺ + min(F_{i,j}, 0)·∇⁻ )

where h is the grid spacing, n the iteration number, Δt the time step, Φ_{i,j}^n the level value at pixel (i, j) at step n, and F_{i,j} the corresponding speed. The upwind gradient approximations are:

∇⁺ = ( max(Φ_{i,j} − Φ_{i−1,j}, 0)² + min(Φ_{i+1,j} − Φ_{i,j}, 0)² + max(Φ_{i,j} − Φ_{i,j−1}, 0)² + min(Φ_{i,j+1} − Φ_{i,j}, 0)² )^{1/2}

∇⁻ = ( max(Φ_{i+1,j} − Φ_{i,j}, 0)² + min(Φ_{i,j} − Φ_{i−1,j}, 0)² + max(Φ_{i,j+1} − Φ_{i,j}, 0)² + min(Φ_{i,j} − Φ_{i,j−1}, 0)² )^{1/2}

In still images the speed F is usually determined by the image gradient; in a video sequence we can also exploit motion information, taking F of the form:

F = g(I_D, σ_D)·g(|∇I|, σ_T)·(K + r)

where K is the curvature, r is a constant, g(I_D, σ_D) is a Gaussian estimate of the inter-frame difference, and g(|∇I|, σ_T) is a Gaussian estimate of the image gradient ∇I.
The segmentation and boundary extraction of moving regions in video are shown in Fig. 1.
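As a rough sketch of the discretization above (assuming simple edge-padding at the image border, which the patent does not specify), one upwind update step can be written as:

```python
import numpy as np

def level_set_step(phi, F, dt=0.1, h=1.0):
    """One Osher-Sethian upwind update of the level set function.

    phi: 2-D array of level values; F: per-pixel speed of the same shape.
    Border pixels use edge-padding, an assumption not stated in the patent.
    """
    p = np.pad(phi, 1, mode="edge")
    dxm = p[1:-1, 1:-1] - p[1:-1, :-2]   # backward difference in x
    dxp = p[1:-1, 2:] - p[1:-1, 1:-1]    # forward difference in x
    dym = p[1:-1, 1:-1] - p[:-2, 1:-1]   # backward difference in y
    dyp = p[2:, 1:-1] - p[1:-1, 1:-1]    # forward difference in y
    grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                        + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_minus = np.sqrt(np.maximum(dxp, 0)**2 + np.minimum(dxm, 0)**2
                         + np.maximum(dyp, 0)**2 + np.minimum(dym, 0)**2)
    return phi - dt * h * (np.maximum(F, 0) * grad_plus
                           + np.minimum(F, 0) * grad_minus)
```

With F = 0 everywhere the surface does not move, and a positive F shrinks the level values, as expected from the update formula.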
Skin-color detection on the moving objects in the video:
To judge whether an image point (x₀, y₀) lies inside a closed curve: let U_y be the set of y-coordinates of curve points whose x-coordinate is x₀, and U_x the set of x-coordinates of curve points whose y-coordinate is y₀. If U_y has more than one element and y₀ lies between the minimum and maximum elements of U_y, and U_x has more than one element and x₀ lies between the minimum and maximum elements of U_x, then the point (x₀, y₀) is judged to be inside the closed curve. This method is only valid for convex closed curves. The area enclosed by the closed curve is simply the total number of pixels inside it. For each pixel inside the closed curve we check whether it is skin-colored, using a cube skin-color model based on database statistics.
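The containment test described above can be sketched directly (the function and variable names are illustrative, not from the patent):

```python
def inside_convex_curve(x0, y0, curve_points):
    """Test whether (x0, y0) lies inside a convex closed curve.

    curve_points: iterable of (x, y) pixel coordinates on the curve.
    Valid only for convex curves, as the patent notes.
    """
    u_y = [y for (x, y) in curve_points if x == x0]  # y-coords at column x0
    u_x = [x for (x, y) in curve_points if y == y0]  # x-coords at row y0
    return (len(u_y) > 1 and min(u_y) <= y0 <= max(u_y)
            and len(u_x) > 1 and min(u_x) <= x0 <= max(u_x))

# A small axis-aligned square traced as a closed curve:
square = ([(x, 0) for x in range(5)] + [(4, y) for y in range(5)]
          + [(x, 4) for x in range(5)] + [(0, y) for y in range(5)])
```

For a concave curve the min/max test can report an outside point as inside, which is exactly the convexity limitation the patent points out.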
Traditional skin-color model [Jones 1998]:
In RGB space the three components r, g, b represent not only the color but also the brightness of the illumination. To eliminate the illumination effect, the colors are normalized: r = R/(R+G+B), b = B/(R+G+B). The color model can then be represented by a Gaussian model N(m, C):

mean: m = E{x}, where x = (r, b)^T
covariance: C = E{(x − m)(x − m)^T}
P(r, b) = exp[−0.5·(x − m)^T C⁻¹ (x − m)], where x = (r, b)^T

By choosing a suitable threshold, the skin can be segmented. The defect of this model is that reality is not so simple: the true skin-color distribution may be far more complex than a Gaussian, and incorporating feedback is troublesome.
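The traditional Gaussian model can be sketched as follows (a minimal illustration; the fitting helper and the sample data are assumptions, not part of the patent):

```python
import numpy as np

def fit_gaussian_skin_model(skin_pixels):
    """Fit the traditional Gaussian skin model to RGB skin samples.

    skin_pixels: (N, 3) array-like of R, G, B values of known skin pixels.
    Returns the mean m and covariance C of the normalized (r, b) features.
    """
    rgb = np.asarray(skin_pixels, dtype=float)
    s = rgb.sum(axis=1, keepdims=True)                # R + G + B per pixel
    x = np.stack([rgb[:, 0], rgb[:, 2]], axis=1) / s  # (r, b) = (R, B)/(R+G+B)
    return x.mean(axis=0), np.cov(x, rowvar=False)

def skin_likelihood(pixel, m, c):
    """P(r, b) = exp[-0.5 (x - m)^T C^{-1} (x - m)] for one RGB pixel."""
    pixel = np.asarray(pixel, dtype=float)
    x = np.array([pixel[0], pixel[2]]) / pixel.sum()
    d = x - m
    return float(np.exp(-0.5 * d @ np.linalg.inv(c) @ d))
```

Thresholding this likelihood gives the classic skin/non-skin decision whose limitations the patent's cube model is meant to address.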
The method we adopt:
We adopt a method based on cube statistics. The RGB cube has size 256 × 256 × 256. We subdivide it into small cubes of size 8 × 8 × 8, obtaining 32 × 32 × 32 small cubes in total. The cube skin-color model is shown in Fig. 2.
At the same time, for statistical accuracy, we add a constraint inside each small cube, and on this basis we design a database and dynamically build the skin database. The database has the following characteristics: it can be built dynamically and updated by feedback during recognition, and it can be retrieved quickly (the database typically holds about 30,000 records).
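A minimal sketch of the cube model, with a plain dictionary standing in for the patent's relational skin database (the `min_count` threshold is an assumed stand-in for the per-cube constraint, which the text does not specify):

```python
def cube_index(r, g, b, cell=8):
    """Map an RGB pixel to its small-cube index in the 32x32x32 grid.

    Each axis of the 256^3 RGB cube is split into 256 // cell = 32 bins.
    """
    return (r // cell, g // cell, b // cell)

class CubeSkinModel:
    """Sketch of the statistics-based cube skin-color model."""

    def __init__(self, min_count=1):
        self.counts = {}            # stand-in for the ~30,000-record database
        self.min_count = min_count  # assumed per-cube constraint threshold

    def add_skin_sample(self, r, g, b):
        """Dynamic feedback: record one known skin pixel."""
        key = cube_index(r, g, b)
        self.counts[key] = self.counts.get(key, 0) + 1

    def is_skin(self, r, g, b):
        """A pixel is skin if its small cube satisfies the constraint."""
        return self.counts.get(cube_index(r, g, b), 0) >= self.min_count
```

Lookup is a single hash access, which matches the patent's requirement of fast retrieval and dynamic updating.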
Video sensitivity estimation:
The sensitivity f(t) of each frame is assessed as the proportion of skin in the moving region: the number of skin-colored pixels inside the closed curve divided by the area the curve encloses, where the enclosed area is simply the total number of pixels inside the closed curve. The sensitivity E of the whole video is then assessed as:

E = max_{t₂ − t₁ = δ} (1/δ)·∫_{t₁}^{t₂} f(t) dt

In effect, this equation computes the average sensitivity from t₁ to t₁ + δ and takes the maximum over all such windows. The influence of different window widths δ on sensitive video monitoring is shown in Fig. 4; δ is generally taken to be 4.
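The per-frame ratio and the windowed maximum can be sketched as follows (the exact form of f(t) is inferred from the description, since the original formula is given only as a figure):

```python
def frame_sensitivity(skin_pixels, region_pixels):
    """f(t): skin exposure of the moving region in one frame.

    Assumed form: skin-colored pixels inside the closed curve divided by
    the enclosed area (total pixels inside the curve).
    """
    return skin_pixels / region_pixels if region_pixels else 0.0

def video_sensitivity(f, delta=4):
    """E = max over windows of width delta of the mean of f(t).

    f: sequence of per-frame sensitivities; delta: window width
    (the patent generally takes delta = 4).
    """
    if len(f) < delta:
        return sum(f) / len(f) if f else 0.0
    return max(sum(f[i:i + delta]) / delta
               for i in range(len(f) - delta + 1))
```

Taking the maximum windowed average rather than the global average keeps a short burst of sensitive frames from being diluted by a long innocuous video.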
Embodiment
The whole sensitive video detection system is packaged as a COM component. A video is first input; the input may be a local file or a remote video URL. When a remote URL is received, the component downloads the video automatically and plays it as streaming media. As the video is downloaded, it is decompressed, and the motion-segmented region and boundary of each frame are computed. The pixels inside the closed region then undergo skin detection: the skin database is read first, and skin segmentation is performed on the basis of the pre-built skin-color model. Detecting sensitive video differs from detecting static sensitive images: a still image is a single frame that either is or is not sensitive, while video contains much redundant information. If one frame is sensitive, it is better not to immediately decide that the whole video is sensitive, since that would raise the detection error rate; by common sense, a genuinely sensitive video will not have only a single sensitive key frame. We therefore compute the distribution of sensitive frames: if the density of sensitive key frames within some time period is high enough, we have reason to conclude that the video contains sensitive information. In fact, judging video sensitivity from the distribution density of sensitive frames is often more accurate than static sensitive-image detection. The detection block diagram of sensitive video is shown in Fig. 3.
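The density decision described above can be sketched as follows (window size and threshold are illustrative assumptions, not values from the patent):

```python
def sensitive_frame_density(flags, window, threshold):
    """Decide whether some time window has too high a density of sensitive frames.

    flags: per-key-frame values (1/True = frame judged sensitive).
    window, threshold: assumed parameters; the patent only says the
    density within "a certain time period" must be high enough.
    """
    n = len(flags)
    if n == 0:
        return False
    if n < window:
        return sum(flags) / n >= threshold
    return any(sum(flags[i:i + window]) / window >= threshold
               for i in range(n - window + 1))
```

A single isolated sensitive frame falls below the threshold and is ignored, while a dense run of sensitive frames triggers the decision, which is the behavior the embodiment argues for.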
Sensitivity evaluation:
We estimate the sensitivity of each key frame to obtain the distribution of sensitive frames in the video, as shown in Fig. 5, where red marks frames that may contain sensitive information.
By evaluating the distribution density of sensitive frames, we can decide whether a video is sensitive. On a sample of 100 videos, the detection accuracy for sensitive video reached 86.5%, with a false-detection rate of 4%.

Claims (7)

1. A sensitive video detection method based on motion skin-color segmentation, comprising the steps of:
segmenting the moving objects in the video and extracting their boundaries by a level set method that evolves a partial differential equation;
performing skin-color detection on the segmented objects with a cube skin-color model based on a relational database, obtaining the degree of skin exposure relative to the moving objects;
computing the single-frame sensitivity f(t) for each frame and, on that basis, making a comprehensive evaluation of the sensitivity of the whole video.
2. The method of claim 1, wherein said skin detection comprises the steps of:
judging whether an image point is inside the closed curve;
subdividing the RGB cube.
3. The method of claim 2, wherein a constraint is applied within each subdivided small cube.
4. The method of claim 2 or 3, wherein the subdivided cubes and the constraints within them are stored, and the constraints are dynamically updated, by means of a database.
5. The method of claim 1, wherein the sensitivity of every frame is assessed by the following formula:
wherein the area enclosed by the closed curve is simply the total number of pixels inside the closed curve.
6. The method of claim 1 or 5, wherein the sensitivity E of the whole video is assessed by:
E = max_{t₂ − t₁ = δ} (1/δ)·∫_{t₁}^{t₂} f(t) dt
wherein δ is the inter-frame window width.
7. The method of claim 6, wherein the window width δ is 4.
CNB2004100335406A 2004-04-06 2004-04-06 Sensitive video frequency detection based on kinematic skin division Expired - Fee Related CN1332357C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100335406A CN1332357C (en) 2004-04-06 2004-04-06 Sensitive video frequency detection based on kinematic skin division

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100335406A CN1332357C (en) 2004-04-06 2004-04-06 Sensitive video frequency detection based on kinematic skin division

Publications (2)

Publication Number Publication Date
CN1680977A CN1680977A (en) 2005-10-12
CN1332357C true CN1332357C (en) 2007-08-15

Family

ID=35067554

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100335406A Expired - Fee Related CN1332357C (en) 2004-04-06 2004-04-06 Sensitive video frequency detection based on kinematic skin division

Country Status (1)

Country Link
CN (1) CN1332357C (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923652B (en) * 2010-07-23 2012-05-30 华中师范大学 Pornographic picture identification method based on joint detection of skin colors and featured body parts
CN102014295B (en) * 2010-11-19 2012-11-28 嘉兴学院 Network sensitive video detection method
CN104014122B (en) * 2014-06-17 2016-04-20 叶一火 Based on the sports and competitions back-up system of internet
WO2017107209A1 (en) * 2015-12-25 2017-06-29 王晓光 Method and system for image recognition in video software
CN107566903B (en) * 2017-09-11 2020-07-03 北京匠数科技有限公司 Video filtering device and method and video display system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002091302A2 (en) * 2001-05-04 2002-11-14 Legend Films, Llc Image sequence enhancement system and method
JP2003044859A (en) * 2001-07-30 2003-02-14 Matsushita Electric Ind Co Ltd Device for tracing movement and method for tracing person

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002091302A2 (en) * 2001-05-04 2002-11-14 Legend Films, Llc Image sequence enhancement system and method
JP2003044859A (en) * 2001-07-30 2003-02-14 Matsushita Electric Ind Co Ltd Device for tracing movement and method for tracing person

Also Published As

Publication number Publication date
CN1680977A (en) 2005-10-12

Similar Documents

Publication Publication Date Title
US20200012674A1 (en) System and methods thereof for generation of taxonomies based on an analysis of multimedia content elements
Boom et al. A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage
Park et al. Content-based image classification using a neural network
US10032081B2 (en) Content-based video representation
CN111797326B (en) False news detection method and system integrating multi-scale visual information
US8463000B1 (en) Content identification based on a search of a fingerprint database
US20040019574A1 (en) Processing mixed numeric and/or non-numeric data
CN107944035B (en) Image recommendation method integrating visual features and user scores
CN111783712A (en) Video processing method, device, equipment and medium
Kekre et al. Content Based Image Retreival Using Fusion of Gabor Magnitude and Modified Block Truncation Coding
CN101051344B (en) Sensitive video frequency identifying method based on light stream direction histogram and skin color stream form variation
Qamar Bhatti et al. Explicit content detection system: An approach towards a safe and ethical environment
CN113420198A (en) Patent infringement clue web crawler method for web commodities
Jayanthiladevi et al. Text, images, and video analytics for fog computing
CN1332357C (en) Sensitive video frequency detection based on kinematic skin division
CN114692593A (en) Network information safety monitoring and early warning method
Brown et al. Design of a digital forensics image mining system
Xu et al. Cross-browser differences detection based on an empirical metric for web page visual similarity
CN1508755A (en) Sensitive video-frequency detecting method
CN115080865A (en) E-commerce data operation management system based on multidimensional data analysis
US10867162B2 (en) Data processing apparatus, data processing method, and non-transitory storage medium
Guldogan et al. Personalized representative image selection for shared photo albums
CN113743188A (en) Internet video low-custom behavior detection method based on feature fusion
Vadivukarassi et al. A framework of keyword based image retrieval using proposed Hog_Sift feature extraction method from Twitter Dataset
CN109977301A (en) A kind of user's use habit method for digging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070815

Termination date: 20180406