TWI263944B - Naked body image detection method - Google Patents
- Publication number
- TWI263944B (application TW93117330A)
- Authority
- TW
- Taiwan
- Prior art keywords
- image
- color
- skin color
- skin
- block
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Description
1263944

Invention Description:

[Technical Field of the Invention]

The present invention relates to a method for detecting naked human-body images. It combines automatic learning with automatic detection so that filtering software can be applied with less effort from the user and a lower chance of misclassification. A single skin-color model inevitably also covers some non-skin colors, such as background colors. The subject of a naked body image is usually located at the center of the image and occupies the majority of its area, so the position and proportion of the skin-colored region are strong cues for deciding whether an image is a naked body image. A further characteristic is that the subject is a person; because a close-up portrait (head shot) can also satisfy these cues, face detection must be applied as well to avoid misjudging such photographs. By filtering the image through these successive criteria, false detections can be reduced to a minimum.

[Prior Art]

Thanks to vigorous promotion by government and private institutions, the Internet has become part of everyday life and an indispensable tool. Network access has been upgraded from the dial-up modems of the past to ADSL and coaxial cable (CABLE), whose bandwidth is tens of times greater, greatly relaxing the earlier bandwidth limits. While the convenience of the Internet brings many benefits, regulation of the information circulating on it, and in particular protection against pornographic content, remains difficult. Today's pornography filters rely almost entirely on text: they detect keyword strings and judge content from words alone. This observation is the motivation, starting point, and purpose of the inventors' research: to judge from the image itself whether it contains a naked human body.

Detecting naked human-body images is difficult mainly because pornographic images are often shot under special lighting or through color filters, which shift the colors of both the subject and the background. According to previous research, using only a single skin-color space causes human skin and similarly colored backgrounds to fall into the same region of the color space, making them hard to separate. The present invention exploits this very characteristic as the basis for skin-color classification.
1263944

[Summary of the Invention]

The naked-body image detection method of the present invention comprises the following steps: (a) performing a manual pre-processing operation on the collected human skin-color images; (b) feeding the foreground and background feature vectors of an input image into a neural network; (c) letting the neural network automatically classify the skin-color components of the image; (d) determining which skin-color subset the color distribution of the input image belongs to; (e) extracting the largest smooth skin-color region and computing the proportion of the image it occupies; if this proportion exceeds a threshold, the image may be a naked human-body image and processing continues; (f) checking whether the largest smooth skin-color region is located in the central area of the image; if so, processing continues; (g) checking whether the aspect ratio of the largest smooth skin-color region is below a threshold; if so, processing continues; and (h) using facial features to judge whether the largest smooth skin-color region is a close-up portrait (head shot); if it is not, the image is judged to be a naked human-body image.

Preferably, the manual pre-processing of the human skin-color images comprises: Step 1: manually building a database of naked-body skin-color images. Step 2: for every image in the database, separating the skin-colored regions by manual circling. Step 3: converting every RGB pixel of each circled skin region into the corresponding point of the UV color space, so that the skin region of each image maps to a CCH (Chroma Chart Histogram). Step 4: binarizing each CCH into B-CC and applying the morphological closing operation (Closing) to obtain a closed region CB-CC for each UV-space skin-color distribution. Step 5: comparing the CCHs of every pair of images: with X and Y denoting the total CCH mass of images x and y, X = ΣᵤΣᵥ x(u,v) and Y = ΣᵤΣᵥ y(u,v), the overlap region Z of their corresponding CB-CCs is found; if ΣᵤΣᵥ x(u,v)/X > 0.8 and ΣᵤΣᵥ y(u,v)/Y > 0.8 over Z, the two images are assigned to the same set. Step 6: grouping the sets obtained in Step 5 transitively; for example, if set 1 means that images A and B cover each other by more than 80% and set 2 means that images B and C cover each other by more than 80%, then A, B, and C are treated as one group. This yields several skin-color subsets, which form the database of naked-body skin colors; the data are divided into two parts, one used for learning and one for testing.
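The decision cascade of steps (e)–(h) can be sketched as follows. All threshold values here are illustrative placeholders (the patent states that the actual values are derived from experimental data), and the inputs are assumed to have been produced by the earlier stages.

```python
def classify_image(region_area, image_area, region_center, image_size,
                   ratio, face_area,
                   area_thresh=0.25, center_tol=0.25,
                   ratio_thresh=3.0, face_thresh=0.5):
    """Decision cascade over the largest smooth skin-color region.

    All thresholds are placeholder values for illustration only.
    """
    w, h = image_size
    # (e) the region must occupy a large enough share of the image
    if region_area / image_area < area_thresh:
        return "non-naked"
    # (f) the region centre must lie near the image centre
    cx, cy = region_center
    if abs(cx - w / 2) > center_tol * w or abs(cy - h / 2) > center_tol * h:
        return "non-naked"
    # (g) an overly elongated region is rejected
    if ratio > ratio_thresh:
        return "non-naked"
    # (h) a dominant face area means a head shot, not a naked-body image
    if face_area / region_area > face_thresh:
        return "head-shot"
    return "naked"
```

The cascade rejects an image at the first failed test, so the cheaper area and position tests run before face detection.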
Preferably, the CCHs of the skin-color images in each subset are accumulated into an A-CCH (Accumulated CCH), x = 1, ..., N, represented as a 256x256 matrix [h]ₓ. Because the number of sample images is limited, the raw distribution cannot faithfully represent the true probability of each color, which would cause judgment errors. An interpolation is therefore performed to compensate for the sampling error, and an enhancement is applied at the same time to sharpen the difference between skin and non-skin entries. The result of the interpolation and enhancement is called the IEA-CCH (Interpolated and Enhanced A-CCH), x = 1, 2, ..., N. The higher the value of an entry, the more likely the corresponding color is skin; entries with value zero represent non-skin (background) colors.

Preferably, the neural network is a network that records and automatically classifies the skin-color information of the groups. Its input-vector layer extracts a foreground skin-color feature vector and a background feature vector from the image to be judged.
The foreground skin-color feature vector is computed from blocks located at the center of the image, and the background feature vector from the four corners. Each image thus produces input vectors that are tested against the network model to obtain the skin-color distribution set that most likely matches the image.

The extraction of the foreground skin-color feature vector and the background feature vector comprises: Step 1, image normalization: to simplify subsequent processing, every input image is resized to (16M)x(16N) pixels, where M and N are positive integers, and then cut into 16x16 blocks. Step 2: the foreground skin-color feature vector and the background feature vector are computed from the normalized image. Step 3, correlation feature extraction: a two-dimensional space is used to represent the similarity distribution, yielding correlation feature matrices of the skin and of the background relative to each group's IEA-CCH. Step 4, feature reduction: each correlation feature matrix contains 256x256 entries; using this amount directly as network input would impose a severe training burden. To balance computation against the preservation of relative spatial information, each 256x256 matrix is reduced to a 32x32 array, giving two matrices, one for skin and one for background, each of size 32x32. Step 5: the input vector is generated from these arrays.

Preferably, each skin-color module in the skin-color module layer is a back-propagation neural network. Training comprises: taking a training sample; computing the error terms of the output and hidden layers together with the corrections of the weight matrices and bias vectors; updating the weights; and repeating these steps until convergence or until a fixed number of iterations has been performed. During testing, the N input vectors are fed to the N skin-color modules, and each module produces an output value in [0, 1] representing the correlation of the image with that group's skin color; the higher the value, the stronger the correlation. The N output values are then fed to the decision layer, which determines the group the image belongs to and thereby the skin-color space best suited to this image. The decision layer keeps the strongest value among the module outputs and sets the rest to zero.
For k = 1, 2, ..., N, the binarized output rₖ of the decision layer is

rₖ = 1, if fₖ > fⱼ for all j ≠ k and fₖ > 0.5;
rₖ = 0, otherwise.

Thus at most one output is set to 1: the human skin-color subset closest to the image, provided its similarity exceeds 0.5.

Preferably, the detection of human skin color proceeds as follows. The IEA-CCH (Interpolated and Enhanced Chroma Chart Histogram) of the group to which the input image belongs is used to detect the skin pixels of the image, which is then binarized so that skin areas are white and the background is black. A smoothness detection is performed on the binarized image to keep only smooth skin regions. Next, the morphological opening operation is applied: it breaks thin connections, smooths the region contours, and removes narrow necks and small spurs, so that skin-colored background areas connected to the body region by thin bridges are separated from it. The largest smooth skin-color region is then extracted and its area computed; if the area is below a threshold, the image is classified as a non-naked image; otherwise processing continues.

A rectangle covering the largest smooth skin-color region is constructed along the long axis of the region, with length L and width W. With Rgn_max denoting the area of the largest region, the average width W_avg = Rgn_max/L and the aspect ratio Ratio = L/W_avg are computed. If Ratio exceeds a threshold obtained from experimental data, the region is too elongated and the image is classified as a non-naked human-body image; otherwise the largest smooth skin-color region may correspond to a naked human body and processing continues.

Finally, a close-up judgment based on facial features is applied to the largest smooth skin-color region: face detection locates the rough extent of the face and then the two eyes and the mouth. If the ratio of the face area to the skin area exceeds a threshold, the image is judged to be a facial close-up rather than a naked body image; otherwise, the image is judged to be a naked human-body image.

[Embodiments]

To further illustrate the features and advantages of the present invention, a detailed description follows. Detection starts from color information, but a fixed skin-color model is difficult to use directly: poses vary widely and the color range of skin shifts with lighting, so precise skin detection is a hard problem. If a single skin-color space built from the union of all naked-body images were used to detect skin, the skin-color distribution would become overly broad, so that many background colors would also be detected as skin and false positives would increase.
We therefore first group the skin colors according to their color similarity, and by means of the learning and self-adjustment mechanism of a neural network record the distribution prototype (Prototype) of each group in the network links, thereby achieving precise skin-color detection.

Referring to the flowchart of Fig. 1, the invention uses the neural network to decide which skin-color space an input image belongs to, and then applies post-processing: the chosen skin-color space is used to detect the skin regions, and the image is passed through the successive criteria described above; if any criterion fails, the image is classified as a non-naked image. In this way effective website management and blocking of pornographic network content can be achieved.

Lighting and color filters shift not only the skin color but also the background color. To let the network account for this color-shift relationship, the input to the neural network contains both skin and background information, rather than the raw colors of the input image alone. To remove the influence of luminance, an RGB color image is first converted to a color space that separates chrominance from luminance, such as TSL, HSI, YUV, or L*a*b*; in this embodiment the YUV color space is chosen, and only the chrominance components U and V are used.

The pre-processing of the invention first classifies the collected images manually, so that reference data for the neural network input can be produced. Referring to the flowchart of Fig. 2, the steps are as follows:

1. Manually extract the skin regions of the original images: the skin part of every image in the database is separated by manual circling (see Appendix 1).
2. Convert the skin regions into Chroma Chart Histograms (CCH): every RGB pixel of each separated skin region is mapped to the corresponding point of the UV color space, so that the skin region of each image corresponds to one CCH (see Appendix 2).
3. Morphological closing (Closing): each CCH is binarized, the binarized CCH being denoted B-CC (see Appendix 3); a 3x1 structuring element (Structure Element) is then used to perform the closing operation on B-CC, producing a closed region denoted CB-CC. The purpose is to obtain a closed region for each UV-space skin-color distribution.
4. Pairwise comparison of the CCHs of all images, based on the areas of their CB-CCs: as shown in Appendix 4, let x and y be the CCHs of any two images, with totals X = ΣᵤΣᵥ x(u,v) and Y = ΣᵤΣᵥ y(u,v); the overlap region Z of their corresponding CB-CCs is found, and x and y are assigned to the same set if the following two conditions hold simultaneously.
Σ_{u∈Z} Σ_{v∈Z} x(u,v) / X > 0.8

Σ_{u∈Z} Σ_{v∈Z} y(u,v) / Y > 0.8
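Steps 2, 4, and 5 above (building a CCH from circled RGB skin pixels, the 80% overlap test, and the transitive grouping) can be sketched as follows. The RGB-to-YUV formula shown is one common digital conversion; the patent does not reproduce its exact coefficients, so they are an assumption here.

```python
def rgb_to_uv(r, g, b):
    """One common digital RGB -> YUV conversion (assumed coefficients);
    only the chrominance pair (U, V) is kept."""
    u = int(-0.169 * r - 0.331 * g + 0.5 * b + 128)
    v = int(0.5 * r - 0.419 * g - 0.081 * b + 128)
    return min(max(u, 0), 255), min(max(v, 0), 255)

def chroma_chart_histogram(skin_pixels):
    """Build the 256x256 Chroma Chart Histogram (CCH) of a manually
    circled skin region, given as an iterable of (R, G, B) pixels."""
    cch = [[0] * 256 for _ in range(256)]
    for r, g, b in skin_pixels:
        u, v = rgb_to_uv(r, g, b)
        cch[u][v] += 1
    return cch

def same_set(x, y, overlap):
    """Pairwise test of step 4: x, y are CCHs, `overlap` the list of
    (u, v) bins in the overlap region Z of their closed CB-CCs.  Both
    images must have more than 80% of their histogram mass inside Z."""
    total_x = sum(sum(row) for row in x)
    total_y = sum(sum(row) for row in y)
    in_x = sum(x[u][v] for u, v in overlap)
    in_y = sum(y[u][v] for u, v in overlap)
    return in_x / total_x > 0.8 and in_y / total_y > 0.8

def group(n, pairs):
    """Transitive grouping of step 5 via union-find: if A~B and B~C,
    then A, B, C land in one skin-color group."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    for a, b in pairs:
        parent[find(a)] = find(b)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

In the real method the overlap region Z comes from intersecting the closed CB-CC masks; here it is passed in directly to keep the sketch small.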
5. Grouping (Grouping): from the pairwise comparisons of the previous step, all sets whose CCH overlap exceeds 80% are merged transitively. For example, if set 1 means that images A and B cover each other by more than 80%, and set 2 means that images B and C cover each other by more than 80%, then A, B, and C are regarded as the same group, and so on. In the end several skin-color groups are obtained, which constitute the skin-color database. The data of each group are divided into two parts, one used as learning data and one as test data.
6. For each group, a representative color distribution is computed by accumulating the CCHs of all skin-color images of the group into an Accumulated CCH (A-CCH); taking four groups as an example, the A-CCHs are denoted [h]ₓ, x = 1, 2, 3, 4, each a 256x256 matrix. Because only a limited number of samples is available, the raw histogram cannot represent the true probability distribution of each color, which would cause judgment errors.
7. Interpolation and enhancement: to compensate for the sampling error, an interpolation is performed in which the values of the neighboring points are added to the value of the center point; at the same time an enhancement is applied to sharpen the contrast between skin and background entries. The result of the interpolation and enhancement is called the Interpolated and Enhanced A-CCH (IEA-CCH), denoted Hₓ, x = 1, 2, 3, 4. The higher the value of an entry, the more likely the corresponding color is skin; zero entries represent non-skin (background) colors. Because lighting also shifts the background color, the background relationship is encoded as well before the IEA-CCHs are used by the neural network.

The neural network model shown in Fig. 3 is a multi-layer feed-forward neural network (Multi-Layer Feed-Forward Neural Network) used to learn and classify the skin colors of the groups; in practice, other types of neural networks are also applicable. Its input vector is extracted from the original image and combines skin information with background information. Each skin-color module is a three-layer feed-forward network (Three-Layer Feed-Forward Neural Network), so that skin colors are classified automatically; when the number of skin-color groups changes, the model can easily be re-learned and adapted.

The following explains, with reference to Fig. 3, how the input feature vector used to decide which group an original image belongs to is extracted. By observation, the four corners of an image mostly contain no human objects, i.e. they are background, while the subject lies near the center. The input vector is therefore composed of a foreground skin-color feature vector, obtained from blocks located at the center of the image, and a background feature vector, obtained from the four corner regions.
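Steps 6 and 7 above (A-CCH accumulation and interpolation) can be sketched as follows. The source text only says that the values of neighboring points are added to the center point, so the 3x3 window used here is an assumption.

```python
def accumulate(cchs):
    """A-CCH: element-wise sum of every CCH in one skin-color group."""
    n = len(cchs[0])
    return [[sum(c[i][j] for c in cchs) for j in range(n)] for i in range(n)]

def interpolate(acch):
    """Interpolation step of the IEA-CCH: each bin becomes the sum of
    itself and its neighbours, compensating for sparse sampling.
    A 3x3 window is an assumption; the original formula is illegible."""
    n = len(acch)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = sum(acch[a][b]
                            for a in range(max(0, i - 1), min(n, i + 2))
                            for b in range(max(0, j - 1), min(n, j + 2)))
    return out
```

The enhancement step (sharpening the contrast between skin and background entries) is omitted because its exact form is not recoverable from the source.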
Appendix 6 illustrates how the foreground skin-color feature vector and the background feature vector are obtained. Step 1, image normalization: the input image is resized to (16M)x(16N) pixels, where M and N are positive integers, and cut into 16x16 blocks. Step 2: as shown in Appendix 6(A), the possible skin-color blocks Fᵢ, i = 1, 2, 3, 4 (Possible Skin Blocks), are taken from the center of the normalized image, and the possible background blocks Bᵢ, i = 1, 2, 3, 4 (Possible Background Blocks), from its corners. With Fᵢ^CCH and Bᵢ^CCH denoting their respective CCHs, the accumulated color-space statistics are computed as

S^ACCH = Σ_{k=1}^{4} F_k^CCH

B^ACCH = Σ_{k=1}^{4} B_k^CCH

Step 3, similarity measurement: traditional similarity measures are mostly single scalars expressing the similarity of two objects; to preserve the relative spatial information of the color space, the similarity distribution is here represented in a two-dimensional space. Let Mₓ and M̄ₓ denote the correlation feature matrices of the skin and of the background relative to the IEA-CCH of group x:

m_{ij,x} = s_{ij} · h_{ij,x}

m̄_{ij,x} = b_{ij} · h_{ij,x}

where m_{ij,x}, m̄_{ij,x}, s_{ij}, b_{ij} are the matrix elements of Mₓ, M̄ₓ, S^ACCH, B^ACCH respectively, and h_{ij,x} is the matrix element of the IEA-CCH Hₓ of group x, with 0 ≤ i, j ≤ 255 and x = 1, 2, 3, 4.
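The correlation feature matrix of step 3 is an element-wise product of the accumulated histogram with the group's IEA-CCH; a minimal sketch:

```python
def correlation_features(acch, iea_cch):
    """Correlation feature matrix: m_ij = s_ij * h_ij, the element-wise
    product of an accumulated CCH with the group's IEA-CCH."""
    n = len(acch)
    return [[acch[i][j] * iea_cch[i][j] for j in range(n)] for i in range(n)]
```

A bin contributes only when the image's colors and the group's skin-color distribution are both non-zero there, which is what makes the matrix a similarity map rather than a scalar score.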
Step 4, feature reduction (Feature Reduction): the correlation feature matrices Mₓ and M̄ₓ contain 256x256 entries; using this amount directly as network input would impose a severe training burden. To balance the amount of computation against the preservation of relative spatial information, each 256x256 matrix is reduced to a 32x32 array by summing sub-blocks; with f_{kl} denoting an element of the reduced array, the reduction can be written as

f_{kl} = Σ_{i=8k}^{8k+7} Σ_{j=8l}^{8l+7} m_{ij}, 0 ≤ k, l ≤ 31

(the sub-block bounds are partly illegible in the source text and are reconstructed here from the stated 256x256 to 32x32 reduction). Step 5: the input vector is the feature vector composed of the reduced foreground and background matrices, with values f and f̄ respectively, each of size 32x32.

Each skin-color module of the skin-color module layer is a back-propagation neural network (Back-Propagation Neural Network). The number of input-layer nodes is 2 x 32 x 32 = 2048, the hidden layer has a smaller number of nodes, and the output layer has one node. For the module of group x, the training samples contain both positive samples (images of group x) and negative samples (images of other groups), so that the network learns to discriminate the group.

The four most probable skin-color blocks of an image are selected as follows. The center of the image contains 16 blocks, each of 16x16 pixels; for block A, let A(i, j) denote the (R, G, B) vector of the pixel at relative coordinates (i, j), 1 ≤ i, j ≤ 16. According to the RGB-to-YUV conversion formula, each pixel (i, j) is converted to its chrominance pair (u, v), and the value of the IEA-CCH Hₓ at matrix coordinates (u, v) is looked up; this value measures the possibility that the pixel belongs to the skin-color set of group x. For each of the 16 blocks, the 256 looked-up values are accumulated into a total value; the four blocks with the highest totals are taken as the possible skin-color blocks of the image, and the corresponding input vector is formed from them.
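The 256x256 to 32x32 reduction can be sketched as follows; the 8x8 sub-block size is an assumption reconstructed from the stated dimensions, since the exact bounds are garbled in the source text.

```python
def reduce_matrix(m, block=8):
    """Reduce a correlation-feature matrix by summing each block x block
    sub-block; for 256x256 input with block=8 this yields 32x32, so the
    two reduced matrices give 2 * 32 * 32 = 2048 network inputs."""
    n = len(m) // block
    return [[sum(m[i * block + a][j * block + b]
                 for a in range(block) for b in range(block))
             for j in range(n)] for i in range(n)]
```

Summing (rather than sampling) preserves the total histogram mass of each neighborhood, which is the relative spatial information the patent wants to keep.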
The training of each module proceeds as follows:
(1) Take a training sample and present it to the network.
(2) Compute the error terms of the output layer and the hidden layer, and from them the corrections of the weight matrices and of the bias vectors.
(3) Update: adjust the output-layer and hidden-layer weights and biases by the computed corrections.
(4) Repeat the preceding steps until convergence, or until a fixed number of learning iterations has been performed.
(5) Test: as described above, a test image yields the CCHs of its central four blocks and of its corner blocks, and from their accumulated color-space statistics the four input vectors are extracted. These are fed to the four skin-color modules for testing, and each module produces one output value representing the correlation of the image with that group's skin color; the value lies in [0, 1], and the higher it is, the stronger the correlation. The four output values are then fed to the decision layer to decide which skin-color group the image belongs to.
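The winner-take-all rule of the decision layer described above can be sketched as:

```python
def competitive_decision(f):
    """Winner-take-all decision layer: r_k = 1 iff f_k is the unique
    maximum of the module outputs and exceeds 0.5, else r_k = 0."""
    r = []
    for k, fk in enumerate(f):
        wins = all(fk > fj for j, fj in enumerate(f) if j != k)
        r.append(1 if wins and fk > 0.5 else 0)
    return r
```

At most one output fires; if no module output clears 0.5 (or the maximum is tied), every output stays 0 and the image matches no skin-color group.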
The decision layer of this embodiment is a competitive neural network (Competitive Neural Network). For k = 1, 2, 3, 4, the binarized output rₖ is

rₖ = 1, if fₖ > fⱼ for all j ≠ k and fₖ > 0.5;
rₖ = 0, otherwise.

At most one output is 1, namely the group whose skin color is closest to the image and whose similarity exceeds 0.5.

For naked-body images, the human-body region has three characteristics: the skin region is smooth; it is located near the center of the image; and its length-to-width proportion lies within a limited range. These properties are used, after the smoothness measurement, as the basis of the subsequent tests. Experimental results further show that if the ratio of the face area to the skin area exceeds a threshold, the image is a facial photograph rather than a naked body image; close-up portraits, which are otherwise easily misjudged, are handled by this criterion. The post-processing is carried out as shown in the flowchart of Fig. 4 and described below.

The decision layer determines which of the four groups the input image belongs to, and the IEA-CCH Hₓ of that group is used to binarize the image: pixels whose looked-up value is greater than 0 are marked as skin. The result is an image in which the skin parts are white and the background is black (see the appendix); the flow of this processing is shown in Fig. 5.

For the quantification of skin smoothness, the binarized result is recorded in a matrix E, with a 1 or 0 at each coordinate (x, y) indicating skin or non-skin. The matrix is divided into 8x8 blocks, and for each block the recorded values are summed and compared against a threshold (the exact value is partly illegible in the source text); blocks passing the test are regarded as skin-color blocks. In addition, if all eight blocks surrounding a block are skin-color blocks, that block is also regarded as a skin-color block.
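The block-wise smoothness test above can be sketched as follows. The per-block threshold is garbled in the source text, so 48 (75% of the 64 pixels in an 8x8 block) is an assumed placeholder.

```python
def smooth_skin_blocks(skin_mask, block=8, thresh=48):
    """Cut the binary skin mask E into 8x8 blocks and keep blocks with
    enough skin pixels; a block surrounded on all eight sides by skin
    blocks is also kept.  `thresh` is an assumed placeholder value."""
    h, w = len(skin_mask), len(skin_mask[0])
    bh, bw = h // block, w // block
    blocks = [[0] * bw for _ in range(bh)]
    for by in range(bh):
        for bx in range(bw):
            s = sum(skin_mask[by * block + y][bx * block + x]
                    for y in range(block) for x in range(block))
            blocks[by][bx] = 1 if s >= thresh else 0
    # a block completely surrounded by skin blocks is also taken as skin
    filled = [row[:] for row in blocks]
    for by in range(1, bh - 1):
        for bx in range(1, bw - 1):
            if blocks[by][bx] == 0 and all(
                    blocks[by + dy][bx + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)):
                filled[by][bx] = 1
    return filled
```

The fill rule closes small holes (eyes, navel, shadows) inside an otherwise smooth skin region before the largest region is measured.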
After the skin-colour smoothness detection, the resulting skin image may still be too large: background regions of similar colour can remain connected to the human body, and thin spurs can protrude from the skin region, so the measured skin-colour area is larger than the actual one. The post-processing therefore proceeds in two steps. Step 1: reduce the skin-colour area error and remove the thin spurs by applying a morphological opening whose structuring element is a 6×6 two-dimensional matrix. Step 2: obtain the maximum area. After the opening, background regions connected to the body skin are separated from it, and the block with the largest area is then selected. From the known experimental data, if the area of the largest skin-colour block exceeds a threshold, the image may be a naked human body image and the subsequent processing continues; otherwise, the image is reported as a non-naked one.

After the above processing, the largest smooth skin-colour block has the following two characteristics. First, it is mostly located in the central region of the image. Second, by finding the long axis of the block and following that axis, the rectangle covering the skin region can be determined, giving its length L and width W. In the corresponding figure, the upper picture shows the skin-colour block after smoothness detection, and the lower picture shows the rectangle covering the skin-colour block. From these quantities the area of the skin-colour block is obtained, and its average width W_avg and aspect ratio are computed.
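The "keep the largest region" step can be sketched with a plain connected-component search. The 4-connectivity choice and the area-threshold value are illustrative (the text only says the threshold was chosen experimentally), and the morphological opening that precedes this step is omitted here:

```python
from collections import deque

def largest_skin_region(mask, min_area=100):
    """Label 4-connected components of a binary mask (list of rows of
    0/1) and return (area, pixels) of the largest one, or None when even
    the largest component is below the experimental area threshold."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = (0, [])
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                q, pixels = deque([(sy, sx)]), []
                seen[sy][sx] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(pixels) > best[0]:
                    best = (len(pixels), pixels)
    return best if best[0] >= min_area else None
```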
The value of Ratio is used to judge the aspect ratio of the skin-colour block. The equations are:

W_avg = Rgn_max / L
Ratio = L / W_avg

The threshold of Ratio is determined experimentally; a value above it indicates that the skin-colour block is too narrow and elongated, and the image is accordingly classified as a non-naked one.

If the proportion of skin area in the whole image, the position of the skin-colour block, and its aspect ratio were the only criteria, facial portrait photographs would be a problem: a headshot is dominated by the face, and once such an image has been assigned by the neural network to a skin-colour group, it can satisfy the area, position, and aspect-ratio conditions and be misjudged as a naked human body image. Detecting such portrait photographs therefore lowers the probability of misjudgment. In a headshot, the face occupies a large fraction of the image; a face detector is used to circle the face region, and if this region is sufficiently large relative to the skin-colour area, the image is classified as a facial photograph rather than a pornographic one.

Having satisfied the three requirements for patentability, the application is hereby filed in accordance with the law, and favourable consideration is respectfully requested.
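The two formulas above can be checked with a small numeric sketch. The region area Rgn_max, the long-axis length L, and the elongation threshold used here are illustrative values; the text only states that the threshold was found experimentally:

```python
def is_too_elongated(rgn_max_area, length, ratio_threshold=4.0):
    """Apply the two formulas from the text:
        W_avg = Rgn_max / L      (average width of the skin block)
        Ratio = L / W_avg        (aspect ratio of the skin block)
    A Ratio above the threshold means the skin block is too narrow and
    elongated, so the image is classed as non-naked."""
    w_avg = rgn_max_area / length
    ratio = length / w_avg           # algebraically L*L / Rgn_max
    return ratio > ratio_threshold, ratio

# Area 2500 along a long axis of 100: W_avg = 25, Ratio = 4.0 (not over 4.0).
print(is_too_elongated(2500, 100))   # -> (False, 4.0)
# Area 2000 along a long axis of 200: W_avg = 10, Ratio = 20.0 (too elongated).
print(is_too_elongated(2000, 200))   # -> (True, 20.0)
```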
The description of the embodiments above is intended purely to explain the principles of the invention and its best practical mode, so that those skilled in the art may grasp the variations of its different applications for their particular purposes. Those skilled in the art may accordingly make modifications or variations that achieve the same effect; all of these fall within the scope of the invention and are of course covered by the interpretation of the claims of this application.

Brief description of the drawings:
The first figure is a flowchart of the overall architecture, showing how the invention uses a neural network to decide the skin-colour space of an image and then applies post-processing to determine whether the image is a naked human body image.
The second figure is a flowchart of the manual classification process of the invention: how skin-colour images are pre-classified by hand and how the reference data fed to the neural network are produced.
The third figure is the architecture of the neural network model adopted by the invention, which can automatically learn and record the classified skin-colour groups.
The fourth figure is a flowchart of the post-processing of the invention.
The fifth figure is a flowchart of how, in the post-processing, the smoothness characteristic of skin is used to remove the background and obtain the largest smooth skin-colour block.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW93117330A TWI263944B (en) | 2004-06-16 | 2004-06-16 | Naked body image detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
TW200601178A TW200601178A (en) | 2006-01-01 |
TWI263944B true TWI263944B (en) | 2006-10-11 |
Family
ID=37967218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW93117330A TWI263944B (en) | 2004-06-16 | 2004-06-16 | Naked body image detection method |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI263944B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543678B (en) * | 2018-11-14 | 2023-06-23 | 深圳大普微电子科技有限公司 | Sensitive image identification method and device |
CN114550306B (en) * | 2022-04-21 | 2022-08-05 | 杭州雅观科技有限公司 | Deployment method of intelligent classroom |
TWI817702B (en) * | 2022-09-05 | 2023-10-01 | 宏碁股份有限公司 | Picture filtering method and picture filtering apparatus |
- 2004-06-16: TW application 93117330 filed; patent TWI263944B granted (status: active)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI471755B (en) * | 2010-01-13 | 2015-02-01 | Chao Lieh Chen | Device for operation and control of motion modes of electrical equipment |
TWI676136B (en) * | 2018-08-31 | 2019-11-01 | 雲云科技股份有限公司 | Image detection method and image detection device utilizing dual analysis |
US10959646B2 (en) | 2018-08-31 | 2021-03-30 | Yun yun AI Baby camera Co., Ltd. | Image detection method and image detection device for determining position of user |
US11087157B2 (en) | 2018-08-31 | 2021-08-10 | Yun yun AI Baby camera Co., Ltd. | Image detection method and image detection device utilizing dual analysis |
US11257246B2 (en) | 2018-08-31 | 2022-02-22 | Yun yun AI Baby camera Co., Ltd. | Image detection method and image detection device for selecting representative image of user |
Also Published As
Publication number | Publication date |
---|---|
TW200601178A (en) | 2006-01-01 |