CN102799885A - Lip external outline extracting method - Google Patents


Info

Publication number
CN102799885A
Authority
CN
China
Prior art keywords
lip
region
lip region
outline
face
Prior art date
Legal status
Granted
Application number
CN201210243876XA
Other languages
Chinese (zh)
Other versions
CN102799885B (en)
Inventor
管业鹏
潘静
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201210243876.XA
Publication of CN102799885A
Application granted
Publication of CN102799885B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for extracting the outer contour of the lips, which automatically extracts the outer lip contour from the different distribution characteristics of the lip region and the facial skin-color region. The lip image is segmented with an adaptive threshold method; noise is removed by combining connected-region analysis with morphological processing; and the outer lip contour line is determined by curve fitting. The outer lip contour can therefore be extracted quickly and effectively, meeting the feature-extraction requirements of human-computer interaction.

Description

Method for extracting the outer lip contour
Technical field
The present invention relates to a method for extracting the outer contour of the lips, used in the analysis and understanding of digital video images, and belongs to the field of intelligent information processing technology.
Background art
With the rapid development of computer technology, research into novel human-computer interaction techniques that match natural human communication habits has become very active, and human-computer interaction is gradually shifting from being computer-centered to being human-centered. Multimedia user interfaces have greatly enriched the forms in which computer information is presented, allowing the user to employ several sensory channels alternatively or simultaneously. Among these directions, enabling the computer to understand human lip movement automatically has become a focus of research in intelligent human-computer interaction. A complete lip-reading system comprises stages such as lip localization, lip-movement feature extraction and lip-movement recognition, and effective, automatic extraction of the outer lip contour is of great importance.
Existing methods for extracting the outer lip contour fall broadly into two categories. The first is based on skin color; it is strongly affected by illumination and has low robustness. The second is based on lip models; it is sensitive to the mouth shapes of different speakers, and in particular to interference from the tongue when the mouth is wide open, so the resulting lip contour is unsatisfactory, the algorithms are complex, and real-time performance is poor.
Summary of the invention
The object of the present invention is to provide an improved method for extracting the outer lip contour, addressing the problems of existing extraction methods: large variation in mouth shape, interference from the tongue when the mouth is wide open, and unsatisfactory real-time performance. According to the different distribution characteristics of the lip region and the facial skin-color region, the lips are segmented quickly and their outer contour is extracted, so as to improve the flexibility and simplicity of human-computer interaction.
To achieve the above object, the concept of the present invention is as follows: a face detection method is applied and, since the proportions between facial features remain invariant, the lip region is coarsely located; according to the different distribution characteristics of the lip region and the facial skin-color region, the lips are segmented with an adaptive threshold; according to the geometric distribution of the lip region, connected-region labeling is used to remove non-lip regions, and digital morphology is used to remove burrs and fill holes, yielding a more complete lip region; finally, curve fitting is used to determine the outer lip contour quickly and effectively, as outlined in the sketch below.
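The following minimal Python sketch outlines this pipeline under stated assumptions: OpenCV's Haar cascade stands in for whichever face detector is used, the lower half of the detected face box stands in for the coarse lip region, and segment_lip and fit_lip_contour are illustrative placeholder names that are sketched in later sections. It is not the patented implementation itself.

```python
import cv2  # OpenCV, assumed available for face detection

def coarse_lip_region(image_bgr):
    """Step 2: detect a face and take part of the face box as the initial lip region."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # The lower half of the face box is used here as the coarse lip region (an assumption;
    # the patent takes the second vertical region of the detected face).
    return image_bgr[y + h // 2 : y + h, x : x + w]

def extract_lip_contour(image_bgr):
    roi = coarse_lip_region(image_bgr)        # step 2: coarse lip localization
    mask = segment_lip(roi, n_percent=5.0)    # step 3: adaptive-threshold segmentation (sketched below)
    return fit_lip_contour(mask)              # step 4: curve fitting of the outer contour (sketched below)
```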
Based on the above inventive concept, the present invention adopts the following technical solution:
1. A method for extracting the outer lip contour, characterized in that the lips are segmented according to the different distribution characteristics of the lip region and the facial skin-color region and the outer lip contour is extracted automatically, with the following concrete steps:
1) Start the face image acquisition system: capture video images;
2) Coarse lip-region localization: perform face detection to obtain the face region; the second region of the face in the vertical direction is preliminarily determined as the lip region;
3) Lip-region segmentation;
4) Outer lip contour extraction: obtain the lip edge points from the lip-region binary image extracted in step 3), and extract the outer lip contour by curve fitting.
The concrete operation steps of the lip-region segmentation in step 3) above are as follows (a code sketch of these steps is given after this list):
(1) Color space conversion, computing the color value s: the color value s is computed from the red (R) and green (G) components of the RGB color space:
[Formula given as an image in the original: the definition of the color value s in terms of the R and G components, bounded to the range 0 to 1; see the principle section below.]
(2) Histogram generation: compute the histogram of the color values s determined above;
(3) Coarse lip-region segmentation: according to the initial lip region determined in step 2), obtain its pixel count A; at the same time, using the histogram of s obtained in step (2), count the number m of pixels with the larger s values. When m equals N% * A, where N is the percentage of the initially determined lip region occupied by the real lip region, the corresponding s value is the threshold T. The image region satisfying the following condition is determined as the candidate lip region G:
G = { (x, y) : s(x, y) > T }
(4) Lip-region determination: according to geometric features of the lips such as width, height and area, the connected-component labeling method is used to remove non-lip regions, and digital morphology is used to remove burrs and fill holes.
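A minimal sketch of steps (1) to (4) in Python, assuming NumPy and SciPy. Because the exact formula for s appears only as an image in the original, the bounded pseudo-hue R / (R + G) is used here purely as an assumed stand-in, and the geometric filtering of step (4) is reduced to keeping the largest connected component.

```python
import numpy as np
from scipy import ndimage

def segment_lip(roi_bgr, n_percent=5.0):
    """Steps (1)-(4): color conversion, histogram-based thresholding, connected components, morphology."""
    g = roi_bgr[..., 1].astype(float)
    r = roi_bgr[..., 2].astype(float)
    # Step (1): color value s from the R and G components only. The pseudo-hue below is an
    # assumed substitute for the patent's formula; it is likewise bounded to [0, 1].
    s = r / (r + g + 1e-6)

    # Steps (2)-(3): choose the threshold T so that m = N% * A pixels (A = pixel count of the
    # initial lip region) have s values above it.
    A = s.size
    m = max(1, int(round(n_percent / 100.0 * A)))
    T = np.sort(s, axis=None)[-m]      # the m-th largest s value
    mask = s > T                       # candidate lip region G

    # Step (4): remove non-lip regions (keep the largest connected component), then clean up
    # with morphological opening (burrs) and hole filling.
    labels, num = ndimage.label(mask)
    if num > 1:
        sizes = ndimage.sum(mask, labels, range(1, num + 1))
        mask = labels == (np.argmax(sizes) + 1)
    mask = ndimage.binary_opening(mask)
    mask = ndimage.binary_fill_holes(mask)
    return mask
```

Selecting T from the sorted s values is equivalent to reading it off the cumulative histogram, as is done in the embodiment further below.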
The principle of the present invention is as follows:
In the technical solution of the present invention, a new color conversion is derived from the formula for the saturation S in the HSV color space and from the RGB distribution characteristics of the lip region, and the adaptive threshold method is used to segment the lips from the region of interest.
The saturation S is computed by the following formula:
S = (Max(R, G, B) - Min(R, G, B)) / Max(R, G, B)
In the lip region, since Max(R, G, B) = R and Min(R, G, B) = B, the formula above simplifies to:
S = (R - B) / R
According to the R, G, B distribution characteristics of the lip, skin and tongue regions, distribution plots of R - G and R - B were constructed and analyzed, showing that (R - G)/R gives the better segmentation result. However, when make-up is present on the face, R may tend towards 0, making lip-region extraction difficult. To address this problem, the following formula is derived, which limits the value of s to the range 0 to 1:
[Formula given as an image in the original: the derived color value s, computed from the R and G components and bounded to the range 0 to 1.]
Determination of the image segmentation threshold T: the real lip region accounts for N% of the A pixels of the initially determined lip region, i.e. the real lip region contains N% * A pixels, where N is the percentage of the initially determined lip region occupied by the real lip region. Using the histogram, the number of pixels with the larger s values is counted; when it equals N% * A, the corresponding s value is the threshold T. The image is then segmented according to the following rule, with lip-region pixels set to 1 and non-lip-region pixels set to 0:
g(x, y) = 1 if s(x, y) > T, and g(x, y) = 0 otherwise
According to the geometric features of the mouth shape, the connected-component labeling method is used to remove non-target regions, and morphological processing is used to remove burrs and fill holes, yielding a more complete lip region. According to the distribution characteristics of the upper and lower lips, quartic and quadratic curves are fitted to the upper and lower lip respectively, thereby obtaining the complete outer lip contour; a sketch of this fitting step is given below.
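A minimal sketch of the fitting step, assuming NumPy. The edge points are taken column by column from the binary lip mask, which is one simple way to obtain upper and lower contour points, and the degree assignment (quartic for the upper lip, quadratic for the lower lip) follows the order stated above; the point-selection rule itself is an assumption, not the patent's exact procedure.

```python
import numpy as np

def fit_lip_contour(mask):
    """Fit the outer lip contour: a quartic polynomial for the upper edge, a quadratic for the lower edge."""
    cols = np.where(mask.any(axis=0))[0]                                            # columns containing lip pixels
    upper = np.array([mask[:, c].argmax() for c in cols])                           # topmost lip row per column
    lower = np.array([mask.shape[0] - 1 - mask[::-1, c].argmax() for c in cols])    # bottommost lip row per column
    upper_coeffs = np.polyfit(cols, upper, 4)   # quartic fit for the upper lip contour
    lower_coeffs = np.polyfit(cols, lower, 2)   # quadratic fit for the lower lip contour
    return upper_coeffs, lower_coeffs

# Usage: evaluate np.polyval(upper_coeffs, x) and np.polyval(lower_coeffs, x) over the lip
# columns to draw the complete outer lip contour on the image.
```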
Compared with the prior art, the present invention has the following evident substantive features and notable advantages: the lip region is coarsely located on the basis of the invariant proportions between facial features; based on the color differences between the lip region and the rest of the face region, the lips are segmented with an adaptive threshold; connected-component labeling and digital morphology are used to remove non-lip regions; and curve fitting is used to determine the outer lip contour. The computation is simple, flexible and easy to implement. The method overcomes the shortcomings of real-time lip contour extraction, namely the influence of varied and easily changing lip shapes, interference from the tongue region when the mouth is wide open, and complex computation, and it improves the robustness of outer lip contour extraction, adapting to extraction under different scenes. The method of the present invention is fast, effective and easy to implement.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is the original face image of one embodiment of the present invention.
Fig. 3 is the face detection image of one embodiment of the present invention.
Fig. 4 is the coarse lip-localization image of one embodiment of the present invention.
Fig. 5 is the threshold-selection histogram of one embodiment of the present invention.
Fig. 6 is the lip binary image segmented on the basis of the color space.
Fig. 7 is the lip binary image of one embodiment of the present invention after non-lip regions have been removed and holes filled.
Fig. 8 is the image of the extracted upper and lower lip contour edge points of one embodiment of the present invention.
Fig. 9 is the extracted outer lip contour image of one embodiment of the present invention.
Embodiment
A preferred embodiment of the present invention is as follows. The processing flow is shown in Fig. 1. The original face image of this example is shown in Fig. 2 and the face detection image in Fig. 3; according to the color space conversion algorithm, lip segmentation is performed on the color image shown in Fig. 3. The concrete steps are as follows:
1) Start the face image acquisition system: capture video images;
2) Coarse lip-region localization
Perform face detection to obtain the face region (Fig. 3). The second region of the face in the vertical direction is preliminarily determined as the lip region (Fig. 4).
3) Lip-region segmentation
The concrete operation steps are as follows:
(1) Color space conversion, computing the color value s: the color value s is computed from the red (R) and green (G) components of the RGB color space:
[Formula given as an image in the original; see the principle section above.]
(2) Histogram generation: compute the histogram of the determined color values s (Fig. 5);
(3) Coarse lip-region segmentation: according to the initial lip region determined in step 2), its pixel count is found to be 47040; at the same time, using the histogram of s obtained in step (2), the number m of pixels with the larger s values is counted. When m equals 5% * 47040 (taking N as 5), the corresponding s value, 0.26, is the threshold T (Fig. 5). The image region satisfying the following condition is determined as the candidate lip region G; Fig. 6 shows the binary image of the coarse lip-region segmentation.
G = { (x, y) : s(x, y) > T }
(4) Lip-region determination: according to geometric features of the lips such as width, height and area, the connected-component labeling method is used to remove non-lip regions, and digital morphology is used to remove burrs and fill holes; the result is shown in Fig. 7.
4) Outer lip contour extraction
From the lip-region binary image extracted in step 3), the lip edge points are obtained (Fig. 8), and quartic and quadratic curves are fitted to the upper and lower lips respectively to determine the outer lip contour; the result is shown in Fig. 9. The threshold bookkeeping for this embodiment is checked in the short sketch below.
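As a quick check of the numbers in this embodiment (the value T = 0.26 itself is read from the histogram in Fig. 5 and is reproduced here only as a constant):

```python
# Threshold bookkeeping for this embodiment: A = 47040 pixels in the initial lip region, N = 5.
A, N = 47040, 5
m = N * A // 100      # m = 2352 pixels are expected to lie above the threshold
T = 0.26              # the s value at which the accumulated count of larger s values reaches m (Fig. 5)
print(m, T)           # -> 2352 0.26
```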

Claims (1)

1. A method for extracting the outer lip contour, characterized in that the lips are segmented according to the different distribution characteristics of the lip region and the facial skin-color region and the outer lip contour is extracted automatically, with the following concrete steps:
1) Start the face image acquisition system: capture video images;
2) Coarse lip-region localization: perform face detection to obtain the face region; the second region of the face in the vertical direction is preliminarily determined as the lip region;
3) Lip-region segmentation;
4) Outer lip contour extraction: obtain the lip edge points from the lip-region binary image extracted in step 3), and extract the outer lip contour by curve fitting;
The concrete operation steps of the lip segmentation in said step 3) are as follows:
(1) Color space conversion, computing the color value s: the color value s is computed from the red (R) and green (G) components of the RGB color space:
[Formula given as an image in the original; see the description.]
(2) Histogram generation: compute the histogram of the determined color values s;
(3) Coarse lip-region segmentation: according to the initial lip region determined in step 2), obtain its pixel count A; at the same time, using the histogram of s obtained in step (2), count the number m of pixels with the larger s values. When m equals N% * A, where N is the percentage of the initially determined lip region occupied by the real lip region, the corresponding s value is the threshold T. The image region satisfying the following condition is determined as the candidate lip region G:
G = { (x, y) : s(x, y) > T }
(4) Lip-region determination: according to geometric features of the lips such as width, height and area, the connected-component labeling method is used to remove non-lip regions, and digital morphology is used to remove burrs and fill holes.
CN201210243876.XA 2012-07-16 2012-07-16 Lip external outline extracting method Expired - Fee Related CN102799885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210243876.XA CN102799885B (en) 2012-07-16 2012-07-16 Lip external outline extracting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210243876.XA CN102799885B (en) 2012-07-16 2012-07-16 Lip external outline extracting method

Publications (2)

Publication Number Publication Date
CN102799885A true CN102799885A (en) 2012-11-28
CN102799885B CN102799885B (en) 2015-07-01

Family

ID=47198984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210243876.XA Expired - Fee Related CN102799885B (en) 2012-07-16 2012-07-16 Lip external outline extracting method

Country Status (1)

Country Link
CN (1) CN102799885B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484664A (en) * 2014-12-31 2015-04-01 小米科技有限责任公司 Human face image processing method and device
CN106373128A (en) * 2016-09-18 2017-02-01 上海斐讯数据通信技术有限公司 Lip accuracy positioning method and system
CN106997451A (en) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method
CN108831462A (en) * 2018-06-26 2018-11-16 北京奇虎科技有限公司 Vehicle-mounted voice recognition methods and device
CN112651310A (en) * 2020-12-14 2021-04-13 北京影谱科技股份有限公司 Method and device for detecting and generating lip shape of video character

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1710595A (en) * 2005-06-16 2005-12-21 上海交通大学 Mouth-corner positioning method
US20080193020A1 (en) * 2005-02-21 2008-08-14 Mitsubishi Electric Coporation Method for Facial Features Detection
CN102024156A (en) * 2010-11-16 2011-04-20 中国人民解放军国防科学技术大学 Method for positioning lip region in color face image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080193020A1 (en) * 2005-02-21 2008-08-14 Mitsubishi Electric Coporation Method for Facial Features Detection
CN1710595A (en) * 2005-06-16 2005-12-21 上海交通大学 Mouth-corner positioning method
CN102024156A (en) * 2010-11-16 2011-04-20 中国人民解放军国防科学技术大学 Method for positioning lip region in color face image

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484664A (en) * 2014-12-31 2015-04-01 小米科技有限责任公司 Human face image processing method and device
CN104484664B (en) * 2014-12-31 2018-03-20 小米科技有限责任公司 Face picture treating method and apparatus
CN106997451A (en) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method
CN106373128A (en) * 2016-09-18 2017-02-01 上海斐讯数据通信技术有限公司 Lip accuracy positioning method and system
CN106373128B (en) * 2016-09-18 2020-01-14 上海斐讯数据通信技术有限公司 Method and system for accurately positioning lips
CN108831462A (en) * 2018-06-26 2018-11-16 北京奇虎科技有限公司 Vehicle-mounted voice recognition methods and device
CN112651310A (en) * 2020-12-14 2021-04-13 北京影谱科技股份有限公司 Method and device for detecting and generating lip shape of video character

Also Published As

Publication number Publication date
CN102799885B (en) 2015-07-01

Similar Documents

Publication Publication Date Title
CN101719015B (en) Method for positioning finger tips of directed gestures
CN110070033B (en) Method for detecting wearing state of safety helmet in dangerous working area in power field
WO2017084204A1 (en) Method and system for tracking human body skeleton point in two-dimensional video stream
CN102799885A (en) Lip external outline extracting method
CN104050682B (en) Image segmentation method fusing color and depth information
CN102496002A (en) Facial beauty evaluation method based on images
CN101650782A (en) Method for extracting front human face outline based on complexion model and shape constraining
CN106983493A (en) A kind of skin image processing method based on three spectrum
CN104881660A (en) Facial expression recognition and interaction method based on GPU acceleration
CN102592113B (en) Rapid identification method for static gestures based on apparent characteristics
CN105069466A (en) Pedestrian clothing color identification method based on digital image processing
CN103218605A (en) Quick eye locating method based on integral projection and edge detection
TW200719871A (en) A real-time face detection under complex backgrounds
CN103886321A (en) Finger vein feature extraction method
CN102024156A (en) Method for positioning lip region in color face image
CN108846862A (en) A kind of strawberry mechanical hand object localization method of color priori knowledge guiding
CN107527343A (en) A kind of agaricus bisporus stage division based on image procossing
CN104484669A (en) Mobile phone payment method based on three-dimensional human face recognition
CN104537372A (en) Automatic generation method of face image mask with region perception characteristics
CN113592851B (en) Pore detection method based on full-face image
CN110688962A (en) Face image processing method, user equipment, storage medium and device
CN102073878A (en) Non-wearable finger pointing gesture visual identification method
Wang et al. A color-texture segmentation method to extract tree image in complex scene
CN108682021A (en) Rapid hand tracking, device, terminal and storage medium
Shao et al. Research on the tea bud recognition based on improved k-means algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150701

Termination date: 20180716