CN107516067B - Human eye positioning method and system based on skin color detection - Google Patents


Info

Publication number
CN107516067B
CN107516067B (Application CN201710600994.4A)
Authority
CN
China
Prior art keywords
human eye
block
note
region
skin color
Prior art date
Legal status
Active
Application number
CN201710600994.4A
Other languages
Chinese (zh)
Other versions
CN107516067A (en)
Inventor
舒倩 (Shu Qian)
Current Assignee
Shenzhen Mengwang Video Co ltd
Original Assignee
Shenzhen Mengwang Video Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mengwang Video Co ltd filed Critical Shenzhen Mengwang Video Co ltd
Priority to CN201710600994.4A priority Critical patent/CN107516067B/en
Publication of CN107516067A publication Critical patent/CN107516067A/en
Application granted granted Critical
Publication of CN107516067B publication Critical patent/CN107516067B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a human eye positioning method and system based on skin color detection, in the technical field of image processing. Skin color detection is used to narrow the search range, yielding a human eye positioning technique with improved timeliness for video applications.

Description

Human eye positioning method and system based on skin color detection
Technical Field
The invention relates to the technical field of image processing, in particular to a human eye positioning method and system based on skin color detection.
Background
With the rapid development of multimedia and computer network technology, video has become one of the mainstream carriers of information dissemination. An accurate and fast human eye positioning technique pays off doubly, whether for face video retrieval or for online video beautification. At present, mainstream dedicated human eye positioning techniques are computationally expensive, which restricts online use of the algorithms and the efficiency of secondary development.
Disclosure of Invention
The embodiments of the invention aim to provide a human eye positioning method based on skin color detection, so as to solve the problems of the prior art, in which human eye positioning techniques require a large amount of computation and restrict online use and secondary development efficiency of the algorithms.
The embodiment of the invention is realized in such a way that a human eye positioning method based on skin color detection comprises the following steps:
setting a corresponding skin color identifier for each block in the current image;
if the skin color identifiers of all blocks of the current image are 0, human eye positioning is not needed and the process ends directly;
searching a pending area of human eyes in a current image, and setting a corresponding judgment mode;
and carrying out human eye positioning and marking according to the judging mode.
Another object of an embodiment of the present invention is to provide an eye positioning system based on skin color detection, the system including:
a skin color block judgment processing module, for judging whether each block in the current image is a skin color block: if bk_t(i, j) is judged to be a skin color block, the skin color identifier of the block is set to 1, that is, note_t(i, j) = 1; otherwise note_t(i, j) = 0;
where bk_t(i, j) denotes the block in row i, column j of the current image; bkw and bkh denote the number of block columns and block rows after the image is divided into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current image.
a skin color identifier judgment module, for determining that if the skin color identifiers of all blocks of the current image are 0, human eye positioning is not needed and the process ends directly;
a human eye pending-area search device, for searching the current image for pending human eye areas and setting the corresponding judgment mode;
and a human eye positioning and marking device, for positioning and marking human eyes according to the judgment mode.
Advantages of the invention
The invention provides a human eye positioning method and system based on skin color detection. The method uses skin color detection to narrow the search range, yielding a human eye positioning technique with improved timeliness for video applications.
Drawings
FIG. 1 is a flow chart of a method for locating human eyes based on skin color detection according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart of the detailed method of Step 3 in FIG. 1;
FIG. 3 is a flowchart of the detailed method of the side judgment mode in Step 4 in FIG. 1;
FIG. 4 is a flowchart of the detailed method of the front judgment mode in Step 4 in FIG. 1;
FIG. 5 is a block diagram of an eye positioning system based on skin tone detection in accordance with a preferred embodiment of the present invention;
FIG. 6 is a structural diagram of the human eye pending-area search device of FIG. 5;
FIG. 7 is a structural diagram of the front judgment device in the human eye positioning and marking device of FIG. 5;
FIG. 8 is a structural diagram of the side judgment device in the human eye positioning and marking device of FIG. 5.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples, and for convenience of description, only parts related to the examples of the present invention are shown. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiments of the invention provide a human eye positioning method and system based on skin color detection. The method uses skin color detection to narrow the search range, yielding a human eye positioning technique with improved timeliness for video applications.
Example one
FIG. 1 is a flow chart of a method for locating human eyes based on skin color detection according to a preferred embodiment of the present invention; the method comprises the following steps:
Step 1: set a corresponding skin color identifier for each block in the current image;
specifically: judge whether each block in the current image is a skin color block; if bk_t(i, j) is judged to be a skin color block, set the skin color identifier of the block to 1, that is, note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
The skin color block determination may use any published block-based skin color judgment method, which is not described again here.
where bk_t(i, j) denotes the block in row i, column j of the current image (the block size can be 16x16, etc.); bkw and bkh denote the number of block columns and block rows after the image is divided into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current image.
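Step 1's per-block labelling can be sketched in Python as below. The patent defers the actual skin-color test to published block-based methods, so the fixed Cb/Cr window used here (a common heuristic) and the function name `skin_tone_identifiers` are illustrative assumptions rather than the patent's own test.

```python
import numpy as np

def skin_tone_identifiers(ycbcr, block=16,
                          cb_range=(77, 127), cr_range=(133, 173)):
    """Label each block x block tile of a YCbCr image with note_t(i, j).

    Returns a (bkh, bkw) array: 1 = skin-tone block, 0 = non-skin block.
    The Cb/Cr window is a stand-in for the patent's unspecified test.
    """
    h, w, _ = ycbcr.shape
    bkh, bkw = h // block, w // block          # block rows / block columns
    note = np.zeros((bkh, bkw), dtype=np.uint8)
    for i in range(bkh):
        for j in range(bkw):
            tile = ycbcr[i * block:(i + 1) * block, j * block:(j + 1) * block]
            cb = tile[..., 1].mean()            # mean chroma of the tile
            cr = tile[..., 2].mean()
            if cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]:
                note[i, j] = 1                  # bk_t(i, j) judged skin-tone
    return note
```

Averaging the chroma over the tile keeps the test cheap (one comparison per block), which matches the patent's goal of reducing the later eye-search cost.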
Step 2: if the skin color identifiers of all blocks of the current image are 0, human eye positioning is not needed and the process ends directly.
Step 3: searching a pending area of human eyes in a current image, and setting a corresponding judgment mode;
FIG. 2 is a flowchart of the detailed method of Step3 in FIG. 1, which includes the following steps:
Step 31: first search for a block satisfying the condition: note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j-1) = 1 (denoted sbk_t(is, js) and called the eye start decision block, where is and js denote the row and column numbers of the eye start decision block). If such a block exists, go to Step 32; if not, end.
where note_t(i-1, j) denotes the skin color identifier of the block in row i-1, column j of the current image;
note_t(i, j-1) denotes the skin color identifier of the block in row i, column j-1 of the current image;
Step 32: in the row containing the eye start decision block, search for a block satisfying the condition:
note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j+1) = 1 (denoted dbk_t(id, jd) and called the eye stop decision block, where id and jd denote its row and column numbers). If such a block exists, go to Step 33; otherwise go to Step 34.
where note_t(i, j+1) denotes the skin color identifier of the block in row i, column j+1 of the current image.
Step 33: first perform fusion of the pending areas: merge the adjacent non-skin-color blocks of the eye start decision block into the first pending eye region, then merge the adjacent non-skin-color blocks of the eye stop decision block into the second pending eye region; then set the judgment mode to the front judgment mode and go to Step 4.
Step 34: first perform fusion of the pending areas: merge the adjacent non-skin-color blocks of the eye start decision block into the first pending eye region; then set the judgment mode to the side judgment mode and go to Step 4.
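Steps 31-34 can be sketched as a scan over the note_t grid. This is a hedged reading of the patent's text: the fusion of "adjacent non-skin-color blocks" is implemented here as a 4-connected flood fill, and the function name `find_pending_regions` is invented for illustration.

```python
import numpy as np

def find_pending_regions(note):
    """Locate the eye start/stop decision blocks on the note_t grid and
    pick the judgment mode, following Steps 31-34 of the method.

    Returns (mode, regions): mode is 'front', 'side' or None, and each
    region is a set of (row, col) block coordinates.
    """
    bkh, bkw = note.shape
    start = None
    for i in range(1, bkh):                    # need row i-1 above
        for j in range(1, bkw):                # need column j-1 to the left
            if note[i, j] == 0 and note[i - 1, j] == 1 and note[i, j - 1] == 1:
                start = (i, j)                 # sbk_t(is, js)
                break
        if start:
            break
    if start is None:
        return None, []

    is_, js = start
    stop = None
    for j in range(js + 1, bkw - 1):           # same row as the start block
        if note[is_, j] == 0 and note[is_ - 1, j] == 1 and note[is_, j + 1] == 1:
            stop = (is_, j)                    # dbk_t(id, jd)
            break

    def fuse(seed):
        """Merge adjacent non-skin blocks into one pending region."""
        region, stack = set(), [seed]
        while stack:
            i, j = stack.pop()
            if (i, j) in region or not (0 <= i < bkh and 0 <= j < bkw):
                continue
            if note[i, j] == 0:
                region.add((i, j))
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        return region

    if stop is not None:                       # Step 33: front judgment mode
        return 'front', [fuse(start), fuse(stop)]
    return 'side', [fuse(start)]               # Step 34: side judgment mode
```

On a front-facing face the two eye holes are separated by skin (the nose bridge), so the two flood fills stay disjoint, which is what the two-region front mode relies on.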
Step 4: perform human eye positioning and marking according to the judgment mode.
Side judgment mode:
FIG. 3 is a flowchart of the detailed method of the side judgment mode in Step 4 in FIG. 1; it comprises the following steps:
Step C1: compute the luminance value distribution of the first pending eye region:
p(k) = sum( sign( y(m, n) = k ) | y(m, n) ∈ first pending eye region ).
where p(k) denotes the number of occurrences of luminance value k; sum(variable) denotes summation over the variable; y(m, n) denotes the luminance value at row m, column n; and sign(condition) = 1 if the condition holds, 0 otherwise.
Step C2: find the maximum and second maximum of the luminance value distribution of the first pending eye region, and the corresponding luminance values:
perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)),
perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k)).
where perk1(k) and k_max1 denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 denote the second maximum of the distribution and its luminance value; k_max1 = arg(k | perk1(k)) means perk1(k) is found first and the corresponding k is assigned to k_max1, and likewise k_max2 = arg(k | perk2(k)) assigns to k_max2 the k corresponding to perk2(k); max(variable | condition) denotes the maximum of the variable over values satisfying the condition, and max(variable) the maximum of the variable.
Step C3: if abs(k_max1 − k_max2) > Thres, the first pending eye region is judged to be a human eye and all blocks in the region are marked as human eye; otherwise the first region is marked as non-eye.
That is, sbk_t(i, j) = sign(bk_t(i, j) | eye marking condition), where the eye marking condition is:
abs(k_max1 − k_max2) > Thres and bk_t(i, j) ∈ first pending eye region.
where abs(variable) denotes the absolute value of the variable; sbk_t(i, j) denotes the eye marking parameter of block bk_t(i, j); Thres denotes the threshold, and typically Thres > 50 can be used.
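Steps C1-C3 amount to checking whether the region's luminance histogram has two well-separated peaks (a dark pupil against a bright sclera). A minimal sketch, assuming the region's Y values have already been collected into a flat array; the function name `side_mode_is_eye` is an illustrative assumption.

```python
import numpy as np

def side_mode_is_eye(luma, thres=50):
    """Side judgment for one pending region (Steps C1-C3).

    Builds the histogram p(k) of luminance values, takes the values
    k_max1 and k_max2 at the two largest counts, and declares an eye
    when abs(k_max1 - k_max2) > thres (the patent suggests Thres > 50).
    """
    values = np.asarray(luma).ravel()
    k, counts = np.unique(values, return_counts=True)   # p(k) over its support
    if len(k) < 2:                                      # flat region: no 2nd peak
        return False
    order = np.argsort(counts)[::-1]                    # counts, descending
    k_max1 = int(k[order[0]])                           # value at the max of p(k)
    k_max2 = int(k[order[1]])                           # value at the 2nd max
    return abs(k_max1 - k_max2) > thres
```

The test is deliberately crude: it only asks whether the two most common luminance values are far apart, which is cheap enough to run on every candidate region.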
Front judgment mode:
FIG. 4 is a flowchart of the detailed method of the front judgment mode in Step 4 in FIG. 1; it comprises the following steps:
Step Z1: perform the side judgment once on each of the first and second pending eye regions, and mark the corresponding results.
Step Z2: if both the first and second pending eye regions contain blocks marked as human eye, perform further confirmation: if lbk_1 − lbk_2 = 0 and L_2 − R_1 ≥ max(1, lbk_1/2), the eye positioning is complete; otherwise, mark the image as containing no human eyes.
where lbk_1 and lbk_2 denote the column widths, in blocks, of the first and second pending eye regions; L_2 and R_1 denote, in blocks, the left-edge column number of the second pending eye region and the right-edge column number of the first pending eye region.
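Step Z2's geometric check follows directly from the definitions of lbk_1, lbk_2, L_2 and R_1: the two candidate eyes must have equal column widths and be separated by at least max(1, lbk_1/2) blocks. The function name and the set-of-blocks region representation are assumptions for illustration.

```python
def front_mode_confirm(region1, region2):
    """Front confirmation (Step Z2) on two block-unit candidate regions.

    Each region is a set of (row, col) blocks already marked as eye
    candidates by the side judgment. Accept the pair when the widths
    match (lbk_1 - lbk_2 = 0) and L_2 - R_1 >= max(1, lbk_1 / 2).
    """
    cols1 = [j for _, j in region1]
    cols2 = [j for _, j in region2]
    lbk1 = max(cols1) - min(cols1) + 1   # column width of the first region
    lbk2 = max(cols2) - min(cols2) + 1   # column width of the second region
    r1 = max(cols1)                      # right-edge column of first region
    l2 = min(cols2)                      # left-edge column of second region
    return lbk1 == lbk2 and (l2 - r1) >= max(1, 0.5 * lbk1)
```

Requiring a minimum gap between the regions rules out a single dark area split in two, since real eyes sit on opposite sides of the nose bridge.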
Example two
FIG. 5 is a block diagram of a preferred embodiment of the present invention of a human eye location system based on skin tone detection, the system comprising:
the skin color block judgment processing module is used for setting a corresponding skin color identifier for each block in the current image;
the method specifically comprises the following steps: judging whether each block in the current image is a skin color block, if bkt(i, j) if the skin color block is determined, setting the skin color identifier of the block to be 1, namely notet(i, j) ═ 1; otherwise, note is sett(i,j)=0。
The determination method of the skin color block is a skin color determination method using a block as a unit, which is disclosed in the industry and is not described herein again.
Wherein, bkt(i, j) represents the ith row and jth block (the block size can be 16x16, etc.) of the current image, bkw and bkh represent the column number and row number of the image in units of blocks after the image is divided into blocks respectively; note (r) notet(i, j) represents the skin tone identifier of the ith row and jth block of the current image.
The skin color identifier judging module is used for judging that if the skin color identifiers of all the blocks of the current image are 0, the positioning by human eyes is not needed, and the process is finished directly;
the human eye pending area searching device is used for searching a human eye pending area in the current image and setting a corresponding judgment mode;
and the human eye positioning and marking device is used for positioning and marking human eyes according to the judging mode.
Further, FIG. 6 is a structural diagram of the human eye pending-area search device of FIG. 5; the device comprises:
an eye start decision block search module, for searching for a block satisfying the condition: note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j-1) = 1 (denoted sbk_t(is, js) and called the eye start decision block, where is and js denote its row and column numbers); if found, the eye stop decision block search module is entered; if not, the process ends.
where note_t(i-1, j) denotes the skin color identifier of the block in row i-1, column j of the current image; note_t(i, j-1) denotes the skin color identifier of the block in row i, column j-1 of the current image;
an eye stop decision block search module, for searching, in the row containing the eye start decision block, for a block satisfying the condition: note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j+1) = 1 (denoted dbk_t(id, jd) and called the eye stop decision block, where id and jd denote its row and column numbers); if found, the front judgment mode setting module is entered, otherwise the side judgment mode setting module is entered;
where note_t(i, j+1) denotes the skin color identifier of the block in row i, column j+1 of the current image;
a front judgment mode setting module, for first performing fusion of the pending areas, i.e., merging the adjacent non-skin-color blocks of the eye start decision block into the first pending eye region and merging the adjacent non-skin-color blocks of the eye stop decision block into the second pending eye region, and then setting the judgment mode to the front judgment mode.
a side judgment mode setting module, for first performing fusion of the pending areas, i.e., merging the adjacent non-skin-color blocks of the eye start decision block into the first pending eye region, and then setting the judgment mode to the side judgment mode;
furthermore, the human eye positioning and identifying device comprises a front judging device and a side judging device; further, FIG. 7 is a schematic diagram of a front face determination device of the eye positioning and identification device of FIG. 5; the front determination device includes:
a module for calculating the brightness value distribution of the first region to be determined for human eyes
Luminance value distribution p (k) sum (sign (y (m, n) ═ k | y (m, n) ∈ human eye first region to be determined)).
Wherein p (k) identifies the distribution of luminance values k; sum (variable) denotes summing the variables; y (m, n) represents the luminance value of the mth row and nth column;
Figure BDA0001357135460000051
and the brightness value acquisition module is used for solving the maximum value and the secondary maximum value of the brightness value distribution of the first area to be determined by the human eyes and finding out the corresponding brightness value.
perk1(k)=max(p(k))、kmax1=arg(k|perk1(k))、
perk2(k)=max(p(k)|p(k)≠perk1(k))、kmax2=arg(k|perk2(k))。
Wherein perk1(k), kmax1Brightness values respectively representing the maximum value of the brightness value distribution and the corresponding maximum value of the brightness value distribution; perk2(k), kmax2Luminance values respectively representing a sub-maximum value of the luminance value distribution and corresponding to the sub-maximum value of the luminance value distribution; k is a radical ofmax1Arg (k | perk1(k)) means that perk1(k) is first obtained, and then the value of k corresponding to perk1(k) is assigned to kmax1,kmax2Arg (k | perk2(k)) means that perk2(k) is first obtained, and then the value of k corresponding to perk2(k) is assigned to kmax2;max(Variables of|Condition) Denotes the maximum value of variables satisfying the conditions, max: (Variables of) Representing the maximum value of the variable.
A first eye identification module for determining if abs (k)max1-kmax2)>Thres, judging that the first region to be determined by the human eye is the human eye, and identifying all blocks in the region as the human eye, otherwise, identifying the first region as the non-human eye. Namely sbkt(i,j)=sign(bkt(i, j) | human eye identification condition), wherein the human eye identification condition: abs (k)max1-kmax2)>Thres and bkt(i, j) ∈ the human eye defines a first region.
Wherein abs (variable) means taking the absolute value of the variable; sbkt(i, j) denotes a block bkt(i, j) eye identification parameters; thres represents the threshold, and typically Thres can be taken>50。
Further, FIG. 8 is a structural diagram of the side judgment device in the human eye positioning and marking device of FIG. 5; the side judgment device comprises:
a marking module for side judgment of the first and second pending eye regions, for performing the side judgment once on each of the first and second pending eye regions and marking the corresponding results.
a second eye marking module, for performing further confirmation when both the first and second pending eye regions contain blocks marked as human eye: if lbk_1 − lbk_2 = 0 and L_2 − R_1 ≥ max(1, lbk_1/2), the eye positioning is complete; otherwise, the image is marked as containing no human eyes.
where lbk_1 and lbk_2 denote the column widths, in blocks, of the first and second pending eye regions; L_2 and R_1 denote, in blocks, the left-edge column number of the second pending eye region and the right-edge column number of the first pending eye region.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (5)

1. A human eye positioning method based on skin color detection is characterized by comprising the following steps:
setting a corresponding skin color identifier for each block in the current image, specifically: judging whether each block in the current image is a skin color block; if bk_t(i, j) is judged to be a skin color block, setting the skin color identifier of the block to 1, that is, note_t(i, j) = 1; otherwise setting note_t(i, j) = 0; where bk_t(i, j) denotes the block in row i, column j of the current image; bkw and bkh denote the number of block columns and block rows after the image is divided into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current image;
if the skin color identifiers of all blocks of the current image are 0, human eye positioning is not needed and the process ends directly;
according to the skin color identifier of each block in the current image, searching a pending area of human eyes in the current image, and setting a corresponding judgment mode, wherein the specific steps are as follows: step 31: firstly, whether a condition is met is searched for: note (r) notet(i, j) ═ 0 and notet(i-1, j) ═ 1 and notetIf a block is 1 (i, j-1), the block is first designated as sbkt(is, js), referred to as the human eye initiation decision block, and then proceeds to Step 32; if not, ending; wherein is and js respectively represent the row and column numbers of the human eye initial decision block and notet(i-1, j) a skin tone identifier representing the jth block of line i-1 of the current image; note (r) notet(i, j-1) a skin tone identifier representing the jth block of the ith row of the current image; step 32: at the row where the human eye starts the decision block, it is looked for whether there is a line satisfying the condition: note (r) notet(i, j) ═ 0 and notet(i-1, j) ═ 1 and notet(i, j +1) ═ 1 block, if yes, thenFirst, the block is recorded as dbkt(id, jd), referred to as the human eye suspension decision block, then proceeds to Step33, if not to Step 34; wherein id and jd respectively represent the row and column number of the human eye-suspended decision block, notetStep33, firstly, fusing regions to be judged, namely merging adjacent non-skin color blocks of a human eye starting decision block into a first region to be judged of the human eye, then merging adjacent non-skin color blocks of the human eye stopping decision block into a second region to be judged of the human eye, then setting a judging mode as a front judging mode, and then entering 'positioning the human eye according to the judging mode and identifying';
Step 34: first performing fusion of the pending areas, i.e., merging the adjacent non-skin-color blocks of the eye start decision block into the first pending eye region, then setting the judgment mode to the side judgment mode, and then entering "positioning and marking human eyes according to the judgment mode"; wherein the side judgment mode comprises: Step C1: computing the luminance value distribution of the first pending eye region: p(k) = sum( sign( y(m, n) = k ) | y(m, n) ∈ first pending eye region ), where p(k) denotes the number of occurrences of luminance value k, sum(variable) denotes summation over the variable, y(m, n) denotes the luminance value at row m, column n, and sign(condition) = 1 if the condition holds, 0 otherwise;
Step C2: finding the maximum and second maximum of the luminance value distribution of the first pending eye region, and the corresponding luminance values: perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)), perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k)); where perk1(k) and k_max1 denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 denote the second maximum of the distribution and its luminance value; k_max1 = arg(k | perk1(k)) means perk1(k) is found first and the corresponding k is assigned to k_max1, and likewise for k_max2; max(variable | condition) denotes the maximum of the variable over values satisfying the condition, and max(variable) the maximum of the variable; Step C3: if abs(k_max1 − k_max2) > Thres, judging the first pending eye region to be a human eye and marking all blocks in the region as human eye, otherwise marking the first region as non-eye; specifically:
sbk_t(i, j) = sign(bk_t(i, j) | eye marking condition), where the eye marking condition is:
abs(k_max1 − k_max2) > Thres and bk_t(i, j) ∈ first pending eye region; where abs(variable) denotes the absolute value of the variable, sbk_t(i, j) denotes the eye marking parameter of block bk_t(i, j), and Thres denotes the threshold, Thres > 50;
And carrying out human eye positioning and marking according to the judging mode.
2. The human eye positioning method based on skin color detection according to claim 1,
wherein the judgment modes comprise a side judgment mode and a front judgment mode.
3. The method for skin color detection based eye location according to claim 2,
wherein the front judgment mode comprises the steps of:
Step Z1: performing the side judgment once on each of the first and second pending eye regions, and marking the corresponding results;
Step Z2: if both the first and second pending eye regions contain blocks marked as human eye, performing further confirmation, specifically: if lbk_1 − lbk_2 = 0 and L_2 − R_1 ≥ max(1, lbk_1/2), the eye positioning is complete; otherwise, marking the image as containing no human eyes;
where lbk_1 and lbk_2 denote the column widths, in blocks, of the first and second pending eye regions; L_2 and R_1 denote, in blocks, the left-edge column number of the second pending eye region and the right-edge column number of the first pending eye region.
4. A human eye positioning system based on skin color detection, characterized in that the system comprises:
a skin color block judgment processing module, for judging whether each block in the current image is a skin color block; if bk_t(i, j) is judged to be a skin color block, setting the skin color identifier of the block to 1, that is, note_t(i, j) = 1; otherwise setting note_t(i, j) = 0;
where bk_t(i, j) denotes the block in row i, column j of the current image; bkw and bkh denote the number of block columns and block rows after the image is divided into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current image;
a skin color identifier judgment module, for determining that if the skin color identifiers of all blocks of the current image are 0, human eye positioning is not needed and the process ends directly;
the human eye pending area searching device searches for human eye pending areas in the current image according to the skin color identifier of each block in the current image and sets a corresponding judgment mode; the human eye pending area searching device comprises: a human eye starting decision block search judging module, used for searching for a block satisfying the conditions note_t(i, j) = 0, note_t(i-1, j) = 1 and note_t(i, j-1) = 1; such a block, denoted sbk_t(is, js), is called a human eye starting decision block; if one is found, entering the human eye termination decision block search judging module; if not, ending; wherein is and js respectively represent the row and column numbers of the human eye starting decision block, note_t(i-1, j) represents the skin color identifier of the jth block of row i-1 of the current image, and note_t(i, j-1) represents the skin color identifier of the (j-1)th block of row i of the current image; a human eye termination decision block search judging module, used for searching the row containing the human eye starting decision block for a block satisfying the conditions note_t(i, j) = 0, note_t(i-1, j) = 1 and note_t(i, j+1) = 1; such a block, denoted dbk_t(id, jd), is called a human eye termination decision block; if one is found, entering the front judgment mode setting module; otherwise, entering the side judgment mode setting module; wherein id and jd respectively represent the row and column numbers of the human eye termination decision block, and note_t(i, j+1) represents the skin color identifier of the (j+1)th block of row i of the current image; a front judgment mode setting module, used for first fusing the pending areas, namely merging the adjacent non-skin-color blocks of the human eye starting decision block into the first human eye pending area and merging the adjacent non-skin-color blocks of the human eye termination decision block into the second human eye pending area, and then setting the judgment mode to the front judgment mode; a side judgment mode setting module, used for first fusing the pending area, namely merging the adjacent non-skin-color blocks of the human eye starting decision block into the first human eye pending area, and then setting the judgment mode to the side judgment mode;
the human eye positioning and marking device is used for positioning and marking human eyes according to the judgment mode;
the human eye positioning and marking device comprises a front judging device and a side judging device; the side judging device comprises a first human eye pending area brightness value distribution calculating module, a brightness value acquisition module and a first human eye identification module; the first human eye pending area brightness value distribution calculating module is used for calculating the brightness value distribution of the first human eye pending area: p(k) = sum(y(m, n) = k | y(m, n) ∈ first human eye pending area), wherein p(k) denotes the count of the brightness value k, sum(variable) represents the sum over the variable, and y(m, n) represents the brightness value of the pixel in row m, column n;
the brightness value acquisition module is used for finding the maximum and the second maximum of the brightness value distribution of the first human eye pending area and the corresponding brightness values: perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)), perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k)); wherein perk1(k) and k_max1 respectively represent the maximum of the brightness value distribution and the brightness value corresponding to that maximum; perk2(k) and k_max2 respectively represent the second maximum of the brightness value distribution and the brightness value corresponding to that second maximum; k_max1 = arg(k | perk1(k)) means that perk1(k) is obtained first and the value of k corresponding to perk1(k) is then assigned to k_max1; k_max2 = arg(k | perk2(k)) means that perk2(k) is obtained first and the value of k corresponding to perk2(k) is then assigned to k_max2; max(variable | condition) represents the maximum of the variable satisfying the condition, and max(variable) represents the maximum of the variable; the first human eye identification module is used for judging: if abs(k_max1 - k_max2) > Thres, the first human eye pending area is judged to be a human eye and all blocks in the area are marked as human eye; otherwise, the area is marked as non-human-eye; specifically:
sbk_t(i, j) = sign(bk_t(i, j) | human eye identification condition), wherein the human eye identification condition is: abs(k_max1 - k_max2) > Thres and bk_t(i, j) ∈ first human eye pending area; wherein abs(variable) represents the absolute value of the variable, sbk_t(i, j) denotes the human eye identification parameter of block bk_t(i, j), and Thres denotes a threshold, Thres > 50.
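The side-view brightness test above — build the brightness distribution p(k) of the first pending area, take the brightness values of its two largest peaks, and declare a human eye when they differ by more than Thres — might be sketched as follows. This is hypothetical code, not the patented implementation; the flat list of per-pixel brightness values and the default Thres = 50 are assumptions.

```python
from collections import Counter

def is_eye_side_view(luma_values, thres=50):
    """Decide whether a pending area is a human eye from its brightness histogram.

    luma_values: iterable of per-pixel brightness values y(m, n) of the area.
    """
    p = Counter(luma_values)  # p(k): count of pixels with brightness value k
    # the two most frequent brightness values: k_max1 (peak) and k_max2 (second peak)
    (k_max1, _), (k_max2, _) = p.most_common(2)
    # an eye region mixes dark pupil/iris pixels with bright sclera pixels,
    # so the two dominant brightness values should be far apart
    return abs(k_max1 - k_max2) > thres
```

The rationale is that skin or shadow areas are tonally uniform (two close peaks), while an eye is strongly bimodal in brightness.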
5. The human eye positioning system based on skin color detection of claim 4, wherein
the front determination device includes:
the first and second human eye pending area side face identification module, used for performing a preliminary side face judgment on the first human eye pending area and the second human eye pending area respectively and marking the corresponding results;
the second human eye identification module, used, when the blocks of both the first human eye pending area and the second human eye pending area are identified as human eye, for further confirmation, specifically: if lbk_1 - lbk_2 = 0 and L_2 - R_1 ≥ max(1, 1/2 * lbk_1), the human eye positioning is completed; otherwise, the image is marked as containing no human eye;
wherein lbk_1 and lbk_2 respectively represent the column widths, in units of blocks, of the first human eye pending area and the second human eye pending area; L_2 and R_1 respectively represent, in units of blocks, the left column number of the second human eye pending area and the right column number of the first human eye pending area.
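The geometric consistency test of claim 5 — equal column widths and a horizontal gap between the two pending areas of at least max(1, lbk_1/2) blocks — could be checked as in this hypothetical sketch; the (left_col, right_col) block-column bounds of each pending area are assumed inputs.

```python
def frontal_eye_check(first_region, second_region):
    """first_region / second_region: (left_col, right_col) block-column bounds of
    the first and second human eye pending areas. Returns True when the pair is
    accepted as a left/right eye pair."""
    l1, r1 = first_region    # area containing the starting decision block
    l2, r2 = second_region   # area containing the termination decision block
    lbk1 = r1 - l1 + 1       # column width of the first area, in blocks
    lbk2 = r2 - l2 + 1       # column width of the second area, in blocks
    # the two eyes should be equally wide and separated horizontally by at
    # least max(1, lbk1 / 2) block columns
    return lbk1 == lbk2 and (l2 - r1) >= max(1, 0.5 * lbk1)
```

Intuitively, two equally sized dark regions with a sufficiently wide strip of skin between them are consistent with a frontal pair of eyes.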
CN201710600994.4A 2017-07-21 2017-07-21 Human eye positioning method and system based on skin color detection Active CN107516067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710600994.4A CN107516067B (en) 2017-07-21 2017-07-21 Human eye positioning method and system based on skin color detection


Publications (2)

Publication Number Publication Date
CN107516067A CN107516067A (en) 2017-12-26
CN107516067B true CN107516067B (en) 2020-08-04

Family

ID=60722650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710600994.4A Active CN107516067B (en) 2017-07-21 2017-07-21 Human eye positioning method and system based on skin color detection

Country Status (1)

Country Link
CN (1) CN107516067B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781840B (en) * 2019-10-29 2022-08-26 深圳市梦网视讯有限公司 Nose positioning method and system based on skin color detection
CN111461073B (en) * 2020-05-06 2023-12-08 深圳市梦网视讯有限公司 Reverse face detection method, system and equipment based on nose positioning
CN111626143B (en) * 2020-05-06 2023-12-08 深圳市梦网视讯有限公司 Reverse face detection method, system and equipment based on eye positioning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
CN101059836A (en) * 2007-06-01 2007-10-24 华南理工大学 Human eye positioning and human eye state recognition method
CN103218615A (en) * 2013-04-17 2013-07-24 哈尔滨工业大学深圳研究生院 Face judgment method
CN104463102A (en) * 2014-11-07 2015-03-25 中国石油大学(华东) Human eye positioning method based on point-by-point scanning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682094B (en) * 2016-12-01 2020-05-22 深圳市梦网视讯有限公司 Face video retrieval method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An Effective Lip Feature Localization Algorithm; Wang Gang; Science & Technology Information; 2015-12-31; Section 2 *
A Face Detection Method Based on Skin Color Model and Eye Localization; Quan Xinghui; Science Technology and Engineering; 2010-11-30; Vol. 10, No. 31; full text *

Also Published As

Publication number Publication date
CN107516067A (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN107506691B (en) Lip positioning method and system based on skin color detection
CN107516067B (en) Human eye positioning method and system based on skin color detection
CN110012349B (en) An end-to-end news program structuring method
CN107563278B (en) Rapid eye and lip positioning method and system based on skin color detection
CN107657625A (en) Merge the unsupervised methods of video segmentation that space-time multiple features represent
CN110807402B (en) Facial feature positioning method, system and terminal equipment based on skin color detection
CN110852269B (en) Cross-lens portrait correlation analysis method and device based on feature clustering
CN103034841B (en) A kind of face tracking methods and system
CN103927763A (en) Identification processing method for multi-target tracking tracks of image sequences
CN107481222B (en) Rapid eye and lip video positioning method and system based on skin color detection
CN108765264A (en) Image U.S. face method, apparatus, equipment and storage medium
KR101089847B1 (en) Keypoint matching system and method using SIFT algorithm for the face recognition
Gao et al. Multi-object tracking with Siamese-RPN and adaptive matching strategy
CN112926557B (en) Method for training multi-mode face recognition model and multi-mode face recognition method
WO2017214872A1 (en) Methods, systems and apparatuses of feature extraction and object detection
CN110781840B (en) Nose positioning method and system based on skin color detection
CN107527015B (en) Human eye video positioning method and system based on skin color detection
CN110245267B (en) Multi-user video stream deep learning sharing calculation multiplexing method
Wang et al. Object-based spatial similarity for semi-supervised video object segmentation
CN109271922B (en) Nasal part positioning method and system based on contrast
CN116958740A (en) Zero sample target detection method based on semantic perception and self-adaptive contrast learning
KR20130057585A (en) Apparatus and method for detecting scene change of stereo-scopic image
US11935300B2 (en) Techniques for generating candidate match cuts
CN107423704B (en) Lip video positioning method and system based on skin color detection
CN109242819A (en) An image processing-based algorithm for connecting galled spot defects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518057 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30

Applicant after: Shenzhen mengwang video Co., Ltd

Address before: 518057 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30

Applicant before: SHENZHEN MONTNETS ENCYCLOPEDIA INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant