CN103632126A - Human face tracking method and device - Google Patents


Info

Publication number
CN103632126A
Authority
CN
China
Legal status
Granted
Application number
CN201210296738.8A
Other languages
Chinese (zh)
Other versions
CN103632126B (en)
Inventor
张�杰
熊剑平
黄一宁
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201210296738.8A priority Critical patent/CN103632126B/en
Publication of CN103632126A publication Critical patent/CN103632126A/en
Application granted granted Critical
Publication of CN103632126B publication Critical patent/CN103632126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose a face tracking method and device, and relate to the technical field of face tracking. The method comprises: determining a face region and a face-exterior region; obtaining a face feature model according to the face region and the face-exterior region; in the next frame image, determining the position of the face region and the position of the face-exterior region according to the face region, the face-exterior region and the face feature model; and determining the face position according to the position of the face region and the position of the face-exterior region. The face tracking method and device are suitable for processing dynamic face information.

Description

Face tracking method and device
Technical field
The present invention relates to the technical field of face tracking, and in particular to a face tracking method and device.
Background art
Face tracking is the process of determining the motion trajectory and size change of a person's face in a video or image sequence. Face tracking has long been significant in fields such as image analysis, image recognition, surveillance and retrieval; it has attracted the attention of many researchers, and many effective algorithms have appeared. Mean shift is a non-parametric estimation method based on density gradients, mainly used for moving-target tracking; it is fast and effective. CamShift (Continuously Adaptive Mean Shift) is a moving-target tracking algorithm based on the mean shift method. Building on the fast convergence of the mean shift algorithm, CamShift uses the histogram features of the tracked object, so it has low computational cost and adapts to target deformation, rotation and occlusion; it is therefore widely used for tracking faces, hands and other objects, and in robot vision. In the CamShift algorithm, the face color histogram (mainly skin-color features) is the sole basis of face tracking; when the tracked face meets a large skin-colored area, or the background is close to skin color, the tracking result is severely disturbed and tracking fails.
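For context, the conventional skin-color-histogram CamShift tracker described above can be sketched with OpenCV as follows. This is a minimal illustration of the prior-art baseline, not the patented method; the video path, the initial face window and the hue-only histogram are illustrative assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")        # placeholder video path
ok, frame = cap.read()

# Placeholder initial face window (x, y, w, h); in practice it comes from
# a face detector or manual initialization of the tracking region.
track_window = (200, 100, 80, 100)
x, y, w, h = track_window
roi = frame[y:y + h, x:x + w]

# Hue histogram of the face region: the skin-color model that plain
# CamShift relies on, as described above.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the search window size and orientation every frame.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
```

A near-skin-color background raises the back-projection response outside the face, which is exactly the failure mode this section points out.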
For the above interference problem, two optimization approaches have been proposed in the prior art. One is an improved algorithm based on CamShift, which builds the object model in a multi-dimensional color space and uses two kernel functions to further suppress the interference of the background on target tracking. The other is a mean-shift face tracking algorithm based on edge histograms, which uses edge information and texture information as tracking features; when the background color is similar to skin color, its tracking performance is clearly better than that of the traditional mean-shift face tracker.
In the course of implementing the present invention, the inventors found that the prior art has at least the following problems: although the improved CamShift algorithm can suppress interference, it still cannot completely eliminate its influence when the background color is very close to the color of the tracked target; the edge-histogram-based mean-shift face tracker handles the case where the background color is close to the target color well, but it is not robust when the face pose changes, the expression changes or the face is partially occluded, and the tracking result is affected.
Summary of the invention
Embodiments of the invention provide a face tracking method and device, which can overcome the interference of background colors close to skin color during face tracking, and can maintain tracking of the face when the face undergoes expression change, pose change or occlusion, thereby improving face tracking quality.
The technical solution adopted by the embodiments of the present invention is as follows:
A face tracking method, comprising:
Determining a face region and a face-exterior region;
Obtaining a face feature model according to the face region and the face-exterior region;
In a next frame image, determining a position of the face region and a position of the face-exterior region according to the face region, the face-exterior region and the face feature model;
Determining a face position according to the position of the face region and the position of the face-exterior region.
Wherein, determining the face-exterior region comprises: collecting statistics on prior knowledge of the hair region to obtain a prior probability of the hair region, and selecting the region with the higher probability as the hair region; collecting statistics on prior knowledge of the shoulder region to obtain a prior probability of the shoulder region, and selecting the region with the higher probability as the shoulder region; and taking the hair region together with the shoulder region as the face-exterior region.
Wherein, obtaining the face feature model according to the face region and the face-exterior region comprises: obtaining a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and taking a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
Further, obtaining the skin-color model comprises: computing a color histogram of the hue, saturation and brightness components of the face region, and taking the color histogram as the skin-color model.
Further, obtaining the non-skin-color model within the face comprises: removing the skin-color region from the skin-color model, and adding the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
Wherein, determining the position of the face region and the position of the face-exterior region according to the face region, the face-exterior region and the face feature model comprises: tracking the face region with a Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, and determining the position of the face region; and tracking the hair region and the shoulder region separately with the CamShift algorithm, and determining the position of the face-exterior region.
A face tracking device, comprising:
A first determination module, configured to initialize a face tracking region and determine a face region and a face-exterior region;
A model acquisition module, configured to obtain a face feature model according to the face region and the face-exterior region;
A second determination module, configured to, in a next frame image, determine a position of the face region and a position of the face-exterior region according to the face region, the face-exterior region and the face feature model;
A third determination module, configured to determine a face position according to the position of the face region and the position of the face-exterior region.
Wherein, the first determination module further comprises: a first statistics unit, configured to collect statistics on prior knowledge of the hair region, obtain a prior probability of the hair region, and select the region with the higher probability as the hair region; a second statistics unit, configured to collect statistics on prior knowledge of the shoulder region, obtain a prior probability of the shoulder region, and select the region with the higher probability as the shoulder region; and a second determining unit, configured to take the hair region together with the shoulder region as the face-exterior region.
Wherein, the model acquisition module is specifically configured to: obtain a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and take a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
Further, the model acquisition module comprises: a first model acquisition unit, configured to compute a color histogram of the hue, saturation and brightness components of the face region, and take the color histogram as the skin-color model.
Further, the model acquisition module further comprises: a second model acquisition unit, configured to remove the skin-color region from the skin-color model, and add the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
Wherein, the second determination module comprises: a third determining unit, configured to track the face region with the Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, and determine the position of the face region; and a fourth determining unit, configured to track the hair region and the shoulder region separately with the CamShift algorithm, and determine the position of the face-exterior region.
Compared with the prior art, the embodiments of the present invention obtain the color histograms of the eye region and the lip region on the basis of the skin-color model, obtain the hair region and the shoulder region by off-line training, and combine the color histogram of the hair region, the color histograms of the eye region and the lip region, and the original skin-color histogram by weighting, so as to obtain a comprehensive face feature model, i.e. a comprehensive color histogram. By adding this comprehensive face feature model to the CamShift algorithm, the face is tracked more accurately, the interference of background colors close to skin color on face tracking is greatly reduced, and tracking of the face is not interrupted when the face undergoes expression change, pose change or occlusion, which enhances the robustness of face tracking. In addition, the hair region and the shoulder region impose further constraints on the face position, which improves the accuracy of face tracking.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the method provided by Embodiment One of the present invention;
Fig. 2 is a flowchart of the method provided by Embodiment Two of the present invention;
Fig. 3 and Fig. 4 are schematic structural diagrams of the device provided by Embodiment Three of the present invention;
Fig. 5 is a schematic structural diagram of the device provided by Embodiment Four of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To make the advantages of the technical solutions of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and embodiments.
Embodiment One
This embodiment provides a face tracking method. As shown in Fig. 1, the method comprises:
101. Determine a face region and a face-exterior region.
Wherein, the face region comprises: a skin-color region, an eye region and a lip region; the face-exterior region comprises: a hair region and a shoulder region.
Wherein, the face region is determined by directly initializing the face tracking region, which yields the face region. The face-exterior region is determined by off-line training, comprising: collecting statistics on prior knowledge of the hair region to obtain a prior probability of the hair region, and selecting the region with the higher probability as the hair region; collecting statistics on prior knowledge of the shoulder region to obtain a prior probability of the shoulder region, and selecting the region with the higher probability as the shoulder region; and taking the hair region together with the shoulder region as the face-exterior region.
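The patent does not spell out the off-line training procedure, so the following is only a minimal sketch under the assumption that training samples provide binary hair (or shoulder) masks aligned to a normalized face box; the function names and the 0.5 threshold are illustrative.

```python
import numpy as np

def estimate_region_prior(masks):
    """Average a stack of aligned binary masks (H x W) into a per-pixel
    prior probability map for the hair or shoulder region."""
    return np.mean(np.stack(masks, axis=0).astype(np.float32), axis=0)

def pick_region(prior, threshold=0.5):
    """Keep the pixels whose prior probability is comparatively high;
    together they form the assumed hair (or shoulder) region."""
    return prior >= threshold

# Usage sketch with random stand-in masks; real masks would come from
# labeled training frames registered to the face box.
hair_masks = [np.random.rand(64, 64) > 0.6 for _ in range(100)]
hair_prior = estimate_region_prior(hair_masks)
hair_region = pick_region(hair_prior)
```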
102. Obtain a face feature model according to the face region and the face-exterior region.
Wherein, after the face region and the face-exterior region are obtained, a face feature model needs to be established. This comprises: obtaining a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and taking a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
Wherein, obtaining the skin-color model comprises: computing a color histogram of the hue, saturation and brightness components of the face region, and taking the color histogram as the skin-color model. Obtaining the non-skin-color model within the face comprises: removing the skin-color region from the skin-color model, and adding the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
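These two histograms can be sketched with OpenCV as follows. This is only an illustration under stated assumptions: the 16-bin H-S-V quantization, the HSV skin threshold used for masking, and the synthetic stand-in patches are not taken from the patent.

```python
import cv2
import numpy as np

def hsv_hist(bgr_patch, mask=None, bins=(16, 16, 16)):
    """Normalized joint histogram over the hue, saturation and brightness
    (value) channels of a patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], mask, list(bins),
                        [0, 180, 0, 256, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

# Stand-in crops; in practice they come from the initialized regions.
face_patch = np.random.randint(0, 256, (120, 90, 3), dtype=np.uint8)
hair_patch = np.random.randint(0, 256, (40, 90, 3), dtype=np.uint8)

# Skin-color model: histogram of the whole face region.
skin_model = hsv_hist(face_patch)

# Non-skin-color model: mask out skin-colored pixels (illustrative HSV
# range), histogram the remainder (eyes, lips), then add the hair histogram.
hsv_face = cv2.cvtColor(face_patch, cv2.COLOR_BGR2HSV)
skin_mask = cv2.inRange(hsv_face, np.array((0, 48, 80)), np.array((20, 255, 255)))
non_skin_model = hsv_hist(face_patch, mask=cv2.bitwise_not(skin_mask))
non_skin_model += hsv_hist(hair_patch)
```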
103. In the next frame image, determine the position of the face region and the position of the face-exterior region according to the face region, the face-exterior region and the face feature model.
For example, the face region is tracked with the Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, to determine the position of the face region; the hair region and the shoulder region are tracked separately with the CamShift algorithm, to determine the position of the face-exterior region.
104. Determine the face position according to the position of the face region and the position of the face-exterior region.
Further, after the face position is determined according to the position of the face region and the position of the face-exterior region, the method further comprises: smoothing the tracking result of the final face position, i.e. weighting the face positions of multiple frames to obtain an optimal face position.
Compared with the prior art, the embodiments of the present invention obtain the color histograms of the eye region and the lip region on the basis of the skin-color model, obtain the hair region and the shoulder region by off-line training, and combine the color histogram of the hair region, the color histograms of the eye region and the lip region, and the original skin-color histogram by weighting, so as to obtain a comprehensive face feature model, i.e. a comprehensive color histogram. By adding this comprehensive face feature model to the CamShift algorithm, the face is tracked more accurately, the interference of background colors close to skin color on face tracking is greatly reduced, and tracking of the face is not interrupted when the face undergoes expression change, pose change or occlusion, which enhances the robustness of face tracking. In addition, the hair region and the shoulder region impose further constraints on the face position, which improves the accuracy of face tracking.
Embodiment Two
This embodiment provides a face tracking method. As shown in Fig. 2, the method comprises:
201. Initialize the face tracking region, and determine a face region and a face-exterior region.
Wherein, the face region comprises: a skin-color region, an eye region and a lip region; the face-exterior region comprises: a hair region and a shoulder region. After the face tracking region is initialized, the region of the face to be tracked is obtained; this region mainly contains the skin-color region, the eye region and the lip region of the face.
Further, when determining the face-exterior region, off-line training is used so that the system can estimate from a plurality of samples and finally determine the hair region and the shoulder region. For example, statistics are collected on prior knowledge of the hair region to obtain a prior probability of the hair region, and the region with the higher probability is selected as the hair region; statistics are collected on prior knowledge of the shoulder region to obtain a prior probability of the shoulder region, and the region with the higher probability is selected as the shoulder region.
202. Calculate the skin-color model of the face region.
For example, a color histogram of the hue, saturation and brightness components of the face region is computed and taken as the skin-color model. It should be noted that, because the face region is mainly a skin-color region, this color histogram can be regarded as the skin-color model.
203. Calculate the non-skin-color model of the face region, and the non-skin-color model of the face-exterior region.
For example, calculating the non-skin-color model of the face region comprises: removing the skin-color region from the skin-color model obtained in step 202; the remaining model is the non-skin-color model of the face region. The face-exterior region comprises the hair region and the shoulder region; when calculating the model of the face-exterior region, only the hair region is considered and the shoulder region is not included in the calculation, because in this method the shoulder region is mainly used to constrain the face position, and including it would make the calculation too complicated. Calculating the non-skin-color model of the face-exterior region comprises: for the hair region determined in step 201, obtaining the color histogram of the hair region by the method of step 202 and taking it as the non-skin-color model of the face-exterior region.
204. Obtain the face feature model.
For example, the two color histograms corresponding to the skin-color model and the non-skin-color model obtained in steps 202 and 203 are averaged with weights to obtain a new color histogram, i.e. a new model, and this new model is taken as the face feature model. The face feature model is mainly added to the model used by the CamShift algorithm to strengthen face tracking and to eliminate the interference of near-skin-color backgrounds or objects on face tracking.
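A minimal sketch of this weighted combination is given below. The equal 0.5/0.5 weights are an assumption; the patent only states that a weighted average of the two histograms is taken, and the commented back-projection call shows how the combined histogram would feed the CamShift step.

```python
import cv2
import numpy as np

def face_feature_model(skin_hist, non_skin_hist, w_skin=0.5, w_non_skin=0.5):
    """Weighted average of the skin-color and non-skin-color histograms
    (step 204); the weights here are illustrative assumptions."""
    combined = w_skin * skin_hist + w_non_skin * non_skin_hist
    cv2.normalize(combined, combined, 0, 255, cv2.NORM_MINMAX)
    return combined

# Stand-ins for the histograms of steps 202 and 203 (16-bin H-S-V form).
skin_hist = np.random.rand(16, 16, 16).astype(np.float32)
non_skin_hist = np.random.rand(16, 16, 16).astype(np.float32)
feature_model = face_feature_model(skin_hist, non_skin_hist)

# The combined histogram is then back-projected for CamShift, e.g.:
# back_proj = cv2.calcBackProject([hsv_frame], [0, 1, 2], feature_model,
#                                 [0, 180, 0, 256, 0, 256], 1)
```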
Comparing the CamShift algorithm that uses only the skin-color model with the CamShift algorithm that uses the face feature model, and taking one frame of a face tracking sequence as an example, it can be seen that, after back projection of the two images, the face region in the image obtained with the face feature model is noticeably more distinct than that obtained with the skin-color model alone, which greatly improves the tracking accuracy.
205. Continue tracking with the next frame image.
206. Judge whether this frame is the last frame; if it is, end the procedure; if it is not, perform step 207.
Optionally, the number of frames to be tracked is checked here: if the current image is the last frame, face tracking stops; if the current image is not the last frame, tracking continues. The main purpose of this check is to stop the procedure in time when the current image is the last frame and tracking is no longer needed, so that resources are not wasted.
207. Track the face region, the shoulder region and the hair region separately with the CamShift algorithm.
For the face region, the CamShift algorithm to which the face feature model has been added is used; for the shoulder region and the hair region, the ordinary CamShift algorithm is used. For example, the face region is tracked with the CamShift algorithm carrying the face feature model, to determine the position of the face region; the shoulder region and the hair region are then tracked with the CamShift algorithm to obtain the positions of these two regions. According to the positions of the hair region and the shoulder region, the final position of the face can be better determined, which improves the accuracy of face tracking.
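One per-frame tracking step of this kind might be sketched as follows; the dictionary keys, the 16-bin H-S-V histogram form and the termination criteria are assumptions, and the fusion of the three window positions into the final face position (step 208) is left out because the patent does not specify the exact rule.

```python
import cv2

TERM_CRIT = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
RANGES = [0, 180, 0, 256, 0, 256]   # H-S-V ranges matching a 16-bin model

def track_regions(hsv_frame, windows, models):
    """One step of 207: the 'face' window is tracked with the combined
    face feature model, while 'hair' and 'shoulder' use their own plain
    color histograms. `windows` and `models` are dicts keyed by region."""
    new_windows = {}
    for name in ('face', 'hair', 'shoulder'):
        back_proj = cv2.calcBackProject([hsv_frame], [0, 1, 2],
                                        models[name], RANGES, 1)
        _, new_windows[name] = cv2.CamShift(back_proj, windows[name], TERM_CRIT)
    return new_windows
```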
208. Determine the face position.
209. Judge whether face tracking has failed; if it has failed, perform step 201; if it has not failed, perform step 210.
210. Smooth the face tracking result.
For example, a weighted average is taken over the face positions of the most recent several frames to obtain a relatively accurate face position.
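A minimal sketch of this smoothing step is given below; the five-frame window and the linearly increasing weights are assumptions, since the patent only states that the recent face positions are averaged with weights.

```python
from collections import deque

import numpy as np

def smooth_position(recent_positions, weights=None):
    """Weighted average of the face windows (x, y, w, h) of the most
    recent frames (step 210)."""
    boxes = np.asarray(recent_positions, dtype=np.float32)
    if weights is None:
        weights = np.ones(len(boxes), dtype=np.float32)
    return tuple(np.average(boxes, axis=0,
                            weights=np.asarray(weights, dtype=np.float32)))

# Usage: keep a short history of tracked face windows and favor recent ones.
history = deque(maxlen=5)
history.append((200, 100, 80, 100))
history.append((204, 102, 82, 101))
smoothed = smooth_position(history, weights=range(1, len(history) + 1))
```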
211. Output the smoothed face tracking result.
Wherein, the face tracking result is displayed on the screen as a tracking frame that encloses the face.
Further, after the tracking result is output, the procedure can return to step 205 and continue to be executed, so as to keep tracking the target.
Compared with the prior art, the embodiments of the present invention obtain the color histograms of the eye region and the lip region on the basis of the skin-color model, obtain the hair region and the shoulder region by off-line training, and combine the color histogram of the hair region, the color histograms of the eye region and the lip region, and the original skin-color histogram by weighting, so as to obtain a comprehensive face feature model, i.e. a comprehensive color histogram. By adding this comprehensive face feature model to the CamShift algorithm, the face is tracked more accurately, the interference of background colors close to skin color on face tracking is greatly reduced, and tracking of the face is not interrupted when the face undergoes expression change, pose change or occlusion, which enhances the robustness of face tracking. In addition, the hair region and the shoulder region impose further constraints on the face position, which improves the accuracy of face tracking.
Embodiment Three
This embodiment provides a face tracking device. As shown in Fig. 3, the device comprises:
A first determination module 31, configured to initialize a face tracking region and determine a face region and a face-exterior region;
A model acquisition module 32, configured to obtain a face feature model according to the face region and the face-exterior region;
A second determination module 33, configured to, in a next frame image, determine a position of the face region and a position of the face-exterior region according to the face region, the face-exterior region and the face feature model;
A third determination module 34, configured to determine a face position according to the position of the face region and the position of the face-exterior region.
Wherein, the face region comprises: a skin-color region, an eye region and a lip region; the face-exterior region comprises: a hair region and a shoulder region.
Further, as shown in Fig. 4, the first determination module 31 may also comprise:
A first determining unit 311, configured to obtain the face region according to the initialization result of the face tracking region.
Further, as shown in Fig. 4, the first determination module 31 may also comprise:
A first statistics unit 312, configured to collect statistics on prior knowledge of the hair region, obtain a prior probability of the hair region, and select the region with the higher probability as the hair region;
A second statistics unit 313, configured to collect statistics on prior knowledge of the shoulder region, obtain a prior probability of the shoulder region, and select the region with the higher probability as the shoulder region;
A second determining unit 314, configured to take the hair region together with the shoulder region as the face-exterior region.
Wherein, the model acquisition module 32 is specifically configured to obtain a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and to take a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
Further, as shown in Fig. 4, the model acquisition module 32 may also comprise:
A first model acquisition unit 321, configured to compute a color histogram of the hue, saturation and brightness components of the face region, and take the color histogram as the skin-color model.
Further, as shown in Fig. 4, the model acquisition module 32 may also comprise:
A second model acquisition unit 322, configured to remove the skin-color region from the skin-color model, and add the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
Further, as shown in Fig. 4, the second determination module 33 may also comprise:
A third determining unit 331, configured to track the face region with the Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, and determine the position of the face region;
A fourth determining unit 332, configured to track the hair region and the shoulder region separately with the CamShift algorithm, and determine the position of the face-exterior region.
Further, as shown in Fig. 4, the device may also comprise:
A result processing module 35, configured to smooth the tracking result of the final face position, and weight the face positions of multiple frames to obtain an optimal face position.
Compared with the prior art, the embodiments of the present invention obtain the color histograms of the eye region and the lip region on the basis of the skin-color model, obtain the hair region and the shoulder region by off-line training, and combine the color histogram of the hair region, the color histograms of the eye region and the lip region, and the original skin-color histogram by weighting, so as to obtain a comprehensive face feature model, i.e. a comprehensive color histogram. By adding this comprehensive face feature model to the CamShift algorithm, the face is tracked more accurately, the interference of background colors close to skin color on face tracking is greatly reduced, and tracking of the face is not interrupted when the face undergoes expression change, pose change or occlusion, which enhances the robustness of face tracking. In addition, the hair region and the shoulder region impose further constraints on the face position, which improves the accuracy of face tracking.
Embodiment Four
This embodiment provides a face tracking device. As shown in Fig. 5, the device comprises:
A receiver 41, configured to receive a face image signal;
A first processor 42, configured to initialize a face tracking region according to the face image signal received by the receiver 41, and determine a face region and a face-exterior region;
A second processor 43, configured to obtain a face feature model according to the face region and the face-exterior region determined by the first processor 42;
A third processor 44, configured to perform face tracking on the next frame image received by the receiver 41, using the face feature model obtained by the second processor 43 and the face region and face-exterior region determined by the first processor 42, and determine the position of the face region and the position of the face-exterior region;
A fourth processor 45, configured to determine the face position according to the position of the face region and the position of the face-exterior region determined by the third processor 44.
Further, the device also comprises: a fifth processor 46, configured to smooth the face positions of the most recent several frames obtained by the fourth processor 45, and obtain an optimal face position.
Compared with the prior art, the embodiments of the present invention obtain the color histograms of the eye region and the lip region on the basis of the skin-color model, obtain the hair region and the shoulder region by off-line training, and combine the color histogram of the hair region, the color histograms of the eye region and the lip region, and the original skin-color histogram by weighting, so as to obtain a comprehensive face feature model, i.e. a comprehensive color histogram. By adding this comprehensive face feature model to the CamShift algorithm, the face is tracked more accurately, the interference of background colors close to skin color on face tracking is greatly reduced, and tracking of the face is not interrupted when the face undergoes expression change, pose change or occlusion, which enhances the robustness of face tracking. In addition, the hair region and the shoulder region impose further constraints on the face position, which improves the accuracy of face tracking.
The face tracking device provided by the embodiments of the present invention can implement the method embodiments described above; for the specific function implementation, refer to the description in the method embodiments, which is not repeated here. The face tracking method and device provided by the embodiments of the present invention are suitable for processing dynamic face information, but are not limited thereto.
Those of ordinary skill in the art will understand that all or part of the procedures of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and when executed it can include the procedures of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily be conceived by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

1. A face tracking method, characterized by comprising:
Determining a face region and a face-exterior region;
Obtaining a face feature model according to the face region and the face-exterior region;
In a next frame image, determining a position of the face region and a position of the face-exterior region according to the face region, the face-exterior region and the face feature model;
Determining a face position according to the position of the face region and the position of the face-exterior region.
2. The method according to claim 1, characterized in that the face region comprises: a skin-color region, an eye region and a lip region; and the face-exterior region comprises: a hair region and a shoulder region.
3. The method according to claim 1, characterized in that determining the face region comprises:
Obtaining the face region according to an initialization result of initializing a face tracking region.
4. The method according to claim 1, characterized in that determining the face-exterior region comprises:
Collecting statistics on prior knowledge of the hair region, obtaining a prior probability of the hair region, and selecting the region with the higher probability as the hair region;
Collecting statistics on prior knowledge of the shoulder region, obtaining a prior probability of the shoulder region, and selecting the region with the higher probability as the shoulder region;
Taking the hair region together with the shoulder region as the face-exterior region.
5. The method according to claim 1, characterized in that obtaining the face feature model according to the face region and the face-exterior region comprises:
Obtaining a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and taking a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
6. The method according to claim 5, characterized in that obtaining the skin-color model comprises:
Computing a color histogram of the hue, saturation and brightness components of the face region, and taking the color histogram as the skin-color model.
7. The method according to claim 5 or 6, characterized in that obtaining the non-skin-color model within the face comprises:
Removing the skin-color region from the skin-color model, and adding the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
8. The method according to claim 1, characterized in that determining the position of the face region and the position of the face-exterior region according to the face region, the face-exterior region and the face feature model comprises:
Tracking the face region with a Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, and determining the position of the face region;
Tracking the hair region and the shoulder region separately with the CamShift algorithm, and determining the position of the face-exterior region.
9. The method according to claim 1, characterized in that, after the face position is determined according to the position of the face region and the position of the face-exterior region, the method further comprises:
Smoothing the tracking result of the final face position, and weighting the face positions of multiple frames to obtain an optimal face position.
10. A face tracking device, characterized by comprising:
A first determination module, configured to initialize a face tracking region and determine a face region and a face-exterior region;
A model acquisition module, configured to obtain a face feature model according to the face region and the face-exterior region;
A second determination module, configured to, in a next frame image, determine a position of the face region and a position of the face-exterior region according to the face region, the face-exterior region and the face feature model;
A third determination module, configured to determine a face position according to the position of the face region and the position of the face-exterior region.
11. The device according to claim 10, characterized in that the face region comprises: a skin-color region, an eye region and a lip region; and the face-exterior region comprises: a hair region and a shoulder region.
12. The device according to claim 10, characterized in that the first determination module comprises:
A first determining unit, configured to obtain the face region according to an initialization result of initializing the face tracking region.
13. The device according to claim 10, characterized in that the first determination module further comprises:
A first statistics unit, configured to collect statistics on prior knowledge of the hair region, obtain a prior probability of the hair region, and select the region with the higher probability as the hair region;
A second statistics unit, configured to collect statistics on prior knowledge of the shoulder region, obtain a prior probability of the shoulder region, and select the region with the higher probability as the shoulder region;
A second determining unit, configured to take the hair region together with the shoulder region as the face-exterior region.
14. The device according to claim 10, characterized in that the model acquisition module is specifically configured to obtain a skin-color model and a non-skin-color model according to the face region and the face-exterior region, and to take a weighted average of the skin-color model and the non-skin-color model to obtain the face feature model, wherein the skin-color model is obtained from the skin-color region, and the non-skin-color model is obtained from the eye region, the lip region and the hair region.
15. The device according to claim 14, characterized in that the model acquisition module comprises:
A first model acquisition unit, configured to compute a color histogram of the hue, saturation and brightness components of the face region, and take the color histogram as the skin-color model.
16. The device according to claim 14 or 15, characterized in that the model acquisition module further comprises:
A second model acquisition unit, configured to remove the skin-color region from the skin-color model, and add the color histogram of the hair region to the color histogram of the remaining part to obtain the non-skin-color model within the face.
17. The device according to claim 10, characterized in that the second determination module comprises:
A third determining unit, configured to track the face region with the Continuously Adaptive Mean Shift (CamShift) algorithm to which the face feature model has been added, and determine the position of the face region;
A fourth determining unit, configured to track the hair region and the shoulder region separately with the CamShift algorithm, and determine the position of the face-exterior region.
18. The device according to claim 10, characterized by further comprising:
A result processing module, configured to smooth the tracking result of the final face position, and weight the face positions of multiple frames to obtain an optimal face position.
CN201210296738.8A 2012-08-20 2012-08-20 Face tracking method and device Active CN103632126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210296738.8A CN103632126B (en) 2012-08-20 2012-08-20 Face tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210296738.8A CN103632126B (en) 2012-08-20 2012-08-20 Face tracking method and device

Publications (2)

Publication Number Publication Date
CN103632126A true CN103632126A (en) 2014-03-12
CN103632126B CN103632126B (en) 2018-03-13

Family

ID=50213158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210296738.8A Active CN103632126B (en) 2012-08-20 2012-08-20 Face tracking method and device

Country Status (1)

Country Link
CN (1) CN103632126B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050074148A1 (en) * 2003-07-18 2005-04-07 Rodyushkin Konstantin V. Face tracking
CN101794385A (en) * 2010-03-23 2010-08-04 上海交通大学 Multi-angle multi-target fast human face tracking method used in video sequence
CN102436637A (en) * 2010-09-29 2012-05-02 中国科学院计算技术研究所 Method and system for automatically segmenting hairs in head images
CN102324025A (en) * 2011-09-06 2012-01-18 北京航空航天大学 Human face detection and tracking method based on Gaussian skin color model and feature analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
叶楠: "Face detection and tracking combined with hair color", China Master's Theses Full-text Database, Information Science and Technology Series *
左军毅 et al.: "Camshift tracking algorithm based on multiple color distribution models", Acta Automatica Sinica *
沈晔湖 et al.: "Automatic hair extraction method for personalized face animation generation", Journal of Computer-Aided Design & Computer Graphics *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971092B (en) * 2014-04-09 2018-06-05 中国船舶重工集团公司第七二六研究所 The method of face track following
CN103971092A (en) * 2014-04-09 2014-08-06 中国船舶重工集团公司第七二六研究所 Facial trajectory tracking method
CN105701472A (en) * 2016-01-15 2016-06-22 杭州鸿雁电器有限公司 Method and device for identifying face of dynamic target
CN105701472B (en) * 2016-01-15 2019-07-09 杭州鸿雁电器有限公司 A kind of face recognition method and device of dynamic object
CN106210855A (en) * 2016-07-11 2016-12-07 网易(杭州)网络有限公司 Object displaying method and device
CN106446781A (en) * 2016-08-29 2017-02-22 厦门美图之家科技有限公司 Face image processing method and face image processing device
CN109146913B (en) * 2018-08-02 2021-05-18 浪潮金融信息技术有限公司 Face tracking method and device
CN109146913A (en) * 2018-08-02 2019-01-04 苏州浪潮智能软件有限公司 A kind of face tracking method and device
CN109272259A (en) * 2018-11-08 2019-01-25 梁月竹 A kind of autism-spectrum disorder with children mood ability interfering system and method
CN110070487A (en) * 2019-04-02 2019-07-30 清华大学 Semantics Reconstruction face oversubscription method and device based on deeply study
CN110009662A (en) * 2019-04-02 2019-07-12 北京迈格威科技有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of face tracking
CN110009662B (en) * 2019-04-02 2021-09-17 北京迈格威科技有限公司 Face tracking method and device, electronic equipment and computer readable storage medium
CN112766038A (en) * 2020-12-22 2021-05-07 深圳金证引擎科技有限公司 Vehicle tracking method based on image recognition
CN112766038B (en) * 2020-12-22 2021-12-17 深圳金证引擎科技有限公司 Vehicle tracking method based on image recognition

Also Published As

Publication number Publication date
CN103632126B (en) 2018-03-13


Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant