CN110633648A - Face recognition method and system in natural walking state - Google Patents
Face recognition method and system in natural walking state
- Publication number
- CN110633648A (application CN201910775183.7A)
- Authority
- CN
- China
- Prior art keywords
- personnel target
- target
- specific
- specific personnel
- personnel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 21
- 238000000605 extraction Methods 0.000 claims abstract description 21
- 238000001514 detection method Methods 0.000 claims abstract description 11
- 238000012986 modification Methods 0.000 claims description 13
- 230000004048 modification Effects 0.000 claims description 13
- 238000006243 chemical reaction Methods 0.000 claims description 12
- 230000001186 cumulative effect Effects 0.000 claims description 9
- 230000009286 beneficial effect Effects 0.000 abstract description 6
- 230000000007 visual effect Effects 0.000 description 5
- 230000000739 chaotic effect Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000001815 facial effect Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000010354 integration Effects 0.000 description 2
- 210000000056 organ Anatomy 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Vascular Medicine (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a face recognition method and system in a natural walking state. The method comprises the following steps: S1, capturing video frames of person targets in a natural walking state; S2, detecting each specific personnel target, matching the specific personnel targets between consecutive frames, and updating the ID mark of each specific personnel target in the personnel target record table; S3, calculating a cumulative score for each specific personnel target according to its ID mark, determining the specific personnel target to be tracked, and assigning an adjustable close-range camera; S4, predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera, and acquiring a front face image of the specific personnel target; and S5, performing face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera. The method helps extend face recognition to additional scenes and removes constraints on where the technology can be applied.
Description
Technical Field
The invention relates to the technical field of video surveillance and face recognition, and in particular to a face recognition method and system in a natural walking state.
Background
Face recognition technology is based mainly on human facial features. It first determines whether a face is present in an input image or video stream; if so, it obtains the position and size of the face and the locations of the main facial organs. From this information it extracts the identity features contained in the face and compares them with known faces to identify the person. Face recognition therefore requires a high-definition front face image to ensure reliable recognition.
At present, face recognition technology is widely applied in scenes such as access-control and attendance systems, security doors, and law-enforcement searches for fugitives and missing persons. However, not all scenes can capture a front face image of sufficient definition to meet the requirements of face recognition.
Therefore, how to capture a high-definition front face image of a naturally walking person and combine it with image recognition technology, so that face recognition can be applied without scene limitations, is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a face recognition method and system in a natural walking state. By extracting features of each person in a walking crowd and measuring motion parameters, position parameters, detection counts, and the like, a close-range camera is reasonably dispatched to shoot each personnel target and obtain a high-definition front face image, which is then recognized using face recognition technology, ensuring that face recognition can be applied in each scene.
To achieve the above object, the invention adopts the following technical solution:
A face recognition method in a natural walking state comprises the following steps:
S1, capturing video frames of person targets in a natural walking state;
S2, detecting each specific personnel target in the video frame obtained in S1, matching each specific personnel target in the current video frame against those in the previous video frame, and updating the ID mark of each specific personnel target in the personnel target record table;
S3, according to the ID mark of each specific personnel target, calculating the cumulative score of each specific personnel target, determining the specific personnel target to be tracked, and assigning an adjustable close-range camera;
S4, predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera, and acquiring a front face image of the specific personnel target;
S5, performing face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera.
Preferably, the video frames in S1 are captured by a fixed wide-angle camera at a fixed rate of 10 frames per second. Its fixed shooting direction and wide field of view make it well suited to capturing walking people and facilitate the comparison of consecutive frames.
Preferably, S2 comprises two specific steps S21-S22:
S21, extracting the image region of each specific personnel target in the video frame, together with one or more of the edge features, color distribution features, and texture features of that region;
S22, comparing the image regions and features obtained in S21 with the image regions and corresponding features in the previous video frame, and modifying the ID mark of each specific personnel target according to the comparison result.
Feature extraction converts the original image into a set of statistically meaningful features, which makes it easy to compare the regions of consecutive frames, obtain the position change and motion of each specific personnel target, and detect newly appearing persons.
Preferably, the comparison results and the corresponding modifications in S22 are as follows:
when the comparison result is a mismatch, the specific personnel target extracted from the video frame is a newly added personnel target; it is given an ID mark in the personnel target record table, its detected times are recorded as 1, and its position parameter is recorded;
when the comparison result is a match, the detected times of the specific personnel target are incremented by 1 in the personnel target record table, its position parameter in the current video frame is recorded, and the movement speed and movement direction are calculated from the change of the position parameter and recorded as the motion parameter of the specific personnel target.
In the personnel target record table, each specific personnel target has an ID mark comprising its position parameter, motion parameter, and detected times. The record table facilitates the subsequent calculation of the cumulative score, from which the specific personnel target to be tracked is determined.
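For illustration only, the following sketch shows one possible form of an entry in the personnel target record table; the field names and the dataclass representation are assumptions made for the example, not part of the claimed invention.

```python
# A minimal sketch (assumed representation) of one entry in the personnel
# target record table: ID mark, position parameters, motion parameter
# (movement speed and direction), and detected times.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TargetRecord:
    target_id: int                                                        # ID mark of the specific personnel target
    positions: List[Tuple[float, float]] = field(default_factory=list)    # position parameter per frame
    speed: float = 0.0                                                     # movement speed (motion parameter)
    direction: float = 0.0                                                 # movement direction in radians (motion parameter)
    detected_times: int = 1                                                # number of frames in which the target was detected
```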
Preferably, S3 comprises three specific steps S31-S33:
S31, calculating the cumulative score of each specific personnel target from its motion parameter and detected times: C = a × S + D + b × F,
where C is the cumulative score, S is the movement speed, a is the conversion coefficient from the movement speed S to the cumulative score C, D is a score determined by the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the cumulative score C;
S32, determining the currently tracked specific personnel target according to the cumulative score C;
S33, assigning one adjustable close-range camera to the specific personnel target.
Because the movement speed, movement direction, and detected times of each naturally walking person differ, these three quantities are combined into the cumulative score C, which is therefore different for each person. Using the cumulative score C as the characteristic value of each specific personnel target makes it possible to distinguish numerous pedestrians and to keep a clear record of camera assignments when adjustable close-range cameras are allocated to personnel targets, avoiding repeated shooting, unreasonable camera allocation, and confused shooting records.
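For illustration, the cumulative score C = a × S + D + b × F could be computed as in the sketch below; the coefficient values and the mapping from movement direction to the score D are assumptions chosen only to make the example concrete, since the invention does not fix them.

```python
# A minimal sketch of the cumulative score C = a*S + D + b*F. The coefficients
# a and b and the direction-to-score rule are illustrative assumptions; the
# invention only requires that D be determined from the movement direction.
import math

def direction_score(direction_rad: float, toward_camera_rad: float = math.pi) -> float:
    """Assumed rule: the closer the movement direction is to facing the camera, the higher D."""
    deviation = abs(math.atan2(math.sin(direction_rad - toward_camera_rad),
                               math.cos(direction_rad - toward_camera_rad)))
    return 10.0 * (1.0 - deviation / math.pi)   # 10 when walking toward the camera, 0 when walking away

def cumulative_score(speed: float, direction_rad: float, detected_times: int,
                     a: float = 2.0, b: float = 0.5) -> float:
    """C = a*S + D + b*F for one specific personnel target."""
    return a * speed + direction_score(direction_rad) + b * detected_times
```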
Preferably, S4 comprises two specific steps S41-S42:
S41, predicting the spatial position of the specific personnel target 1-1.5 s later from its historical position parameters, and sending the predicted spatial position to the pan-tilt controller of the assigned adjustable close-range camera;
S42, controlling the pan-tilt of the close-range camera to adjust the shooting direction according to the predicted spatial position, and acquiring the front face image of the specific personnel target.
The adjustable close-range camera has a narrow field of view but good imaging quality and high definition. After the pan-tilt controller receives the predicted spatial position, the shooting direction is adjusted through the pan-tilt matched with the camera, so that a clear front face image of the personnel target is easily obtained.
Preferably, after the face extraction and recognition are completed, the adjustable close-range camera is released so that it can track other personnel targets whose front face images have not yet been captured, reducing the number of adjustable close-range cameras required and increasing the speed of face image acquisition.
Based on the method, the invention designs the following system:
A face recognition system in a natural walking state comprises: a fixed wide-angle camera, a plurality of adjustable close-range cameras, a personnel detection and marking module, a tracked personnel determination module, a position prediction module, and a face extraction and recognition module; wherein
the fixed wide-angle camera is used for capturing video frames of person targets in a natural walking state;
the personnel detection and marking module is used for detecting each specific personnel target in the video frames obtained by the fixed wide-angle camera, matching each specific personnel target in the current video frame against those in the previous video frame, and updating the ID mark of each specific personnel target in the personnel target record table;
the tracked personnel determination module is used for calculating, according to the ID mark of each specific personnel target, the cumulative score of each specific personnel target, determining the specific personnel target to be tracked, and assigning an adjustable close-range camera;
the position prediction module is used for predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera, and acquiring a front face image of the specific personnel target;
the face extraction and recognition module is used for performing face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera.
Preferably, the fixed wide-angle camera captures 10 frames per second. Its fixed shooting direction and wide field of view make it well suited to capturing walking people and facilitate the comparison of consecutive frames.
Preferably, the personnel detection and marking module comprises a feature extraction unit and an ID mark modification unit; wherein
the feature extraction unit is used for extracting the image region of each specific personnel target in the video frame, together with one or more of the edge features, color distribution features, and texture features of that region;
the ID mark modification unit is used for comparing the image regions and features obtained by the feature extraction unit with the image regions and corresponding features in the previous video frame, and modifying the ID mark of each specific personnel target according to the comparison result.
Preferably, the comparison results and the corresponding modifications in the ID mark modification unit are as follows:
when the comparison result is a mismatch, the specific personnel target extracted from the video frame is a newly added personnel target; it is given an ID mark in the personnel target record table, its detected times are recorded as 1, and its position parameter is recorded;
when the comparison result is a match, the detected times of the specific personnel target are incremented by 1 in the personnel target record table, its position parameter in the current video frame is recorded, and the movement speed and movement direction are calculated from the change of the position parameter and recorded as the motion parameter of the specific personnel target.
Preferably, the tracked personnel determination module comprises: a cumulative score calculation unit, a specific personnel target determination unit, and an adjustable close-range camera allocation unit; wherein
for each specific personnel target, the cumulative score calculation unit is used for calculating the cumulative score of the personnel target from its motion parameter and detected times: C = a × S + D + b × F,
where C is the cumulative score, S is the movement speed, a is the conversion coefficient from the movement speed S to the cumulative score C, D is a score determined by the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the cumulative score C;
the specific personnel target determination unit is used for determining the currently tracked specific personnel target according to the cumulative score C;
the adjustable close-range camera allocation unit is used for allocating one adjustable close-range camera to the specific personnel target.
Preferably, the position prediction module comprises a pan-tilt controller and a spatial position prediction unit; wherein
the spatial position prediction unit is used for predicting, from the historical position parameters of the specific personnel target, its spatial position 1-1.5 s later, and sending the predicted spatial position to the pan-tilt controller of the corresponding adjustable close-range camera;
the pan-tilt controller is used for controlling, according to the predicted spatial position, the pan-tilt of the adjustable close-range camera to adjust the shooting direction and acquire the front face image of the specific personnel target.
Preferably, after the face extraction and recognition are completed, the adjustable close-range camera is released so that it can track other personnel targets whose front face images have not yet been captured, reducing the number of adjustable close-range cameras required and increasing the speed of face image acquisition.
The invention has the following beneficial effects:
according to the technical scheme, in order to make up for the defects of the prior art, the invention provides the face recognition method and the face recognition system in the natural walking state, the personnel targets are determined in a picture feature acquisition mode, each specific personnel target is distinguished according to the characteristics of the personnel target such as the movement speed and the movement direction, an adjustable close-range camera is distributed to take pictures and record the specific personnel targets one by one, so that the clear front face features of the personnel targets in the natural walking are acquired, and the face features are recognized.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description are merely embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a block diagram of the system architecture of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, the present invention provides the following method:
A face recognition method in a natural walking state comprises the following steps:
S1, capturing video frames of person targets in a natural walking state;
To further optimize this technical feature, the video frames in S1 are captured by a fixed wide-angle camera at a fixed rate of 10 frames per second. Its fixed shooting direction and wide field of view make it well suited to capturing walking people and facilitate the comparison of consecutive frames.
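For illustration, the fixed wide-angle camera stream could be sampled at the stated 10 frames per second roughly as follows; the device index, the software throttling approach, and the process_frame() handoff are assumptions made for the sketch.

```python
# A minimal sketch of sampling a fixed wide-angle camera at 10 frames per
# second with OpenCV. Device index 0 and time-based throttling are assumptions;
# a real deployment might configure the rate on the camera itself.
import time
import cv2

def process_frame(frame):
    """Hypothetical hand-off to the personnel detection and marking step (S2)."""
    pass

cap = cv2.VideoCapture(0)          # fixed wide-angle camera (assumed device index)
frame_interval = 1.0 / 10.0        # 10 frames per second
last = 0.0
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last >= frame_interval:
            last = now
            process_frame(frame)
finally:
    cap.release()
```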
S2, detecting each specific personnel target in the video frame obtained in S1, matching each specific personnel target in the current video frame against those in the previous video frame, and updating the ID mark of each specific personnel target in the personnel target record table;
S2 comprises two specific steps S21-S22:
S21, extracting the image region of each specific personnel target in the video frame, together with one or more of the edge features, color distribution features, and texture features of that region;
S22, comparing the image regions and features obtained in S21 with the image regions and corresponding features in the previous video frame, and modifying the ID mark of each specific personnel target according to the comparison result.
Specifically, feature extraction is a basic operation in image processing. After features are detected, they can be extracted from the image; the result is called a feature description or feature vector. Commonly used image features include edge features, color features, texture features, shape features, and spatial relationship features.
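For illustration, the following sketch shows one way such region features could be computed with OpenCV; the choice of Canny edges, a hue histogram, and a Laplacian-variance texture proxy, and the function names, are assumptions rather than features prescribed by the invention.

```python
# A minimal sketch of extracting edge, color-distribution, and texture features
# for one person region, plus a similarity measure for matching regions across
# consecutive frames. Region coordinates are assumed to come from a detector.
import cv2
import numpy as np

def describe_region(frame, box):
    """Return a feature vector (edge density + texture proxy + hue histogram) for one region."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

    # Edge feature: fraction of pixels marked as edges by the Canny detector.
    edges = cv2.Canny(gray, 100, 200)
    edge_density = np.count_nonzero(edges) / edges.size

    # Color-distribution feature: normalized hue histogram.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).flatten()
    hue_hist /= (hue_hist.sum() + 1e-6)

    # Texture proxy: variance of the Laplacian (higher means finer detail).
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()

    return np.concatenate(([edge_density, texture], hue_hist))

def similarity(f1, f2):
    """Cosine similarity between two feature vectors; used to match regions across frames."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-6))
```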
In addition, the comparison results and the corresponding modifications in S22 are as follows:
when the comparison result is a mismatch, the specific personnel target extracted from the video frame is a newly added personnel target; it is given an ID mark in the personnel target record table, its detected times are recorded as 1, and its position parameter is recorded;
when the comparison result is a match, the detected times of the specific personnel target are incremented by 1 in the personnel target record table, its position parameter in the current video frame is recorded, and the movement speed and movement direction are calculated from the change of the position parameter and recorded as the motion parameter of the specific personnel target.
In the personnel target record table, each specific personnel target has an ID mark comprising its position parameter, motion parameter, and detected times. After the features of the current frame are compared with those of the previous frame: if a personnel target has no match, it is treated as a newly added personnel target, given a new ID mark, its detected times recorded as 1 in the record table, and its position parameter recorded; if a match is found, no new personnel target has appeared, and each specific personnel target in the current frame is matched against those in the previous frame by one or more of the edge, color distribution, and texture features, so that the change in its position parameter is obtained. The movement speed and movement direction are derived from this change and recorded, together with the position parameter, in the record table entry of the corresponding specific personnel target; its detected times are incremented by 1 and its ID mark is updated. The record table facilitates the subsequent calculation of the cumulative score, from which the specific personnel target to be tracked is determined.
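For illustration, the match-or-create update of the personnel target record table could look like the sketch below; the dictionary-based table layout and the assumption that matching has already been decided upstream (in S21/S22) are choices made only for the example.

```python
# A minimal sketch of updating the personnel target record table after the
# feature comparison of S21/S22. `matched_id` is None for a mismatch (newly
# added target) or an existing ID mark for a match.
import math
from typing import Dict, Optional, Tuple

FPS = 10.0  # fixed shooting frequency of the wide-angle camera (frames per second)

def update_record(table: Dict[int, dict], matched_id: Optional[int],
                  position: Tuple[float, float], next_id: int) -> int:
    """Create or update one entry; returns the next unused ID mark."""
    if matched_id is None:
        # Mismatch: newly added personnel target, detected times recorded as 1.
        table[next_id] = {"positions": [position], "speed": 0.0,
                          "direction": 0.0, "detected_times": 1}
        return next_id + 1
    # Match: record position, derive the motion parameter, increment detected times.
    rec = table[matched_id]
    (px, py), (cx, cy) = rec["positions"][-1], position
    dx, dy = cx - px, cy - py
    rec["speed"] = math.hypot(dx, dy) * FPS    # movement speed per second
    rec["direction"] = math.atan2(dy, dx)      # movement direction
    rec["positions"].append(position)
    rec["detected_times"] += 1
    return next_id
```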
S3, calculating the accumulated score of each specific personnel target and determining the specific personnel target according to the ID mark of each specific personnel target, and distributing an adjustable close-range camera;
To further optimize these technical features, S3 comprises three specific steps S31-S33, as follows:
S31, calculating the cumulative score of each specific personnel target from its motion parameter and detected times: C = a × S + D + b × F,
where C is the cumulative score, S is the movement speed, a is the conversion coefficient from the movement speed S to the cumulative score C, D is a score determined by the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the cumulative score C;
S32, determining the currently tracked specific personnel target according to the cumulative score C;
S33, assigning one adjustable close-range camera to the specific personnel target.
Specifically, because the movement speed, movement direction, and detected times of each naturally walking person differ, these three quantities are combined into the cumulative score C, which is therefore different for each person. Using the cumulative score C as the characteristic value of each specific personnel target makes it possible to distinguish numerous pedestrians and to keep a clear record of camera assignments when adjustable close-range cameras are allocated to personnel targets, avoiding repeated shooting, unreasonable camera allocation, and confused shooting records.
S4, predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera, and acquiring the front face image of the specific personnel target;
To further optimize these technical features, S4 comprises two specific steps S41-S42:
S41, predicting the spatial position of the specific personnel target 1-1.5 s later from its historical position parameters, and sending the predicted spatial position to the pan-tilt controller of the assigned adjustable close-range camera;
S42, controlling the pan-tilt of the close-range camera to adjust the shooting direction according to the predicted spatial position, and acquiring the front face image of the specific personnel target.
The adjustable close-range camera has a narrow field of view but good imaging quality and high definition. After the pan-tilt controller receives the predicted spatial position, the shooting direction is adjusted through the pan-tilt matched with the camera, so that a clear front face image of the personnel target is easily obtained.
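For illustration, the position prediction of S41 could be implemented under a constant-velocity assumption as in the sketch below; send_to_pan_tilt() is a hypothetical placeholder for the pan-tilt controller interface, which the invention does not specify.

```python
# A minimal sketch, assuming roughly constant velocity, of predicting the
# spatial position 1-1.5 s ahead from timestamped historical positions and
# forwarding it to the pan-tilt controller of the assigned camera.
from typing import List, Tuple

def predict_position(history: List[Tuple[float, float, float]],
                     horizon_s: float = 1.2) -> Tuple[float, float]:
    """Extrapolate (x, y) `horizon_s` seconds ahead.

    history: list of (t, x, y) samples for one personnel target, oldest first.
    """
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = max(t1 - t0, 1e-6)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt            # movement speed components
    return x1 + vx * horizon_s, y1 + vy * horizon_s    # predicted spatial position

def send_to_pan_tilt(camera_id: int, target_xy: Tuple[float, float]) -> None:
    """Hypothetical stand-in for the pan-tilt controller command channel."""
    print(f"camera {camera_id}: aim at predicted position {target_xy}")
```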
S5, performing face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera.
Specifically, face recognition is based on human facial features. For the clear front face image captured by the adjustable close-range camera, it is first judged whether a face is present; if so, the position and size of the face and the locations of the main facial organs are obtained. From this information, the identity features of each specific personnel target are extracted and compared with known faces, so that the identity of each specific personnel target is recognized. After face extraction and recognition are completed, the adjustable close-range camera is released so that it can track other personnel targets whose front face images have not yet been captured, reducing the number of adjustable close-range cameras required and increasing the speed of face image acquisition.
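For illustration, the S5 flow of detecting a face, comparing it with known faces, and then releasing the camera could be sketched as follows; the Haar cascade detector, the cosine comparison, the histogram-based placeholder embedding, and the camera.release() interface are all assumptions, not the face extractor or comparison method prescribed by the invention.

```python
# A minimal sketch of S5: detect a face in the close-range front image, compare
# it against a gallery of known identities, record the best match, and release
# the adjustable close-range camera for the next personnel target.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_identity_features(face_img) -> np.ndarray:
    """Placeholder embedding (normalized grayscale histogram); a deployed system
    would use a trained face-feature extractor here."""
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-6)

def recognize_and_release(front_image, gallery: dict, camera, threshold: float = 0.6):
    """Return the best-matching identity (or None), then release the camera."""
    gray = cv2.cvtColor(front_image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    identity = None
    if len(faces) > 0 and gallery:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
        feat = extract_identity_features(front_image[y:y + h, x:x + w])
        scores = {name: float(np.dot(feat, g) / (np.linalg.norm(feat) * np.linalg.norm(g) + 1e-6))
                  for name, g in gallery.items()}
        best = max(scores, key=scores.get)
        identity = best if scores[best] >= threshold else None
    camera.release()   # the assigned close-range camera object is assumed to expose release()
    return identity
```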
As shown in FIG. 2, the following system is designed according to the above method:
A face recognition system in a natural walking state comprises: a fixed wide-angle camera 1, a plurality of adjustable close-range cameras 5, a personnel detection and marking module 2, a tracked personnel determination module 3, a position prediction module 4, and a face extraction and recognition module 6; wherein
the fixed wide-angle camera 1 is used for capturing video frames of person targets in a natural walking state;
the personnel detection and marking module 2 is used for detecting each specific personnel target in the video frames obtained by the fixed wide-angle camera 1, matching each specific personnel target in the current video frame against those in the previous video frame, and updating the ID mark of each specific personnel target in the personnel target record table;
the tracked personnel determination module 3 is used for calculating, according to the ID mark of each specific personnel target, the cumulative score of each specific personnel target, determining the specific personnel target to be tracked, and assigning an adjustable close-range camera 5;
the position prediction module 4 is used for predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera 5, and acquiring a front face image of the specific personnel target;
the face extraction and recognition module 6 is used for performing face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera 5.
To further optimize the above technical solution, the personnel detection and marking module 2 comprises a feature extraction unit and an ID mark modification unit; wherein
the feature extraction unit is used for extracting the image region of each specific personnel target in the video frame, together with one or more of the edge features, color distribution features, and texture features of that region;
the ID mark modification unit is used for comparing the image regions and features obtained by the feature extraction unit with the image regions and corresponding features in the previous video frame, and modifying the ID mark of each specific personnel target according to the comparison result.
In addition, the comparison results and the corresponding modifications in the ID mark modification unit are as follows:
when the comparison result is a mismatch, the specific personnel target extracted from the video frame is a newly added personnel target; it is given an ID mark in the personnel target record table, its detected times are recorded as 1, and its position parameter is recorded;
when the comparison result is a match, the detected times of the specific personnel target are incremented by 1 in the personnel target record table, its position parameter in the current video frame is recorded, and the movement speed and movement direction are calculated from the change of the position parameter and recorded as the motion parameter of the specific personnel target.
To further optimize the above technical solution, the tracked personnel determination module 3 comprises: a cumulative score calculation unit, a specific personnel target determination unit, and an adjustable close-range camera allocation unit; wherein
for each specific personnel target, the cumulative score calculation unit is used for calculating the cumulative score of the personnel target from its motion parameter and detected times: C = a × S + D + b × F,
where C is the cumulative score, S is the movement speed, a is the conversion coefficient from the movement speed S to the cumulative score C, D is a score determined by the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the cumulative score C;
the specific personnel target determination unit is used for determining the currently tracked specific personnel target according to the cumulative score C;
the adjustable close-range camera allocation unit is used for allocating one adjustable close-range camera 5 to the specific personnel target.
The position prediction module 4 comprises a pan-tilt controller and a spatial position prediction unit; wherein
the spatial position prediction unit is used for predicting, from the historical position parameters of the specific personnel target, its spatial position 1-1.5 s later, and sending the predicted spatial position to the pan-tilt controller of the corresponding adjustable close-range camera 5;
the pan-tilt controller is used for controlling, according to the predicted spatial position, the pan-tilt of the adjustable close-range camera 5 to adjust the shooting direction and acquire the front face image of the specific personnel target.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A face recognition method in a natural walking state is characterized by comprising the following steps:
s1, shooting a video picture of the person target in a natural walking state;
s2, detecting each specific personnel target in the video picture obtained in S1, identifying the relationship between each specific personnel target in the current video picture and each specific personnel target in the previous frame of video picture, and updating the ID mark of each specific personnel in the personnel target record form;
s3, calculating the accumulated score of each specific personnel target and determining the specific personnel target according to the ID mark of each specific personnel target, and distributing an adjustable close-range camera;
s4, predicting the spatial position of the specific personnel target, adjusting the shooting direction of the close-range camera, and acquiring the front face image of the specific personnel target;
and S5, carrying out face extraction and recognition on the front face image, recording the result, and releasing the adjustable close-range camera.
2. The method for recognizing the face in the natural walking state according to claim 1, wherein the specific steps of S2 are as follows:
s21, extracting a picture area of each specific human target in the video picture and one or more of edge features, color distribution features and texture features of the picture area;
s22, comparing the picture area and the characteristics thereof obtained in the S21 with the picture area and the corresponding characteristics thereof in the previous frame of the video picture, and modifying the ID mark of each specific personnel target according to the comparison result.
3. The method according to claim 2, wherein the comparison result and the corresponding modification result in S22 are as follows:
when the comparison result is not matched, the specific personnel target extracted from the video picture is a newly added personnel target, the newly added personnel target is given an ID mark in a personnel target recording form, the detected times of the newly added personnel target are recorded as 1 time, and the position parameter of the newly added personnel target is recorded;
and when the comparison results are matched, adding 1 to the detected times of the specific personnel target in the video picture in a personnel target recording table, recording the position parameter of the specific personnel target in the current video picture, calculating the movement speed and the movement direction according to the change of the position parameter, and recording the movement speed and the movement direction as the movement parameter of the specific personnel target.
4. The method for recognizing the face in the natural walking state according to claim 1, wherein the specific steps of S3 are as follows:
S31, calculating the cumulative score of each specific personnel target according to the motion parameters and the detected times: C = a × S + D + b × F,
wherein C is the accumulated score, S is the movement speed, a is the conversion coefficient from the movement speed S to the accumulated score C, D is the determined score according to the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the accumulated score C;
s32, determining the specific currently tracked personnel target according to the accumulated score C;
and S33, assigning one adjustable short-distance camera to the specific personnel target.
5. The method for recognizing the face in the natural walking state according to claim 1, wherein the specific steps of S4 are as follows:
s41, predicting the spatial position of the specific personnel target after 1-1.5S according to the historical position parameter of the specific personnel target, and sending the predicted spatial position to a holder controller corresponding to the adjustable close-range camera;
and S42, controlling a pan-tilt of the close-range camera to adjust the shooting direction according to the predicted spatial position, and acquiring the front face image of the specific personnel target.
6. A face recognition system in a natural walking state, comprising: the system comprises a fixed wide-angle camera (1), a plurality of adjustable short-distance cameras (5), a personnel detection and marking module (2), a tracked personnel determination module (3), a position prediction module (4) and a human face extraction and recognition module (6); wherein,
the fixed wide-angle camera (1) is used for shooting video pictures of a person target in a natural walking state;
the personnel detection and marking module (2) is used for detecting each specific personnel target in the video picture obtained by the fixed wide-angle camera (1), identifying the relationship between each specific personnel target in the current video picture and each specific personnel target in the previous frame of video picture, and updating the ID mark of each specific personnel in the personnel target recording list;
according to each specific personnel target ID mark, the tracking personnel determining module (3) is used for calculating the accumulated score of each specific personnel target, determining the specific personnel target and distributing an adjustable short-distance camera (5);
the position prediction module (4) is used for predicting the spatial position of a specific personnel target, adjusting the shooting direction of the close-range camera (5) and acquiring a front face image of the specific personnel target;
the face extraction and recognition module (6) is used for extracting and recognizing the face of the front face image, recording the result and releasing the adjustable close-range camera (5).
7. A face recognition system according to claim 6, characterized in that the person detection and marking module (2) comprises a feature extraction unit, an ID marking modification unit; wherein,
the feature extraction unit is used for extracting a picture area of each specific human target in the video picture and one or more features of edge features, color distribution features and texture features of the picture area;
the ID mark modifying unit is used for comparing the picture area and the characteristics thereof acquired in the characteristic extracting unit with the picture area and the corresponding characteristics thereof in the previous frame of the video picture, and modifying the ID mark of each specific personnel target according to the comparison result.
8. The face recognition system of claim 7, wherein the comparison result and the corresponding modification result in the ID tag modification unit are as follows:
when the comparison result is not matched, the specific personnel target extracted from the video picture is a newly added personnel target, the newly added personnel target is given an ID mark in a personnel target recording form, the detected times of the newly added personnel target are recorded as 1 time, and the position parameter of the newly added personnel target is recorded;
and when the comparison results are matched, adding 1 to the detected times of the specific personnel target in the video picture in a personnel target recording table, recording the position parameter of the specific personnel target in the current video picture, calculating the movement speed and the movement direction according to the change of the position parameter, and recording the movement speed and the movement direction as the movement parameter of the specific personnel target.
9. A face recognition system according to claim 6, wherein the tracked personnel determination module (3) comprises: a cumulative score calculation unit, a specific personnel target determination unit and an adjustable close-range camera distribution unit; wherein,
aiming at each specific personnel target, according to the motion parameters and the detected times, the cumulative score calculating unit is used for calculating the cumulative score of the personnel target: C = a × S + D + b × F,
wherein C is the accumulated score, S is the movement speed, a is the conversion coefficient from the movement speed S to the accumulated score C, D is the score determined according to the movement direction, F is the detected times, and b is the conversion coefficient from the detected times to the accumulated score C;
the specific personnel target determining unit is used for determining the currently tracked specific personnel target according to the accumulated score C;
an adjustable close-range camera allocation unit is used for allocating one adjustable close-range camera (5) to the specific personnel object.
10. A face recognition system according to claim 6, characterized in that the position prediction module (4) comprises a pan-tilt controller, a spatial position prediction unit; wherein,
according to the historical position parameter of the specific personnel target, the spatial position prediction unit is used for predicting the spatial position of the specific personnel target after 1-1.5S, and sending the predicted spatial position to a tripod head controller corresponding to the adjustable close-range camera (5);
and according to the predicted spatial position, the holder controller is used for controlling the holder of the adjustable close-distance camera (5) to adjust the shooting direction and acquiring the front face image of the specific personnel target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910775183.7A CN110633648B (en) | 2019-08-21 | 2019-08-21 | Face recognition method and system in natural walking state |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910775183.7A CN110633648B (en) | 2019-08-21 | 2019-08-21 | Face recognition method and system in natural walking state |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110633648A true CN110633648A (en) | 2019-12-31 |
CN110633648B CN110633648B (en) | 2020-09-11 |
Family
ID=68970651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910775183.7A Active CN110633648B (en) | 2019-08-21 | 2019-08-21 | Face recognition method and system in natural walking state |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110633648B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111291989A (en) * | 2020-02-03 | 2020-06-16 | 重庆特斯联智慧科技股份有限公司 | System and method for deep learning and allocating pedestrian flow of large building |
CN111310601A (en) * | 2020-01-20 | 2020-06-19 | 北京正和恒基滨水生态环境治理股份有限公司 | Intelligent runway system based on face recognition, speed measuring method and electronic equipment |
CN111339833A (en) * | 2020-02-03 | 2020-06-26 | 重庆特斯联智慧科技股份有限公司 | Identity verification method, system and equipment based on face edge calculation |
WO2022020148A1 (en) * | 2020-07-23 | 2022-01-27 | Motorola Solutions, Inc. | Device and method for adjusting a configuration of a camera device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101572804A (en) * | 2009-03-30 | 2009-11-04 | 浙江大学 | Multi-camera intelligent control method and device |
CN102426646A (en) * | 2011-10-24 | 2012-04-25 | 西安电子科技大学 | Multi-angle human face detection device and method |
CN102622584A (en) * | 2012-03-02 | 2012-08-01 | 成都三泰电子实业股份有限公司 | Method for detecting mask faces in video monitor |
CN104601964A (en) * | 2015-02-06 | 2015-05-06 | 武汉大学 | Non-overlap vision field trans-camera indoor pedestrian target tracking method and non-overlap vision field trans-camera indoor pedestrian target tracking system |
CN105894702A (en) * | 2016-06-21 | 2016-08-24 | 南京工业大学 | Intrusion detection alarm system based on multi-camera data fusion and detection method thereof |
US20180077345A1 (en) * | 2016-09-12 | 2018-03-15 | Canon Kabushiki Kaisha | Predictive camera control system and method |
CN108171207A (en) * | 2018-01-17 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | Face identification method and device based on video sequence |
US20180324393A1 (en) * | 2017-05-05 | 2018-11-08 | VergeSense, Inc. | Method for monitoring occupancy in a work area |
CN108921001A (en) * | 2018-04-18 | 2018-11-30 | 特斯联(北京)科技有限公司 | A kind of video monitor holder and its method for tracing using artificial intelligence prediction tracking |
CN109151388A (en) * | 2018-09-10 | 2019-01-04 | 合肥巨清信息科技有限公司 | A kind of video frequency following system that multichannel video camera is coordinated |
CN109922373A (en) * | 2019-03-14 | 2019-06-21 | 上海极链网络科技有限公司 | Method for processing video frequency, device and storage medium |
CN110232323A (en) * | 2019-05-13 | 2019-09-13 | 特斯联(北京)科技有限公司 | A kind of parallel method for quickly identifying of plurality of human faces for crowd and its device |
US20190388731A1 (en) * | 2010-01-05 | 2019-12-26 | Isolynx, Llc | Systems and methods for analyzing event data |
- 2019-08-21: CN application CN201910775183.7A filed; granted as CN110633648B (status: Active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101572804A (en) * | 2009-03-30 | 2009-11-04 | 浙江大学 | Multi-camera intelligent control method and device |
US20190388731A1 (en) * | 2010-01-05 | 2019-12-26 | Isolynx, Llc | Systems and methods for analyzing event data |
CN102426646A (en) * | 2011-10-24 | 2012-04-25 | 西安电子科技大学 | Multi-angle human face detection device and method |
CN102622584A (en) * | 2012-03-02 | 2012-08-01 | 成都三泰电子实业股份有限公司 | Method for detecting mask faces in video monitor |
CN104601964A (en) * | 2015-02-06 | 2015-05-06 | 武汉大学 | Non-overlap vision field trans-camera indoor pedestrian target tracking method and non-overlap vision field trans-camera indoor pedestrian target tracking system |
CN105894702A (en) * | 2016-06-21 | 2016-08-24 | 南京工业大学 | Intrusion detection alarm system based on multi-camera data fusion and detection method thereof |
US20180077345A1 (en) * | 2016-09-12 | 2018-03-15 | Canon Kabushiki Kaisha | Predictive camera control system and method |
US20180324393A1 (en) * | 2017-05-05 | 2018-11-08 | VergeSense, Inc. | Method for monitoring occupancy in a work area |
CN108171207A (en) * | 2018-01-17 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | Face identification method and device based on video sequence |
CN108921001A (en) * | 2018-04-18 | 2018-11-30 | 特斯联(北京)科技有限公司 | A kind of video monitor holder and its method for tracing using artificial intelligence prediction tracking |
CN109151388A (en) * | 2018-09-10 | 2019-01-04 | 合肥巨清信息科技有限公司 | A kind of video frequency following system that multichannel video camera is coordinated |
CN109922373A (en) * | 2019-03-14 | 2019-06-21 | 上海极链网络科技有限公司 | Method for processing video frequency, device and storage medium |
CN110232323A (en) * | 2019-05-13 | 2019-09-13 | 特斯联(北京)科技有限公司 | A kind of parallel method for quickly identifying of plurality of human faces for crowd and its device |
Non-Patent Citations (2)
Title |
---|
LIN LIZHONG: "Research on Detection and Tracking of Moving Target in Intelligent Video Surveillance", 《2012 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ELECTRONICS ENGINEERING》 * |
ZHENG XI: "Combined-optimization multi-target detection and tracking algorithm based on OpenCV" (基于OpenCV的组合优化多目标检测追踪算法), 《计算机应用》 (Journal of Computer Applications) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310601A (en) * | 2020-01-20 | 2020-06-19 | 北京正和恒基滨水生态环境治理股份有限公司 | Intelligent runway system based on face recognition, speed measuring method and electronic equipment |
CN111291989A (en) * | 2020-02-03 | 2020-06-16 | 重庆特斯联智慧科技股份有限公司 | System and method for deep learning and allocating pedestrian flow of large building |
CN111339833A (en) * | 2020-02-03 | 2020-06-26 | 重庆特斯联智慧科技股份有限公司 | Identity verification method, system and equipment based on face edge calculation |
CN111339833B (en) * | 2020-02-03 | 2022-10-28 | 重庆特斯联智慧科技股份有限公司 | Identity verification method, system and equipment based on face edge calculation |
CN111291989B (en) * | 2020-02-03 | 2023-03-24 | 重庆特斯联智慧科技股份有限公司 | System and method for deep learning and allocating pedestrian flow of large building |
WO2022020148A1 (en) * | 2020-07-23 | 2022-01-27 | Motorola Solutions, Inc. | Device and method for adjusting a configuration of a camera device |
US11475596B2 (en) | 2020-07-23 | 2022-10-18 | Motorola Solutions, Inc. | Device, method and system for adjusting a configuration of a camera device |
Also Published As
Publication number | Publication date |
---|---|
CN110633648B (en) | 2020-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110633648B (en) | Face recognition method and system in natural walking state | |
CN108111818B (en) | Moving target actively perceive method and apparatus based on multiple-camera collaboration | |
Wheeler et al. | Face recognition at a distance system for surveillance applications | |
CN106650620B (en) | A kind of target person identification method for tracing using unmanned plane monitoring | |
CN109819208B (en) | Intensive population security monitoring management method based on artificial intelligence dynamic monitoring | |
CN111832457B (en) | Stranger intrusion detection method based on cloud edge cooperation | |
CN110222640B (en) | Method, device and method for identifying suspect in monitoring site and storage medium | |
KR101172747B1 (en) | Camera tracking monitoring system and method using thermal image coordinates | |
CN110830756B (en) | Monitoring method and device | |
CN110969118B (en) | Track monitoring system and method | |
KR101337060B1 (en) | Imaging processing device and imaging processing method | |
WO2014171258A1 (en) | Information processing system, information processing method, and program | |
JP6555906B2 (en) | Information processing apparatus, information processing method, and program | |
JP6077655B2 (en) | Shooting system | |
WO2018177153A1 (en) | Method for tracking pedestrian and electronic device | |
CN109905641B (en) | Target monitoring method, device, equipment and system | |
WO2020094088A1 (en) | Image capturing method, monitoring camera, and monitoring system | |
WO2019080669A1 (en) | Method for person re-identification in enclosed place, system, and terminal device | |
CN106529500A (en) | Information processing method and system | |
WO2022134916A1 (en) | Identity feature generation method and device, and storage medium | |
CN109816700B (en) | Information statistical method based on target identification | |
JP5758165B2 (en) | Article detection device and stationary person detection device | |
JP2002342762A (en) | Object tracing method | |
JP2017211731A (en) | Head-count counting system, head-count counting method, and head-count counting result browsing method | |
JP6798609B2 (en) | Video analysis device, video analysis method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||