KR101781358B1 - Personal Identification System And Method By Face Recognition In Digital Image - Google Patents


Info

Publication number
KR101781358B1
KR101781358B1
Authority
KR
South Korea
Prior art keywords
face
image
information
unit
image data
Prior art date
Application number
KR1020150107542A
Other languages
Korean (ko)
Other versions
KR20170015639A (en)
Inventor
이중
변준석
정도준
심규선
Original Assignee
대한민국
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 대한민국
Priority to KR1020150107542A
Publication of KR20170015639A
Application granted granted Critical
Publication of KR101781358B1

Links

Images

Classifications

    • G06K9/00221
    • G06K9/00268
    • G06K9/00288
    • G06K9/00771

Abstract

Disclosed are a personal identification system and method using face recognition in a digital image. The system comprises a control unit (301); an image input unit (303) controlled by the control unit (301) and receiving image data stored by an image photographing apparatus; a display unit (305) controlled by the control unit (301); an input unit (307) connected to the control unit (301) for receiving user commands; and a storage unit (309), a memory device connected to the control unit (301). The storage unit (309) stores an information DB (310) including a face database, and a face recognition program (320). When an execution command for the face recognition program (320) is entered through the input unit (307), one or more face regions are detected in the image data received from the image input unit (303) and stored in the storage unit (309); the feature value of each face region is extracted and compared with the feature values of the face photographs in the face database of the information DB (310) to identify the individual; the EXIF data included in the image data is extracted; and the identified person's photograph is displayed on the display unit (305), under the control of the control unit (301), together with the photographing time of the image data.

Description


BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a personal identification system and method using face recognition in a digital image, and more particularly, to a system and method that detects faces in image data stored by an image storage device such as a CCTV camera, black box, smart phone, or digital camera; compares the extracted facial feature values with the face database included in an information DB to identify each individual; stores the identified individual's face photographs and related data in the information DB; extracts the EXIF data included in the image data to obtain the photographing time and position of each face photograph; and provides the timeline of each recognized person together with movement-line information along that timeline.

As prior art relating to face recognition, a "Face Recognition Apparatus and Face Recognition Method" is disclosed in Patent Registration No. 10-1381439. Fig. 1 shows a block diagram of this conventional face recognition apparatus.

The face recognition apparatus 100 includes an image input unit 101, a face area detection unit 102, a method holding unit 103, a method selection unit 104, a feature point detection unit 105, a feature extraction unit 106, a person information management unit 107, a recognition unit 108, and a display unit 109. The face recognition apparatus 100 detects a face portion (hereinafter, a 'face') from image data, i.e., video photographed by the camera 150. If the image data is a moving image, the moving image may be in various formats such as exe, avi, mp4, and the like.

Fig. 2 shows an example of image data input from the camera 150.

The image data may include face areas of various sizes (for example, areas 211, 212, and 213). When a face appears small in the image data (area 211), the resolution of the corresponding face area is very low. If a face recognition process designed for conventional high-resolution images is applied to such a small, low-resolution face, the facial feature points cannot be located correctly. Moreover, at such a resolution the necessary feature information may not be obtainable when feature points are extracted, so all the faces to be identified yield similar feature information and identification accuracy drops.

The face region detection unit 102 detects a person's face region in the image data input by the image input unit 101. The face area detection unit 102 obtains coordinates indicating the face area using the luminance information of the input image data. Various face-area detection methods are known. One example is a detector based on Joint Haar-like features suited to face detection (Mita et al.: Journal of The Institute of Electronics, Information and Communication Engineers, Vol. J89-D, No. 8, pp. 1791-1801 (2006)). Other examples include a template-matching method that slides a prepared template across the input image data and takes the position with the highest correlation value as the face region, and extraction methods using the eigenspace method or the subspace method.
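As a concrete illustration of rectangle-based face-region detection, the sketch below uses a pre-trained Haar cascade via OpenCV. It is a stand-in for the Joint Haar-like detector cited above, not the patent's own implementation; the file name and parameter values are illustrative assumptions.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade; this stands in for
# the Joint Haar-like detector cited in the text.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(image_bgr):
    """Return rectangular face regions as (x, y, w, h) tuples."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Multi-scale scan over the luminance channel of the input image data.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

frame = cv2.imread("frame.jpg")  # hypothetical input frame
for (x, y, w, h) in detect_face_regions(frame):
    print("face region:", (x, y), (x + w, y + h))
```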

As shown in Fig. 2, each face region is detected as a rectangle, and the coordinates of the rectangle's vertices denote the face region.

The method holding unit 103 holds a plurality of detection methods, differing in the coarseness of their processing, for detecting facial feature points in the face region detected by the face region detection unit 102. In this example it holds three kinds of facial feature point detection methods, though two kinds, or four or more kinds, are also possible.

The feature extraction unit 106 selects a suitable image from the face regions of the plurality of frames corresponding to the same person and extracts a feature value, i.e., the feature information of the face, from the selected face region. In addition, the feature extraction unit 106 extracts a feature value each time the size of the face region grows, so an arbitrary number of frames of image data are used until the face region of maximum size is detected.

Fig. 3 shows the characteristics of the facial feature point detection methods held by the method holding unit.

The method holding unit 103 holds a first facial feature point detection method, a second facial feature point detection method, and a third facial feature point detection method. These methods have different characteristics, as shown in Fig. 3.

The first facial feature point detection method uses the roughest detection processing of the three and is therefore the most robust at low resolution, but its recognition accuracy is low. The second facial feature point detection method is intermediate: its processing is the second-roughest of the three, it is second-most robust at low resolution, and its recognition accuracy is moderate. The third facial feature point detection method uses the most detailed processing of the three; it is weak at low resolution but achieves the highest recognition accuracy when the resolution is high. The optimal method among the three is chosen according to the size of the detected face region.

The method selection unit 104 selects a facial feature point detection method from those held by the method holding unit 103, based on the image size of the face region detected by the face region detection unit 102. The selection is made according to whether the pixel count (resolution) of the face region is equal to or greater than predetermined threshold values.

When three kinds of facial feature point detection methods are held, the method selection unit 104 selects among them using two resolution threshold values, A and B (B < A). The method selection unit 104 selects the first facial feature point detection method when the computed face-region width is less than B, the second facial feature point detection method when the width is at least B but less than A, and the third facial feature point detection method when the width is A or greater.
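The selection rule reduces to a two-threshold comparison on the face-region width. A minimal sketch, assuming illustrative pixel values for A and B (the source only requires B < A, not these numbers):

```python
def select_method(face_width_px, a=64, b=32):
    """Pick a facial feature point detection method from the face-region
    width; a and b are assumed thresholds with b < a, per the text."""
    if face_width_px < b:
        return "first"   # roughest processing, robust at low resolution
    if face_width_px < a:
        return "second"  # intermediate processing and accuracy
    return "third"       # finest processing, needs high resolution
```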

The feature point detection unit 105 extracts the positions of face parts such as the eyes and nose, as facial feature points, from the face area detected by the face area detection unit 102, using the facial feature point detection method selected by the method selection unit 104.

The first, second, and third facial feature point detection methods differ in how precisely they locate the feature points (for example, the corners of the eyes and the like), but the number of feature points detected remains the same across the three.

Fig. 4 shows an example of the feature points of the eyes, nose, mouth, and forehead detected by the feature point detection unit 105 with the various facial feature point detection methods. As shown in Fig. 4, the feature point detection unit 105 detects 15 feature points.

Thus, whichever of the first, second, or third facial feature point detection method is used to extract the feature value, i.e., the feature information of the face, a person can be authenticated by comparing that feature value with the per-person feature values stored in the person information management unit 107. Next, each facial feature point detection method is described in detail.

The first facial feature point detection method is used when the face region's resolution is so low that the detailed structure of the face is not visible. In this method, average models (wire frames) of the facial feature point coordinates to be detected are prepared in advance, in several variants according to face direction. During detection, the feature point detection unit 105 compares the brightness values in the face area against the prepared average models (wire frames) and applies the wire frame with the highest degree of matching, fitting it to the face region in consideration of the face direction and the like. The feature point detection unit 105 then detects the facial feature points according to the applied wire frame.

The second facial feature point detection method has better feature-point localization accuracy than the first, but lacks the fine per-part accuracy of the third. The second method may, for example, track facial appearance with an AAM (Active Appearance Model), as described in T.F. Cootes, K. Walker, and C.J. Taylor, "View-based active appearance models", Image and Vision Computing 20, pp. 227-232. The feature point detection unit 105 tracks the face with this method, thereby detecting the positions that become the facial feature points.

The third facial feature point detection method can be used when the resolution of the face region is sufficiently high; it detects facial feature points using the shape information of the facial parts together with luminance distribution information. Of the three facial feature point detection methods, it has the highest localization accuracy when the face-region resolution is sufficient.

Known techniques for the third facial feature point detection method include the one in Kazuhiro Fukui and Osamu Yamaguchi, "Facial Feature Point Extraction by Combining Shape Extraction and Pattern Matching", Journal of the Institute of Electronics, Information and Communication Engineers, Vol. J80-D-II, No. 8, pp. 2170-2177 (1997), which detects feature points such as the eyes, nose, and mouth. For detecting the feature points of the mouth area, the method shown in Mayumi Yuasa and Akiko Nakajima, "Digital Make System based on High-Precision Facial Feature Point Detection", Proceedings of the 10th Image Sensing Symposium, pp. 219-224, can be used. Whichever method is used as the third facial feature point detection method, it must be able to obtain information that can be handled as a two-dimensional image array, from which the facial feature points are detected.

When a plurality of faces exist in the image data, the feature point detection unit 105 handles them by performing the same processing for each face.

The feature extraction unit 106 extracts feature information (hereinafter, the "feature value") that characterizes the face well enough to identify an individual, based on the facial feature points detected by the feature point detection unit 105; feature values can thus be extracted regardless of which facial feature point detection method was used. The feature extraction unit 106 outputs, as the feature value, a numeric sequence representing the features of the face: it crops the face area to a predetermined size and shape based on the coordinates of the detected feature points and uses the resulting shade (gray-level) information as the feature quantity, taking the density values of an m x n pixel region as-is and treating this m*n-dimensional information as the feature vector.

The feature extraction unit 106 normalizes each feature vector to length 1 and computes the inner product between vectors, which, by the simple similarity method, yields the similarity between them. This can be realized with the subspace method shown in the literature (Erkki Oja, translated by Hidemitsu Ogawa and Makoto Sato, "Subspace Methods of Pattern Recognition", Sangyo Tosho, 1986). Accuracy can also be improved by generating, from one piece of face image information, perturbed images in which the face direction or state is intentionally varied, as shown in the document (Toshiba, "Image Recognition Apparatus, Method and Program", Japanese Patent Application Laid-Open No. 2007-4767). With these methods a feature value can be extracted from a single piece of image data. After the feature point detection unit 105 detects the feature points, the feature extraction unit 106 applies direction correction (three-dimensional), size correction, and brightness correction to them. Direction correction, for example when the detected face points leftward, fits the leftward face to a pre-prepared three-dimensional model of a person's face and rotates it to face front. Size correction reduces or enlarges the face to match a reference face size. After these corrections, the feature value is extracted. The feature values extracted by the feature extraction unit 106 are thus unified regardless of the detected face direction and size, which makes them easy to compare with the per-person feature values managed by the person information management unit 107.
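A minimal sketch of the m x n density-value feature vector and the simple similarity method described above, using NumPy and OpenCV; the 32 x 32 patch size is an assumption, not a value from the source:

```python
import cv2
import numpy as np

def feature_vector(gray_face, m=32, n=32):
    """Use the m x n pixel density values directly as an m*n-dimensional
    feature, normalized to unit length as the text describes."""
    patch = cv2.resize(gray_face, (n, m)).astype(np.float64)
    v = patch.ravel()
    return v / np.linalg.norm(v)

def simple_similarity(v1, v2):
    """Inner product of unit-length vectors (the simple similarity method)."""
    return float(np.dot(v1, v2))
```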

The person information management unit 107 manages pre-registered feature values for each person; it is the database used when the recognition unit 108 performs person recognition. For each person, the feature value is managed together with a person ID and name so that they correspond.

The managed feature value extracted by the feature extraction unit 106 may be an m x n feature quantity vector, or the correlation matrix immediately before the subspace or KL expansion is performed. Managing the feature values together with the image data input by the image input unit 101 also facilitates searching for and displaying an individual.

The recognition unit 108 recognizes the person included in the image data by comparing the feature value extracted by the feature extraction unit 106 against the feature values stored in the person information management unit 107. The recognition unit 108 extracts from the person information management unit 107 the stored feature values similar to the extracted one, and outputs the corresponding persons as candidates for the person photographed by the camera 150.

The recognition unit 108 calculates the similarity between the feature value extracted by the feature extraction unit 106 and each feature value stored in the person information management unit 107, and outputs the resulting information to the display unit 109 (e.g., an LCD screen). As its processing result, the recognition unit 108 outputs, in descending order of similarity, the person ID managed by the person information management unit 107 together with the information indicating the calculated similarity; it may also output various other pieces of information related to the person corresponding to the person ID.

The recognition unit 108 may also associate, with each feature value extracted by the feature extraction unit 106, information identifying the facial feature point detection method that the feature point detection unit 105 used for the detection, and output this association to the display unit 109 together with the recognition result.

The information indicating the similarity is, for example, the similarity between the subspaces managed as feature values.

For the similarity calculation, a method such as the subspace method or the multiple similarity method may be used. In these methods, both the feature values stored in the person information management unit 107 and the feature value extracted by the feature extraction unit 106 are expressed as subspaces, and the "angle" formed by the two subspaces is defined as the similarity. The recognition unit 108 finds the correlation matrix Cin from the input data sequence, diagonalizes it as Cin = Φin Λin Φin^T, and obtains the eigenvectors Φin that define the input subspace. The similarity (0.0 to 1.0) between the two subspaces is then computed and used as the similarity for recognition. A concrete calculation can follow the method in the literature cited above (Oja, "Subspace Methods of Pattern Recognition", Sangyo Tosho, 1986). Accuracy can also be improved by integrating a plurality of face images known in advance to be the same person and deciding, by projection onto the subspace, whether a query is that person. A search method using a tree structure may be used for high-speed search.
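A sketch of the subspace similarity described above: each person's feature vectors span a subspace obtained from the eigenvectors of the correlation matrix, and the similarity is the cosine of the angle between two such subspaces. The 5-dimensional subspace is an assumed parameter:

```python
import numpy as np

def subspace_basis(feature_vectors, dim=5):
    """Orthonormal basis of a person's subspace: top eigenvectors of the
    correlation matrix Cin = X^T X / n, as in the diagonalization above."""
    x = np.asarray(feature_vectors)          # shape: (num_samples, num_features)
    c = x.T @ x / len(x)                     # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(c)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:dim]]

def subspace_similarity(basis_a, basis_b):
    """Similarity in [0.0, 1.0]: cosine of the smallest principal angle
    between the two subspaces (largest singular value of A^T B)."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(min(s.max(), 1.0))
```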

Fig. 5 shows an example of the display shown on the display unit. The display unit 109 displays information on the candidates judged highly similar in recognition, based on the image data group input by the image input unit 101 and the face areas included in the image data. As shown in the right column of Fig. 5, the display unit 109 lists candidate information up to the fifth rank in descending order of similarity. The left column shows the image data, among those photographed by the camera 150, that include the person's face.

Further, when displaying the identification result, the display unit 109 displays a symbol indicating the detection reliability implied by the facial feature point detection method used. In the present embodiment, when the face appears large in the image data, the third facial feature point detection method can be used, so high accuracy can be expected for the detected feature point coordinates, and a symbol (503) indicating high detection accuracy is displayed. When the face appears at a medium size in the image data, the second facial feature point detection method can be used, and symbols (502, 501) indicating normal detection accuracy are displayed. When the face is small in the image data, the first facial feature point detection method is used, and a minus sign, indicating detection accuracy lower than the other two, is displayed for the detected feature point coordinates.

Fig. 6 shows another example of the display shown on the display unit.

In this example screen, only the face area is displayed for each person's face, and the feature point coordinates are not displayed, making it easier to grasp the face region itself. The display unit 109 overlays on each detected face region a symbol, for example (601) or (602), indicating the detection reliability of the facial feature points. The user can thereby see whether the feature points detected in each face region are reliable.

Fig. 7 is a flowchart showing the sequence of the face recognition processing in the face recognition apparatus.

The image input unit 101 inputs image data from the camera 150 to the face recognition apparatus 100 (S701). Subsequently, the face area detection unit 102 detects the face area from the input image data (S702).

Subsequently, the method selection unit 104 determines whether the size of the detected face area is equal to or greater than the threshold value B (S703). If it is less than B (S703: NO), the method selection unit 104 selects the first facial feature point detection method, and the feature point detection unit 105 detects the facial feature points with it. If the size is at least B but less than the threshold value A, the method selection unit 104 selects the second facial feature point detection method (S707), and the feature point detection unit 105 detects the facial feature points using the selected second method (S708).

If the method selection unit 104 determines that the size of the detected face region is equal to or greater than the threshold value A (S706: YES), it selects the third facial feature point detection method (S709), and the feature point detection unit 105 detects the facial feature points of the detected face region using it (S710).

Then, the feature extraction unit 106 extracts a feature value, i.e., the feature information of the face, based on the detected facial feature points (S711). At this time, the feature extraction unit 106 applies direction correction (three-dimensional), size correction, and brightness correction to the detected feature points, so that differing face sizes, brightnesses, and directions are normalized for each face region of the image data.

Thereafter, the recognition unit 108 performs face recognition based on the feature value extracted by the feature extraction unit 106 and the feature values stored by the person information management unit 107, obtaining the candidates for the person appearing in the image data (S712). The display unit 109 then displays the list of extracted candidates together with the reliability implied by the facial feature point detection method (S713).
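Pulling the S701-S713 flow together, a sketch of the whole loop, assuming the helper functions from the earlier sketches (detect_face_regions, select_method, feature_vector, simple_similarity) are in scope; the face_db mapping of person IDs to feature vectors is an illustrative stand-in for the person information management unit:

```python
import cv2

def recognize_frame(image_bgr, face_db):
    """S701-S713 in one pass: detect faces, pick a detection method by face
    width, extract a feature value, and rank the top-5 database candidates."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)      # S701: input
    results = []
    for (x, y, w, h) in detect_face_regions(image_bgr):     # S702: detect
        method = select_method(w)                           # S703-S709: select
        value = feature_vector(gray[y:y + h, x:x + w])      # S711: extract
        candidates = sorted(face_db.items(),                # S712: recognize
                            key=lambda kv: simple_similarity(value, kv[1]),
                            reverse=True)[:5]
        results.append(((x, y, w, h), method, candidates))  # S713: display list
    return results
```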

The conventional face recognition method described above only recognizes the individual faces included in an image by extracting facial feature points; it provides no specific information about the recognized individual. Meanwhile, as image-capturing devices such as CCTV cameras and vehicle black boxes have become widespread, the amount of footage collected as evidence has grown vast. Summarizing which individuals appear in such volumes of footage requires a great deal of time and labor, and there is a risk of missing important clues.

Korean Patent Publication No. 10-2004-0028210
Korean Patent No. 10-0723417
Korean Patent No. 10-0863882
Korean Patent No. 10-0828411
Korean Patent No. 10-0847142
Korean Patent No. 10-1117549
Korean Patent No. 10-1381439

SUMMARY OF THE INVENTION The present invention has been proposed to solve the problems of the related art described above. It is an object of the present invention to provide a personal identification system using face recognition in digital images that recognizes every recognizable face in one or more videos or photographs stored by a video photographing device such as a CCTV camera, black box, camcorder, or digital camera; compares the recognized feature values against a pre-built face database (for example, a suspect face database, a criminal face database, or a missing persons database) to identify personal information; extracts the EXIF data of the image data to obtain the photographing time and position of each face photograph; provides the timeline of each recognized person and the movement-line information along that timeline; and displays how often a person was detected at a specific place, thereby supporting searches for missing persons, the arrest of suspects, and crime prevention.

In accordance with an aspect of the present invention, there is provided a personal identification system comprising: a control unit; an image input unit controlled by the control unit and receiving image data stored by a video photographing apparatus; a display unit controlled by the control unit; an input unit connected to the control unit for receiving user commands; and a storage unit, a memory device connected to the control unit. The storage unit stores an information DB including a face database, and a face recognition program. When an execution command for the face recognition program is entered through the input unit, one or more face areas are detected in the image data received through the image input unit and stored in the storage unit as picture files; each stored picture file undergoes size correction and angle correction to a photograph of a predetermined size, and its feature values are extracted and stored. The feature values of the feature points of the eyes, nose, mouth, and jaw line of each face region are compared for similarity against the feature values of the feature points of the face photographs in the face database of the information DB to identify the individual. The EXIF data included in the image data is extracted to obtain the photographing time and position information of the identified person's photographs, which are added to the recognized person's timeline stored in the database; the display unit displays the position information on a GIS map according to the photographing time, and the control unit causes the identified person's photograph to be displayed on the display unit together with the photographing time and movement-line information of the image data.

In the above, the face recognition program includes: a file input module for inputting the image data photographed by the image photographing device; a face region detection module for detecting one or more face regions included in the image data input by the file input module and storing the detected face image files in the storage unit; a feature detection module for calculating the feature values of the eyes, nose, mouth, and jaw line of each face region detected by the face region detection module; a recognition module that compares the feature values of the detected face photographs with the feature values of the face database included in the information DB to identify personal information; an EXIF data extraction module for extracting the EXIF data of the image data containing each detected face photograph; and a display module for displaying the detected face photographs, the face recognition result, and the image data photographing time on the display unit.

In the above, the positional information of the image photographing apparatus is extracted from the EXIF data, or is stored in the storage unit together with the image data via the user's input; and the display unit displays, on a GIS map, the position information according to the photographing time of the identified individual.

In the above, the EXIF data extracted from the image data is stored in the storage unit together with the image of the face region.

In the above, the number of times each individual identified by the execution of the face recognition program appeared at each photographed place is rendered as a graph and displayed on the display unit.

According to another aspect of the present invention, there is provided a personal identification method through face recognition in a digital image, executed on a system comprising: a control unit; an image input unit controlled by the control unit and receiving image data stored by a video photographing apparatus; a display unit controlled by the control unit; an input unit connected to the control unit; and a storage unit, a memory device connected to the control unit, wherein the storage unit stores an information DB including a face database and a face recognition program, and the method runs when an execution command for the face recognition program is input through the input unit, the method comprising:
an image data input step of, when the command for executing the face recognition program is input through the input unit, detecting one or more face areas in the image data received through the image input unit and stored in the storage unit, performing size correction and angle correction on each stored picture file, extracting the feature values of the data, and storing them in the storage unit; an EXIF data extraction step of extracting and storing the EXIF data of the image data by the EXIF data extraction module; a face area detection step of detecting, by the face area detection module, the face areas in the image data so that one or more faces included in the image data are recognized; a feature extraction step of extracting, by the feature detection module, the feature values of the eyes, nose, mouth, and jaw line of each detected face region and storing them; a face recognition step of comparing the facial feature values of the image data of the detected face photographs with the feature values of the face database included in the information DB (310) to recognize the face; and a display step in which the photograph taken and the face recognition result are displayed on the display unit (305) by the display module.
In the face recognition step, the feature values of the feature points of the eyes, nose, mouth, and jaw line of each face region are extracted and compared for similarity against the feature values of the feature points of the face photographs in the face database included in the information DB; the individual is identified from the face with high similarity; the EXIF data included in the image data is extracted, and the photographing time and position information of the identified person's photographs are stored in the database and added to the recognized person's timeline; the position information according to the photographing time is displayed on a GIS map along the timeline; and the photographing time and movement-line information of the identified person's image data are displayed on the display unit under the control of the control unit. Thus is provided a personal identification method through face recognition in a digital image.

In the above, each detected face region is stored as a picture file; its feature value is derived and compared against the feature values of the face database included in the information DB to derive the personal identification information; the EXIF data extracted from the image data is stored in the storage unit; and the photographing time is displayed on the display unit.

In the above, when the appearance times of a person detected, through face recognition in video or photographs, at each of several CCTV installation points are confirmed, the frequency output module of the face recognition program displays on the display unit (305) a graph showing how often and when that person appeared in the vicinity, along the timeline.

In the above, the positional information of the image photographing apparatus is extracted from the EXIF data, or is stored in the storage unit together with the image data via the user's input; the display unit (305) displays, on a GIS map, the position information according to the photographing time of the identified individual.

The technique proposed in the present invention recognizes faces appearing in video files stored by a video photographing device such as a CCTV camera, black box, smart phone, or digital camera, and provides personal information about each recognized face to identify who it is. Basically, one or more faces found in an image are extracted through multi-face recognition within a single image; each extracted face is compared against the pre-built face database (for example, a suspect face database, a criminal face database, or a missing persons database) to identify the individual; the EXIF data included in the image data is extracted to obtain the photographing time and position of each face photograph; and the recognized person's positions are displayed on a map.

If the face is not found in the face database, personal information disclosed on the Internet is retrieved through an Internet image search to identify the individual. In addition, for faces found in a video file or real-time image, the times at which the person appears are recorded in timeline form so that the appearance times can be confirmed. Once the appearance times are confirmed, how often and when the person appeared can be analyzed, which can help in crime analysis and prevention.

According to the present invention as described above, faces are recognized in real-time or stored image data from a video storage device such as a CCTV camera, black box, smart phone, or digital camera, so that individual identification information can be confirmed. If the system is connected to a personal face database, it can check in real time, at a CCTV control center or across a large body of image evidence, whether suspects, criminals, or missing persons appear.

The face recognition system proposed in the present invention recognizes faces in real-time video, extracts the corresponding personal identification information, and provides a real-time notification service upon detection of a suspect, criminal, or missing person, thereby providing effective crime prevention and investigative support.

The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. The advantages and features of the present invention, and how to achieve them, will become apparent with reference to the embodiments described below together with the accompanying drawings. Like reference numerals refer to like elements throughout the specification.

Fig. 1 is a block diagram of a conventional face recognition apparatus;
Fig. 2 shows an example of image data input from a camera;
Fig. 3 shows the characteristics of the facial feature point detection methods held by the method holding unit;
Fig. 4 shows an example of feature points detected by the various facial feature point detection methods of the feature point detection unit;
Fig. 5 shows an example of a screen displayed on the display unit;
Fig. 6 shows another example of a screen displayed on the display unit;
Fig. 7 is a flowchart showing the face recognition processing procedure of the conventional face recognition apparatus;
Fig. 8 schematically illustrates a system in which a personal identification program through face recognition in a digital image is executed according to the present invention;
Fig. 9 is a configuration diagram of a face recognition program according to an embodiment of the present invention;
Fig. 10 schematically shows an execution screen of a face recognition program performing personal identification using face recognition according to an embodiment of the present invention;
Fig. 11 schematically shows a screen on which position information is displayed on a GIS map along the timeline of a recognized person's movement line;
Fig. 12 is a flowchart illustrating a personal identification method using face recognition in a digital image according to the present invention;
Fig. 13 is a flowchart illustrating the process by which, when a specific face recognized on the first display screen of the face recognition program is selected, personal information and time-based position information along the timeline are displayed on the second display screen;
Fig. 14 shows an example in which a recognized individual's appearance counts by time period at a specific place are displayed on the display unit.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.

Fig. 8 schematically shows a system in which a personal identification program through face recognition in a digital image is executed according to the present invention, Fig. 9 is a configuration diagram of a face recognition program according to an embodiment of the present invention, and Fig. 10 schematically shows the execution screen of a face recognition program performing personal identification using face recognition according to an embodiment of the present invention. Fig. 12 is a flowchart illustrating a personal identification method using face recognition in a digital image according to the present invention, Fig. 13 is a flowchart illustrating the process by which, when a specific face recognized on the first display screen is selected, personal information and time-based position information along the timeline are displayed on the second display screen, and Fig. 14 shows an example in which a recognized individual's appearance counts by time period at a specific place are displayed on the display unit.

As shown in Fig. 8, the personal identification system 300 using face recognition in a digital image according to the present invention includes an image input unit 303, a control unit 301, an input unit 307, a display unit 305, and a storage unit 309. The storage unit 309 stores a personal identification program 320 through face recognition in a digital image, and an information DB 310 including a face database, i.e., personal information built from face information. The personal identification program 320 is executed by the user's execution command through the input unit 307. The storage unit 309 may also be a server connected to the control unit 301 via a communication network.

The system recognizes all recognizable faces when one or a plurality of videos or photographs stored by a video photographing device such as a CCTV camera, black box, camcorder, or digital camera is input; compares the recognized face information with the pre-built face database (for example, a suspect face database, a criminal face database, or a missing persons database) to extract personal information; extracts the EXIF data included in the image data of the detected face photographs; and displays the number of times a person was detected, through face recognition, at a plurality of points or at a specific place, so as to support searching for missing persons, arresting suspects, and preventing crime.

The personal identification system 300 through face recognition in a digital image of the present invention comprises: an image input unit 303 connected to the control unit 301 and receiving the videos or photographs stored by a video photographing device such as a CCTV camera, black box, or digital camera; a control unit 301 that detects a face region for each face image so that all recognizable faces in the input videos or photographs are recognized, obtains the feature points of the eyes, nose, mouth, and jaw line, calculates the face's feature values, compares them against the face database included in the information DB 310 stored in the storage unit (e.g., a suspect face database, a criminal face database, or a missing persons database) to find photographs of high similarity, extracts the EXIF data contained in the image data to obtain the photographing time and position of each face photograph, and controls the extraction of the timeline of the person (e.g., suspect, criminal, missing person) and the movement-line information along that timeline; a display unit 305 connected to the control unit 301, displaying the face recognition result, obtained by comparing the feature values of the eyes, nose, and mouth against the face database photographs within a set error range, together with the timeline and a map; an input unit 307 connected to the control unit 301 and receiving user commands via, e.g., a keyboard and mouse; and a storage unit 309 storing the videos or photographs saved by the image photographing apparatus, the face recognition program 320, and the information DB serving as the face database.

The storage unit 309 stores the face recognition program 320 and the information DB 310, a face database such as a suspect face database, a criminal face database, or a missing persons database.

The face database may be a suspect face database, a criminal face database, or a missing persons database, as needed.

When the face recognition program 320 is executed, one or more face regions are detected in the input image and stored in the storage unit 309; the feature points of the eyes, nose, mouth, and jaw line of each face region are extracted and compared with the feature values of the face photographs in the face database; the EXIF data included in the input image is extracted to obtain the photographing time and position of the image; and the identified person's photograph, photographing time, and position information are displayed on the display unit 305.

The face recognition program 320 may employ any of many face recognition algorithms: a conventional template matching method, a PCA (Principal Component Analysis) based eigenvector extraction method, an ANN (Artificial Neural Network) method, or the like may be applied.

In the template matching method, the feature points of the eyes, nose, and mouth are located at recognition time, their feature values are obtained, and a fixed area around each feature point is set as a template; matching must then cope with variations in face image size and tilt. PCA using the KL transform is an efficient method for extracting eigenvectors. The ANN method recognizes a face by feeding the pixel image directly into a neural network without extracting feature values from the face image.
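A minimal sketch of the PCA (KL transform) eigenvector extraction mentioned above, operating on row-stacked flattened face images; the choice of k = 20 components is an assumption:

```python
import numpy as np

def fit_eigenfaces(face_matrix, k=20):
    """PCA / KL transform: return the mean face and the top-k eigenvectors
    ('eigenfaces') of the centered face matrix (one flattened face per row)."""
    mean = face_matrix.mean(axis=0)
    centered = face_matrix - mean
    # SVD of the centered data yields the covariance eigenvectors directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face_vec, mean, eigenfaces):
    """Low-dimensional PCA coefficients used as the recognition feature."""
    return eigenfaces @ (face_vec - mean)
```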

Fig. 9 is a configuration diagram of a face recognition program according to an embodiment of the present invention.

The face recognition program 320 includes: a file input module 321 for receiving image data, i.e., videos or photographs, stored by a video photographing device such as a CCTV camera, black box, camcorder, or digital camera; a face area detection module 322 for detecting the face areas in the image data so that one or more faces included in the image data input by the file input module 321 are recognized; a feature detection module 323 for obtaining the feature points of the eyes, nose, mouth, and jaw line of each detected face region and calculating the feature values from them; a recognition module 324 for identifying personal information by finding, from the calculated feature values, the highly similar photographs in the face database (for example, a suspect face database, a criminal face database, or a missing persons database); an EXIF data extraction module 325 for extracting the EXIF data included in the image data of each detected face photograph; and a display module 326 for displaying the image data and the face recognition result.

When one or more face regions are detected in the image data, the face recognition program 320 first stores the data of each face region as a picture file. Each stored picture file undergoes size correction and angle correction into a photograph of a predetermined size; its feature values are extracted and stored, and the similarity between these feature values and the feature values of the feature points (forehead, eyes, nose, mouth, jaw line, etc.) of the face database photographs is computed to confirm the individual identification information. The photographing time and position information obtained from the EXIF data of the image data whose individual identification is confirmed are stored in the database, and the recognized person's timeline is stored in the database and displayed on the display unit 305.
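A sketch of the size and angle correction step, assuming the eye-center coordinates come from the feature detection module; the 112 x 112 reference size is an assumption. The face is rotated so the eye centers are level, then rescaled to the reference size:

```python
import cv2
import numpy as np

def normalize_face(gray_face, left_eye, right_eye, out_size=(112, 112)):
    """Angle correction (level the eye line) followed by size correction
    to a photograph of a predetermined size."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # eye-line tilt
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = gray_face.shape[:2]
    upright = cv2.warpAffine(gray_face, rot, (w, h))
    return cv2.resize(upright, out_size)               # reduce/enlarge to reference
```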

In addition to the personal timeline, the display module 326 displays the position information (①, ②, ③, ④ in Fig. 11) on the GIS map on the display unit 305 in accordance with the timeline.

The display module 326 may also display on the display unit 305 a frequency graph showing how often and when a person appeared at a specific place, based on the photographing times, i.e., the appearance times of the person detected, through face recognition, in the video or photographs from each CCTV installation point.

The EXIF format applies to the JPEG and TIFF 6.0 image files used in digital cameras, and to audio file formats such as WAV.

EXIF data is stored within the image data. It may include the camera model used, the lens, the shooting date, and the shooting conditions, as well as GPS information when the capture device, such as a CCTV camera or black box, has a GPS receiver. To record GPS information, the camera body must be equipped with a device for reading it. Video recorders with a GPS receiver in a CCTV camera or black box can store the GPS information within the digital picture; for example, a camera with a built-in GPS receiver is programmed to store the location data in each picture. In this case, the EXIF data of the captured video or digital photograph includes GPS information, so information such as the detected individual's position over time can be obtained from the EXIF data of the image data stored in the image storage device. If the EXIF data does not include GPS information, the GPS coordinates of the shooting location may be entered through the input unit 307, attached to the image data, and stored in the storage unit 309. In the case of a fixed CCTV camera, since its location is known, the CCTV position can be entered at image-input time and used in place of the EXIF information.
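A minimal sketch of reading the photographing time and GPS information from EXIF with Pillow (recent versions expose the Exif and GPSInfo sub-IFDs via get_ifd); the file name is hypothetical, and frames grabbed from raw CCTV video streams generally carry no EXIF at all, which is why the manual position entry described above is needed:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_exif(path):
    """Return EXIF tags by name; GPSInfo appears only when the capture
    device embedded GPS data."""
    exif = Image.open(path).getexif()
    data = {TAGS.get(t, t): v for t, v in exif.items()}
    data.update({TAGS.get(t, t): v                     # Exif sub-IFD
                 for t, v in exif.get_ifd(0x8769).items()})
    gps = exif.get_ifd(0x8825)                         # GPSInfo sub-IFD
    if gps:
        data["GPSInfo"] = {GPSTAGS.get(t, t): v for t, v in gps.items()}
    return data

info = read_exif("cctv_frame.jpg")                     # hypothetical file
print(info.get("DateTimeOriginal"), info.get("GPSInfo"))
```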

When position information is entered through the input unit 307, it is stored in the storage unit 309; when address information is entered, it is automatically converted into GPS information and registered.

Any of various conventional face recognition algorithms can be applied; the invention is not limited to a particular one.

For example, feature-based face recognition of image data includes a feature value detection method using Haar-like features, as used on smartphones, and a detection method using an MCT (Modified Census Transform) image. In one example, the face contour and eye regions are detected in the camera input of a mobile device using face and eye detectors trained on Haar-like features, and a preprocessing step then detects the pupils: the eye region of interest (ROI) is converted to grayscale; a histogram of the image (pixel value on the x axis, pixel count on the y axis) is computed so that eye thresholds can be extracted under both bright and dark illumination; the eye image is binarized; and histogram equalization is applied as preprocessing. The contours of the eyes, nose, and mouth of the detected face region are then found, and texture features and shape features are extracted.
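A sketch of that pupil-detection preprocessing chain with OpenCV: grayscale conversion of the eye ROI, histogram equalization to compensate for bright and dark illumination, then binarization. The threshold value is illustrative and would in practice be derived from the histogram as the text describes:

```python
import cv2

def preprocess_eye_roi(eye_roi_bgr, thresh=50):
    """Grayscale -> histogram equalization -> binarization of an eye ROI;
    `thresh` is an assumed value, not one fixed by the source."""
    gray = cv2.cvtColor(eye_roi_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)     # normalize bright/dark lighting
    _, binary = cv2.threshold(equalized, thresh, 255, cv2.THRESH_BINARY_INV)
    return equalized, binary
```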

The feature values of the eyes, nose, and mouth of the detected face region are represented by the difference between the sum of the pixels in the white region of a Haar-like feature and the sum of the pixels in the black region. For example, the distances from the detected eye area to both ends of the left and right eyes, and the iris size obtained with the Hough circle transform algorithm, can be used as feature values.
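For the iris-size feature, a sketch using OpenCV's Hough circle transform on a grayscale eye region; all parameter values are illustrative assumptions:

```python
import cv2
import numpy as np

def iris_radius(eye_gray):
    """Estimate the iris radius (a candidate feature value) via the Hough
    circle transform; returns None when no circle is found."""
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=20, param1=80, param2=20,
                               minRadius=3, maxRadius=30)
    if circles is None:
        return None
    x, y, r = np.uint16(np.around(circles))[0][0]
    return int(r)
```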

The face recognition system and method of the present invention recognize the faces shown in a video file or real-time video stored by a video photographing device such as a CCTV camera, black box, smart phone, or digital camera, and identify who each person is by providing personal information about them. Basically, faces are detected in the image through face recognition; the detected face photographs are then compared, by facial feature points, with the pre-built face database (for example, a suspect face database, a criminal face database, or a missing persons database) to identify individuals from the stored information. The EXIF data included with each detected face image is extracted to obtain the photographing time and position of the face photograph, and movement-line information along the timeline is provided.

If a face is not found in the pre-established face database, the system connects to another face information system (e.g., a resident information system) or performs an Internet image search to extract personal information disclosed on the Internet and identify the individual. In addition, for a face appearing in a video file, real-time video, or photograph, the face recognition program records in timeline form when the person was found in the footage, confirms the appearance times, and provides the timeline information.

In the case of criminals, crime scenes are characteristically scouted beforehand and revisited afterwards; once the appearance times are identified, how often and when the person appeared can be analyzed, helping to arrest suspects.

Fig. 10 schematically shows an execution screen (display unit) of a face recognition program performing personal identification using face recognition according to an embodiment of the present invention.

When the file input module 321 is executed, a real-time CCTV video or stored image data is input through the image input unit 303 and stored in the storage unit 309. In the case of real-time footage, the video is received from a video photographing device such as a CCTV camera or black box.

When the face region detection module 322 is executed, face regions are detected in the received or played-back video by recognizing the facial feature points (eyes, nose, mouth, jaw line, etc.). For a moving picture file, faces are recognized in each frame of the video while the file is played back. If several faces appear in one image, each face region is detected separately.

When the feature detection module 323 is executed, the facial feature points are detected from the data of each detected face region, and their similarity to the feature points stored in the face database of the information DB 310 (e.g., a suspect face database, a criminal face database, or a missing persons database) is compared, yielding the highly similar personal information stored in the face database for the faces in the real-time video or moving picture file. The results may be displayed on the display unit 305 in different colors according to the similarity of the feature values: for example, red for a similarity of 90% or more, blue for 80% to 90%, green for 70% to 80%, and a distinct color for 60% to 70%.
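A trivial sketch of that similarity-to-color mapping; the color of the 60-70% band is not specified in the source, so the placeholder below is an assumption:

```python
def similarity_color(similarity):
    """Map a similarity in [0.0, 1.0] to the display colors given above."""
    if similarity >= 0.9:
        return "red"
    if similarity >= 0.8:
        return "blue"
    if similarity >= 0.7:
        return "green"
    if similarity >= 0.6:
        return "gray"   # placeholder: the source does not name this band's color
    return None         # below 60%: not highlighted
```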

If the recognized face data is not in the information DB 310, information is additionally retrieved through Internet image search, face tagging, and face search services. Faces not found on the Internet are assigned arbitrary IDs, grouped, and stored in the storage unit.

In Fig. 10, reference numeral 333 denotes the area in which the image data, i.e., the transmitted real-time video or the moving picture file being played, is output and the actual footage is displayed.

Reference numeral 332 denotes the area in which the timeline is displayed; the timeline shows the times at which the recognized individual displayed in area 332 was photographed. The same person may appear more than once in the real-time or played-back video. For example, if person A is assumed to have reappeared 30 minutes after scouting a crime scene, the timeline shows that the person was recognized in the image data both 30 minutes before the incident and at the time of the incident. Timelines for multiple detected people can be checked at the same time.

Reference numeral 335 denotes the area in which the face databases included in the information DB 310 are displayed. The user can set the camera to receive live video or set a video to play, and select the face database to use at the investigative agency or control center. For example, an investigative agency might select and run a suspect database, while another agency might choose a missing persons database. Selecting multiple databases is also possible, but selection can be restricted to protect privacy and prevent abuse of the face databases.

Reference numeral 337 denotes the area in which the personal identification information retrieved when the face output in the area 333 is found in the selected face database is displayed. Along with the personal identification information shown in the area 337, the entire timeline in which the detected person appears can be displayed and confirmed in the area 339.

FIG. 11 schematically shows a screen on which the position information of a person whose face is recognized is displayed on a GIS map according to the timeline.

When at least one face portion is detected in an image input from the image storage device, each face region is first stored as a picture file. Each stored file is subjected to size correction and angle correction to a photograph of a predetermined size, and the similarity of the feature values of the feature points (forehead, eyes, nose, mouth, jaw line, etc.) against the faces of the database is retrieved to derive individual identification information. The photographing time and position information extracted from the EXIF data of the face photographs for which the individual identification information is confirmed are stored in the database, the time and position information of the recognized person is updated in the database, and the position information is displayed together along the timeline, so that the moving line of the detected individual can be traced. In addition to the position information according to the timeline, the corresponding image data of the recognized individual may be displayed in a divided partial area of the display unit 305, or, from the entire video data acquired by the photographing devices on the timeline, only the sections in which the individual appears may be scrapped and reproduced. The movement time is also displayed on the display unit 305.
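A sketch of the EXIF extraction behind this step, using Pillow and assuming a recent Pillow version (with `Exif.get_ifd`) and image files whose EXIF actually carries date and GPS tags; the tag IDs are the standard EXIF ones.

```python
from PIL import Image

EXIF_IFD, GPS_IFD = 0x8769, 0x8825          # standard EXIF sub-IFD pointers
DATETIME_ORIGINAL, DATETIME = 0x9003, 0x0132

def _to_degrees(dms, ref) -> float:
    """Convert (degrees, minutes, seconds) rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def extract_time_and_position(path: str):
    exif = Image.open(path).getexif()
    taken_at = exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL) or exif.get(DATETIME)
    gps = exif.get_ifd(GPS_IFD)
    position = None
    if gps:
        lat = _to_degrees(gps[2], gps[1])   # GPSLatitude / GPSLatitudeRef
        lon = _to_degrees(gps[4], gps[3])   # GPSLongitude / GPSLongitudeRef
        position = (lat, lon)
    return taken_at, position               # e.g. ("2015:07:29 14:02:11", (37.56, 126.97))
```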

In the face recognition program, when at least one face portion is detected in one image, each face region is first stored as a picture file. The similarity of the feature values of the feature points (forehead, eyes, nose, mouth, jaw line, etc.) of the faces is retrieved from the stored files, individual identification information is derived for faces with high similarity, the time and position information extracted from the EXIF data is stored in the database, and the position information is displayed on the GIS map according to the timeline ((1), (2), (3) and (4) in FIG. 11).
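As an illustration of this GIS display, the following sketch uses folium as a stand-in for the unspecified GIS component; the coordinates and times are invented for the example.

```python
import folium

# (timeline order, latitude, longitude, photographing time) - illustrative data
sightings = [
    (1, 37.5665, 126.9780, "2015-07-29 13:40"),
    (2, 37.5651, 126.9895, "2015-07-29 14:05"),
    (3, 37.5700, 126.9921, "2015-07-29 14:32"),
    (4, 37.5740, 126.9769, "2015-07-29 15:10"),
]

gis_map = folium.Map(location=[37.5665, 126.9780], zoom_start=14)
for order, lat, lon, when in sightings:
    # numbered markers corresponding to (1)-(4) on the map of FIG. 11
    folium.Marker([lat, lon], popup=f"({order}) {when}").add_to(gis_map)

# connect the markers in timeline order to trace the moving line
folium.PolyLine([(s[1], s[2]) for s in sightings]).add_to(gis_map)
gis_map.save("recognized_person_timeline.html")
```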

Even when a face with the highest similarity is found during face recognition, if the similarity is below a threshold (for example, a threshold of 35% set through experimental results), an image search using services such as Facebook or Google is performed, and the information displayed on the matching web pages is combined to collect and extract the personal identification information.

FIG. 12 is a flowchart illustrating a personal identification method using face recognition in a digital image according to the present invention.

A personal identification method using face recognition in a digital image according to the present invention comprises: an image data input step of inputting, through the file input module 321, image data such as a video image or a photograph stored by an image photographing device such as a CCTV, a black box, a camcorder, a digital camera, or a smart phone; an EXIF data extraction step of extracting, by the EXIF data extraction module 325, the EXIF data of the image data containing the detected face photograph and storing it in the storage unit; a face region detection step in which the face region detection module 322 detects each face region in the image or photograph so that one or more faces included in the image input from the file input module 321 are all recognized; a feature point detection step in which the feature point detection module 323 obtains the feature points of the eyes, nose, mouth, and jaw line of the face region and stores their position and coordinate feature values; a face recognition step in which the recognition module 324 applies, to the image data of the one or more detected face photographs, a geometric process of size correction (reduction or enlargement of the face image) and angle correction (movement and rotation of the image so that the center points of the pupils of the two eyes lie on a horizontal line), and compares the personal information of the recognized face image of standard size with a face database (for example, a suspect face database, a criminal face database, or a missing persons database) to identify the individual; a display step of displaying on the screen, by the display module 326, the photographed picture and the face recognition result, namely the similarity (reliability) and the face photographs of high similarity; and a storing step of storing the face recognition result for each piece of image data.
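The size and angle correction named in the face recognition step might look like the following sketch, assuming eye-center coordinates supplied by the feature point detection step; the 112x112 standard size is an assumption, since the disclosure says only "a predetermined size".

```python
import math
import cv2

STANDARD_SIZE = (112, 112)  # assumed; the disclosure says "a predetermined size"

def align_face(face_img, left_eye, right_eye):
    """Angle correction (level the pupil centers), then size correction."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)   # rotate about the mid-eye point
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = face_img.shape[:2]
    leveled = cv2.warpAffine(face_img, rotation, (w, h))
    return cv2.resize(leveled, STANDARD_SIZE)     # reduction or enlargement
```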

When at least one face region is detected in the image data, each face region is stored as a picture file (jpg, tiff, etc.), the facial feature points are detected in the face region, and personal identification information is retrieved from the information DB 310 according to the similarity of the feature values of the feature points (forehead, eyes, nose, mouth, jaw line, etc.). The time and position information extracted from the EXIF data of the face photographs for which the personal identification information is confirmed are stored in the information DB 310, and the display module 326 displays the position information ((1), (2), (3), (4)) on the GIS map together with the timeline of the individual whose face is recognized.

In the above method, the appearance time of a person detected at each CCTV installation point in various places is confirmed in the image data along the timeline of the face recognition program, and, as shown in FIG. 14, the display module 326 displays on the display unit 305 a graph of the number of times the corresponding individual appears at a specific place over time. When displayed on the display unit 305, different colors may also be used according to the number of appearances; for example, higher frequencies are displayed in red tones and lower frequencies in blue tones.
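A sketch of such a frequency graph with matplotlib; the counts and the high/low threshold are illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Appearance counts of one individual at a single CCTV point - invented data.
hours = ["12:00", "13:00", "14:00", "15:00", "16:00"]
counts = [1, 4, 2, 6, 1]
threshold = 3  # assumed cut-off between "high" (red) and "low" (blue) frequency
colors = ["red" if c >= threshold else "blue" for c in counts]

plt.bar(hours, counts, color=colors)
plt.xlabel("Time")
plt.ylabel("Number of appearances")
plt.title("Appearance frequency at a CCTV point (illustrative)")
plt.show()
```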

The present invention can be implemented as a program executed on a computer or a smart device, the program providing: an image data input function of receiving image data such as a video image or a photograph stored by a video photographing device such as a CCTV, a black box, a camcorder, a digital camera, or a smart phone; an EXIF data extraction function of extracting, by the EXIF data extraction module, the EXIF data of the image data containing the detected face photograph and storing it in the storage unit; a face region detection function in which, when one or more faces appear in an image or photograph input from the file input module, the face region detection module detects the face region in each piece of image data so that all faces are recognized; a feature detection function of storing the feature points of the eyes, nose, mouth, and jaw line of the face region detected by the feature detection module; a face recognition function of identifying personal information for the recognized face image of standard size through a geometric process of size correction (reduction or enlargement of the face image) and angle correction (movement and rotation of the image so that the center points of the pupils of the two eyes lie on a horizontal line) followed by a feature-value similarity search against a face database (a suspect face database, a criminal face database, a missing persons database, or the like); a display function of displaying on the screen, by the display module, the photographed picture and the face recognition result, namely the similarity (reliability) and the face photographs of high similarity; a storage function of storing the face recognition result for each piece of image data; a moving line display function in which, when at least one face region is detected in one image (a frame constituting an image or a moving picture), each face region is first stored as a picture file (jpg, tiff, etc.), each stored picture file is subjected to size correction and angle correction to a photograph of a predetermined size, the similarity of the feature values of the feature points (forehead, eyes, nose, mouth, jaw line, etc.) is retrieved to derive personal identification information, the time and position information extracted from the EXIF data of the face photographs for which the personal identification information is confirmed are stored in the database, and the moving line information of the recognized person is displayed on the GIS map according to the timeline ((1), (2), (3) and (4) in FIG. 11); and a frequency output function according to the timeline in which, when the appearance time of the person detected at each CCTV installation point installed in various places is confirmed by face recognition along the timeline, the frequency of appearance is shown in a graph.

FIG. 13 illustrates a process of displaying personal information and time information along the timeline on a second display screen when a specific face recognized on the first display screen is selected in the face recognition program, and FIG. 14 shows a graph of the number of times each individual's face is recognized.

Criminals characteristically scout a crime scene in advance and reappear there afterwards, so once the times of appearance are identified, how often and when a person appeared can be analyzed, which helps in apprehending suspects.

The technique proposed in the present invention recognizes faces appearing in a moving picture file or real-time image stored by a video photographing device such as a CCTV, a black box, a smart phone, a digital camera, or a camcorder, and identifies each face by providing personal information about the recognized face. Basically, the face images recognized in the image are extracted through multi-face recognition. Each extracted face is compared, by facial feature points, with an existing face database (e.g., a suspect face database, a criminal face database, a missing persons database, etc.), and if it is determined to be the same person, the individual is identified through the stored information.

If a face is not found in the constructed face database, the personal information disclosed on the Internet is extracted through an Internet image search to identify the individual. In addition, each appearance of a face in a moving picture file or real-time image is recorded in the form of a timeline so that the time of appearance can be confirmed. Once the times of appearance are confirmed, how often and when the person appears can be analyzed, which helps in crime analysis and prevention.

According to the present invention, faces are recognized in real-time or stored video from a video storage device such as a CCTV, a black box, a smart phone, or a digital camera. If the system is connected to a face database of suspects, criminals, or missing persons, it can be confirmed in real time whether suspects, criminals, missing persons, and the like appear in a CCTV control center feed or in a large volume of image evidence.

The face recognition system proposed in the present invention recognizes multiple faces in real-time images and provides a real-time notification service to a manager when suspects, criminals, missing persons, and the like are detected, thereby also contributing to crime prevention.

As described above, the method of the present invention can be implemented as a program and recorded in a computer-readable form on a recording medium (CD-ROM, RAM, ROM, memory card, hard disk, magneto-optical disk, storage device, etc.).

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood that these embodiments are given by way of illustration and example only and are not to be construed as limiting the present invention, and that the present invention may be variously modified or altered.

301: control unit 303: image input unit
305: display unit 307: input unit
309: storage unit 320: face recognition program
310: Information DB 321: File Input Module
322: face area detection module 323: feature detection module
324: recognition module 325: EXIF data extraction module
326: Display module

Claims (9)

1. A personal identification system using face recognition in a digital image, comprising:
a control unit (301); an image input unit (303) controlled by the control unit (301) and receiving image data stored by an image photographing apparatus; a display unit (305) controlled by the control unit (301); an input unit (307) connected to the control unit (301) and receiving commands from a user; and a storage unit (309) which is a memory device connected to the control unit (301),
wherein an information DB (310) including a face database and a face recognition program (320) are stored in the storage unit (309);
the storage unit (309) stores, when an execution command of the face recognition program (320) is input through the input unit (307), at least one face region detected from the image data input through the image input unit (303) as picture files, each stored picture file being subjected to size correction and angle correction to a photograph of a predetermined size and the feature values of the corresponding data being extracted;
the display unit (305) extracts the feature values of the feature points of the eyes, nose, mouth, and jaw line of each face region, compares them for similarity with the feature values of the feature points of the face photographs of the face database included in the information DB (310) to identify the individual, extracts the EXIF data included in the image data, stores the photographing time and position information of the image data in the database, and displays, in addition to the timeline of the recognized individual, the position information on a GIS map according to the photographing time; and
the control unit (301) causes the identified personal photograph to be displayed on the display unit (305) together with the photographing time and moving line information of the image data.
2. The personal identification system according to claim 1, wherein the face recognition program (320) comprises: a file input module (321) for inputting image data photographed by the image photographing apparatus; a face region detection module (322) for detecting one or more face regions included in the image data input by the file input module (321) and storing the detected face image files in the storage unit (309); a feature detection module (323) for calculating feature values of the eyes, nose, mouth, and jaw line of the face region detected by the face region detection module (322); a recognition module (324) for comparing the feature values of the detected one or more face photographs with the feature values of the face database included in the information DB (310) to identify the personal information; an EXIF data extraction module (325) for extracting the EXIF data of the image data included in the detected face photograph; and a display module (326) for displaying the detected face photograph, the face recognition result, and the image data photographing time on the display unit (305).
3. The personal identification system according to claim 1, wherein the position information of the image photographing apparatus is extracted from the EXIF data or stored in the storage unit together with the image data by a user's input, and the display unit (305) displays the position information according to the photographing time of the identified individual on the GIS map.
4. The personal identification system according to claim 1, wherein the EXIF data extracted from the image data is stored in the storage unit (309) together with the image of the face region.
5. The personal identification system according to claim 1, wherein the number of times the individual identified by execution of the face recognition program (320) appears at a photographed place is displayed as a graph on the display unit (305).
6. A method for identifying a person through face recognition in a digital image, executed in a system comprising a control unit (301), an image input unit (303) controlled by the control unit (301) and receiving image data stored by an image photographing apparatus, a display unit (305) controlled by the control unit (301), an input unit (307) connected to the control unit (301) and receiving commands from a user, and a storage unit (309) which is a memory device connected to the control unit (301) and in which an information DB (310) including a face database and a face recognition program (320) are stored, the method being performed when an execution command of the face recognition program (320) is input through the input unit (307), the method comprising:
a storing step of, when the execution command of the face recognition program (320) is input through the input unit (307), detecting at least one face region from the image data input through the image input unit (303), storing each face region as a picture file, subjecting each stored picture file to size correction and angle correction to a photograph of a predetermined size, and extracting and storing the feature values of the data in the storage unit (309);
an EXIF data extraction step of extracting the EXIF data of the image data by the EXIF data extraction module and storing the extracted EXIF data;
a face region detection step of detecting face regions in the image data by the face region detection module so that one or more faces included in the image data are recognized;
a feature detection step of extracting and storing, by the feature detection module, the feature values of the eyes, nose, mouth, and jaw line of each detected face region;
a face recognition step of comparing the face feature values of the image data of the one or more detected face photographs with the feature values of the face database included in the information DB (310) to recognize the face; and
a display step of displaying the photographed picture and the face recognition result on the display unit (305) by the display module,
wherein, in the face recognition step, the feature values of the feature points of the eyes, nose, mouth, and jaw line of each face region are extracted and compared for similarity with the feature values of the feature points of the face photographs of the face database included in the information DB (310) to identify the individual; the EXIF data included in the image data is extracted; the photographing time and position information of the identified personal photograph are stored in the database; the position information according to the photographing time is displayed on the GIS map along the timeline in addition to the moving line information of the recognized individual; and the photographing time and moving line information of the identified personal photograph are displayed on the display unit (305).
7. The method according to claim 6, wherein the detected face region is stored as a picture file, its feature values are derived, personal identification information is derived by comparison with the feature values of the face database included in the information DB (310), the photographing time is extracted from the EXIF data and stored in the storage unit (309), and the photographing time is displayed on the display unit (305).
8. The method according to claim 7, wherein, when the appearance time of a person detected at a photographing point by face recognition in one image or photograph is confirmed along the timeline in the face recognition program, the number of appearances and the appearance frequency of the person are displayed as a graph on the display unit (305).
9. The method according to claim 7, wherein the position information of the image photographing apparatus is extracted from the EXIF data or stored in the storage unit together with the image data by a user's input, and the display unit (305) displays the position information according to the photographing time of the identified individual on the GIS map.
KR1020150107542A 2015-07-29 2015-07-29 Personal Identification System And Method By Face Recognition In Digital Image KR101781358B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150107542A KR101781358B1 (en) 2015-07-29 2015-07-29 Personal Identification System And Method By Face Recognition In Digital Image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150107542A KR101781358B1 (en) 2015-07-29 2015-07-29 Personal Identification System And Method By Face Recognition In Digital Image

Publications (2)

Publication Number Publication Date
KR20170015639A KR20170015639A (en) 2017-02-09
KR101781358B1 (en) 2017-09-26

Family

ID=58154370

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150107542A KR101781358B1 (en) 2015-07-29 2015-07-29 Personal Identification System And Method By Face Recognition In Digital Image

Country Status (1)

Country Link
KR (1) KR101781358B1 (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6127219B2 (en) * 2013-11-30 2017-05-10 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Method and system for extracting facial features from facial image data
CN107844758A (en) * 2017-10-24 2018-03-27 量子云未来(北京)信息科技有限公司 Intelligence pre- film examination method, computer equipment and readable storage medium storing program for executing
WO2019095221A1 (en) * 2017-11-16 2019-05-23 深圳前海达闼云端智能科技有限公司 Method for searching for person, apparatus, terminal and cloud server
KR102039277B1 (en) 2018-12-07 2019-10-31 장승현 Pedestrian face recognition system and method thereof
CN109741605A (en) * 2018-12-25 2019-05-10 深圳市天彦通信股份有限公司 Vehicle monitoring method and relevant apparatus
CN110633627A (en) * 2019-08-01 2019-12-31 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for positioning object in video
CN111144215B (en) * 2019-11-27 2023-11-24 北京迈格威科技有限公司 Image processing method, device, electronic equipment and storage medium
CN112036241A (en) * 2020-07-27 2020-12-04 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113887527B (en) * 2021-11-04 2022-08-26 北京智慧眼信息技术有限公司 Face image processing method and device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040028210A (en) 2002-09-30 2004-04-03 주식회사 드림미르 Apparatus for Identifying a Person through Recognizing a Face and Method thereof
KR100723417B1 (en) 2005-12-23 2007-05-30 삼성전자주식회사 Apparatuses and methods for recognizing face, and apparatus and method for extracting face from multiple face images
KR100863882B1 (en) 2006-09-27 2008-10-15 김종헌 Method for preserving of a public peace by means of a face recognition, and a face recognition apparatus
KR100828411B1 (en) 2006-10-20 2008-05-09 연세대학교 산학협력단 Global feature extraction method for 3D face recognition
KR100847142B1 (en) 2006-11-30 2008-07-18 한국전자통신연구원 Preprocessing method for face recognition, face recognition method and apparatus using the same
KR101117549B1 (en) 2010-03-31 2012-03-07 경북대학교 산학협력단 Face recognition system and method thereof
KR101381439B1 (en) 2011-09-15 2014-04-04 가부시끼가이샤 도시바 Face recognition apparatus, and face recognition method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200050139A (en) 2018-11-01 2020-05-11 전형고 Apparatus and method for recognizing face
KR20210050426A (en) * 2019-10-28 2021-05-07 주식회사 칸트 Artificial intelligence digital signage system
KR102251783B1 (en) * 2019-10-28 2021-05-13 주식회사 칸트 Artificial intelligence digital signage system

Also Published As

Publication number Publication date
KR20170015639A (en) 2017-02-09


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
GRNT Written decision to grant