Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description, claims, and accompanying drawings of this application are used to distinguish between different objects, not to describe a particular order. Furthermore, the terms "include" and "have," and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may also include other steps or elements that are not listed or that are inherent to such a process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the face acquisition method and the related products provided in the embodiments of the present application, a system architecture to which the face acquisition method of the embodiments of the present application is applicable is described below. Referring to Fig. 1A, Fig. 1A is a schematic diagram of a system architecture of a face acquisition method according to an embodiment of the present application. As shown in Fig. 1A, the system architecture may include one or more servers and a plurality of electronic devices, wherein:
the server may include, but is not limited to, a background server, a component server, a face acquisition system server, or a face acquisition software server, and the server may communicate with a plurality of electronic devices through the Internet. The server may send face acquisition results to the electronic devices.
The electronic device in the embodiments of the present application may include, but is not limited to, any handheld electronic product based on an intelligent operating system that can perform human-computer interaction with a user through an input device such as a keyboard, a virtual keyboard, a touch pad, a touch screen, or a voice control device, for example a smart phone, a tablet computer, or a personal computer. The intelligent operating system includes, but is not limited to, any operating system that enriches device functionality by providing mobile applications to a mobile device, such as Android, iOS, or Windows Phone.
The face acquisition device or electronic device described in the embodiments of the present application may include a smart phone (such as an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), or a wearable device. These examples are illustrative rather than exhaustive. The face acquisition device may also be a server, and at least one Wi-Fi probe may be installed in the face acquisition device.
It should be noted that the system architecture of the face acquisition method provided by the present application is not limited to that shown in Fig. 1A.
Fig. 1B is a schematic flowchart of an embodiment of a face acquisition method according to an embodiment of the present application. The face acquisition method described in this embodiment includes the following steps:
101. Wi-Fi scanning is carried out in a specified range by adopting a Wi-Fi probe technology at a first moment to obtain a first Wi-Fi MAC address list, wherein the first Wi-Fi MAC address list comprises at least one MAC address.
The specified range may be specified by a user or may default to a particular spatial range. The first moment may be any moment within a short period of time, and may likewise be specified by the user or defaulted by the system. The face acquisition device performs Wi-Fi scanning on the specified range by using Wi-Fi probe technology at the first moment; if at least one other device has its Wi-Fi function turned on, the Wi-Fi MAC address of that device can be obtained. The MAC addresses of the scanned devices thus form a first Wi-Fi MAC address list, which includes at least one MAC address.
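The scan in step 101 can be pictured, under assumptions, as deduplicating the source MAC addresses of captured probe-request frames. The function name `build_mac_address_list` and the hard-coded `frames` below are illustrative only; a real capture would come from a monitor-mode Wi-Fi interface, not a fixed list.

```python
# Minimal sketch of step 101: collect the unique MAC addresses seen during a
# scan window into a "Wi-Fi MAC address list". The probe frames are stubbed
# here as (mac, rssi) tuples for illustration.

def build_mac_address_list(probe_frames):
    """Deduplicate the source MACs of captured probe-request frames,
    preserving first-seen order."""
    seen = []
    for mac, _rssi in probe_frames:
        if mac not in seen:
            seen.append(mac)
    return seen

# Stubbed capture: two devices, one of them probing twice.
frames = [("aa:bb:cc:00:00:01", -40),
          ("aa:bb:cc:00:00:02", -55),
          ("aa:bb:cc:00:00:01", -42)]
first_list = build_mac_address_list(frames)
```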
102. Carrying out face acquisition on the specified range to obtain a first face image set, wherein the first face image set comprises at least one face image.
The face acquisition device acquires the faces of persons within the specified range through a camera, thereby obtaining a first face image set containing face images of the persons within the specified range. The specified range may be specified by a user or may default to a specific range (for example, the shooting range of the camera).
Optionally, in the step 102, the performing face acquisition on the specified range to obtain the first face image set may include the following steps:
21. shooting the specified range to obtain a target image;
22. carrying out image segmentation on the target image to obtain P person images, wherein P is a positive integer;
23. carrying out face recognition on the P person images to obtain Q face images and P-Q non-face images, wherein Q is a positive integer not greater than P;
24. carrying out target tracking and face recognition on the P-Q non-face images to obtain P-Q face images;
25. taking the Q face images and the P-Q face images as the first face image set.
The camera of the face acquisition device may shoot the specified range to obtain one or more target images of the specified range. These target images may contain face images, person images, or scene images, so image segmentation may be performed on them to obtain P person images, where P is a positive integer. Specifically, the person foreground(s) of each target image may be framed. If no person foreground exists in a target image, that target image may be rejected directly. If a person foreground exists, the person foreground and the background may be modeled separately: each pixel in the target image is connected to a foreground or background node, and if two adjacent nodes do not belong to the same foreground or background, the edge between them is cut. The foreground and background are thus separated, yielding P person images.
After the face acquisition device obtains the P person images, some of them may contain recognizable faces. The face acquisition device may perform face recognition on the P person images to obtain Q face images and P-Q non-face images, where Q is a positive integer not greater than P. The face acquisition device may then continue to perform target tracking on the P-Q non-face images based on a target tracking algorithm; once a face becomes visible during tracking, face recognition may be applied to obtain the face images corresponding to those P-Q persons. The Q face images and the P-Q face images together form the first face image set, so that the faces of all persons within the specified range are captured as completely as possible.
Wherein, the target tracking algorithm may comprise at least one of the following: a Tracking-by-Detection Tracking algorithm, a Tracking-Learning-Detection Tracking algorithm, a Struck algorithm, etc., and is not limited thereto.
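The bookkeeping of sub-steps 21-25 (P person images in, Q + (P-Q) face images out) can be sketched as follows; `has_face` and `track_until_face` are hypothetical stand-ins for a real face detector and tracker, introduced here only for illustration.

```python
# Sketch of sub-steps 21-25: from P segmented person images, face recognition
# yields Q face images directly; the remaining P-Q person images are tracked
# until a face can be recognized, and the two groups are merged into the
# first face image set.

def collect_first_face_set(person_images, has_face, track_until_face):
    with_face = [img for img in person_images if has_face(img)]        # Q images
    without_face = [img for img in person_images if not has_face(img)]  # P-Q images
    recovered = [track_until_face(img) for img in without_face]
    return with_face + recovered  # Q + (P-Q) = P face images in total

# Toy stand-ins: an "image" is a dict; tracking eventually "finds" the face.
persons = [{"id": 1, "face": True}, {"id": 2, "face": False}, {"id": 3, "face": True}]
face_set = collect_first_face_set(
    persons,
    has_face=lambda img: img["face"],
    track_until_face=lambda img: {**img, "face": True},
)
```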
Optionally, each MAC address corresponds to an electronic device, and the step 102 of performing face acquisition on the specified range to obtain a first face image set may include the following steps:
26. determining at least one camera within the specified range;
27. determining, from the first Wi-Fi MAC address list, a position corresponding to each MAC address in the list to obtain a plurality of positions;
28. controlling the at least one camera to focus and shoot the plurality of positions to obtain a plurality of images;
29. and carrying out image segmentation on the plurality of images to obtain a plurality of face images, and carrying out duplication removal processing on the plurality of face images to obtain the first face image set.
Because one MAC address corresponds to one electronic device, and the electronic device may be carried by a user, the first Wi-Fi MAC address list includes a plurality of MAC addresses. When the Wi-Fi probe technology scans the specified range, the position or signal strength of the electronic device behind each MAC address can be recorded, so each electronic device can be located, yielding a plurality of positions. The at least one camera can then be controlled to focus on and shoot the plurality of positions to obtain a plurality of images; specifically, the at least one camera can be controlled to focus on and shoot the plurality of positions at preset time intervals within a preset time period. The plurality of images are segmented to obtain a plurality of face images, the face images are matched in pairs, and repeated face images are removed to obtain the first face image set, so that face images can be collected clearly and accurately.
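The text leaves open how a recorded signal strength becomes a position. One common approach, assumed here purely for illustration, is to estimate distance from RSSI with the log-distance path-loss model; the reference power at 1 m and the path-loss exponent below are assumed values, not parameters from the original method.

```python
# Illustrative sketch for sub-step 27: turn a recorded RSSI into a distance
# estimate using the log-distance path-loss model
#   d = 10 ** ((rssi_at_1m - rssi) / (10 * n)).

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Estimate distance in metres from a received signal strength (dBm)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

d_near = rssi_to_distance(-40.0)  # at the reference power: 1 m
d_far = rssi_to_distance(-60.0)   # 20 dB weaker: 10 m when n = 2
```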
103. Carrying out Wi-Fi scanning in the specified range by adopting the Wi-Fi probe technology at a second moment to obtain a second Wi-Fi MAC address list, wherein the second Wi-Fi MAC address list comprises at least one MAC address.
The specified range may be specified by the user or default to a specific range, and may be the same as in step 101. Wi-Fi scanning can be carried out within the specified range by using the Wi-Fi probe technology at the second moment; if at least one other device has its Wi-Fi function turned on, the Wi-Fi MAC address of that device can be obtained, yielding a second Wi-Fi MAC address list that includes at least one MAC address. The second moment may be a preset time, later than the first moment, set by the user. The MAC addresses in the second Wi-Fi MAC address list may or may not be identical to those in the first Wi-Fi MAC address list.
104. Comparing the second Wi-Fi MAC address list with the first Wi-Fi MAC address list.
The second Wi-Fi MAC address list may or may not be identical to the first Wi-Fi MAC address list. For example, the second list may contain MAC addresses not present in the first (an incremental change), it may lack MAC addresses that were present in the first (a decremental change), or the two lists may be identical (no change). Comparing the second Wi-Fi MAC address list with the first therefore yields a comparison result of incremental change, decremental change, or no change. Because the MAC address of each device is unique, the increase or decrease of persons within the specified range can be judged from the change in MAC addresses.
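A minimal sketch of the comparison in step 104: the change is derived from set differences between the two lists. The function name and the "increment wins" tie-break when both additions and removals occur are choices made here for illustration, not mandated by the text.

```python
# Step 104 as set arithmetic: MACs present only in the second list are
# arrivals, MACs present only in the first are departures, and empty
# differences in both directions mean no change.

def compare_mac_lists(first, second):
    added = sorted(set(second) - set(first))
    removed = sorted(set(first) - set(second))
    if added:                      # incremental change takes priority here
        return "increment", added
    if removed:
        return "decrement", removed
    return "no_change", []

change, macs = compare_mac_lists(
    ["aa:01", "aa:02"],
    ["aa:01", "aa:02", "aa:03"],
)
```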
105. If the comparison result is that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, carrying out face acquisition on the specified range again to obtain a second face image set, and carrying out deduplication processing on the second face image set according to the first face image set to obtain at least one target face image, wherein the target face image is not matched with any face image in the first face image set.
The preset incremental change may be set by a user or default to a system value, for example, the addition of at least one new MAC address. Specifically, the preset incremental change may be that the second Wi-Fi MAC address list contains a additional MAC addresses compared with the first Wi-Fi MAC address list, where a is a positive integer, or that a MAC address not present in the first Wi-Fi MAC address list appears in the second. After comparing the two lists, the face acquisition device may perform face acquisition again on the persons in the specified range to obtain a second face image set, and may perform deduplication processing between the second face image set and the first face image set by using a deduplication algorithm, obtaining the face images of at least one person in the specified range that are not matched with any face image in the first face image set. Face images matched between the first and second face image sets are thus rejected, which improves the efficiency of deduplication during face acquisition.
Wherein, the algorithm of the deduplication processing may include one of the following: an OpenCV image processing algorithm, a fast deduplication algorithm based on motion matching, an algorithm for performing comparison deduplication based on color histogram and LBP histogram features, and the like, which are not limited herein.
Optionally, in step 105, the performing a deduplication process on the second facial image set according to the first facial image set to obtain at least one target facial image may include the following steps:
51. matching each face image in the first face image set with each face image in the second face image set to obtain a plurality of matching values;
52. selecting a matching value larger than a preset threshold value from the multiple matching values to obtain at least one target matching value;
53. determining a face image corresponding to the at least one target matching value;
54. and removing the face image corresponding to the at least one target matching value from the second face image set to obtain the at least one target face image.
The face acquisition device may match each face image in the first face image set against each face image in the second face image set. Specifically, the correspondence between face images may be determined using a similarity measure: for example, the gray matrix of a fixed-size window of each face image in the second face image set is searched and matched against all window gray arrays of each face image in the first face image set, and the probability value of the best-matching window in each image is taken as a matching value, thereby yielding a plurality of matching values.
The preset threshold may be set by the user or default to a system value. After the plurality of matching values are obtained, each matching value may be compared with the preset threshold; if a matching value is greater than the preset threshold, it is taken as a target matching value, yielding at least one target matching value. The face images corresponding to the target matching values are then removed from the second face image set, so that face images matched between the first and second face image sets are rejected and the efficiency of deduplication during face acquisition is improved.
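Sub-steps 51-54 can be condensed into one illustrative filter: a face in the second set is kept as a target face only if none of its matching values against the first set exceeds the preset threshold. The `deduplicate` function and the toy label-equality `match` below are hypothetical sketches, not the method's actual matcher.

```python
# Sub-steps 51-54 in one place: any pairwise matching value above the preset
# threshold marks a face already present in the first set; such faces are
# removed from the second set, leaving only the new target faces.

def deduplicate(first_set, second_set, match, threshold):
    """Keep the faces of the second set that match nothing in the first."""
    return [b for b in second_set
            if all(match(a, b) <= threshold for a in first_set)]

# Toy matcher: faces are labels, and a label collision is a perfect match.
match = lambda a, b: 1.0 if a == b else 0.0
targets = deduplicate(["alice", "bob"], ["bob", "carol"], match, threshold=0.5)
```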
The similarity measure may comprise one of the following: a correlation function, a covariance function, the sum of squared differences, the sum of absolute differences, an extremum of an equality measure, and the like, without limitation herein.
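For two equally sized gray-level windows, the measures just listed can be written out directly. This sketch shows the sum of squared differences, the sum of absolute differences (both dissimilarity scores, 0 meaning identical), and a normalized cross-correlation (approaching 1 for a match); the window values are made-up examples.

```python
# Three of the listed similarity measures for flat gray-level windows.

def ssd(a, b):
    """Sum of squared differences (0 = identical)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def sad(a, b):
    """Sum of absolute differences (0 = identical)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation (1 = perfectly correlated)."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

w1 = [10, 20, 30, 40]
w2 = [10, 20, 30, 40]
w3 = [12, 18, 33, 37]
```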
Optionally, in step 51, the matching each face image in the first face image set with each face image in the second face image set to obtain a plurality of matching values may include the following steps:
A1. acquiring a target image quality evaluation value of a face image i, wherein the face image i is any one face image in the first face image set;
A2. determining a target matching threshold corresponding to the target image quality evaluation value according to a preset mapping relation between image quality evaluation values and matching thresholds;
A3. extracting the contour of the face image i to obtain a first peripheral contour;
A4. extracting feature points of the face image i to obtain a first feature point set;
A5. matching the first peripheral contour with a second peripheral contour of a face image j to obtain a first matching value, wherein the face image j is any face image in the second face image set;
A6. matching the first feature point set with a second feature point set of the face image j to obtain a second matching value;
A7. determining a target matching value according to the first matching value and the second matching value.
In the face recognition process, success or failure depends to a great extent on the image quality of the face images. Therefore, image quality evaluation may be performed on each face image in the first face image set to obtain a plurality of image quality evaluation values, which are stored in a memory of the face acquisition device. Specifically, image quality evaluation indexes may be used to evaluate the collected face images, and these indexes may include, but are not limited to: average gray scale, mean square error, entropy, edge preservation, signal-to-noise ratio, and the like. In general, the larger the resulting image quality evaluation value, the better the image quality.
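Two of the quality indexes named above can be written out for a flat list of 8-bit gray values: the average gray level and the Shannon entropy of the gray-level histogram (higher entropy usually indicating richer detail). This is an illustrative sketch; a real evaluation would run on full images.

```python
import math
from collections import Counter

def mean_gray(pixels):
    """Average gray level of a flat list of pixel values."""
    return sum(pixels) / len(pixels)

def gray_entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [0, 0, 128, 128, 255, 255, 255, 255]
```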
In addition, the face acquisition device may store a preset mapping relation between image quality evaluation values and matching thresholds, and may thereby determine the target matching threshold corresponding to the target image quality evaluation value. On this basis, contour extraction is performed on the face image i to obtain a first peripheral contour, and feature point extraction is performed on the face image i to obtain a first feature point set. The first peripheral contour is matched with a second peripheral contour of any face image j in the second face image set to obtain a first matching value, and the first feature point set is matched with a second feature point set of the face image j to obtain a second matching value. A target matching value is then determined according to the first and second matching values. For example, the face acquisition device may store in advance a mapping relation between matching values and weight coefficients, from which a first weight coefficient corresponding to the first matching value and a second weight coefficient corresponding to the second matching value are obtained; the target matching value is then the first matching value multiplied by the first weight coefficient plus the second matching value multiplied by the second weight coefficient.
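The weighted combination of sub-step A7 can be sketched as follows; the weight values 0.4 and 0.6 are assumed for illustration only and would in practice come from the stored mapping relation.

```python
# Sub-step A7: combine the contour match and the feature-point match into
# one score as a weighted sum, then compare it against the quality-dependent
# threshold from sub-step A2.

def target_matching_value(contour_match, feature_match, w1=0.4, w2=0.6):
    return w1 * contour_match + w2 * feature_match

def is_duplicate(contour_match, feature_match, threshold):
    return target_matching_value(contour_match, feature_match) > threshold

score = target_matching_value(0.9, 0.8)  # 0.4*0.9 + 0.6*0.8 = 0.84
```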
In addition, the algorithm for contour extraction may be at least one of the following: the Hough transform, the Canny operator, and the like; the algorithm for feature point extraction may be at least one of the following: Harris corner detection, the Scale-Invariant Feature Transform (SIFT), and the like, without limitation.
Optionally, after the step 104, the following steps may be further included:
and if the comparison result is that a preset decrement change or no change has occurred relative to the first Wi-Fi MAC address list, skipping the deduplication process and confirming that the face acquisition process is completed.
The preset decrement change may be set by a user or default to a system value, for example, the removal of at least one MAC address. If comparing the second Wi-Fi MAC address list with the first yields a decrement change or no change, that is, the number of persons in the specified area has decreased or stayed the same, the deduplication processing is not performed and the face acquisition process is confirmed as completed. Because the MAC address of each device is unique, the increase, decrease, or absence of change of persons in the specified area can be judged from the change in MAC addresses, which improves face recognition efficiency.
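The branch described here, together with step 105, reduces to a small decision function; the string labels below are illustrative names, not terms from the original method.

```python
# Only an incremental change triggers re-acquisition and deduplication; a
# decrement or no change ends the acquisition round immediately.

def acquisition_round(change_kind):
    if change_kind == "increment":
        return "reacquire_and_deduplicate"
    return "acquisition_complete"  # decrement or no_change: skip dedup
```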
It can be seen that, in the embodiment of the present application, Wi-Fi scanning is performed on a specified range by using a Wi-Fi probe technology at a first moment to obtain a first Wi-Fi MAC address list including at least one MAC address; face acquisition is performed on the specified range to obtain a first face image set including at least one face image; Wi-Fi scanning is performed on the specified range by using the Wi-Fi probe technology at a second moment to obtain a second Wi-Fi MAC address list including at least one MAC address; and the second Wi-Fi MAC address list is compared with the first. If the comparison result shows that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, face acquisition is performed on the specified range again to obtain a second face image set, and deduplication processing is performed on the second face image set according to the first face image set to obtain at least one target face image that is not matched with any face image in the first face image set.
In accordance with the above, please refer to Fig. 2, which is a schematic flowchart of an embodiment of a face acquisition method according to an embodiment of the present application. The face acquisition method described in this embodiment includes the following steps:
201. Carrying out Wi-Fi scanning in a specified range by adopting a Wi-Fi probe technology at a first moment to obtain a first Wi-Fi MAC address list, wherein the first Wi-Fi MAC address list comprises at least one MAC address.
202. Carrying out face acquisition on the specified range to obtain a first face image set, wherein the first face image set comprises at least one face image.
203. Carrying out Wi-Fi scanning in the specified range by adopting the Wi-Fi probe technology at a second moment to obtain a second Wi-Fi MAC address list, wherein the second Wi-Fi MAC address list comprises at least one MAC address.
204. Comparing the second Wi-Fi MAC address list with the first Wi-Fi MAC address list.
205. If the comparison result is that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, carrying out face acquisition on the specified range again to obtain a second face image set, and carrying out deduplication processing on the second face image set according to the first face image set to obtain at least one target face image, wherein the target face image is not matched with any face image in the first face image set.
206. If the comparison result is that a preset decrement change or no change has occurred relative to the first Wi-Fi MAC address list, skipping the deduplication process and confirming that the face acquisition process is completed.
Optionally, for the detailed description of steps 201 to 206, reference may be made to the corresponding description of steps 101 to 105 (and the optional step following step 104) of the face acquisition method described with reference to Fig. 1B, which is not repeated here.
It can be seen that, in the embodiment of the present application, Wi-Fi scanning is performed on a specified range by using a Wi-Fi probe technology at a first moment to obtain a first Wi-Fi MAC address list including at least one MAC address; face acquisition is performed on the specified range to obtain a first face image set including at least one face image; Wi-Fi scanning is performed on the specified range by using the Wi-Fi probe technology at a second moment to obtain a second Wi-Fi MAC address list including at least one MAC address; and the second Wi-Fi MAC address list is compared with the first. If the comparison result shows that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, face acquisition is performed on the specified range again to obtain a second face image set, and deduplication processing is performed on the second face image set according to the first face image set to obtain at least one target face image that is not matched with any face image in the first face image set. If the comparison result shows that a preset decrement change or no change has occurred, the deduplication process is skipped and the face acquisition process is confirmed as completed. The personnel change in the specified area can therefore be judged through the Wi-Fi probe technology; when the number of persons decreases or does not change, no deduplication processing is needed, thereby avoiding the resource waste caused by unnecessary face deduplication processing and improving face recognition efficiency.
In accordance with the above, the following is a device for implementing the above face acquisition method, specifically as follows:
Please refer to Fig. 3A, which is a schematic structural diagram of an embodiment of a face acquisition device according to an embodiment of the present application. The face acquisition device described in this embodiment includes: a scanning unit 301, an acquisition unit 302, a comparison unit 303, and a processing unit 304, specifically as follows:
the scanning unit 301 is configured to perform Wi-Fi scanning on a specified range by using a Wi-Fi probe technology at a first time to obtain a first Wi-Fi MAC address list, where the first Wi-Fi MAC address list includes at least one MAC address;
an acquiring unit 302, configured to perform face acquisition on the specified range to obtain a first face image set, where the first face image set includes at least one face image;
the scanning unit 301 is further configured to perform Wi-Fi scanning on the specified range at a second time by using the Wi-Fi probe technology to obtain a second Wi-Fi MAC address list, where the second Wi-Fi MAC address list includes at least one MAC address;
a comparing unit 303, configured to compare the second Wi-Fi MAC address list with the first Wi-Fi MAC address list;
and the processing unit 304 is configured to, if the comparison result indicates that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, perform face acquisition again on the specified range to obtain a second face image set, and perform deduplication processing on the second face image set according to the first face image set to obtain at least one target face image, where the target face image is not matched with any face image in the first face image set.
The scanning unit 301 may be configured to implement the methods described in the above steps 101 and 103, the collecting unit 302 may be configured to implement the method described in the above step 102, the comparing unit 303 may be configured to implement the method described in the above step 104, the processing unit 304 may be configured to implement the method described in the above step 105, and so on.
Optionally, as shown in Fig. 3B, Fig. 3B is a modified structure of the face acquisition device depicted in Fig. 3A. Compared with Fig. 3A, the face acquisition device may further include a confirming unit 305, wherein:
the confirming unit 305 is configured to skip the deduplication process and confirm that the face acquisition process is completed if the comparison result indicates that a preset decrement change or no change has occurred relative to the first Wi-Fi MAC address list.
Alternatively, as shown in Fig. 3C, Fig. 3C is a detailed structure of the acquisition unit 302 in the face acquisition device depicted in Fig. 3A. The acquisition unit 302 may include: a shooting module 3021, a segmentation module 3022, an identification module 3023, a tracking module 3024, and a determination module 3025, specifically as follows:
a shooting module 3021, configured to shoot the specified range to obtain a target image;
a segmentation module 3022, configured to perform image segmentation on the target image to obtain P person images, where P is a positive integer;
an identification module 3023, configured to perform face recognition on the P person images to obtain Q face images and P-Q non-face images, where Q is a positive integer not greater than P;
a tracking module 3024, configured to perform target tracking and face recognition on the P-Q non-face images to obtain P-Q face images;
a determining module 3025, configured to take the Q face images and the P-Q face images as the first face image set.
It can be seen that, with the face acquisition device described in the embodiment of the present application, Wi-Fi scanning is performed on a specified range by using a Wi-Fi probe technology at a first time to obtain a first Wi-Fi MAC address list including at least one MAC address; face acquisition is performed on the specified range to obtain a first face image set including at least one face image; Wi-Fi scanning is performed on the specified range by using the Wi-Fi probe technology at a second time to obtain a second Wi-Fi MAC address list including at least one MAC address; and the second Wi-Fi MAC address list is compared with the first. If the comparison result indicates that a preset incremental change has occurred relative to the first Wi-Fi MAC address list, face acquisition is performed on the specified range again to obtain a second face image set, and deduplication processing is performed on the second face image set according to the first face image set to obtain at least one target face image that is not matched with any face image in the first face image set.
It can be understood that the functions of each program module of the face acquisition device of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
In accordance with the above, please refer to fig. 4, which is a schematic structural diagram of an embodiment of a face acquisition apparatus according to an embodiment of the present application. The face acquisition device described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, where the input device 1000, the output device 2000, the processor 3000, and the memory 4000 are connected by a bus 5000.
The input device 1000 may be a touch panel, a physical button, or a mouse.
The output device 2000 may be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000, and the processor 3000 are used for calling the program codes stored in the memory 4000 to perform the following operations:
the processor 3000 is configured to:
perform Wi-Fi scanning within a specified range at a first time by using a Wi-Fi probe technology to obtain a first Wi-Fi MAC address list, where the first Wi-Fi MAC address list includes at least one MAC address;
perform face acquisition within the specified range to obtain a first face image set, where the first face image set includes at least one face image;
perform Wi-Fi scanning within the specified range at a second time by using the Wi-Fi probe technology to obtain a second Wi-Fi MAC address list, where the second Wi-Fi MAC address list includes at least one MAC address;
compare the second Wi-Fi MAC address list with the first Wi-Fi MAC address list; and
if the comparison result indicates that a preset incremental change has occurred in the first Wi-Fi MAC address list, perform face acquisition within the specified range again to obtain a second face image set, and perform deduplication processing on the second face image set according to the first face image set to obtain at least one target face image, where the target face image does not match any face image in the first face image set.
In one possible example, the processor 3000 is further configured to:
if the comparison result indicates that a preset decremental change or no change has occurred in the first Wi-Fi MAC address list, skip the deduplication processing and confirm that the face acquisition process is completed.
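The comparison between the two scans and the resulting branch can be sketched, purely for illustration, as follows; the function name, the string labels, and the use of set difference to detect arrivals and departures are assumptions of this sketch, not features recited by the embodiment:

```python
# Hypothetical sketch: classify the change between two Wi-Fi MAC address
# scans. A new MAC address in the second scan means a device (and likely a
# person) entered the specified range, which triggers re-acquisition.

def compare_mac_lists(first_scan, second_scan):
    """Return 'increment' if new MAC addresses appeared, 'decrement' if
    addresses disappeared without new arrivals, else 'no_change'."""
    first, second = set(first_scan), set(second_scan)
    if second - first:          # at least one new device entered the range
        return "increment"
    if first - second:          # devices left, and none arrived
        return "decrement"
    return "no_change"

# Only an incremental change triggers a second round of face acquisition;
# a decrement or no change ends the process without deduplication.
result = compare_mac_lists(["aa:bb:cc:01", "aa:bb:cc:02"],
                           ["aa:bb:cc:01", "aa:bb:cc:02", "aa:bb:cc:03"])
```

In this toy run a third MAC address appears in the second scan, so the comparison yields an incremental change and face acquisition would be performed again.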
In one possible example, in terms of performing face acquisition within the specified range to obtain the first face image set, the processor 3000 is specifically configured to:
shoot the specified range to obtain a target image;
perform image segmentation on the target image to obtain P person images, where P is a positive integer;
perform face recognition on the P person images to obtain Q face images and P-Q non-face images, where Q is a positive integer not greater than P;
perform target tracking and face recognition on the P-Q non-face images to obtain P-Q face images; and
use the Q face images and the P-Q face images as the first face image set.
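The acquisition pipeline above can be sketched as follows; the helper functions passed in (`segment`, `recognize_face`, `track_and_recognize`) are hypothetical stand-ins for real segmentation, detection, and tracking components, and the string "images" exist only to make the flow concrete:

```python
# Illustrative sketch of the acquisition pipeline: split the captured frame
# into P person images, recognize faces directly where possible (Q images),
# and recover the remaining P-Q faces via target tracking plus a second
# recognition pass, so that every person yields one face image.

def acquire_face_images(target_image, segment, recognize_face, track_and_recognize):
    person_images = segment(target_image)                    # P person images
    faces = [recognize_face(p) for p in person_images]
    direct = [f for f in faces if f is not None]             # Q face images
    missed = [p for p, f in zip(person_images, faces) if f is None]
    tracked = [track_and_recognize(p) for p in missed]       # P-Q face images
    return direct + tracked                                  # first face image set

# Toy stand-ins: persons facing the camera are recognized at once; the
# side-facing person is only captured after tracking turns up a frontal view.
persons = ["front_A", "side_B", "front_C"]
result = acquire_face_images(
    "frame",
    segment=lambda img: persons,
    recognize_face=lambda p: p if p.startswith("front") else None,
    track_and_recognize=lambda p: p.replace("side", "front"),
)
```

The point of the tracking step is completeness: P person images always produce P face images, even when Q of them are usable on the first pass.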
In one possible example, in terms of performing deduplication processing on the second face image set according to the first face image set to obtain at least one target face image, the processor 3000 is specifically configured to:
match each face image in the first face image set with each face image in the second face image set to obtain a plurality of matching values;
select matching values greater than a preset threshold from the plurality of matching values to obtain at least one target matching value;
determine the face image corresponding to the at least one target matching value; and
remove the face image corresponding to the at least one target matching value from the second face image set to obtain the at least one target face image.
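The deduplication steps above can be sketched as a minimal example; `match_faces` is a hypothetical similarity function in [0, 1], and the threshold value is illustrative, not one fixed by the embodiment:

```python
# Hedged sketch of the deduplication step: each face in the second set is
# matched against every face in the first set; a second-set face whose best
# matching value exceeds the preset threshold is considered already
# collected and removed. What remains are the target face images.

def deduplicate(first_set, second_set, match_faces, threshold=0.8):
    target_faces = []
    for new_face in second_set:
        matching_values = [match_faces(old, new_face) for old in first_set]
        if not any(v > threshold for v in matching_values):
            target_faces.append(new_face)   # not matched: a newly arrived face
    return target_faces

# Toy similarity: identical labels match perfectly, all others not at all.
targets = deduplicate(
    first_set=["alice", "bob"],
    second_set=["alice", "carol"],
    match_faces=lambda a, b: 1.0 if a == b else 0.0,
)
```

Here "alice" is filtered out as a duplicate of the first acquisition round, leaving only the newly arrived face as a target face image, consistent with the requirement that a target face image matches no face image in the first set.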
In one possible example, in terms of matching each face image in the first face image set with each face image in the second face image set to obtain a plurality of matching values, the processor 3000 is specifically configured to:
acquire a target image quality evaluation value of a face image i, where the face image i is any face image in the first face image set;
determine a target matching threshold corresponding to the target image quality evaluation value according to a preset mapping relationship between image quality evaluation values and matching thresholds;
perform contour extraction on the face image i to obtain a first peripheral contour;
perform feature point extraction on the face image i to obtain a first feature point set;
match the first peripheral contour with a second peripheral contour of a face image j to obtain a first matching value, where the face image j is any face image in the second face image set;
match the first feature point set with a second feature point set of the face image j to obtain a second matching value; and
determine a target matching value according to the first matching value and the second matching value.
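The matching of a pair (i, j) can be sketched as follows; the quality bins, the threshold values, and the weighted-sum fusion of the two partial matches are all assumptions of this sketch, since the embodiment specifies only that the quality evaluation value selects a threshold from a preset mapping and that the target matching value is derived from the contour match and the feature-point match:

```python
# Illustrative preset mapping: a high-quality face image justifies a strict
# matching threshold, while a low-quality image tolerates a looser match.
QUALITY_TO_THRESHOLD = [   # (minimum quality evaluation value, matching threshold)
    (0.8, 0.90),
    (0.5, 0.80),
    (0.0, 0.70),
]

def target_threshold(quality):
    for min_quality, threshold in QUALITY_TO_THRESHOLD:
        if quality >= min_quality:
            return threshold
    return QUALITY_TO_THRESHOLD[-1][1]

def target_matching_value(contour_match, feature_match, w_contour=0.4, w_feature=0.6):
    # Weighted fusion of the first (contour) and second (feature point)
    # matching values; feature points are typically more discriminative than
    # the peripheral contour, hence the larger illustrative weight.
    return w_contour * contour_match + w_feature * feature_match

value = target_matching_value(contour_match=0.9, feature_match=0.8)
matched = value > target_threshold(quality=0.6)
```

With these illustrative numbers the fused value is 0.84 against a threshold of 0.80, so the pair would count as a match and face image j would be removed during deduplication.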
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of any one of the face acquisition methods described in the above method embodiments.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.