WO2019015684A1 - 人脸图像去重方法和装置、电子设备、存储介质、程序 - Google Patents
人脸图像去重方法和装置、电子设备、存储介质、程序 Download PDFInfo
- Publication number
- WO2019015684A1 WO2019015684A1 PCT/CN2018/096542 CN2018096542W WO2019015684A1 WO 2019015684 A1 WO2019015684 A1 WO 2019015684A1 CN 2018096542 W CN2018096542 W CN 2018096542W WO 2019015684 A1 WO2019015684 A1 WO 2019015684A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- face image
- queue
- images
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/174—Redundancy elimination performed by the file system
- G06F16/1748—De-duplication implemented within the file system, e.g. based on file segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/535—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5854—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/50—Maintenance of biometric data or enrolment thereof
Definitions
- the present application relates to computer vision technology, and in particular, to a face image deduplication method and apparatus, an electronic device, a storage medium, and a program
- the image contains rich and intuitive information.
- a large number of images are needed to convey information for the user.
- the number of repeated images increases. Therefore, the image information provider needs to de-emphasize the image before using the image information, avoiding duplicate images, affecting the user experience, and increasing the maintenance amount of the image.
- image providers use a large number of image information such as user uploads and crawler downloads every day, and the number has exceeded the limit of manual review.
- a face image deduplication technique provided by an embodiment of the present application.
- a method for de-duplicating a face image includes:
- Determining whether to perform a deduplication operation for the second face image is performed according to the matching result.
- the image queue includes at least one third human face image corresponding to different people respectively.
- the performing a filtering operation on the obtained multiple first face images includes:
- the face attributes corresponding to the first face images are used to represent the first face images
- the face attribute includes one or more of the following: a face angle, a face width height value, and a face blur degree;
- the first condition comprises at least one of the following: the face angle is at a first preset Within the range, the face width height value is greater than a second preset threshold, and the face blur degree is less than a third preset threshold.
- performing a filtering operation on the obtained plurality of first face images to obtain at least one second face image whose image quality reaches a first preset condition including:
- the filtering, the at least one first face image corresponding to the same person, to obtain the second face image of the at least one first face image whose quality reaches the first preset condition includes:
- the face angle includes one or more of the following: a face horizontal corner, a face pitch angle, and a face tilt angle.
- the at least one first face image corresponding to the same person is filtered based on a face angle corresponding to the first face image, to obtain a second face image whose quality reaches a first preset condition.
- the identifying, by the first first face image, the at least one first face image corresponding to the same person including:
- the second face image is matched with the at least one third face image in the image queue to obtain a matching result, including:
- a matching result is obtained based on a similarity between the second face image and at least one third face image in the image queue.
- the obtaining a matching result based on the similarity between the second facial image and the at least one third facial image in the image queue comprises:
- the second facial feature corresponding to the at least one second facial image and the pre-existing facial feature corresponding to the at least one third facial image in the image queue obtain the second
- the similarity between the face image and the at least one third face image in the image queue includes:
- determining whether to perform a deduplication operation on the second face image according to the matching result includes:
- determining whether to perform a deduplication operation on the second face image according to the matching result includes:
- the method before performing the filtering operation on the obtained multiple first face images, the method further includes:
- the plurality of first face images are obtained based on at least one frame of video images.
- the obtaining, by the at least one frame of the video image, the plurality of first facial images including:
- the method before performing the face recognition processing on the at least one frame of the video image, the method further includes:
- it also includes:
- Performing a filtering operation on the obtained plurality of first face images to obtain at least one second face image whose image quality reaches a first preset condition including:
- the method is applied to a client
- the method further includes:
- a face image deduplication device including:
- a filtering unit configured to perform a filtering operation on the obtained plurality of first face images, to obtain at least one second face image whose image quality reaches a first preset condition
- a matching unit configured to match the second face image with at least one third face image in the image queue to obtain a matching result
- a de-weighting unit configured to determine, according to the matching result, whether to perform a deduplication operation on the second facial image.
- the image queue includes at least one third human face image corresponding to different people respectively.
- the filtering unit includes:
- An attribute filtering module configured to filter the obtained first face images based on the face attributes corresponding to the first face image; the face attributes corresponding to the first face images are used to represent the The display quality of the face in the first face image;
- An angle filtering module configured to filter, according to a face angle in the first face image, a plurality of first face images obtained, wherein a face angle in the first face image is used to represent a deflection angle of a face in the first face image.
- the face attribute includes one or more of the following: a face angle, a face width and a high value, and a face ambiguity;
- the matching unit is configured to determine, according to the first condition that the image quality of the first face image reaches a first preset condition, wherein the first condition includes at least one of the following: The face angle is within the first preset range, the face width height value is greater than the second preset threshold, and/or the face blur degree is less than the third preset threshold.
- the filtering unit is configured to identify at least one first face image corresponding to the same person from the plurality of first face images; and filter the at least one first face image corresponding to the same person And obtaining, in the at least one first face image, a second face image whose quality reaches a first preset condition.
- the filtering unit performs filtering on the at least one first face image corresponding to the same person to obtain a second face image in which the quality reaches the first preset condition in the at least one first face image. And filtering, by using the face angle corresponding to the first face image, the at least one first face image corresponding to the same person to obtain a second face image whose quality reaches a first preset condition.
- the face angle includes one or more of the following: a face horizontal corner, a face pitch angle, and a face tilt angle.
- the filtering unit includes:
- An angle conversion module configured to convert a face horizontal corner, a face pitch angle, and a face tilt angle corresponding to the first face image into a three-dimensional vector
- a vector filtering module configured to filter the at least one first face image corresponding to the same person based on the distance from the three-dimensional vector to the source point, to obtain a second face image whose quality reaches a first preset condition;
- the source point is a three-dimensional vector whose values are all zero.
- the filtering unit is configured to identify the first setting from the plurality of first facial images when the at least one first facial image corresponding to the same person is recognized from the plurality of first facial images. At least one first face image corresponding to the same person within the duration;
- the vector filtering module is configured to determine, as the second human face image, a first facial image in which the distance between the three-dimensional vector and the source point in the at least one first facial image is the smallest.
- the matching unit includes:
- a similarity module configured to obtain the second person based on a second facial feature corresponding to the second facial image and a third facial feature corresponding to at least one third facial image in the image queue a similarity between the face image and the at least one third face image in the image queue;
- a result matching module configured to obtain a matching result based on the similarity between the second face image and the at least one third face image in the image queue.
- the result matching module is configured to obtain a representation in response to a third facial image in the image queue that has a similarity with the second facial image that is greater than or equal to a preset similarity.
- the second face image has a matching result of the matching image in the image queue;
- the similarity module is specifically configured to respectively determine a second facial feature corresponding to the at least one second facial image and each third human in the at least one third facial image in the image queue a distance between the pre-stored face features corresponding to the face image; obtaining a similarity between the second face image and each third face image in the at least one third face image in the image queue based on the distance .
- the deduplication unit is configured to: in response to the matching result, indicating that the second face image has a matching image in the image queue, determining that the second face image is a repeated image, and/ Or, the second face image is not stored in the image queue.
- the deduplication unit is further configured to: in response to the matching result, indicating that the second face image does not have a matching image in the image queue, determining that the second face image is not a repeated image, And/or storing the second face image in the image queue.
- it also includes:
- an image acquiring unit configured to obtain the plurality of first face images based on the at least one frame video image.
- the image obtaining unit includes:
- a frame drawing module configured to acquire at least one frame of video images including a face image from the video stream
- the segmentation module is configured to perform face recognition processing on the at least one frame of video images to obtain the plurality of first face images.
- the image obtaining unit further includes:
- a face acquiring module configured to acquire at least one face image having a set size in the video image.
- the image obtaining unit further includes:
- a trajectory establishing module configured to establish at least one facial trajectory based on the obtained plurality of first facial images, each of the facial trajectories corresponding to one person;
- the filtering unit is configured to perform filtering operation on at least one first face image included in each face track of the at least one face track, to obtain an image quality of each face track reaching a first preset condition A second face image.
- the device is applied to a client
- the device also includes:
- a sending unit configured to send the target face image or the image queue obtained by the deduplication operation to the server.
- an electronic device includes a processor including a face image deduplication device as described above.
- an electronic device includes: a memory, configured to store executable instructions;
- a processor for communicating with the memory to execute the executable instructions to complete the face image deduplication method as described above.
- a computer storage medium for storing computer readable instructions that, when executed, perform a face image deduplication method as described above.
- a computer program comprising computer readable code, when executed on a device, a processor in the device is configured to implement The instruction of the face image deduplication method.
- a method and device for deciphering a face image, an electronic device, a storage medium, and a program are provided according to the foregoing embodiments of the present application, and performing filtering operations on the obtained plurality of first face images to obtain a first preset condition of the image quality. At least one second face image; quality-based filtering is implemented, the number of face images is reduced, the obtained face image quality satisfies the subsequent processing requirements of the face image, and the repeated processing of a large number of face images is avoided.
- FIG. 1 is a flow chart of an embodiment of a method for de-emphasizing a face image of an applicant.
- FIG. 2 is a schematic structural view of an embodiment of a face image deduplication device of the present applicant.
- FIG. 3 is a schematic structural diagram of an electronic device used to implement a terminal device or a server in an embodiment of the present application.
- Embodiments of the present application can be applied to computer systems/servers that can operate with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known computing systems, environments, and/or configurations suitable for use with computer systems/servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, based on Microprocessor systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments including any of the above, and the like.
- the computer system/server can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
- program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
- program modules may be located on a local or remote computing system storage medium including storage devices.
- FIG. 1 is a flow chart of an embodiment of a method for de-emphasizing a face image of an applicant.
- the method may be performed by a face image deduplication device, such as a terminal device, a server, and the like.
- a face image deduplication device such as a terminal device, a server, and the like.
- the specific implementation of the face image deduplication device is not limited in the embodiment of the present application.
- the method of this embodiment includes:
- Step 101 Perform a filtering operation on the obtained plurality of first face images to obtain at least one second face image whose image quality reaches a first preset condition.
- the display quality of the face image can be evaluated by the face angle, the face width, and the face blur, but the embodiment does not limit the display quality of the face image based on the specific index;
- a plurality of second face images corresponding to the same person may be deduplicated, and when the plurality of second face images of the same person whose display quality is up to standard is obtained based on the one piece of video, If it is transmitted to subsequent operating devices, it will cause a great burden and consume a lot of resources to do nothing.
- the step 101 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a filtering unit 21 that is executed by the processor.
- Step 102 Match the second face image with at least one third face image in the image queue to obtain a matching result.
- the image queue includes at least one third face image respectively corresponding to different people; optionally, the image queue may further include at least one third face image corresponding to different people.
- Corresponding to different people; optionally, identifying whether two face images match, may be obtained based on the distance between face features corresponding to the face image, and the distance between the face features includes but is not limited to cosine distance, Euclidean distance, etc. This embodiment does not limit the distance calculation method between specific features.
- the feature of the second face image may be matched with the face feature of the third face image in the image queue. Whether the second face image is a repeated image is determined according to the result of the feature matching.
- the step 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a matching unit 22 that is executed by the processor.
- Step 103 Determine whether to perform a deduplication operation on the second face image according to the matching result.
- the face image when the filtered face image corresponds to the pre-existing face image, the face image is a repeated image, indicating that the face image corresponding to the person has been filtered and processed again. Discarding the face image or replacing the face image corresponding to the person in the image queue with the face image; and when the filtered face image does not correspond to the pre-stored face image, the face image is not repeated.
- the image indicates that the person corresponding to the face image is new and needs to be stored in the queue for subsequent recognition.
- the step 103 may be performed by a processor invoking a corresponding instruction stored in the memory or by a deduplication unit 23 operated by the processor.
- a filtering operation is performed on the obtained plurality of first face images to obtain at least one second face image whose image quality reaches a first preset condition;
- the quality-based filtering reduces the number of face images, and the obtained face image quality satisfies the subsequent processing requirements for face images, and avoids the problem of repeatedly processing a large number of face images;
- the second face image and image Matching at least one third face image in the queue to obtain a matching result; determining whether to perform a deduplication operation on the second face image according to the matching result, determining whether the face image has been stored according to the known image queue, and implementing the fast Repeat face recognition.
- Another embodiment of the present applicant's face image deduplication method on the basis of the foregoing embodiment, performing a filtering operation on the obtained plurality of first face images, including:
- the obtained plurality of first face images are filtered based on the face attributes corresponding to the first face image.
- the face attribute is used to represent the display quality of the face in the face image; the face attribute corresponding to the first face image is used to represent the display quality of the face in the first face image.
- the face attribute includes but is not limited to one or more of the following: a face angle, a face width height value, and a face blur degree; optionally, the face angle may include but is not limited to: a horizontal corner (yaw) ) is used to indicate the steering angle of the face in the horizontal direction; the pitch is used to indicate the rotation angle of the face in the vertical direction; and the roll is used to indicate the deflection angle of the face in the vertical direction.
- a face angle e.g., a face width height value, and a face blur degree
- the face angle may include but is not limited to: a horizontal corner (yaw) ) is used to indicate the steering angle of the face in the horizontal direction; the pitch is used to indicate the rotation angle of the face in the vertical direction; and the roll is used to indicate the deflection angle of the face in the vertical direction.
- filtering the obtained first face images based on the face attributes corresponding to the first face image includes:
- the first condition includes at least one of the following: the face angle is within the first preset range, the face width height value is greater than the second preset threshold, and the face blur degree is less than the third preset threshold.
- Matching at least one face image with a pre-stored face image in the image queue including:
- the face angle is within the first preset range
- the face width height value is greater than the second preset threshold
- the face blur degree is less than the third preset threshold
- Each face image in at least one face image is matched with a face image pre-stored in the image queue.
- the face angle is not within the first preset range
- the face width height value is less than or equal to the second preset threshold
- the face blur degree is greater than or equal to the third preset threshold
- the first preset range can be set to ⁇ 20° (the specific value can be set according to the specific situation), when the horizontal angle (yaw), pitch angle (pitch) and tilt angle (roll) in the face angle are both ⁇
- face width can include face width and face height (generally returned by detection, can be filtered by setting; for example: setting For 50 pixels, face images with width and height less than 50 pixels can be considered as non-compliant, width and height can be set to different values or the same value); face ambiguity (generally through the Queue Toolkit (SDK-alignment) Return, you can set different values, for example: set to 0.7, the blur degree is greater than 0.7 is considered to be a poor quality face image).
- the values are: ⁇ 20°, 50 pixels, and 0.7 are set thresholds, which can be set according to actual conditions.
- Performing a filtering operation on the obtained plurality of first face images may further include: filtering the obtained plurality of first face images based on a face angle in the first face image; wherein the face angle is used for The angle of deflection of the face in the face image is represented.
- the face angle in the first face image is used to represent the deflection angle of the face in the first face image.
- the deflection angle is relative to the standard front face, which is a face whose face is 0 in the horizontal, vertical, and oblique directions, and the face can be used as the origin to calculate the deflection angle of the face.
- a filtering operation By performing filtering on the multi-frame face image in the video stream, the purpose of selecting a frame from the video stream based on the face image can be achieved, and the face image in the video frame obtained by selecting the frame is consistent with the first preset condition.
- operation 101 includes:
- the filtering process may be implemented by establishing a face trajectory, including: obtaining a face trajectory based on at least one face image corresponding to the same person;
- the face image in the face trajectory is filtered based on the face angle corresponding to the face image, and the face image in the face trajectory whose quality reaches the first preset condition is obtained.
- an image with better quality is obtained for at least one person (for example, obtaining an image with better quality for each person), which may be
- the face angle determines whether the quality of the face image reaches the first preset condition, and the first preset condition herein can be adjusted according to the user setting, and the set angle range value or the face quality is better.
- the at least one first face image corresponding to the same person is filtered, and the second face image of the at least one first face image whose quality reaches the first preset condition is obtained, including:
- the face with a large angle deflection can be removed, and the second face image whose angle is within the set range can be obtained.
- the face angle includes, but is not limited to, one or more of the following: a human face horizontal corner, a human elevation angle, and a human face tilt angle.
- the at least one first face image corresponding to the same person is filtered according to the face angle corresponding to the first face image, and the second face image whose quality reaches the first preset condition is obtained, including:
- the at least one first face image corresponding to the same person is filtered based on the distance from the three-dimensional vector to the source point to obtain a second face image whose quality reaches the first preset condition.
- the source point is a three-dimensional vector whose values are all zero.
- the face image in the face trajectory is filtered based on the distance from the three-dimensional vector to the source point; the source point is a three-dimensional vector whose value is all zero.
- the distance value can be obtained by calculating the squared difference of the three-dimensional vector converted from the face horizontal corner, the face pitch angle, and the face tilt angle, and the quality of the face image is evaluated by the distance value, and the distance is smaller.
- the face image in the face track is filtered within a set time interval (for example, within 5 seconds, within 10 seconds, etc.).
- identifying at least one first face image corresponding to the same person from the plurality of first face images includes: identifying, from the plurality of first face images, corresponding to the same person within the first set duration At least one first face image;
- the at least one first face image corresponding to the same person is filtered according to the distance from the three-dimensional vector to the source point, and the second face image whose quality reaches the first preset condition is obtained, including:
- the first face image in which the distance between the three-dimensional vector and the source point in the at least one first face image is the smallest is determined as the second face image.
- the first face image with the smallest distance to the source point is the face image with the smallest face angle, that is, the face closest to a frontal pose.
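As an illustration only, this selection rule can be sketched in Python. The `yaw`/`pitch`/`roll` dictionary keys and the plain Euclidean norm are assumptions of the sketch; the embodiment only specifies that the three face angles are converted into a three-dimensional vector whose distance to the all-zero source point is minimized:

```python
import math

def select_best_face(faces):
    """From face images of the same person, pick the one whose pose is
    closest to frontal: convert (yaw, pitch, roll) into a 3-D vector and
    keep the face whose vector lies nearest the source point (0, 0, 0)."""
    def distance_to_origin(face):
        yaw, pitch, roll = face["yaw"], face["pitch"], face["roll"]
        # Euclidean distance of the angle vector from the all-zero source point
        return math.sqrt(yaw ** 2 + pitch ** 2 + roll ** 2)
    return min(faces, key=distance_to_origin)
```

A face at (5, 5, 0) degrees would thus be preferred over one at (30, 0, 0), since its angle vector lies closer to the origin.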
- the face trajectory further includes a time stamp corresponding to the face image, and the time stamp corresponds to a time when the face image starts performing the filtering operation;
- filtering the face images in the face trajectory based on the distance from the three-dimensional vector to the source point includes:
- obtaining, within the first set duration, the face images in the face trajectory whose corresponding distance is less than a preset threshold, and saving those face images.
- in this way, the better-quality face images of the face trajectory within the set duration are obtained, which speeds up processing. Subsequently, a new face trajectory can be established from the better-quality face images obtained over a plurality of set durations, and quality-based filtering can then be applied to the new face trajectory to obtain the best-quality face image among all face images in those set durations.
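The windowed, threshold-based filtering described above might be sketched as follows; the `timestamp` and `distance` keys are hypothetical names for the per-image values the embodiment refers to (the timestamp attached to each face image and its angle-vector distance to the source point):

```python
def filter_track_window(track, now, window, threshold):
    """Keep only the face images whose timestamp falls inside the current
    set duration and whose angle-vector distance to the source point is
    below the preset threshold."""
    return [
        face for face in track
        if now - face["timestamp"] <= window and face["distance"] < threshold
    ]
```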
- operation 102 includes:
- a matching result is obtained based on the similarity between the second face image and the at least one third face image in the image queue.
- in response to there being no third face image in the image queue whose similarity to the second face image is greater than or equal to the preset similarity, a matching result indicating that the second face image does not have a matching image in the image queue is obtained.
- to implement face deduplication, the obtained good-quality face image is compared with the face images in the existing image queue, which may be done based on face features; the face features of an image can be obtained through a neural network, and the image queue may store the face images alone, or the face images together with their corresponding face features.
- the face features corresponding to the pre-stored face images are obtained through the neural network.
- obtaining the similarity between the second face image and the at least one third face image in the image queue, based on the second face feature corresponding to the at least one second face image and the pre-stored face feature corresponding to the at least one third face image in the image queue, includes:
- determining the distance between the second face feature and the face feature corresponding to each third face image in the image queue, and obtaining, based on the distance, the similarity between the second face image and each third face image in the at least one third face image in the image queue.
- the similarity between face images may be determined by calculating the distance between their face features; the distance may include, but is not limited to, a cosine distance, a Euclidean distance, a Mahalanobis distance, etc. The closer the distance between the face features, the greater the similarity between the corresponding face images. Whether two face images belong to the same person can be judged by setting a similarity threshold (for example, 0.86, 0.88, etc.), and the threshold can be adjusted according to the actual situation.
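A minimal sketch of this similarity comparison, using cosine similarity (one of the distance measures named above) and one of the example threshold values; the representation of features as plain lists of floats is an assumption of the sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (closer to 1 = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_against_queue(feature, queue, threshold=0.86):
    """Return the index of the first queued feature similar enough to
    `feature`, or None if the new face matches nothing in the queue."""
    for i, stored in enumerate(queue):
        if cosine_similarity(feature, stored) >= threshold:
            return i
    return None
```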
- the comparison may be performed within a set time period (for example, 300 seconds, 400 seconds, etc.): the similarity between each face image obtained within the time period and the images in the image queue is compared, and a match is obtained whenever the preset similarity is reached.
- operation 103 includes:
- operation 103 includes: in response to the matching result indicating that the second face image does not have a matching image in the image queue, determining that the second face image is not a repeated image, and/or storing the second face image in the image queue.
- the two face images may correspond to the same person.
- only one face image needs to be retained for the same person.
- the newly received face image can be directly deleted, or its quality can be compared with that of the matching third face image; if the newly received face image is of better quality, it replaces the pre-stored face image in the image queue. When a repeated image is recognized, the number of occurrences corresponding to that face image may be accumulated and recorded to provide information for subsequent statistical processing. When it is determined that the face image is not a repeated image, the face image is added to the image queue so that subsequently received face images can be accurately identified during similarity matching.
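The deduplication decision just described (discard or replace on a match, count repeats, enqueue new identities) can be sketched as follows; the `quality` and `count` keys are illustrative assumptions, since the embodiment does not fix a data layout:

```python
def deduplicate(new_face, queue, match_index):
    """Apply the dedup decision: on a match, record the repeat and keep the
    better-quality image; otherwise enqueue the new face. Returns True if
    `new_face` was a duplicate."""
    if match_index is not None:
        stored = queue[match_index]
        stored["count"] = stored.get("count", 0) + 1  # record the repeat for statistics
        if new_face["quality"] > stored["quality"]:
            new_face["count"] = stored["count"]
            queue[match_index] = new_face  # replace with the better-quality image
        return True   # duplicate
    queue.append(new_face)   # new identity: store for later matching
    return False
```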
- the method further includes:
- a plurality of first face images are obtained based on at least one frame of the video image.
- the face images on which the face image deduplication method needs to be performed are typically large in number, for example face images obtained from a plurality of video frames extracted from a video, or captured directly from the network.
- the face recognition processing is performed on at least one frame of the video image to obtain a plurality of first face images.
- the video images in the video stream are obtained by frame extraction; the faces in the video images can be identified and segmented by a neural network, or face recognition can be performed by a neural network and the face images then segmented from the video frames based on other segmentation techniques or segmentation networks. This embodiment does not limit the specific face recognition and segmentation techniques, as long as the purpose of the embodiment can be achieved.
- an image capture device such as a camera collects a video stream; the video stream is decomposed to obtain video images, the faces in the video images are recognized by a face recognition technique (e.g., a Convolutional Neural Network, CNN), and the face images are segmented from the video images by an image segmentation technique to obtain the captured face images. One frame of video image may contain at least one face image, or may contain no face image at all; this embodiment does not limit this. The video images obtained by decomposing the video may also be subjected to face detection via a software development kit (SDK-detect) and face cropping on the same frame.
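The frame-extraction-plus-detection pipeline described above might be sketched as follows. The detector is left as a pluggable callable (e.g. a CNN or SDK detector), since the embodiment explicitly does not limit the specific face recognition and segmentation technique; the `stride` sampling parameter is an assumption of the sketch:

```python
def extract_faces(frames, detect, stride=5):
    """Sample every `stride`-th frame from a decoded video and run a face
    detector on it; returns the face crops found. `detect` is any callable
    mapping a frame to a list of face images (possibly empty)."""
    faces = []
    for i, frame in enumerate(frames):
        if i % stride == 0:          # frame extraction: keep 1 of every `stride`
            faces.extend(detect(frame))
    return faces
```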
- the method may further include:
- the screening of face image sizes may be based on neural networks or other screening methods.
- the method further includes:
- each face trajectory corresponds to one person.
- step 101 may include:
- establishing the face trajectory based on the face images provides a basis for the subsequent deduplication of face images of the same person; the method for establishing the face trajectory is not limited.
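One simple way to establish per-person trajectories, sketched here for illustration only since the embodiment does not limit the trajectory-building method; `same_person` stands in for any identity-matching predicate (e.g. a feature-similarity comparison):

```python
def build_trajectories(faces, same_person):
    """Group face images into per-person trajectories: each new face is
    appended to the first trajectory whose latest face it matches,
    otherwise it starts a new trajectory (one trajectory per person)."""
    trajectories = []
    for face in faces:
        for track in trajectories:
            if same_person(track[-1], face):
                track.append(face)
                break
        else:
            trajectories.append([face])
    return trajectories
```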
- the face image deduplication method of the present application can be applied to fields such as intelligent video analysis, intelligent commerce, and security monitoring.
- the face image deduplication method of the embodiments of the present application can be applied to any processing of a video stream, for example any scenario that involves uploading a large number of frame pictures to the cloud.
- the method further includes:
- the target face image corresponding to a face trajectory is obtained by the filtering operation and/or the deduplication operation on the face images in the face trajectory, and attribute detection and face comparison are performed based on the target face image.
- the client needs to perform attribute detection and face matching on faces in video collected in real time, and among consecutive multi-frame images containing the same person's face, select the one frame most suitable for processing, so as to better perform attribute detection and face matching. The program is therefore required to select a face image that meets the requirements.
- the method of the embodiment is applied to a client
- the target face image or image queue obtained by the deduplication operation is sent to the server.
- the server may include a local server and/or a cloud server.
- the face image or image queue obtained by the filtering operation and/or the deduplication operation is sent to the server and/or the cloud server. The server and/or cloud server receives, from the client, the face image or image queue that has undergone the filtering operation and/or the deduplication operation, compares it with the existing face images in an image database, determines whether a corresponding face image already exists in the image database for the collected face image or the face images in the image queue, and stores or does not store the face image or image queue in the image database according to the judgment result.
- the image database is used to save the face images that the judgment determined should be stored. In the initial state, the image database is empty or already stores some face images; as face images are continuously sent to the server and/or cloud server, more and more face images that meet the requirements can be automatically stored in the image database, realizing the construction of the image database.
- the client processes the video stream and sends only the required face images to the cloud; sending all faces directly to the cloud would place excessive pressure on the cloud, and repeated, low-quality images are of little value. Deduplication and filtering therefore need to be performed on the client before uploading images to the cloud, and this solution is needed to make a better selection of face images.
- the foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- FIG. 2 is a schematic structural diagram of an embodiment of a face image deduplication apparatus of the present application.
- the apparatus of this embodiment can be used to implement the various method embodiments described above. As shown in FIG. 2, the apparatus of this embodiment includes:
- the filtering unit 21 is configured to perform a filtering operation on the obtained plurality of first face images to obtain at least one second face image whose image quality reaches a first preset condition.
- the matching unit 22 is configured to match the second face image with at least one third face image in the image queue to obtain a matching result.
- the image queue includes at least one third face image respectively corresponding to different people; each third face image in the image queue may correspond to a different person, or some of the images in the image queue may respectively correspond to different people. Optionally, whether two face images match can be determined based on the distance between the face features corresponding to the face images; the distance between face features includes a cosine distance, a Euclidean distance, and the like, and this embodiment does not limit the specific method of calculating the distance between features.
- the deduplication unit 23 is configured to determine whether to perform a deduplication operation on the second face image according to the matching result.
- when the filtered face image corresponds to a pre-stored face image, the face image is a repeated image, indicating that a face image of the corresponding person has already been filtered and processed; at this time, the face image can be discarded, or it can be used to replace the face image corresponding to that person in the image queue. When the filtered face image does not correspond to any pre-stored face image, the face image is not a repeated image, indicating that the person corresponding to the face image is new, and the image needs to be stored in the queue for subsequent identification.
- the face image deduplication apparatus provided by the above embodiment of the present application implements quality-based filtering, greatly reducing the number of face images while ensuring that the obtained face image quality satisfies the subsequent processing requirements, and avoids repeatedly processing a large number of face images; whether a face image has already been stored is determined according to the known image queue, realizing faster repeated-face recognition.
- the filtering unit 21 includes:
- the attribute filtering module is configured to filter the obtained first face images based on the face attributes corresponding to the first face image.
- the face attribute is used to represent the display quality of the face in the face image.
- the face attribute corresponding to the first face image is used to indicate the display quality of the face in the first face image;
- the face attributes include, but are not limited to, one or more of the following: face angle, face width and height values, and face blur degree. More specifically, the face angle may include, but is not limited to: a horizontal rotation angle (yaw), used to indicate the rotation of the face in the horizontal direction; a pitch angle (pitch), used to indicate the rotation of the face in the vertical direction; and a tilt angle (roll), used to indicate the in-plane deflection of the face.
- an angle filtering module configured to filter the obtained plurality of first face images based on a face angle in the first face image.
- the face angle is used to represent the deflection angle of the face in the face image.
- the face angle in the first face image is used to represent the deflection angle of the face in the first face image relative to a standard frontal face.
- a standard frontal face is a face whose deflection angles in the horizontal, vertical, and tilt directions are all 0; it can be taken as the origin for calculating the deflection angle of a face.
- a frame selection module may be further included, configured to perform a filtering operation on the multi-frame face image obtained from the video stream.
- by filtering the multi-frame face images in the video stream, frame selection from the video stream based on face images is achieved, and the face images in the video frames obtained by frame selection meet the first preset condition.
- the face attributes include, but are not limited to, one or more of the following: face angle, face width and height value, face ambiguity;
- the attribute filtering module is configured to determine, in response to a first condition being satisfied, that the image quality of the first face image reaches the first preset condition, wherein the first condition includes at least one of the following: the face angle is within a first preset range, the face width and height values are greater than a second preset threshold, and the face blur degree is less than a third preset threshold.
- the attribute filtering module is further configured to delete the face image when the face angle is not within the first preset range, the face width and height values are less than or equal to the second preset threshold, and/or the face blur degree is greater than or equal to the third preset threshold.
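A sketch of this attribute check; the concrete thresholds are illustrative placeholders for the preset values, and the three sub-conditions are combined conjunctively here to match the deletion rule above, which removes an image if any sub-condition fails:

```python
def passes_first_condition(face, angle_range=(-30, 30),
                           min_size=64, max_blur=0.5):
    """Return True when the face attributes meet the first preset condition:
    angle within the preset range, width/height above the size threshold,
    and blur degree below the blur threshold."""
    angle_ok = angle_range[0] <= face["yaw"] <= angle_range[1]
    size_ok = face["width"] > min_size and face["height"] > min_size
    blur_ok = face["blur"] < max_blur
    return angle_ok and size_ok and blur_ok
```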
- the filtering unit may be configured to identify at least one first face image corresponding to the same person from the plurality of first face images, and to filter the at least one first face image corresponding to the same person to obtain a second face image, among the at least one first face image, whose quality reaches the first preset condition.
- the face image in the face trajectory is filtered based on the face angle corresponding to the face image, and the face image in the face trajectory whose quality reaches the first preset condition is obtained.
- when filtering the at least one first face image corresponding to the same person to obtain the second face image whose quality reaches the first preset condition, the filtering unit is configured to filter the at least one first face image corresponding to the same person based on the face angle corresponding to the first face image, to obtain the second face image whose quality reaches the first preset condition.
- the face angle includes, but is not limited to, one or more of the following: a face horizontal rotation angle (yaw), a face pitch angle, and a face tilt angle (roll).
- the filtering unit comprises:
- an angle conversion module, configured to convert the face horizontal rotation angle, face pitch angle, and face tilt angle corresponding to the first face image into a three-dimensional vector;
- a vector filtering module, configured to filter the at least one first face image corresponding to the same person based on the distance from the three-dimensional vector to the source point, to obtain a second face image whose quality reaches the first preset condition;
- the source point is a three-dimensional vector whose values are all zero.
- when identifying at least one first face image corresponding to the same person from the plurality of first face images, the filtering unit is configured to identify, from the plurality of first face images, at least one first face image corresponding to the same person within the first set duration;
- the vector filtering module is configured to determine, as the second face image, the first face image whose three-dimensional vector has the smallest distance to the source point among the at least one first face image.
- the face trajectory further includes a time stamp corresponding to the face image, and the time stamp corresponds to a time when the face image starts to perform the filtering operation;
- the vector filtering module is configured to obtain, based on the distance from the three-dimensional vector to the source point, the face images in the face trajectory whose corresponding distance within the first set duration is less than a preset threshold, and to save the face images whose corresponding distance is less than the preset threshold.
- the matching unit 22 includes:
- a similarity module, configured to obtain the similarity between the second face image and the at least one third face image in the image queue according to the second face feature corresponding to the second face image and the third face feature corresponding to the at least one third face image in the image queue;
- the result matching module is configured to obtain a matching result based on the similarity between the second face image and the at least one third face image in the image queue.
- the result matching module is configured to: in response to there being, in the image queue, a third face image whose similarity to the second face image is greater than or equal to a preset similarity, obtain a matching result indicating that the second face image has a matching image in the image queue; and/or
- in response to there being no third face image in the image queue whose similarity to the second face image is greater than or equal to the preset similarity, obtain a matching result indicating that the second face image does not have a matching image in the image queue.
- to implement face deduplication, the obtained good-quality face image is compared with the face images in the existing image queue, which may be done based on face features; the face features of an image can be obtained through a neural network, and the image queue may store the face images alone, or the face images together with their corresponding face features.
- the face features corresponding to the pre-stored face images are obtained through the neural network.
- the similarity module is configured to respectively determine the distance between the second face feature corresponding to the at least one second face image and the pre-stored face feature corresponding to each third face image in the at least one third face image in the image queue, and to obtain, based on the distance, the similarity between the second face image and each third face image in the at least one third face image in the image queue.
- the deduplication unit 23 is configured to: in response to the matching result indicating that the second face image has a matching image in the image queue, determine that the second face image is a repeated image, and/or not store the second face image in the image queue.
- the deduplication unit 23 is further configured to: in response to the matching result indicating that the second face image does not have a matching image in the image queue, determine that the second face image is not a repeated image, and/or store the second face image in the image queue.
- a further embodiment of the face image deduplication apparatus of the present application further includes:
- an image acquiring unit configured to obtain a plurality of first face images based on the at least one frame of the video image.
- the face images on which the face image deduplication method needs to be performed are typically large in number, for example face images obtained from a plurality of video frames extracted from a video, or captured directly from the network.
- a frame extraction module, configured to acquire at least one frame of video image including a face image from the video stream;
- the segmentation module is configured to perform face recognition processing on at least one frame of the video image to obtain a plurality of first face images.
- the image obtaining unit further includes:
- the face acquisition module is configured to acquire at least one face image having a set size in the video image.
- the image obtaining unit further includes:
- a trajectory establishing module configured to establish at least one facial trajectory based on the obtained plurality of first facial images, where each facial trajectory corresponds to one person.
- the filtering unit establishes at least one facial trajectory based on the obtained plurality of first facial images, and each facial trajectory corresponds to one person.
- in a specific example of the above embodiments of the face image deduplication apparatus of the present application, the image acquisition unit can also be used to obtain the target face image corresponding to a face trajectory based on a filtering operation and/or a deduplication operation on each face image in the face trajectory, and to perform attribute detection and face comparison based on the target face image.
- the device of the embodiment is applied to a client
- a sending unit configured to send the target face image or the image queue obtained by the deduplication operation to the server.
- the server may include a local server and/or a cloud server.
- an electronic device includes a processor, where the processor includes the face image deduplication device of any of the above embodiments of the present application.
- an electronic device includes: a memory, configured to store executable instructions;
- a processor, configured to communicate with the memory to execute the executable instructions so as to perform the operations of any of the above embodiments of the face image deduplication method of the present application.
- a computer storage medium, configured to store computer-readable instructions which, when executed, perform the operations of any one of the embodiments of the face image deduplication method of the present application.
- a computer program includes computer-readable code; when the computer-readable code runs on a device, a processor in the device performs the operations for implementing the face image deduplication method of the present application.
- the embodiment of the present application further provides an electronic device, such as a mobile terminal, a personal computer (PC), a tablet computer, a server, and the like.
- an electronic device such as a mobile terminal, a personal computer (PC), a tablet computer, a server, and the like.
- FIG. 3 shows a schematic structural diagram of an electronic device 300 suitable for implementing a terminal device or a server of an embodiment of the present application.
- the electronic device 300 includes one or more processors and a communication unit.
- the one or more processors are, for example, one or more central processing units (CPUs) 301 and/or one or more graphics processors (GPUs) 313, etc.; the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 302, or executable instructions loaded from a storage section 308 into a random access memory (RAM) 303.
- the communication unit 312 can include, but is not limited to, a network card, which can include, but is not limited to, an IB (Infiniband) network card.
- the processor can communicate with the read-only memory 302 and/or the random access memory 303 to execute executable instructions, connect to the communication unit 312 via the bus 304, and communicate with other target devices via the communication unit 312, thereby completing the operations corresponding to any one of the methods provided by the embodiments of the present application, for example: performing a filtering operation on the obtained plurality of face images to obtain at least one face image whose image quality reaches a first preset condition; matching each face image in the at least one face image with at least one face image pre-stored in the image queue to obtain a matching result; and determining whether to perform a deduplication operation on the face image according to the matching result.
- in addition, the RAM 303 can store various programs and data required for the operation of the device.
- the CPU 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
- ROM 302 is an optional module.
- the RAM 303 stores executable instructions, or writes executable instructions to the ROM 302 at runtime, and the executable instructions cause the central processing unit (CPU) 301 to perform operations corresponding to the above-described communication methods.
- An input/output (I/O) interface 305 is also coupled to bus 304.
- the communication unit 312 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) that are respectively connected to the bus link.
- the following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, etc.; an output section 307 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), etc.; a storage section 308 including a hard disk, etc.; and a communication section 309 including a network interface card such as a LAN card or a modem. The communication section 309 performs communication processing via a network such as the Internet.
- a drive 310 is also connected to the I/O interface 305 as needed.
- a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 310 as needed so that a computer program read therefrom is installed into the storage portion 308 as needed.
- FIG. 3 is only an optional implementation manner.
- the number and types of the components in FIG. 3 may be selected, deleted, added, or replaced according to actual needs; different functional components may also be arranged separately or in an integrated manner.
- the GPU 313 and the CPU 301 may be separately configured or the GPU 313 may be integrated on the CPU 301.
- the communication unit may be separately arranged, or may be integrated on the CPU 301 or the GPU 313, and so on.
- an embodiment of the present application includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium; the computer program comprises program code for executing the method illustrated in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: performing a filtering operation on the obtained plurality of face images to obtain at least one face image whose image quality reaches a first preset condition; matching each face image in the at least one face image with at least one face image pre-stored in the image queue to obtain a matching result; and determining whether to perform a deduplication operation on the face image according to the matching result.
- the computer program can be downloaded and installed from the network via the communication portion 309, and/or installed from the removable medium 311.
- when the computer program is executed by the central processing unit (CPU) 301, the above-described functions defined in the method of the present application are performed.
- the methods and apparatus of the present application may be implemented in a number of ways.
- the methods and apparatus of the present application can be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
- the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present application are not limited to the order specifically described above unless otherwise specifically stated.
- the present application can also be implemented as a program recorded in a recording medium, the programs including machine readable instructions for implementing the method according to the present application.
- the present application also covers a recording medium storing a program for executing the method according to the present application.
Claims (42)
- A face image deduplication method, characterized by comprising: performing a filtering operation on a plurality of obtained first face images to obtain at least one second face image whose image quality reaches a first preset condition; matching the second face image with at least one third face image in an image queue to obtain a matching result; and determining, according to the matching result, whether to perform a deduplication operation on the second face image.
- The method according to claim 1, characterized in that the image queue comprises at least one third face image respectively corresponding to different persons.
- The method according to claim 1 or 2, characterized in that performing the filtering operation on the plurality of obtained first face images comprises: filtering the plurality of obtained first face images based on face attributes corresponding to the first face images, where the face attribute corresponding to a first face image is used to indicate the display quality of the face in the first face image; and/or filtering the plurality of obtained first face images based on face angles in the first face images, where the face angle in a first face image is used to indicate the deflection angle of the face in the first face image.
- The method according to claim 3, characterized in that the face attributes comprise one or more of the following: face angle, face width and height values, and face blur degree; and filtering the plurality of obtained first face images based on the face attributes corresponding to the first face images comprises: in response to a first condition being satisfied, determining that the image quality of the first face image reaches the first preset condition, where the first condition comprises at least one of the following: the face angle is within a first preset range, the face width and height values are greater than a second preset threshold, and the face blur degree is less than a third preset threshold.
- The method according to any one of claims 1-4, characterized in that performing the filtering operation on the plurality of obtained first face images to obtain at least one second face image whose image quality reaches the first preset condition comprises: identifying, from the plurality of first face images, at least one first face image corresponding to a same person; and filtering the at least one first face image corresponding to the same person to obtain, from the at least one first face image, a second face image whose quality reaches the first preset condition.
- The method according to claim 5, characterized in that filtering the at least one first face image corresponding to the same person to obtain, from the at least one first face image, a second face image whose quality reaches the first preset condition comprises: filtering the at least one first face image corresponding to the same person based on the face angles corresponding to the first face images to obtain a second face image whose quality reaches the first preset condition.
- The method according to any one of claims 3-6, characterized in that the face angle comprises one or more of the following: face yaw angle, face pitch angle, and face roll angle.
- The method according to claim 7, characterized in that filtering the at least one first face image corresponding to the same person based on the face angles corresponding to the first face images to obtain a second face image whose quality reaches the first preset condition comprises: converting the face yaw angle, face pitch angle, and face roll angle corresponding to a first face image into a three-dimensional vector; and filtering the at least one first face image corresponding to the same person based on the distance from the three-dimensional vector to an origin point to obtain a second face image whose quality reaches the first preset condition, where the origin point is a three-dimensional vector whose values are all 0.
- The method according to claim 8, characterized in that identifying, from the plurality of first face images, at least one first face image corresponding to a same person comprises: identifying, from the plurality of first face images, at least one first face image corresponding to a same person within a first set duration; and filtering the at least one first face image corresponding to the same person based on the distance from the three-dimensional vector to the origin point to obtain a second face image whose quality reaches the first preset condition comprises: determining, as the second face image, the first face image whose three-dimensional vector has the smallest distance to the origin point among the at least one first face image.
- The method according to any one of claims 1-9, characterized in that matching the second face image with at least one third face image in the image queue to obtain a matching result comprises: obtaining similarities between the second face image and the at least one third face image in the image queue based on a second face feature corresponding to the second face image and third face features corresponding to the at least one third face image in the image queue; and obtaining the matching result based on the similarities between the second face image and the at least one third face image in the image queue.
- The method according to claim 10, characterized in that obtaining the matching result based on the similarities between the second face image and the at least one third face image in the image queue comprises: in response to a third face image whose similarity to the second face image is greater than or equal to a preset similarity existing in the image queue, obtaining a matching result indicating that a matching image of the second face image exists in the image queue; and/or, in response to no third face image whose similarity to the second face image is greater than or equal to the preset similarity existing in the image queue, obtaining a matching result indicating that no matching image of the second face image exists in the image queue.
- The method according to claim 11, characterized in that obtaining the similarities between the second face image and the at least one third face image in the image queue based on the second face feature corresponding to the at least one second face image and the pre-stored face features corresponding to the at least one third face image in the image queue comprises: respectively determining distances between the second face feature corresponding to the at least one second face image and the third face feature corresponding to each third face image of the at least one third face image in the image queue; and obtaining, based on the distances, the similarity between the second face image and each third face image of the at least one third face image in the image queue.
- The method according to any one of claims 1-12, characterized in that determining, according to the matching result, whether to perform a deduplication operation on the second face image comprises: in response to the matching result indicating that a matching image of the second face image exists in the image queue, determining that the second face image is a duplicate image, and/or not storing the second face image in the image queue.
- The method according to any one of claims 1-13, characterized in that determining, according to the matching result, whether to perform a deduplication operation on the second face image comprises: in response to the matching result indicating that no matching image of the second face image exists in the image queue, determining that the second face image is not a duplicate image, and/or storing the second face image in the image queue.
- The method according to any one of claims 1-14, characterized in that, before performing the filtering operation on the plurality of obtained first face images, the method further comprises: obtaining the plurality of first face images based on at least one frame of video image.
- The method according to claim 15, characterized in that obtaining the plurality of first face images based on at least one frame of video image comprises: acquiring, from a video stream, at least one frame of video image that includes a face image; and performing face recognition processing on the at least one frame of video image to obtain the plurality of first face images.
- The method according to claim 16, characterized in that, before performing face recognition processing on the at least one frame of video image, the method further comprises: acquiring at least one face image of a set size from the video image.
- The method according to any one of claims 15-17, characterized by further comprising: establishing at least one face trajectory based on the plurality of obtained first face images, each face trajectory corresponding to one person; and in that performing the filtering operation on the plurality of obtained first face images to obtain at least one second face image whose image quality reaches the first preset condition comprises: performing the filtering operation on the at least one first face image included in each face trajectory of the at least one face trajectory to obtain, for each face trajectory, one second face image whose image quality reaches the first preset condition.
- The method according to any one of claims 1-18, characterized in that the method is applied to a client, and the method further comprises: sending the target face image or the image queue obtained by the deduplication operation to a server.
- A face image deduplication apparatus, characterized by comprising: a filtering unit configured to perform a filtering operation on a plurality of obtained first face images to obtain at least one second face image whose image quality reaches a first preset condition; a matching unit configured to match the second face image with at least one third face image in an image queue to obtain a matching result; and a deduplication unit configured to determine, according to the matching result, whether to perform a deduplication operation on the second face image.
- The apparatus according to claim 20, characterized in that the image queue comprises at least one third face image respectively corresponding to different persons.
- The apparatus according to claim 20 or 21, characterized in that the filtering unit comprises: an attribute filtering module configured to filter the plurality of obtained first face images based on face attributes corresponding to the first face images, where the face attribute corresponding to a first face image is used to indicate the display quality of the face in the first face image; and/or an angle filtering module configured to filter the plurality of obtained first face images based on face angles in the first face images, where the face angle in a first face image is used to indicate the deflection angle of the face in the first face image.
- The apparatus according to claim 22, characterized in that the face attributes comprise one or more of the following: face angle, face width and height values, and face blur degree; and the filtering unit is specifically configured to, in response to a first condition being satisfied, determine that the image quality of the first face image reaches the first preset condition, where the first condition comprises at least one of the following: the face angle is within a first preset range, the face width and height values are greater than a second preset threshold, and/or the face blur degree is less than a third preset threshold.
- The apparatus according to any one of claims 20-23, characterized in that the filtering unit is configured to: identify, from the plurality of first face images, at least one first face image corresponding to a same person; and filter the at least one first face image corresponding to the same person to obtain, from the at least one first face image, a second face image whose quality reaches the first preset condition.
- The apparatus according to claim 24, characterized in that, when filtering the at least one first face image corresponding to the same person to obtain, from the at least one first face image, a second face image whose quality reaches the first preset condition, the filtering unit is configured to filter the at least one first face image corresponding to the same person based on the face angles corresponding to the first face images to obtain a second face image whose quality reaches the first preset condition.
- The apparatus according to any one of claims 22-25, characterized in that the face angle comprises one or more of the following: face yaw angle, face pitch angle, and face roll angle.
- The apparatus according to claim 26, characterized in that the filtering unit comprises: an angle conversion module configured to convert the face yaw angle, face pitch angle, and face roll angle corresponding to a first face image into a three-dimensional vector; and a vector filtering module configured to filter the at least one first face image corresponding to the same person based on the distance from the three-dimensional vector to an origin point to obtain a second face image whose quality reaches the first preset condition, where the origin point is a three-dimensional vector whose values are all 0.
- The apparatus according to claim 27, characterized in that, when identifying, from the plurality of first face images, at least one first face image corresponding to a same person, the filtering unit is configured to identify, from the plurality of first face images, at least one first face image corresponding to a same person within a first set duration; and the vector filtering module is configured to determine, as the second face image, the first face image whose three-dimensional vector has the smallest distance to the origin point among the at least one first face image.
- The apparatus according to any one of claims 20-28, characterized in that the matching unit comprises: a similarity module configured to obtain similarities between the second face image and the at least one third face image in the image queue based on a second face feature corresponding to the second face image and third face features corresponding to the at least one third face image in the image queue; and a result matching module configured to obtain the matching result based on the similarities between the second face image and the at least one third face image in the image queue.
- The apparatus according to claim 29, characterized in that the result matching module is configured to: in response to a third face image whose similarity to the second face image is greater than or equal to a preset similarity existing in the image queue, obtain a matching result indicating that a matching image of the second face image exists in the image queue; and/or, in response to no third face image whose similarity to the second face image is greater than or equal to the preset similarity existing in the image queue, obtain a matching result indicating that no matching image of the second face image exists in the image queue.
- The apparatus according to claim 30, characterized in that the similarity module is specifically configured to: respectively determine distances between the second face feature corresponding to the at least one second face image and the pre-stored face feature corresponding to each third face image of the at least one third face image in the image queue; and obtain, based on the distances, the similarity between the second face image and each third face image of the at least one third face image in the image queue.
- The apparatus according to any one of claims 20-31, characterized in that the deduplication unit is configured to, in response to the matching result indicating that a matching image of the second face image exists in the image queue, determine that the second face image is a duplicate image, and/or not store the second face image in the image queue.
- The apparatus according to any one of claims 20-32, characterized in that the deduplication unit is further configured to, in response to the matching result indicating that no matching image of the second face image exists in the image queue, determine that the second face image is not a duplicate image, and/or store the second face image in the image queue.
- The apparatus according to any one of claims 20-33, characterized by further comprising: an image acquisition unit configured to obtain the plurality of first face images based on at least one frame of video image.
- The apparatus according to claim 34, characterized in that the image acquisition unit comprises: a frame extraction module configured to acquire, from a video stream, at least one frame of video image that includes a face image; and a recognition and segmentation module configured to perform face recognition processing on the at least one frame of video image to obtain the plurality of first face images.
- The apparatus according to claim 35, characterized in that the image acquisition unit further comprises: a face acquisition module configured to acquire at least one face image of a set size from the video image.
- The apparatus according to any one of claims 34-36, characterized in that the image acquisition unit further comprises: a trajectory establishing module configured to establish at least one face trajectory based on the plurality of obtained first face images, each face trajectory corresponding to one person; and the filtering unit is configured to perform the filtering operation on the at least one first face image included in each face trajectory of the at least one face trajectory to obtain, for each face trajectory, one second face image whose image quality reaches the first preset condition.
- The apparatus according to any one of claims 20-37, characterized in that the apparatus is applied to a client, and the apparatus further comprises: a sending unit configured to send the target face image or the image queue obtained by the deduplication operation to a server.
- An electronic device, characterized by comprising a processor, where the processor includes the face image deduplication apparatus according to any one of claims 20 to 38.
- An electronic device, characterized by comprising: a memory configured to store executable instructions; and a processor configured to communicate with the memory to execute the executable instructions so as to complete the face image deduplication method according to any one of claims 1 to 19.
- A computer storage medium for storing computer-readable instructions, characterized in that, when the instructions are executed, the operations of the face image deduplication method according to any one of claims 1 to 19 are performed.
- A computer program comprising computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the face image deduplication method according to any one of claims 1 to 19.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020197029413A KR102349980B1 (ko) | 2017-07-21 | 2018-07-20 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
SG11201909069Q SG11201909069QA (en) | 2017-07-21 | 2018-07-20 | Methods and apparatuses for face image deduplication, electronic devices, storage media, and programs |
JP2019553912A JP6916895B2 (ja) | 2017-07-21 | 2018-07-20 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
CN201880018965.XA CN110869937A (zh) | 2017-07-21 | 2018-07-20 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
US16/412,854 US11132581B2 (en) | 2017-07-21 | 2019-05-15 | Method and apparatus for face image deduplication and storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710605539.3 | 2017-07-21 | ||
CN201710605539 | 2017-07-21 | ||
CN201810041797.8A CN108228872A (zh) | 2017-07-21 | 2018-01-16 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
CN201810041797.8 | 2018-01-16 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/412,854 Continuation US11132581B2 (en) | 2017-07-21 | 2019-05-15 | Method and apparatus for face image deduplication and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019015684A1 true WO2019015684A1 (zh) | 2019-01-24 |
Family
ID=62640576
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/096540 WO2019015682A1 (zh) | 2017-07-21 | 2018-07-20 | Face image dynamic database entry method and apparatus, electronic device, medium, and program |
PCT/CN2018/096542 WO2019015684A1 (zh) | 2017-07-21 | 2018-07-20 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/096540 WO2019015682A1 (zh) | 2017-07-21 | 2018-07-20 | Face image dynamic database entry method and apparatus, electronic device, medium, and program |
Country Status (6)
Country | Link |
---|---|
US (2) | US11132581B2 (zh) |
JP (2) | JP6916895B2 (zh) |
KR (1) | KR102349980B1 (zh) |
CN (4) | CN108228872A (zh) |
SG (2) | SG11201909069QA (zh) |
WO (2) | WO2019015682A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695643A (zh) * | 2020-06-24 | 2020-09-22 | 北京金山云网络技术有限公司 | Image processing method and apparatus, and electronic device |
CN112036957A (zh) * | 2020-09-08 | 2020-12-04 | 广州图普网络科技有限公司 | Method, apparatus, electronic device and storage medium for determining the number of retained visitors |
CN116521046A (zh) * | 2023-04-23 | 2023-08-01 | 西北核技术研究所 | Control method and system for the situation backtracking function of a traffic situation system |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108228872A (zh) * | 2017-07-21 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
CN108491822B (zh) * | 2018-04-02 | 2020-09-08 | 杭州高创电子科技有限公司 | Face detection deduplication method based on the limited cache of an embedded device |
CN109241310B (zh) * | 2018-07-25 | 2020-05-01 | 南京甄视智能科技有限公司 | Data deduplication method and system for a face image database |
CN109190532A (zh) * | 2018-08-21 | 2019-01-11 | 北京深瞐科技有限公司 | Face recognition method, apparatus and system based on cloud-edge fusion |
CN109271923A (zh) * | 2018-09-14 | 2019-01-25 | 曜科智能科技(上海)有限公司 | Face pose detection method, system, electronic terminal and storage medium |
CN109902550A (zh) * | 2018-11-08 | 2019-06-18 | 阿里巴巴集团控股有限公司 | Pedestrian attribute recognition method and apparatus |
CN109635142B (zh) * | 2018-11-15 | 2022-05-03 | 北京市商汤科技开发有限公司 | Image selection method and apparatus, electronic device and storage medium |
CN109711287B (zh) * | 2018-12-12 | 2020-11-24 | 深圳云天励飞技术有限公司 | Face collection method and related products |
CN109658572B (zh) * | 2018-12-21 | 2020-09-15 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device and storage medium |
CN109858371B (zh) * | 2018-12-29 | 2021-03-05 | 深圳云天励飞技术有限公司 | Face recognition method and apparatus |
CN111582894A (zh) * | 2019-02-15 | 2020-08-25 | 普罗文化股份有限公司 | Crowd spatial behavior analysis system |
CN109977823B (zh) * | 2019-03-15 | 2021-05-14 | 百度在线网络技术(北京)有限公司 | Pedestrian recognition and tracking method and apparatus, computer device and storage medium |
CN110084130B (zh) * | 2019-04-03 | 2023-07-25 | 深圳鲲云信息科技有限公司 | Face screening method, apparatus, device and storage medium based on multi-target tracking |
CN110263830B (zh) * | 2019-06-06 | 2021-06-08 | 北京旷视科技有限公司 | Image processing method, apparatus and system, and storage medium |
CN110321843B (zh) * | 2019-07-04 | 2021-11-09 | 杭州视洞科技有限公司 | Deep-learning-based face selection method |
CN110929605A (zh) * | 2019-11-11 | 2020-03-27 | 中国建设银行股份有限公司 | Method, apparatus, device and storage medium for saving video key frames |
KR102114267B1 (ko) * | 2019-12-10 | 2020-05-22 | 셀렉트스타 주식회사 | Method for filtering similar text based on deep learning and apparatus using the same |
KR102114223B1 (ko) | 2019-12-10 | 2020-05-22 | 셀렉트스타 주식회사 | Method for filtering similar images based on deep learning and apparatus using the same |
CN113095110B (zh) * | 2019-12-23 | 2024-03-08 | 浙江宇视科技有限公司 | Method, apparatus, medium and electronic device for dynamically adding face data to a database |
CN111160200B (zh) * | 2019-12-23 | 2023-06-16 | 浙江大华技术股份有限公司 | Method and apparatus for establishing a passer-by database |
CN113128293A (zh) * | 2019-12-31 | 2021-07-16 | 杭州海康威视数字技术股份有限公司 | Image processing method and apparatus, electronic device and storage medium |
US11687778B2 (en) | 2020-01-06 | 2023-06-27 | The Research Foundation For The State University Of New York | Fakecatcher: detection of synthetic portrait videos using biological signals |
CN111476105A (zh) * | 2020-03-17 | 2020-07-31 | 深圳力维智联技术有限公司 | Face data cleaning method, apparatus and device |
CN111488476B (zh) * | 2020-04-03 | 2023-06-27 | 北京爱芯科技有限公司 | Image pushing method, model training method and corresponding apparatuses |
CN111625745B (zh) * | 2020-05-27 | 2023-12-26 | 抖音视界有限公司 | Recommendation method and apparatus, electronic device and computer-readable medium |
CN111985348B (zh) * | 2020-07-29 | 2024-05-10 | 深思考人工智能科技(上海)有限公司 | Face recognition method and system |
CN112052347B (zh) * | 2020-10-09 | 2024-06-04 | 北京百度网讯科技有限公司 | Image storage method and apparatus, and electronic device |
CN112148907A (zh) * | 2020-10-23 | 2020-12-29 | 北京百度网讯科技有限公司 | Image database updating method and apparatus, electronic device and medium |
CN112836660B (zh) * | 2021-02-08 | 2024-05-28 | 上海卓繁信息技术股份有限公司 | Face database generation method and apparatus for the surveillance field, and electronic device |
US11921831B2 (en) * | 2021-03-12 | 2024-03-05 | Intellivision Technologies Corp | Enrollment system with continuous learning and confirmation |
CN113297420A (zh) * | 2021-04-30 | 2021-08-24 | 百果园技术(新加坡)有限公司 | Video image processing method and apparatus, storage medium and electronic device |
CN113344132A (zh) * | 2021-06-30 | 2021-09-03 | 成都商汤科技有限公司 | Identity recognition method, system and apparatus, computer device and storage medium |
CN113591620A (zh) * | 2021-07-15 | 2021-11-02 | 北京广亿兴业科技发展有限公司 | Early-warning method, apparatus and system based on an integrated mobile collection device |
CN114140861A (zh) * | 2021-12-13 | 2022-03-04 | 中电云数智科技有限公司 | Face detection deduplication method and apparatus |
CN114639143B (zh) * | 2022-03-07 | 2024-04-16 | 北京百度网讯科技有限公司 | Artificial-intelligence-based portrait archiving method, device and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793697A (zh) * | 2014-02-17 | 2014-05-14 | 北京旷视科技有限公司 | Identity labeling method for face images and face identity recognition method |
CN103824053A (zh) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Gender labeling method for face images and face gender detection method |
CN103984738A (zh) * | 2014-05-22 | 2014-08-13 | 中国科学院自动化研究所 | Character labeling method based on search matching |
CN105243373A (zh) * | 2015-10-27 | 2016-01-13 | 北京奇虎科技有限公司 | Face image deduplication snapshot method, server, intelligent monitoring device, and system |
CN106570465A (zh) * | 2016-10-31 | 2017-04-19 | 深圳云天励飞技术有限公司 | Image-recognition-based people flow statistics method and apparatus |
CN108228872A (zh) * | 2017-07-21 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6549914B1 (en) * | 2000-05-17 | 2003-04-15 | Dell Products, L.P. | System and method for statistical file preload for factory installed software in a computer |
JP2007102341A (ja) * | 2005-09-30 | 2007-04-19 | Fujifilm Corp | Automatic counting apparatus |
JP2007102342A (ja) * | 2005-09-30 | 2007-04-19 | Fujifilm Corp | Automatic counting apparatus |
US7751597B2 (en) | 2006-11-14 | 2010-07-06 | Lctank Llc | Apparatus and method for identifying a name corresponding to a face or voice using a database |
JP5010905B2 (ja) * | 2006-12-13 | 2012-08-29 | パナソニック株式会社 | Face authentication apparatus |
JP4577410B2 (ja) * | 2008-06-18 | 2010-11-10 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
JP4753193B2 (ja) * | 2008-07-31 | 2011-08-24 | 九州日本電気ソフトウェア株式会社 | Flow line management system and program |
JP4636190B2 (ja) * | 2009-03-13 | 2011-02-23 | オムロン株式会社 | Face matching apparatus, electronic device, face matching apparatus control method, and face matching apparatus control program |
US8705813B2 (en) * | 2010-06-21 | 2014-04-22 | Canon Kabushiki Kaisha | Identification device, identification method, and storage medium |
KR101180471B1 (ko) * | 2011-09-27 | 2012-09-07 | (주)올라웍스 | Method, apparatus and computer-readable recording medium for managing a reference face database to improve face recognition performance in a limited memory environment |
US20150088625A1 (en) | 2012-01-30 | 2015-03-26 | Nokia Corporation | Method, an apparatus and a computer program for promoting the apparatus |
AU2013200450B2 (en) * | 2012-01-30 | 2014-10-02 | Accenture Global Services Limited | System and method for face capture and matching |
CN102629940A (zh) | 2012-03-19 | 2012-08-08 | 天津书生投资有限公司 | Storage method, system, and apparatus |
US20140075193A1 (en) | 2012-03-19 | 2014-03-13 | Donglin Wang | Storage method |
US9384518B2 (en) * | 2012-03-26 | 2016-07-05 | Amerasia International Technology, Inc. | Biometric registration and verification system and method |
ITVI20120104A1 (it) * | 2012-05-03 | 2013-11-04 | St Microelectronics Srl | Method and apparatus for generating a visual storyboard in real time |
CN103514432B (zh) * | 2012-06-25 | 2017-09-01 | 诺基亚技术有限公司 | Face feature extraction method, device, and computer program product |
CN102799877A (zh) * | 2012-09-11 | 2012-11-28 | 上海中原电子技术工程有限公司 | Face image screening method and system |
CN102880726B (zh) * | 2012-10-23 | 2015-08-05 | 深圳市宜搜科技发展有限公司 | Image filtering method and system |
US9116924B2 (en) | 2013-01-14 | 2015-08-25 | Xerox Corporation | System and method for image selection using multivariate time series analysis |
US9690978B2 (en) | 2013-09-13 | 2017-06-27 | Nec Hong Kong Limited | Information processing apparatus, information processing and program |
US10083368B2 (en) * | 2014-01-28 | 2018-09-25 | Qualcomm Incorporated | Incremental learning for dynamic feature database management in an object recognition system |
CN104166694B (zh) * | 2014-07-31 | 2018-12-14 | 联想(北京)有限公司 | Image classification storage method and electronic device |
CN104679913B (zh) * | 2015-03-25 | 2018-05-29 | 广东欧珀移动通信有限公司 | Image storage method and apparatus |
CN104915114B (zh) * | 2015-05-29 | 2018-10-19 | 小米科技有限责任公司 | Information recording method and apparatus, and intelligent terminal |
CN106326816A (zh) | 2015-06-30 | 2017-01-11 | 芋头科技(杭州)有限公司 | Facial recognition system and facial recognition method |
JP6006841B2 (ja) * | 2015-07-08 | 2016-10-12 | オリンパス株式会社 | Image handling apparatus, image handling method, and program |
CN105138962A (zh) * | 2015-07-28 | 2015-12-09 | 小米科技有限责任公司 | Image display method and apparatus |
CN105513101B (zh) * | 2015-12-03 | 2018-08-07 | 小米科技有限责任公司 | Picture processing method and apparatus |
CN105701466A (zh) * | 2016-01-13 | 2016-06-22 | 杭州奇客科技有限公司 | Fast all-angle face tracking method |
CN105760461A (zh) * | 2016-02-04 | 2016-07-13 | 上海卓易科技股份有限公司 | Automatic photo album creation method and apparatus |
CN106204779B (zh) * | 2016-06-30 | 2018-08-31 | 陕西师范大学 | Classroom attendance method based on a multi-face data collection strategy and deep learning |
CN106203333A (zh) * | 2016-07-08 | 2016-12-07 | 乐视控股(北京)有限公司 | Face recognition method and system |
CN106407916A (zh) * | 2016-08-31 | 2017-02-15 | 北京维盛视通科技有限公司 | Distributed face recognition method, apparatus, and system |
CN106570110B (zh) | 2016-10-25 | 2020-09-08 | 北京小米移动软件有限公司 | Image deduplication method and apparatus |
CN106657069A (zh) * | 2016-12-24 | 2017-05-10 | 深圳云天励飞技术有限公司 | Image data processing system |
CN106803067B (zh) * | 2016-12-28 | 2020-12-08 | 浙江大华技术股份有限公司 | Face image quality assessment method and apparatus |
- 2018
- 2018-01-16 CN CN201810041797.8A patent/CN108228872A/zh active Pending
- 2018-01-16 CN CN201810041796.3A patent/CN108228871A/zh active Pending
- 2018-07-20 JP JP2019553912A patent/JP6916895B2/ja active Active
- 2018-07-20 WO PCT/CN2018/096540 patent/WO2019015682A1/zh active Application Filing
- 2018-07-20 SG SG11201909069Q patent/SG11201909069QA/en unknown
- 2018-07-20 JP JP2019553920A patent/JP6896882B2/ja active Active
- 2018-07-20 CN CN201880018961.1A patent/CN110799972A/zh not_active Withdrawn
- 2018-07-20 KR KR1020197029413A patent/KR102349980B1/ko active IP Right Grant
- 2018-07-20 SG SG11201909068X patent/SG11201909068XA/en unknown
- 2018-07-20 CN CN201880018965.XA patent/CN110869937A/zh not_active Withdrawn
- 2018-07-20 WO PCT/CN2018/096542 patent/WO2019015684A1/zh active Application Filing
- 2019
- 2019-05-15 US US16/412,854 patent/US11132581B2/en active Active
- 2019-05-16 US US16/413,611 patent/US11409983B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695643A (zh) * | 2020-06-24 | 2020-09-22 | 北京金山云网络技术有限公司 | Image processing method and apparatus, and electronic device |
CN111695643B (zh) * | 2020-06-24 | 2023-07-25 | 北京金山云网络技术有限公司 | Image processing method and apparatus, and electronic device |
CN112036957A (zh) * | 2020-09-08 | 2020-12-04 | 广州图普网络科技有限公司 | Method, apparatus, electronic device and storage medium for determining the number of retained visitors |
CN112036957B (zh) * | 2020-09-08 | 2023-11-28 | 广州图普网络科技有限公司 | Method, apparatus, electronic device and storage medium for determining the number of retained visitors |
CN116521046A (zh) * | 2023-04-23 | 2023-08-01 | 西北核技术研究所 | Control method and system for the situation backtracking function of a traffic situation system |
Also Published As
Publication number | Publication date |
---|---|
US20190272415A1 (en) | 2019-09-05 |
KR20190125428A (ko) | 2019-11-06 |
US11409983B2 (en) | 2022-08-09 |
CN110869937A (zh) | 2020-03-06 |
CN108228872A (zh) | 2018-06-29 |
US20190266441A1 (en) | 2019-08-29 |
SG11201909069QA (en) | 2019-10-30 |
WO2019015682A1 (zh) | 2019-01-24 |
KR102349980B1 (ko) | 2022-01-11 |
JP6896882B2 (ja) | 2021-06-30 |
JP2020516188A (ja) | 2020-05-28 |
SG11201909068XA (en) | 2019-10-30 |
CN110799972A (zh) | 2020-02-14 |
JP2020512648A (ja) | 2020-04-23 |
JP6916895B2 (ja) | 2021-08-11 |
CN108228871A (zh) | 2018-06-29 |
US11132581B2 (en) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019015684A1 (zh) | Face image deduplication method and apparatus, electronic device, storage medium, and program | |
WO2019042230A1 (zh) | Face image retrieval method and system, photographing apparatus, and computer storage medium | |
WO2019100608A1 (zh) | Camera apparatus, face recognition method and system, and computer-readable storage medium | |
WO2020094091A1 (zh) | Image capture method, surveillance camera, and surveillance system | |
CN109299703B (zh) | Method and apparatus for rodent activity statistics, and image acquisition device | |
US8971591B2 (en) | 3D image estimation for 2D image recognition | |
CN112633384A (zh) | Object recognition method and apparatus based on an image recognition model, and electronic device | |
CN108229321A (zh) | Face recognition model and training method and apparatus therefor, device, program, and medium | |
WO2020094088A1 (zh) | Image capture method, surveillance camera, and surveillance system | |
US10997469B2 (en) | Method and system for facilitating improved training of a supervised machine learning process | |
CN111079670A (zh) | Face recognition method, apparatus, terminal, and medium | |
CN112669344A (zh) | Method and apparatus for locating a moving object, electronic device, and storage medium | |
CN112561879A (zh) | Blur evaluation model training method, and image blur evaluation method and apparatus | |
CN115761571A (zh) | Video-based target retrieval method, apparatus, device, and storage medium | |
CN112699270A (zh) | Cloud-computing-based surveillance security data transmission and storage method, system, electronic device, and computer storage medium | |
Gupta et al. | Reconnoitering the Essentials of Image and Video Processing: A Comprehensive Overview | |
WO2018095037A1 (zh) | Method and apparatus for acquiring data in a cloud storage system | |
CN112015951B (zh) | Video monitoring method, apparatus, electronic device, and computer-readable medium | |
CN110572618A (zh) | Method, apparatus, and system for monitoring illegal photographing behavior | |
EP2766850B1 (en) | Faceprint generation for image recognition | |
Moon et al. | Multiresolution face recognition through virtual faces generation using a single image for one person | |
US20130311461A1 (en) | System and method for searching raster data in the cloud | |
CN117079287A (zh) | Text recognition method, apparatus, device, and storage medium for task mining scenarios | |
CN118038312A (zh) | Video analysis method and system based on an edge computing device | |
WO2024085987A1 (en) | Reduced video stream resource usage |
Legal Events
Date | Code | Title | Description
---|---|---|---
| ENP | Entry into the national phase | Ref document number: 2019553912; Country of ref document: JP; Kind code of ref document: A
| ENP | Entry into the national phase | Ref document number: 20197029413; Country of ref document: KR; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/05/2020)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18835107; Country of ref document: EP; Kind code of ref document: A1