WO2020124984A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents
- Publication number: WO2020124984A1 (application PCT/CN2019/093388)
- Authority: WIPO (PCT)
Classifications
- G06V40/172: Human faces; classification, e.g. identification
- G06V40/173: Face re-identification, e.g. recognising unknown faces across different face tracks
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06F18/25: Pattern recognition; fusion techniques
- G06F18/254: Fusion of classification results, e.g. of results related to the same input data
- G06V10/763: Clustering; non-hierarchical techniques, e.g. based on statistics of modelling distributions
- G06V10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
- G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V40/16: Human faces, e.g. facial parts, sketches or expressions
- G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses
- G06V40/50: Maintenance of biometric data or enrolment thereof
- G07C9/00563: Electronically operated locks using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
- G07C9/37: Individual registration on entry or exit with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
- H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Definitions
- the present disclosure relates to the field of intelligent monitoring, and in particular to image processing methods and devices, electronic equipment, and storage media.
- Embodiments of the present disclosure provide an image processing method and apparatus, electronic device, and storage medium that can jointly determine the identity of an object to be verified in a corresponding area from image information collected by multiple camera modules, with high determination accuracy and a low false alarm rate.
- an image processing method which includes:
- the first image and the second image are used for joint verification, and the identity of the object to be verified is determined according to the second verification result of the joint verification.
- the target library includes a white/blacklist library
- the comparing the first image with the image data in the target library to perform identity verification to obtain the first verification result includes:
- the object to be verified corresponding to the first image is determined to be a blacklist object or a whitelist object.
- the target library includes a marked stranger library
- the comparison between the first image and the image data in the target library to perform identity verification to obtain the first verification result includes:
- the first verification result is that the verification succeeds, and the object to be verified corresponding to the first image is determined to be a marked stranger.
- the method when there is feature data matching the first feature data in the marked stranger library, the method further includes:
- the method further includes:
- the first image and the associated information of the first image are added to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: the time at which the first camera module collected the first image, the identification information of the first camera module, and the position information of the first camera module.
- the method before the first image and the second image are used for joint verification, the method further includes:
- the first image and the second image are used for joint verification, and the second verification result of the joint verification is used to determine the identity of the object to be verified, including:
- the determining the similarity of each image in the image set to other images includes:
- the similarity between each image and the remaining images is determined based on the sum value and the number of feature data in the image set.
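The similarity computation described above (a sum of products of each image's feature data with the feature data of all images in the set, then a similarity derived from that sum and the number of feature vectors) can be illustrated with a minimal Python sketch. The function name, the unit-length assumption on the feature vectors, and the treatment of the self-product are illustrative assumptions, not taken from the patent:

```python
def dot(a, b):
    # Inner product of two equal-length feature vectors.
    return sum(x * y for x, y in zip(a, b))

def per_image_similarity(features):
    """features: list of L2-normalised feature vectors for one image set
    (at least two). For each image, sum its inner products with every
    feature in the set, drop the self-product (1.0 for unit vectors),
    and average over the remaining count."""
    n = len(features)
    sims = []
    for f in features:
        total = sum(dot(f, g) for g in features)  # includes f itself
        sims.append((total - dot(f, f)) / (n - 1))
    return sims
```

With three toy 2-D "embeddings", two identical and one orthogonal, the two matching images score higher than the outlier, which is the behaviour the clustering step relies on.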
- the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within a second time range are clustered to obtain an image set for each object to be verified, including:
- the determining whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set includes at least one of the following ways:
- the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold
- the proportion of similarities, among the similarities corresponding to the images in the image set, that are greater than the second similarity threshold exceeds a preset ratio
- the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
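The three tests above can be sketched as follows. The claim says the determination may use "at least one of" these ways, so this sketch treats them as alternatives combined with a logical OR; all parameter names and threshold values in the usage are illustrative assumptions:

```python
def meets_second_condition(sims, t1, t2, t3, ratio):
    """sims: per-image similarities for one image set (non-empty).
    t1/t2/t3 are the first/second/third similarity thresholds from the
    text; ratio is the preset proportion. Any one passing test suffices
    (an interpretation of 'at least one of the following ways')."""
    cond_max = max(sims) > t1                                    # way 1
    cond_ratio = sum(1 for s in sims if s > t2) / len(sims) > ratio  # way 2
    cond_min = min(sims) > t3                                    # way 3
    return cond_max or cond_ratio or cond_min
```

For example, a set whose best pairwise similarity is 0.9 passes under a first threshold of 0.8 even if the other tests fail.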
- the first image and the second image are used for joint verification, and the second verification result of the joint verification is used to determine the identity of the object to be verified, further including:
- the determination that the object to be verified corresponding to the image set is a stranger when the image set meets the second preset condition includes:
- when the images corresponding to the feature data in the feature data set are images collected by different camera modules in different time ranges, it is determined that the object to be verified corresponding to the feature data set is a stranger.
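The stranger decision above keys on the image set spanning different camera modules and different time ranges. A minimal sketch, assuming each image has been reduced to a (camera_id, time_bucket) pair (a hypothetical representation; the patent does not specify one), and reading the condition as requiring both to vary:

```python
def is_stranger(records):
    """records: list of (camera_id, time_bucket) tuples for the images
    backing one feature data set. Returns True when the set spans more
    than one camera AND more than one time range."""
    cameras = {camera for camera, _ in records}
    buckets = {bucket for _, bucket in records}
    return len(cameras) > 1 and len(buckets) > 1
```

A face seen only by one camera, or only within one time window, would not be flagged by this test, which matches the joint-verification intent of requiring corroboration across modules.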
- the acquiring the first image and the second image of the object to be verified includes:
- the image satisfying the quality requirement in the third image is determined as the first image, and the image satisfying the quality requirement in the fourth image is determined as the second image.
- the method further includes:
- when the first image and/or the second image contains a predetermined feature,
- the first image and/or the second image containing the predetermined feature is marked, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
- the method further includes:
- a prompt of the first verification result or the second verification result is output.
- outputting the prompt of the first verification result or the second verification result includes:
- the identity and associated information of the object to be verified are output in a preset manner, and when the object to be verified is determined to be a marked stranger, the number of times it has been marked as a stranger is output; or
- the second verification result is output.
- the method further includes:
- in response to the second verification result being that the object to be verified is a stranger,
- the verification result, statistical information and prompt information of the object determined to be a stranger are displayed through the user interaction interface.
- an image processing apparatus including:
- An obtaining module configured to obtain a first image and a second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module;
- a first verification module configured to compare the first image with the image data in the target library to perform identity verification and obtain a first verification result
- a second verification module configured to respond to the case where the first verification result is a verification failure, perform joint verification using the first image and the second image, and determine the object to be verified according to the second verification result of the joint verification identity of.
- the target library includes a white/blacklist library
- the first verification module is also used to compare the first feature data of the first image with the feature data of each image in the white/blacklist library;
- the object to be verified corresponding to the first image is determined to be a blacklist object or a whitelist object.
- the target library includes a marked stranger library
- the first verification module is further used to compare the acquired first characteristic data of the first image with the characteristic data of the image in the marked stranger library;
- the first verification result is that the verification succeeds, and the object to be verified corresponding to the first image is determined to be a marked stranger.
- the device further includes a statistics module configured to count the first image when there is feature data matching the first feature data in the marked stranger library The number of times the corresponding object to be verified is marked as a stranger.
- the first verification module is further configured to add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: the time at which the first camera module collected the first image, the identification information of the first camera module, and the position information of the first camera module.
- the device further includes a deduplication module configured to, before joint verification is performed on the first image and the second image, perform deduplication processing on the first image and/or the second image whose verification failed within the first time range, to obtain, for each object to be verified, the first image and/or the second image satisfying the first preset condition within the first time range.
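The deduplication module described above can be sketched as follows. The "first preset condition" is modelled here as keeping the single highest-quality image per object within the first time range, which is one plausible interpretation for illustration rather than the patent's definition; the dict keys are likewise hypothetical:

```python
def deduplicate(images, time_range):
    """images: list of dicts with 'object_id', 'timestamp', 'quality'.
    time_range: (start, end) tuple bounding the first time range.
    Keeps, per object, the one highest-quality image inside the range."""
    start, end = time_range
    best = {}
    for img in images:
        if not (start <= img["timestamp"] <= end):
            continue  # outside the first time range
        oid = img["object_id"]
        if oid not in best or img["quality"] > best[oid]["quality"]:
            best[oid] = img
    return list(best.values())
```

Deduplicating before joint verification keeps the later clustering from being dominated by many near-identical snapshots of the same person.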
- the second verification module is further configured to cluster the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within a second time range, to obtain an image set for each object to be verified, and
- the second verification module is further used to obtain a sum value of the product of the feature data of each image in each image set and the feature data of all the images, and
- the similarity between each image and the remaining images is determined based on the sum value and the number of feature data in the image set.
- the second verification module is further configured to obtain the first feature data and the second feature data corresponding to the first image and the second image that failed verification within the second time range, and
- the second verification module is further configured to determine, in at least one of the following ways, whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set:
- the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold
- the proportion of similarities, among the similarities corresponding to the images in the image set, that are greater than the second similarity threshold exceeds a preset ratio
- the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
- the second verification module is further configured to delete all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
- the second verification module is further configured to determine that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set are images collected by different camera modules in different time ranges.
- the acquisition module is further configured to separately acquire the first video collected by the first camera module and the second video collected by at least one second camera module, and preprocess the first video Obtaining a third image and preprocessing the second video to obtain a fourth image, or receiving the third image and the fourth image, and
- the image satisfying the quality requirement in the third image is determined as the first image, and the image satisfying the quality requirement in the fourth image is determined as the second image.
- the acquiring module is further configured to acquire the first feature data of the first image after acquiring the first image and the second image of the object to be verified, and, before the first feature data is compared with the feature data in the target library to perform identity verification and the first verification result is acquired, to detect whether the first image and/or the second image contains predetermined features, and
- when the first image and/or the second image contains a predetermined feature,
- the first image and/or the second image containing the predetermined feature is marked, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
- the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
- the prompting module is further configured to output the identity and associated information of the object to be verified in a preset manner in response to the first verification result being that the verification succeeded, and, when the object to be verified is determined to be a marked stranger, to output the number of times it has been marked as a stranger; or
- the second verification result is output.
- the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image and the associated information corresponding to the object to be verified in the target library, and to control the user interaction interface to display the verification result, statistical information and prompt information of the object determined to be a stranger.
- an electronic device including:
- Memory for storing processor executable instructions
- the processor is configured to: execute the method of any one of the first aspect.
- a computer-readable storage medium having computer program instructions stored thereon, which when executed by a processor implements the method of any one of the first aspects.
- the embodiments of the present disclosure can determine the identity of the object to be verified based on the image information collected by multiple camera modules, which can effectively reduce the false alarm rate and greatly improve the recognition accuracy for strangers.
- FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure
- FIG. 2 shows a flowchart of step S100 in an image processing method according to an embodiment of the present disclosure
- FIG. 3 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure
- FIG. 4 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure
- FIG. 5 shows a flowchart of step S300 in the image processing method according to an embodiment of the present disclosure
- FIG. 6 shows a flowchart of step S301 in the image processing method according to an embodiment of the present disclosure
- FIG. 7 shows a flowchart of step S302 in the image processing method according to an embodiment of the present disclosure
- FIG. 8 shows a flowchart of an image processing method according to an embodiment of the present disclosure
- FIG. 10 shows a block diagram of an image processing device according to an embodiment of the present disclosure
- FIG. 11 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure
- FIG. 12 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method of the embodiment of the present disclosure can be applied to places that require management of entering personnel, such as government buildings, enterprise parks, hotels, communities, and office buildings. The identity of the object to be verified can be jointly determined from the image information collected by camera modules installed in different areas, so as to determine whether the object to be verified is a stranger or a person registered in the library.
- the image processing method of the embodiment of the present disclosure may include:
- S100 Acquire a first image and a second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module.
- the image processing method of the embodiment of the present disclosure may be applied to electronic devices with image processing functions such as terminal devices or servers, and the terminal devices may be, for example, mobile phones, computer devices, and the like. These electronic devices are electrically connected to camera devices installed in various corners of the area to be inspected.
- the camera devices include but are not limited to cameras, snap cameras, and the like. In other embodiments, these electronic devices include display screens.
- the object to be verified refers to a person who enters the area to be verified.
- the first image and the second image may be facial images of the object to be verified, or full-body images. In the embodiments of the present disclosure, facial images are used for explanation, but this does not limit the disclosure.
- the first image and the second image here come from different video sources.
- the first image can be collected by the first camera module
- the second image can be collected by at least one second camera module.
- Different camera modules are set on different location areas, that is, the first camera module and the second camera module may be camera modules installed in different positions.
- the camera modules other than the first camera module are collectively referred to as the second camera module.
- the positions of the second camera modules may also differ from one another. In this way, images can be collected in real time at different locations.
- the acquisition times of the first image and the second image may be the same or different, which is not limited in this disclosure.
- the neural network can be used to obtain the first feature data of each first image, and the first feature data is compared with the pre-stored feature data of the image data in the target library
- the target library can include registered blacklists and whitelists, as well as objects that have been marked as strangers.
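The first-stage comparison against the target library (white/blacklist entries plus marked strangers) might look like the following sketch. The cosine-style matching against unit-length embeddings, the threshold value, the library layout, and the return convention are all assumptions for illustration, not the patent's specification:

```python
def verify_against_library(feature, library, threshold=0.8):
    """feature: L2-normalised feature vector of the first image.
    library: dict mapping identity -> (label, feature vector), where
    label is 'whitelist', 'blacklist' or 'marked_stranger'.
    Returns the verification outcome and the matched label, if any."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    best_id, best_score = None, threshold
    for identity, (label, stored) in library.items():
        score = dot(feature, stored)  # cosine similarity for unit vectors
        if score >= best_score:
            best_id, best_score = identity, score
    if best_id is None:
        return "verification_failed", None
    return "verification_succeeded", library[best_id][0]
```

A failure here is what triggers the joint verification with the second images in the method above.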
- the second image collected by at least one second camera module may be combined for joint verification to verify the identity of the object to be verified.
- the identity of the object to be verified can be jointly verified for the first image and the second image that fail to be verified, so that the verification success rate of the object to be verified can be improved.
- the image processing method of the embodiment of the present disclosure can be applied in a place managed by personnel, and cameras can be installed at different locations of the place, any of which can be used as the first camera module of the embodiment of the present disclosure. For convenience of description below, camera modules other than the first camera module are referred to as second camera modules, and images collected by the second camera modules may be referred to as second images.
- the first image and the second image that need to be authenticated obtained in step S100 in the embodiment of the present disclosure may be images directly obtained from the first camera module and the second camera module, or may be images obtained after analysis and filtering. This disclosure does not limit this.
- FIG. 2 shows a flowchart of step S100 in an image processing method according to an embodiment of the present disclosure, where acquiring the first image and the second image may include:
- S101 Obtain a first video collected by a first camera module and a second video collected by at least one second camera module, preprocess the first video to obtain multiple third images and preprocess the second video to obtain fourth images; or directly receive the third image and the fourth image including the facial information of the object to be verified.
- the received information may be information in the form of video or information in the form of picture.
- the video information may be preprocessed to obtain from it the third image and the fourth image to be processed, where the preprocessing operations may include video decoding, image sampling, face detection, and other processing operations, through which the corresponding third image and fourth image including the facial image can be obtained.
- the obtained third and fourth images may be in the form of pictures.
- the third and fourth images may be processed directly; that is, a face detection method may be used to obtain the third and fourth images containing the face image of the subject.
- the first camera module can directly collect the third image including the facial image, and the second camera module can directly collect the fourth image including the facial image; for example, the first camera module and the second camera module can be face capture cameras.
- the obtained third image and fourth image are face images. This disclosure does not specifically limit this: as long as the obtained third image and fourth image include the face area of the object to be verified, they can serve the embodiments of the present disclosure.
- S102 Determine an image that meets quality requirements in the obtained third image as the first image, and determine an image that meets quality requirements in the fourth image as the second image.
- after the third image and the fourth image collected by the camera modules are obtained, images that meet the quality requirements need to be selected from the third image and the fourth image to perform detection and determination of the user's identity. The third image and the fourth image can be jointly judged by angle and quality score, and pictures below a certain quality will be discarded.
- the image quality of the third image and the fourth image may be determined through a neural network, or through a preset algorithm in which the third image and the fourth image are scored according to image clarity and the angle of the face. If the score is lower than the preset score, for example lower than 80 points, the third image and the fourth image may be deleted. If the score is higher than the preset score, the quality of the image satisfies the quality requirements; at this time, the third image and the fourth image can be used to determine the identity of the person. That is, a third image that meets the quality requirements can be used as the first image to be authenticated, and a fourth image that meets the quality requirements can be used as the second image to be authenticated. The preset score can be set according to different needs and application scenarios, and this disclosure does not impose specific limitations.
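The joint quality judgement by clarity and face angle can be sketched as below. The equal weighting of the two sub-scores and the 0-100 scale are assumptions; the 80-point cut-off follows the example in the text:

```python
def passes_quality(sharpness, face_angle_score, threshold=80):
    """sharpness and face_angle_score: sub-scores on a 0-100 scale from
    an upstream scorer (hypothetical). Returns (passed, joint_score);
    images below the threshold would be discarded."""
    score = 0.5 * sharpness + 0.5 * face_angle_score  # assumed equal weights
    return score >= threshold, score
```

Only images that pass become the first/second images fed into identity verification.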
- before the first feature data is compared with the feature data in the target library to perform identity verification and obtain the first verification result, it is also possible to detect whether the third image and/or the fourth image contains predetermined features, and when it is detected that the third image and/or the fourth image contains predetermined features, the third image and/or the fourth image containing the predetermined features may be marked.
- the mark here means that the third image and/or the fourth image containing predetermined features can be assigned an identifier, and the identifier is used to indicate that the corresponding image can be directly used as the first image or the second image to be authenticated.
- the predetermined characteristic may include at least one characteristic of a mask, a hat, and sunglasses.
- if the object to be verified in the third image, obtained from the first video collected by the first camera module, is wearing a hat and a mask (that is, the feature data corresponding to the third image includes features such as a hat and a mask), the object to be verified can be directly listed as a suspicious person, that is, the third image can be used as the first image.
- likewise, if the object to be verified in the fourth image, obtained from the second video collected by the second camera module, is wearing a hat and sunglasses (that is, the feature data corresponding to the fourth image includes features such as a hat and sunglasses),
- the fourth image can be used as the second image.
- the feature data of the third image and the fourth image can be detected by a neural network to determine whether they have the above-mentioned predetermined features.
- in this way, the first image and the second image to be processed can be conveniently obtained from different types of received images, and since the obtained first image and second image are images that meet the quality requirements, they can be used to accurately perform identity verification of the object to be verified.
- the embodiment of the present disclosure may include a target library, where a blacklist, a whitelist, and marked stranger information are recorded in the target library.
- the blacklist refers to the information of the objects that are not allowed to enter the place,
- and the whitelist refers to the information of the objects that are allowed to enter the place.
- that is, the information stored in the target library is the information of objects with known identities and of marked stranger objects.
- the embodiment of the present disclosure may match the first feature data of the first image with the feature data of the image data in the target library.
- the target library stores the facial image and facial feature data of each first object, and may also include other information, such as name, age, etc., which is not specifically limited in the present disclosure.
- the first feature data of the first image can be compared with the feature data of each object in the target library. If there is feature data in the target library whose matching value with the first feature data exceeds the first matching threshold, it can be determined that the object to be verified corresponding to the first image is an object in the target library, which indicates that the first verification result is that the verification is successful. Further, if no feature data matching the first feature data can be found, it can be determined that the first verification result is that the verification failed.
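A minimal sketch of this comparison step, assuming feature data are L2-normalized vectors so that a plain dot product serves as the matching value; the function name, library layout, and the 0.8 threshold are illustrative assumptions, not values from the disclosure.

```python
def verify_against_library(first_feature, target_library, first_match_threshold=0.8):
    # Compare the feature data of the image with every entry in the target
    # library; verification succeeds if the best matching value exceeds
    # the first matching threshold.
    best_id, best_score = None, float("-inf")
    for obj_id, feature in target_library.items():
        score = sum(a * b for a, b in zip(first_feature, feature))
        if score > best_score:
            best_id, best_score = obj_id, score
    if best_score > first_match_threshold:
        return True, best_id   # verification successful: object is in the library
    return False, None         # verification failed: no matching feature data
```

In practice the same routine would be run against the white/blacklist library and the marked stranger library, possibly with different thresholds.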
- the second image collected by the second camera module may be used for further determination.
- since the embodiment of the present disclosure can perform identity verification of a human object based on the image collected by the camera module or the received image, it can achieve the effect of comparing the input image with the image data in the target library, that is, the effect of finding an image in the target library that matches the input image.
- the target library in the embodiments of the present disclosure may include a white/blacklist library and a marked stranger library.
- the white/blacklist database includes registered blacklist objects and whitelist objects.
- the blacklist objects are the people who restrict access to the corresponding places, and the whitelist objects are the people who are allowed to enter the corresponding places.
- in addition to the facial images of the whitelist objects and blacklist objects, the white/blacklist library may also include corresponding name, age, position and other information.
- identity verification of the object to be verified can be performed, and the verification result can indicate whether the object to be verified is a blacklist object or a whitelist object.
- FIG. 3 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure, where the first image is compared with the image data in the target library to perform identity verification to obtain the first verification result, including:
- the target library includes a white/blacklist library
- the white/blacklist library may include facial images of whitelisted objects and blacklisted objects or may directly include feature data of facial images.
- the first image and the associated information of the first image may be loaded into the matching record of the matched object,
- the associated information may be the time when the first camera module collected the first image, the identifier of the first camera module, and the corresponding location information.
- the associated information of each image may be obtained at the same time as the image.
- a preset prompt operation may also be performed at this time, for example, prompting the entry of the blacklist object by means of voice or display output.
- information such as the number of entries of the blacklist object may also be counted, and the number of entries may be output as a prompt for the convenience of management personnel to view.
- the above information can be transmitted to the user interaction interface of the above electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
- the identity verification of the blacklist objects and the whitelist objects can be performed, and when there is feature data matching the first feature data in the white/blacklist library, the first verification result is determined to be a successful verification.
- the target library may also include a marked stranger library
- the objects in the marked stranger library are objects marked as strangers; the library may include the facial image of each object or directly include facial
- feature data, and may also include related information such as the collection time and location of each facial image, as well as the number of times each stranger has been marked.
- the identity verification of the object to be verified can be performed against the marked stranger library, and the verification result can indicate whether the object to be verified is a marked stranger object.
- FIG. 4 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure, in which the first image is compared with the image data in the target library to perform identity verification to obtain the first verification result, include:
- the first verification result is that the verification is successful, and the object to be verified corresponding to the first image is identified as a stranger who has been marked.
- the target library includes the marked stranger library.
- the marked stranger library may include the facial images of objects marked as strangers or may directly include the feature data of those facial images.
- the first image and the associated information of the first image may be loaded into the matching record of the matching object
- the associated information may be the time when the first camera module collected the first image, the identifier of the first camera module, and the corresponding location information.
- the associated information of each image may be obtained at the same time as the image.
- a preset prompt operation may also be performed at this time; for example, the entry of the stranger may be prompted by voice or display output.
- information such as the number of times the stranger was marked in the corresponding place, the stranger's stay time in the corresponding place, and the frequency of occurrence may also be counted, and the above information may be output as a prompt for the convenience of management personnel to view.
- the stay time can be determined according to the time when the object is marked as a stranger.
- the time difference between the time the object was last marked as a stranger and the time it was first marked as a stranger can be used as the stay time, and the frequency of occurrence can be the ratio of the number of times the stranger was identified to the above stay time.
- other information may also be counted, for example, the location information of the stranger, where the location of the stranger may be determined according to the identifier or location of the camera module that collected the image of the stranger, so that the movement track of the stranger can be obtained. The statistical information is not exhaustively listed here in this disclosure.
- the above information can be transmitted to the user interaction interface of the electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
- the identity verification of marked stranger objects can be performed, and if there is feature data matching the first feature data in the marked stranger library, the first verification result is determined to be a successful verification.
- the first matching threshold and the second matching threshold may be the same threshold, or may be different thresholds, and those skilled in the art can set it according to requirements.
- the verification order of the white/blacklist library and the marked stranger library in the target library can be set by a person skilled in the art according to requirements. The first feature data may be verified first using the white/blacklist library, and when there is no matching feature data in the white/blacklist library, the marked stranger library is used for verification; alternatively, the first feature data may be verified first through the marked stranger library, and when there is no matching feature data in the marked stranger library, the white/blacklist library is used for verification; or the white/blacklist library and the marked stranger library may be used for verification at the same time. That is to say, the embodiment of the present disclosure does not specifically limit the time sequence of performing verification operations using the two libraries; as long as the verification described above can be performed, it can serve as an embodiment of the present disclosure.
- the first verification result is verification failure
- the first image may be saved.
- joint verification may be performed based on the first image and the second image acquired by the second camera module other than the first camera module, and the identity of the object to be verified is determined based on the second verification result of the joint verification.
- the process of the first verification operation on the second image in the embodiment of the present disclosure is the same as that for the first image, and the first verification result of the second image can also be obtained; details are not repeated here in this disclosure.
- the first image may be temporarily stored.
- the first images within a preset time range may be deduplicated, thereby reducing the excessive temporary storage of images for the same object to be verified.
- the embodiment of the present disclosure may perform deduplication processing on the first image and/or the second image that failed verification within the first time range, to obtain, for each object to be verified within the first time range, the first image and/or the second image that satisfies the first preset condition.
- the first time range can be an adjustable time window (rolling window); for example, it can be set to 2-5 seconds, and deduplication of the first image and the second image waiting to be archived (temporarily stored) can be performed once per first time range.
- the first image of the same object to be verified can be merged and deduplicated
- the second image of the same object to be verified can be merged and deduplicated.
- the temporarily stored first images may be images of different objects to be verified or multiple images of one object to be verified; at this time, the images of the same object to be verified among the first images can be recognized
- by comparing the feature data of each image, for example, determining images whose similarity is greater than a similarity threshold to be images of the same object to be verified, and then, according to the first preset condition, retaining only one of the images of the same object to be verified.
- the first preset condition may be that the earliest temporarily stored image is retained according to the temporary storage time, and the remaining temporarily stored images of the same object to be verified are deleted.
- the first preset condition may be to compare the score values of the images of the same object to be verified, retain the image with the highest score value, and delete the remaining images.
- the acquisition of the score value is the same as in the above embodiment.
- the image can be analyzed according to a preset algorithm to obtain a score value, or the image can be scored using a neural network.
- the principle of scoring is determined based on the clarity of the image, the angle of the face, and the occlusion situation. A person skilled in the art can select a corresponding scoring method according to needs, which is not specifically limited in this disclosure.
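The per-window deduplication described above can be sketched as follows. Each record stands for one temporarily stored image; the `object` key is a stand-in for same-object grouping by feature similarity, and both variants of the first preset condition (keep the earliest, or keep the highest-scoring image) are shown. All names are illustrative.

```python
def deduplicate_window(records, keep="highest_score"):
    # Within one first-time-range window, retain a single image per object:
    # either the earliest temporarily stored one or the highest-scoring one,
    # and delete the remaining images of the same object to be verified.
    kept = {}
    for rec in records:
        key = rec["object"]
        if key not in kept:
            kept[key] = rec
        elif keep == "highest_score" and rec["score"] > kept[key]["score"]:
            kept[key] = rec
        elif keep == "earliest" and rec["time"] < kept[key]["time"]:
            kept[key] = rec
    return list(kept.values())
```

Running this once per rolling window bounds the number of temporarily stored images per object to one.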
- FIG. 5 shows a flowchart of step S300 in the image processing method according to an embodiment of the present disclosure, wherein, in response to the case where the first verification result is a verification failure, the first image is combined with the second image Verification, determining the identity of the object to be verified according to the second verification result of the joint verification may include:
- S301 Perform clustering processing on the first image whose first verification result is the verification failure and the second image whose first verification result is the verification failure in the second time range to obtain an image set for each object to be verified;
- the device performing the image processing method of the embodiment of the present disclosure may merge the first images and the second images from each camera module that do not match any feature data within the second time range, and perform clustering processing to obtain image sets of the objects to be verified, where the images included in each image set are images of the same object to be verified.
- each image set can be conveniently processed.
- S302 Determine the similarity between each image in the image set and other images in the image set;
- the similarity analysis can be performed on the images in the image set of the same object to be verified to determine the similarity between each image and the other images, so that it can be further judged whether each image in the image set is an image of the same object to be verified.
- S303 Determine whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set;
- after the similarity between each image in each image set and the other images is obtained, it can be determined whether the image set meets the second preset condition according to the obtained similarity values. When the second preset condition is met, it can be determined that the probability that the images in the image set belong to the same object is high, and the image set can be retained. If it is determined that the similarity does not satisfy the second preset condition, it can be determined that the clustering of the images in the image set is not credible, that is, the probability that they belong to the same object is low, and the image set can be deleted at this time. Furthermore, the image set satisfying the preset condition can be further used to determine whether the object to be verified is an unregistered object.
- a flowchart of step S301 in the image processing method according to an embodiment of the present disclosure is shown, wherein performing clustering processing on the first images whose first verification result is verification failure and the second images whose first verification result is verification failure within the second time range, to obtain an image set for each object to be verified, may include:
- S3012 Compare and match the first feature data with the second feature data to determine whether each piece of the first feature data and each piece of the second feature data correspond to the same object to be verified;
- S3013 Cluster the first feature data of the first image and the second feature data of the second image of the same object to be verified to form an image set corresponding to that object to be verified.
- the second time range is a time range greater than the first time range.
- the first time range may be 2-5s and the second time range may be 10 minutes, but this is not a specific limitation of the embodiment of the present disclosure.
- within the second time range, which is greater than the first time range, it is possible to obtain the first images and second images that failed verification and underwent deduplication processing in each first time range, and to use the images collected by each camera module in the second time range
- to obtain different images of different objects to be verified. For example, the first images and second images obtained after deduplication by the first camera module and the at least one second camera module in each first time range can be used, and the features of duplicate objects to be verified found among them are merged.
- images whose facial feature similarity is greater than the similarity threshold can be combined into one category, that is, regarded as images of the same object to be verified.
- in this way, image sets for multiple objects to be verified can be obtained, and each image set contains images of the same object to be verified.
- each processed image in the embodiment of the present disclosure may include the identification information of the camera module associated with it, so that it can be determined by which camera module each image was collected, and correspondingly the location of the object to be verified at the time of acquisition.
- the image may also be associated with the time information that the camera module collects the image, so that the time that each image is collected can be determined, and the time at which the object to be verified is located at each position can be correspondingly determined.
- the feature data may be identified by a neural network, which is not specifically limited in this disclosure.
- the first feature data and the second feature data can be compared and matched to determine whether each of the first feature data and the second feature data corresponds to the same object to be verified.
- the feature data corresponding to the same object to be verified are combined into one class to form an image set for each object to be verified.
- the image set may include each image and the feature data corresponding to each image, or may include only the feature data of each image,
- which is not specifically limited in this disclosure.
- the method for determining whether each piece of feature data corresponds to the same object to be verified may include using a neural network: if the probability that two pieces of identified feature data belong to the same object to be verified is higher than a preset threshold, the two may be determined to belong to the same object to be verified; if it is lower than the preset threshold, they may be determined to belong to different objects to be verified. In this way, it can be determined whether each piece of feature data belongs to the same object to be verified, and the image sets corresponding to different objects to be verified can be further determined.
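One way to sketch this clustering step is a greedy pass over the failed-verification features: a feature joins the first cluster whose representative it matches above a similarity threshold, and otherwise starts a new cluster, so each cluster plays the role of one object's image set. The threshold value and the choice of the first member as cluster representative are assumptions, not details from the disclosure.

```python
def cluster_features(features, sim_threshold=0.75):
    # features: list of L2-normalized feature vectors of failed-verification
    # images; each resulting cluster corresponds to one object to be verified.
    clusters = []
    for feat in features:
        for cluster in clusters:
            representative = cluster[0]  # first member stands for the cluster
            similarity = sum(a * b for a, b in zip(feat, representative))
            if similarity > sim_threshold:
                cluster.append(feat)
                break
        else:
            clusters.append([feat])  # no match: a new object to be verified
    return clusters
```

A production system would likely cluster on the neural-network match probability rather than a raw dot product, as the text describes.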
- a flowchart of step S302 in the image processing method according to an embodiment of the present disclosure is shown, wherein determining the similarity of each image in the image set to the other images in the image set includes:
- in step S200, feature data for each image in the image set, such as the first feature data, can be obtained, which can be expressed in the form of a feature vector.
- the feature data of each image in the image set can be subjected to a dot product operation with the feature data of all images, and the results added.
- an image set may include n images, where n is an integer greater than 1, and the sum value of the facial feature data dot products between each image and all images can be acquired.
- the feature data of each image obtained by the embodiment of the present disclosure is a normalized feature vector; that is, the first feature data of each first image and the second feature data of each second image obtained by the embodiment of the present disclosure are feature vectors with the same dimension and the same length, so that the feature data can be easily calculated.
- S3022 Determine the similarity between each image and the remaining images based on the sum value and the number of feature data in the image set.
- the similarity between each image and other images is determined according to the number of images in the image set.
- the similarity may be obtained by dividing the obtained sum value by n-1; that is, the similarity between each image and the remaining images can be obtained.
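The computation in S3021-S3022 can be read as: for each image, sum the dot products of its normalized feature vector with the remaining images' vectors, then divide by n-1. A sketch under that reading (the function name is illustrative):

```python
def per_image_similarities(features):
    # features: list of n L2-normalized feature vectors, n > 1.
    # For each image, sum the dot products with the remaining images
    # and divide by n - 1, giving its similarity to the rest of the set.
    n = len(features)
    sims = []
    for i, f in enumerate(features):
        total = sum(
            sum(a * b for a, b in zip(f, g))
            for j, g in enumerate(features) if j != i
        )
        sims.append(total / (n - 1))
    return sims
```

Because the vectors are normalized, each similarity lands in [-1, 1], which makes the later threshold comparisons well defined.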
- before determining whether the object to be verified is a stranger based on the image set, it may be further determined whether the image set satisfies the second preset condition. When the similarities corresponding to the images in the image set satisfy any one of the following cases, it is determined that the image set satisfies the second preset condition:
- the maximum similarity among the similarities between each image and the remaining images can be compared with the first similarity threshold. If the maximum similarity is greater than the first similarity threshold, it means that the similarity between the images in the image set is large, and it can be determined that the image set satisfies the preset condition. If the maximum similarity is less than the first similarity threshold, it means that the clustering effect of the image set is not ideal, the probability that the images in the image set belong to different objects to be verified is high, and the image set can be deleted at this time.
- if the proportion of similarities greater than the second similarity threshold exceeds a preset ratio, for example if more than 50% of the images' similarities are greater than the second similarity threshold, it is determined that the similarity between the images in the image set is large, and it can be determined that the image set satisfies the preset condition. If the proportion of images whose similarity is greater than the second similarity threshold is less than the preset ratio, it means that the clustering effect of the image set is not ideal, the probability that the images in the image set belong to different objects to be verified is high, and the image set can be deleted.
- if the smallest similarity in the image set is greater than the third similarity threshold, it means that the similarity between the images in the image set is large, and it can be determined that the image set satisfies the preset condition. If the minimum similarity is less than the third similarity threshold, it means that the clustering effect of the image set is not ideal, the probability that the images in the image set belong to different objects to be verified is high, and the image set can be deleted at this time.
- the selection of the first similarity threshold, the second similarity threshold, and the third similarity threshold can be set according to different requirements, and this disclosure does not specifically limit this.
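The three alternative tests for the second preset condition can be combined into a single check; the threshold values below are placeholders, since the disclosure explicitly leaves them to be set per requirement.

```python
def meets_second_condition(sims, t1=0.9, t2=0.8, ratio=0.5, t3=0.7):
    # The image set is retained if ANY of the listed cases holds:
    #  1) the maximum similarity exceeds the first similarity threshold t1;
    #  2) the proportion of similarities above the second threshold t2
    #     exceeds the preset ratio;
    #  3) the minimum similarity exceeds the third similarity threshold t3.
    if max(sims) > t1:
        return True
    if sum(s > t2 for s in sims) / len(sims) > ratio:
        return True
    if min(sims) > t3:
        return True
    return False
```

An image set failing all three cases would be deleted as an unreliable cluster, per the text.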
- determining whether the object to be verified is a stranger may include: in the case where the images in the image set include images collected by the camera modules in different time ranges, determining that the object to be verified is a stranger.
- for example, if the image set includes 2 images, the two images were collected by the first camera module and the second camera module respectively, and the collection times are in different time ranges, then it can be determined that the object to be verified corresponding to the image set
- is a stranger. That is, the first image collected by the first camera module does not allow the identity of the object to be verified to be recognized, the second image collected by the second camera module does not allow the identity of the object to be verified to be recognized, and the collection times of the first image and
- the second image are in different time ranges, for example, in different first time ranges; then, in the case that the image set composed of the first image and the second image satisfies the preset condition, it can be determined that the corresponding object to be verified is a stranger.
- the identity of the suspicious person can be jointly determined by the images collected by multiple camera modules, so that the identity of the object to be verified can be determined more accurately.
- a preset prompt operation is performed.
- the relevant persons may be reminded of the stranger's information through audio or display output. That is, in the embodiment of the present disclosure, in the case where the object to be verified corresponding to the first image is a stranger, performing the preset prompt operation includes: displaying the image of the stranger, the stranger's current location information, and the statistical information of the number of occurrences on the display device; and/or prompting the presence of the stranger, the stranger's current location information, and the statistical information of the number of appearances by means of audio.
- the stay time can be determined according to the time when the object is marked as a stranger.
- the time difference between the time the object was last marked as a stranger and the time it was first marked as a stranger can be used as the stay time, and the frequency of occurrence can be the ratio of the number of times the stranger was identified to the above stay time.
- other information may also be counted, for example, the location information of the stranger, where the location of the stranger may be determined according to the identifier or location of the camera module that collected the image of the stranger, so that the movement track of the stranger can be obtained. The statistical information is not exhaustively listed here in this disclosure.
- the above information can be transmitted to the user interaction interface of the electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
- the image set can be stored in the marked stranger library, where the acquisition time and acquisition position of each image, and information such as the identifier of the camera module that acquired the image, can also be stored in association.
- the number of times marked as a stranger may be output; or the second verification result may be output.
- the second verification result is a result confirmed after the joint determination of the object to be verified, such as information indicating that the object is identified as a stranger or that the object cannot be identified.
- FIG. 8 shows a flowchart of an image processing method according to an embodiment of the present disclosure
- FIG. 9 shows a flowchart of stranger comparison in an image processing method according to an embodiment of the present disclosure.
- the whitelist/blacklist personnel information is first entered into the system to form a white/blacklist library.
- the first objects in the white/blacklist library are collectively referred to as in-library personnel, and non-library personnel are strangers.
- the information of objects that have been marked as strangers may constitute a marked stranger library, and the above two libraries may form the target library.
- the method of acquiring the image collected by the camera module may include using a front-end camera to collect portrait information, wherein a high-definition network camera collects video and streams it back to the back-end server; alternatively, a face image may be collected through a face capture machine and directly passed back to the server.
- when the server receives the video stream, it decodes the returned video stream and extracts face pictures and feature values (face features) through a face detection algorithm or a neural network. If the server receives returned face pictures, it can skip video stream decoding and directly extract the feature values of the face images. While performing face detection, it can also detect whether a face picture contains the feature of wearing a mask, and pictures matching the mask-wearing feature can be directly stored in the suspicious persons' picture library; at the same time, the face pictures are jointly judged by angle and quality score, and face images that do not meet the quality requirements are discarded.
- the facial feature values of the acquired facial image can be compared with the feature values of the blacklist objects and whitelist objects in the white/blacklist library;
- if the matching is successful, the face image can be stored in the comparison record of the white/blacklist library.
- if not, the feature values can be compared with the marked stranger library; if the second matching threshold (adjustable) is exceeded, the matching is considered successful and the stranger is recognized again.
- otherwise, the feature values of the face image are temporarily stored for processing.
- deduplication is performed within the first time window, for example, 2-5 seconds.
- the second time range can comprise multiple first time ranges, and the second time range can be set to 10 minutes. The repeated portrait features in the face images retained by different camera devices within the second time range are found and merged, where the similarity threshold Lv3 (adjustable) can be used for clustering, and the shooting device corresponding to each image can be recorded.
- the feature values whose similarity exceeds the Lv2 and Lv3 thresholds are grouped into the same category and regarded as different picture features of the same person.
- the stranger judgment conditions are the following two: i) whether the object appears in n time windows (rolling windows), where n is usually set to 1 or 2; ii) whether the number of recording devices is greater than m, where m is usually set to 2. If both are met, the criteria for determining a stranger are met and the object is inserted into the stranger library. That is, it can be judged whether the images in the image set were taken by different camera devices in different time ranges; if the stranger judgment condition is met, the obtained image set that meets the condition can be added to the second database, otherwise the image set is discarded.
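The two stranger-determination conditions above can be sketched as a single predicate over an object's sightings; the record layout and the exact comparison operators (at-least for windows, strictly-greater for devices) are an interpretation of the text, not values it fixes.

```python
def meets_stranger_criteria(sightings, n_windows=2, m_devices=2):
    # sightings: list of (time_window_id, device_id) pairs for one
    # clustered object whose identity verification failed.
    windows = {w for w, _ in sightings}
    devices = {d for _, d in sightings}
    # i) appears in at least n time windows; ii) recorded by more than
    # m distinct devices. Both must hold for the object to be inserted
    # into the marked stranger library; otherwise the set is discarded.
    return len(windows) >= n_windows and len(devices) > m_devices
```

With the defaults above, an object seen once by a single camera is never marked, which matches the stated goal of reducing false alarms.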
- all the feature values saved in the above steps correspond one-to-one with their original face pictures, and all carry time and address (device number) information. Based on this information, the system supports applications such as querying stranger pictures, searching by picture, tracking strangers, and situation statistics.
- the embodiments of the present disclosure can determine the identity authority of the object to be verified based on the image information collected by multiple camera modules, which can effectively reduce the false alarm rate and greatly increase the recognition accuracy of strangers.
- the embodiments of the present disclosure support recording persons wearing masks and hats directly in the list of suspicious persons, while recording the time and location, which is convenient for later inquiry; business logic that raises an alarm when a masked person appears can also be configured as required.
- stranger information recording and statistics are supported: querying stranger pictures by time and place, searching by picture, tracking queries, stay-time queries, stranger appearance frequency, and other operations.
- the information of strangers entering and exiting can be effectively recorded, and the accuracy rate can meet practical application requirements, which solves the problem that strangers cannot be effectively identified in public places.
- it can help management and security personnel control strangers entering and leaving closed places such as government buildings, enterprise parks, hotels, communities, and office buildings, and improve the safety and order of those places.
- the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image processing methods provided by the present disclosure.
- FIG. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 10, the apparatus includes:
- the obtaining module 10 is configured to obtain the first image and the second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module;
- the first verification module 20 is configured to compare the first image with the image data in the target library to perform identity verification and obtain a first verification result;
- the second verification module 30 is configured to, in response to the first verification result being a verification failure, perform joint verification using the first image and the second image, and determine the identity of the object to be verified according to the second verification result of the joint verification.
- the target library includes a white/blacklist library;
- the first verification module is further used to compare the first feature data of the first image with the feature data of each image in the white/blacklist library; and
- when feature data matching the first feature data exists in the white/blacklist library, the first verification result is determined to be a verification success, and the object to be verified corresponding to the first image is determined to be a blacklist object or a whitelist object.
- the target library includes a marked stranger library;
- the first verification module is further used to compare the acquired first feature data of the first image with the feature data of the images in the marked stranger library; and
- when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success, and the object to be verified corresponding to the first image is determined to be a stranger who has already been marked.
- the device further includes a statistics module configured to count, when there is feature data matching the first feature data in the marked stranger library, the number of times the object to be verified corresponding to the first image has been marked as a stranger.
- the first verification module is further configured to, when the first verification result is a verification success, add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of the time information at which the first camera module collected the first image, the identification information of the first camera module, and the location information of the first camera module.
- the device further includes a deduplication module configured to, before joint verification is performed on the first image and the second image, perform deduplication processing on the first image and/or the second image that failed verification within the first time range, to obtain the first image and/or the second image satisfying the first preset condition for each object to be verified within the first time range.
- the second verification module is further configured to cluster the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within a second time range, to obtain an image set for each object to be verified;
- the second verification module is further used to obtain the sum of the inner products of the feature data of each image in each image set with the feature data of all the images, and
- the similarity between each image and the remaining images is determined based on the sum and the number of feature data items in the image set.
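The similarity computation above can be sketched in a few lines. This is one plausible reading of "the sum of the inner products ... and the number of feature data items" — the normalization by set size, the assumption of L2-normalized embeddings, and the function name are illustrative assumptions, not fixed by the disclosure.

```python
def mean_similarity(features):
    """features: list of equal-length embedding vectors (assumed L2-normalized).
    For each image, sum the inner products of its feature vector with every
    vector in the set, then divide by the set size to obtain a per-image
    similarity score against the whole set."""
    n = len(features)
    # Element-wise sum of all vectors: <f_i, sum_j f_j> == sum_j <f_i, f_j>.
    total = [sum(col) for col in zip(*features)]
    return [sum(x * t for x, t in zip(f, total)) / n for f in features]
```

Because the inner product with the precomputed element-wise sum equals the sum of pairwise inner products, each image's score is obtained in one pass instead of comparing against every other image separately.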
- the second verification module is further configured to acquire first feature data and second feature data respectively corresponding to the first image and the second image that failed verification within the second time range.
- the second verification module is further configured to determine, based on the similarity corresponding to each image in the image set, whether the image set meets the second preset condition, in at least one of the following ways:
- the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold
- the number of feature data items whose similarity is greater than the second similarity threshold, among the similarities corresponding to the images in the image set, exceeds a preset proportion;
- the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
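The three alternative checks above can be sketched as a single predicate. All threshold values below (and the function name) are illustrative assumptions; the disclosure only states that the first, second, and third similarity thresholds exist and are adjustable.

```python
def meets_cluster_condition(sims, lv1=0.9, lv2=0.8, lv3=0.7, ratio=0.5):
    """sims: per-image similarity scores for one image set.
    Returns True if any of the three alternative checks from the text holds:
      1) maximum similarity exceeds the first threshold;
      2) the proportion of scores above the second threshold exceeds a preset ratio;
      3) minimum similarity exceeds the third threshold."""
    check_max = max(sims) > lv1
    check_ratio = sum(s > lv2 for s in sims) / len(sims) > ratio
    check_min = min(sims) > lv3
    return check_max or check_ratio or check_min
```

Image sets failing all three checks would be deleted, consistent with the handling described below for sets whose similarities do not satisfy the preset condition.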
- the second verification module is further configured to delete all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
- the second verification module is further used to determine that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges.
- the acquisition module is further configured to separately acquire the first video collected by the first camera module and the second video collected by at least one second camera module, and preprocess the first video Obtaining a third image and preprocessing the second video to obtain a fourth image, or receiving the third image and the fourth image, and
- the image satisfying the quality requirement in the third image is determined as the first image, and the image satisfying the quality requirement in the fourth image is determined as the second image.
- the acquisition module is further configured to, after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, detect whether the first image and/or the second image contain a predetermined feature, and
- in response to the first image and/or the second image containing a predetermined feature, mark the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
- the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
- the prompting module is further configured to, in response to the first verification result being a verification success, output the identity and associated information of the object to be verified in a preset manner, and, when the object to be verified is determined to be a marked stranger, output the number of times it has been marked as a stranger; or
- the second verification result is output.
- the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image, and the associated information corresponding to the object to be verified in the target library, and control the user interaction interface to display the verification result of being determined a stranger, statistical information, and prompt information.
- the functions of the apparatus provided by the embodiments of the present disclosure, or of the modules contained therein, may be used to perform the methods described in the above method embodiments.
- An embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
- the electronic device may be provided as a terminal, server, or other form of device.
- Fig. 11 is a block diagram of an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, and a personal digital assistant.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps in the above method.
- the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of these data include instructions for any application or method for operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power to various components of the electronic device 800.
- the power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with status assessment in various aspects.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800); the sensor component 814 can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
- the sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above method.
- a non-volatile computer-readable storage medium is also provided, for example, a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
- Fig. 12 is a block diagram of an electronic device 1900 according to an exemplary embodiment.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above method.
- the electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I/O) interface 1958 .
- the electronic device 1900 can run an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- a non-volatile computer-readable storage medium is also provided, for example, a memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for causing the processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanical encoding devices such as
- punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, optical pulses through fiber optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device through a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
- the programming languages include object-oriented programming languages such as Smalltalk, C++, etc., and conventional procedural programming languages such as "C" language or similar programming languages.
- computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
- in some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, thereby producing a machine such that, when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced.
- the computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause the computer, programmable data processing apparatus, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- the computer-readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment, so that a series of operating steps are performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other equipment implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction that contains one or more executable instructions for implementing the specified logical functions.
- the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or can be realized by a combination of dedicated hardware and computer instructions.
Abstract
Description
Claims (36)
- An image processing method, comprising: acquiring a first image and a second image of an object to be verified, wherein the first image is collected by a first camera module and the second image is collected by at least one second camera module; comparing the first image with image data in a target library to perform identity verification and obtain a first verification result; and, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image, and determining the identity of the object to be verified according to a second verification result of the joint verification.
- The method according to claim 1, wherein the target library includes a white/blacklist library; and comparing the first image with the image data in the target library to perform identity verification and obtain the first verification result includes: comparing first feature data of the first image with the feature data of each image in the white/blacklist library; and, when feature data matching the first feature data exists in the white/blacklist library, determining that the first verification result is a verification success and determining the object to be verified corresponding to the first image to be a blacklist object or a whitelist object.
- The method according to claim 1 or 2, wherein the target library includes a marked stranger library; and comparing the first image with the image data in the target library to perform identity verification and obtain the first verification result includes: comparing the acquired first feature data of the first image with the feature data of the images in the marked stranger library; and, when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success, and the object to be verified corresponding to the first image is determined to be a stranger who has already been marked.
- The method according to claim 3, wherein, when feature data matching the first feature data exists in the marked stranger library, the method further includes: counting the number of times the object to be verified corresponding to the first image has been marked as a stranger.
- The method according to any one of claims 1-4, wherein the method further includes: when the first verification result is a verification success, adding the first image and associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of time information at which the first camera module collected the first image, identification information of the first camera module, and location information of the first camera module.
- The method according to any one of claims 1-5, wherein, before performing joint verification using the first image and the second image, the method further includes: performing deduplication processing on the first image and/or the second image that failed verification within a first time range, to obtain, for each object to be verified, the first image and/or the second image satisfying a first preset condition within the first time range.
- The method according to any one of claims 1-6, wherein, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image and determining the identity of the object to be verified according to the second verification result of the joint verification includes: clustering the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within a second time range, to obtain an image set for each object to be verified; determining the similarity between each image in the image set and the other images in the image set; determining, based on the similarity corresponding to each image in the image set, whether the image set satisfies a second preset condition; and, when the image set satisfies the second preset condition, determining that the object to be verified corresponding to the image set is a stranger.
- The method according to claim 7, wherein determining the similarity between each image in the image set and the other images includes: obtaining the sum of the inner products of the feature data of each image in each image set with the feature data of all the images; and determining the similarity between each image and the remaining images based on the sum and the number of feature data items in the image set.
- The method according to claim 7 or 8, wherein clustering the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within the second time range to obtain an image set for each object to be verified includes: acquiring first feature data and second feature data respectively corresponding to the first images and the second images that failed verification within the second time range; comparing and matching the first feature data with the second feature data to determine whether each piece of first feature data and each piece of second feature data correspond to the same object to be verified; and clustering the first feature data and the second feature data of the same object to be verified to form the image set of that object.
- The method according to any one of claims 7-9, wherein determining, based on the similarity corresponding to each image in the image set, whether the image set satisfies the second preset condition includes at least one of the following: the maximum similarity among the similarities corresponding to the images in the image set is greater than a first similarity threshold; the number of feature data items whose similarity is greater than a second similarity threshold, among the similarities corresponding to the images in the image set, exceeds a preset proportion; the minimum similarity among the similarities corresponding to the images in the image set is greater than a third similarity threshold.
- The method according to any one of claims 7-10, wherein, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image and determining the identity of the object to be verified according to the second verification result of the joint verification further includes: deleting all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
- The method according to any one of claims 7-11, wherein determining that the object to be verified corresponding to the image set is a stranger when the image set satisfies the second preset condition includes: determining that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges.
- The method according to any one of claims 1-12, wherein acquiring the first image and the second image of the object to be verified includes: separately acquiring a first video collected by the first camera module and a second video collected by at least one second camera module, preprocessing the first video to obtain a third image and preprocessing the second video to obtain a fourth image, or receiving the third image and the fourth image; and determining the images satisfying quality requirements in the third image as the first image and the images satisfying quality requirements in the fourth image as the second image.
- The method according to claim 13, wherein, after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, the method further includes: detecting whether the first image and/or the second image contain a predetermined feature; and, in response to the first image and/or the second image containing the predetermined feature, marking the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
- The method according to any one of claims 1-14, wherein the method further includes: outputting a prompt of the first verification result or the second verification result.
- The method according to claim 15, wherein outputting the prompt of the first verification result or the second verification result includes: in response to the first verification result being a verification success, outputting the identity of the object to be verified and its associated information in a preset manner, and, when the object to be verified is determined to be a marked stranger, outputting the number of times it has been marked as a stranger; or outputting the second verification result.
- The method according to any one of claims 1-16, wherein the method further includes: in response to the second verification result being that the object to be verified is a stranger, storing the first image, the second image, and the associated information corresponding to the object to be verified in the target library; and displaying, through a user interaction interface, the verification result of being determined a stranger, statistical information, and prompt information.
- An image processing apparatus, comprising: an acquisition module configured to acquire a first image and a second image of an object to be verified, wherein the first image is collected by a first camera module and the second image is collected by at least one second camera module; a first verification module configured to compare the first image with image data in a target library to perform identity verification and obtain a first verification result; and a second verification module configured to, in response to the first verification result being a verification failure, perform joint verification using the first image and the second image and determine the identity of the object to be verified according to a second verification result of the joint verification.
- The apparatus according to claim 18, wherein the target library includes a white/blacklist library; and the first verification module is further configured to compare the first feature data of the first image with the feature data of each image in the white/blacklist library, and, when feature data matching the first feature data exists in the white/blacklist library, determine that the first verification result is a verification success and determine the object to be verified corresponding to the first image to be a blacklist object or a whitelist object.
- The apparatus according to claim 18 or 19, wherein the target library includes a marked stranger library; and the first verification module is further configured to compare the acquired first feature data of the first image with the feature data of the images in the marked stranger library, and, when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success and the object to be verified corresponding to the first image is determined to be a stranger who has already been marked.
- The apparatus according to claim 20, wherein the apparatus further includes a statistics module configured to count, when feature data matching the first feature data exists in the marked stranger library, the number of times the object to be verified corresponding to the first image has been marked as a stranger.
- The apparatus according to any one of claims 18-21, wherein the first verification module is further configured to, when the first verification result is a verification success, add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of time information at which the first camera module collected the first image, identification information of the first camera module, and location information of the first camera module.
- The apparatus according to any one of claims 18-22, wherein the apparatus further includes a deduplication module configured to, before joint verification is performed using the first image and the second image, perform deduplication processing on the first image and/or the second image that failed verification within a first time range, to obtain, for each object to be verified, the first image and/or the second image satisfying a first preset condition within the first time range.
- The apparatus according to any one of claims 18-23, wherein the second verification module is further configured to cluster the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure within a second time range, to obtain an image set for each object to be verified, determine the similarity between each image in the image set and the other images in the image set, determine, based on the similarity corresponding to each image in the image set, whether the image set satisfies a second preset condition, and, when the image set satisfies the second preset condition, determine that the object to be verified corresponding to the image set is a stranger.
- The apparatus according to claim 24, wherein the second verification module is further configured to obtain the sum of the inner products of the feature data of each image in each image set with the feature data of all the images, and determine the similarity between each image and the remaining images based on the sum and the number of feature data items in the image set.
- The apparatus according to claim 24 or 25, wherein the second verification module is further configured to acquire first feature data and second feature data respectively corresponding to the first images and the second images that failed verification within the second time range, compare and match the first feature data with the second feature data to determine whether each piece of first feature data and each piece of second feature data correspond to the same object to be verified, and cluster the first feature data and the second feature data of the same object to be verified to form the image set of that object.
- The apparatus according to any one of claims 24-26, wherein the second verification module is further configured to determine, based on the similarity corresponding to each image in the image set, whether the image set satisfies the second preset condition in at least one of the following ways: the maximum similarity among the similarities corresponding to the images in the image set is greater than a first similarity threshold; the number of feature data items whose similarity is greater than a second similarity threshold, among the similarities corresponding to the images in the image set, exceeds a preset proportion; the minimum similarity among the similarities corresponding to the images in the image set is greater than a third similarity threshold.
- The apparatus according to any one of claims 24-27, wherein the second verification module is further configured to delete all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
- The apparatus according to any one of claims 24-28, wherein the second verification module is further configured to determine that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges.
- The apparatus according to any one of claims 18-29, wherein the acquisition module is further configured to separately acquire a first video collected by the first camera module and a second video collected by at least one second camera module, preprocess the first video to obtain a third image and preprocess the second video to obtain a fourth image, or receive the third image and the fourth image, and determine the images satisfying quality requirements in the third image as the first image and the images satisfying quality requirements in the fourth image as the second image.
- The apparatus according to claim 30, wherein the acquisition module is further configured to, after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, detect whether the first image and/or the second image contain a predetermined feature, and, in response to the first image and/or the second image containing the predetermined feature, mark the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
- The apparatus according to any one of claims 18-31, wherein the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
- The apparatus according to claim 32, wherein the prompt module is further configured to, in response to the first verification result being a verification success, output the identity of the object to be verified and its associated information in a preset manner, and, when the object to be verified is determined to be a marked stranger, output the number of times it has been marked as a stranger; or output the second verification result.
- The apparatus according to any one of claims 18-33, wherein the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image, and the associated information corresponding to the object to be verified in the target library, and control the user interaction interface to display the verification result of being determined a stranger, statistical information, and prompt information.
- An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the method according to any one of claims 1 to 17.
- A computer-readable storage medium on which computer program instructions are stored, wherein, when the computer program instructions are executed by a processor, the method according to any one of claims 1 to 17 is implemented.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207026016A KR102450330B1 (ko) | 2018-12-21 | 2019-06-27 | Image processing method and apparatus, electronic device and storage medium |
JP2020547077A JP7043619B2 (ja) | 2018-12-21 | 2019-06-27 | Image processing method and apparatus, electronic device and storage medium |
SG11202008779VA SG11202008779VA (en) | 2018-12-21 | 2019-06-27 | Image processing method and apparatus, electronic device, and storage medium |
US17/015,189 US11410001B2 (en) | 2018-12-21 | 2020-09-09 | Method and apparatus for object authentication using images, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811574840.3 | 2018-12-21 | ||
CN201811574840.3A CN109658572B (zh) | 2018-12-21 | 2018-12-21 | Image processing method and apparatus, electronic device and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/015,189 Continuation US11410001B2 (en) | 2018-12-21 | 2020-09-09 | Method and apparatus for object authentication using images, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020124984A1 true WO2020124984A1 (zh) | 2020-06-25 |
Family
ID=66115852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/093388 WO2020124984A1 (zh) | 2018-12-21 | 2019-06-27 | 图像处理方法及装置、电子设备和存储介质 |
Country Status (7)
Country | Link |
---|---|
US (1) | US11410001B2 (zh) |
JP (1) | JP7043619B2 (zh) |
KR (1) | KR102450330B1 (zh) |
CN (1) | CN109658572B (zh) |
SG (1) | SG11202008779VA (zh) |
TW (1) | TWI717146B (zh) |
WO (1) | WO2020124984A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111611572A (zh) * | 2020-06-28 | 2020-09-01 | 支付宝(杭州)信息技术有限公司 | Real-name authentication method and apparatus based on face authentication |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108897777B (zh) * | 2018-06-01 | 2022-06-17 | 深圳市商汤科技有限公司 | Target object tracking method and apparatus, electronic device and storage medium |
JP7257765B2 (ja) * | 2018-09-27 | 2023-04-14 | キヤノン株式会社 | Information processing apparatus, authentication system, control methods therefor, and program |
CN109658572B (zh) | 2018-12-21 | 2020-09-15 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device and storage medium |
US11502843B2 (en) * | 2018-12-31 | 2022-11-15 | Nxp B.V. | Enabling secure internet transactions in an unsecure home using immobile token |
CN110443014A (zh) * | 2019-07-31 | 2019-11-12 | 成都商汤科技有限公司 | Identity verification method, electronic device and server for identity verification, and system |
CN112446395B (zh) | 2019-08-29 | 2023-07-25 | 杭州海康威视数字技术股份有限公司 | Network camera, video surveillance system and method |
CN111027374B (zh) * | 2019-10-28 | 2023-06-30 | 华为终端有限公司 | Image recognition method and electronic device |
EP3839904A1 (de) * | 2019-12-17 | 2021-06-23 | Wincor Nixdorf International GmbH | Self-service terminal and method for operating a self-service terminal |
CN111159445A (zh) * | 2019-12-30 | 2020-05-15 | 深圳云天励飞技术有限公司 | Picture filtering method and apparatus, electronic device and storage medium |
CN111382410B (zh) * | 2020-03-23 | 2022-04-29 | 支付宝(杭州)信息技术有限公司 | Face-swiping verification method and system |
US12105973B2 (en) * | 2020-03-25 | 2024-10-01 | Samsung Electronics Co., Ltd. | Dynamic quantization in storage devices using machine learning |
CN111914781B (zh) * | 2020-08-10 | 2024-03-19 | 杭州海康威视数字技术股份有限公司 | Face image processing method and apparatus |
CN113095289A (zh) * | 2020-10-28 | 2021-07-09 | 重庆电政信息科技有限公司 | Network method for preprocessing massive images in complex urban scenes |
CN112597886A (zh) * | 2020-12-22 | 2021-04-02 | 成都商汤科技有限公司 | Fare evasion detection method and apparatus, electronic device and storage medium |
CN113344132A (zh) * | 2021-06-30 | 2021-09-03 | 成都商汤科技有限公司 | Identity recognition method, system, apparatus, computer device and storage medium |
CN113688278A (zh) * | 2021-07-13 | 2021-11-23 | 北京旷视科技有限公司 | Information processing method and apparatus, electronic device and computer-readable medium |
CN113569676B (zh) * | 2021-07-16 | 2024-06-11 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device and storage medium |
CN113609931B (zh) * | 2021-07-20 | 2024-06-21 | 上海德衡数据科技有限公司 | Neural-network-based face recognition method and system |
CN114792451B (zh) * | 2022-06-22 | 2022-11-25 | 深圳市海清视讯科技有限公司 | Information processing method, device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010005215A2 (ko) * | 2008-07-07 | 2010-01-14 | 주식회사 미래인식 | Access control method and system using biometrics |
CN105023005A (zh) * | 2015-08-05 | 2015-11-04 | 王丽婷 | Face recognition apparatus and recognition method thereof |
CN105956520A (zh) * | 2016-04-20 | 2016-09-21 | 东莞市中控电子技术有限公司 | Personal identification apparatus and method based on multimodal biometric information |
CN206541317U (zh) * | 2017-03-03 | 2017-10-03 | 北京国承万通信息科技有限公司 | User identification system |
CN107305624A (zh) * | 2016-04-20 | 2017-10-31 | 厦门中控智慧信息技术有限公司 | Personal identification method and apparatus based on multimodal biometric information |
CN109658572A (zh) * | 2018-12-21 | 2019-04-19 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device and storage medium |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6111517A (en) * | 1996-12-30 | 2000-08-29 | Visionics Corporation | Continuous video monitoring using face recognition for access control |
US20020136433A1 (en) * | 2001-03-26 | 2002-09-26 | Koninklijke Philips Electronics N.V. | Adaptive facial recognition system and method |
US20080080748A1 (en) * | 2006-09-28 | 2008-04-03 | Kabushiki Kaisha Toshiba | Person recognition apparatus and person recognition method |
JP2009294955A (ja) | 2008-06-05 | 2009-12-17 | Nippon Telegr & Teleph Corp <Ntt> | Image processing apparatus, image processing method, image processing program, and recording medium recording the program |
CN102609729B (zh) * | 2012-02-14 | 2014-08-13 | 中国船舶重工集团公司第七二六研究所 | Multi-camera face recognition method and system |
KR20130133676A (ko) | 2012-05-29 | 2013-12-09 | 주식회사 코아로직 | User authentication method and apparatus using face recognition via camera |
US9245276B2 (en) * | 2012-12-12 | 2016-01-26 | Verint Systems Ltd. | Time-in-store estimation using facial recognition |
KR101316805B1 (ko) * | 2013-05-22 | 2013-10-11 | 주식회사 파이브지티 | Automatic face position tracking and face recognition method and system |
CN103530652B (zh) * | 2013-10-23 | 2016-09-14 | 北京中视广信科技有限公司 | Face-clustering-based video cataloging method, retrieval method, and system |
CN105809096A (zh) * | 2014-12-31 | 2016-07-27 | 中兴通讯股份有限公司 | Person tagging method and terminal |
US20160364609A1 (en) * | 2015-06-12 | 2016-12-15 | Delta ID Inc. | Apparatuses and methods for iris based biometric recognition |
CN205080692U (zh) * | 2015-11-09 | 2016-03-09 | 舒畅 | Security apparatus for police buildings |
CN105426485A (zh) * | 2015-11-20 | 2016-03-23 | 小米科技有限责任公司 | Image merging method and apparatus, intelligent terminal and server |
CN106250821A (zh) * | 2016-07-20 | 2016-12-21 | 南京邮电大学 | Face recognition method using clustering and reclassification |
CN106228188B (zh) * | 2016-07-22 | 2020-09-08 | 北京市商汤科技开发有限公司 | Clustering method and apparatus, and electronic device |
JP2018018324A (ja) | 2016-07-28 | 2018-02-01 | 株式会社東芝 | IC card and portable electronic device |
JP6708047B2 (ja) | 2016-08-05 | 2020-06-10 | 富士通株式会社 | Authentication apparatus, authentication method, and authentication program |
JP6809114B2 (ja) | 2016-10-12 | 2021-01-06 | 株式会社リコー | Information processing apparatus, image processing system, and program |
CN106778470A (zh) * | 2016-11-15 | 2017-05-31 | 东软集团股份有限公司 | Face recognition method and apparatus |
CN108228872A (zh) * | 2017-07-21 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face image deduplication method and apparatus, electronic device, storage medium, and program |
CN107729815B (zh) * | 2017-09-15 | 2020-01-14 | Oppo广东移动通信有限公司 | Image processing method and apparatus, mobile terminal, and computer-readable storage medium |
CN107480658B (zh) * | 2017-09-19 | 2020-11-06 | 苏州大学 | Face recognition apparatus and method based on multi-angle video |
CN108229297B (zh) * | 2017-09-30 | 2020-06-05 | 深圳市商汤科技有限公司 | Face recognition method and apparatus, electronic device, and computer storage medium |
CN107729928B (zh) * | 2017-09-30 | 2021-10-22 | 百度在线网络技术(北京)有限公司 | Information acquisition method and apparatus |
CN108875522B (zh) * | 2017-12-21 | 2022-06-10 | 北京旷视科技有限公司 | Face clustering method, apparatus, system, and storage medium |
KR102495796B1 (ko) * | 2018-02-23 | 2023-02-06 | 삼성전자주식회사 | Method for performing biometric authentication using multiple cameras with different fields of view, and electronic device therefor |
CN108446681B (zh) * | 2018-05-10 | 2020-12-15 | 深圳云天励飞技术有限公司 | Pedestrian analysis method, apparatus, terminal, and storage medium |
- 2018
  - 2018-12-21 CN CN201811574840.3A patent/CN109658572B/zh active Active
- 2019
  - 2019-06-27 SG SG11202008779VA patent/SG11202008779VA/en unknown
  - 2019-06-27 KR KR1020207026016A patent/KR102450330B1/ko active IP Right Grant
  - 2019-06-27 WO PCT/CN2019/093388 patent/WO2020124984A1/zh active Application Filing
  - 2019-06-27 JP JP2020547077A patent/JP7043619B2/ja active Active
  - 2019-12-12 TW TW108145587A patent/TWI717146B/zh active
- 2020
  - 2020-09-09 US US17/015,189 patent/US11410001B2/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111611572A (zh) * | 2020-06-28 | 2020-09-01 | 支付宝(杭州)信息技术有限公司 | Real-name authentication method and apparatus based on face authentication |
CN111611572B (zh) * | 2020-06-28 | 2022-11-22 | 支付宝(杭州)信息技术有限公司 | Real-name authentication method and apparatus based on face authentication |
Also Published As
Publication number | Publication date |
---|---|
JP7043619B2 (ja) | 2022-03-29 |
KR20200116158A (ko) | 2020-10-08 |
SG11202008779VA (en) | 2020-10-29 |
US11410001B2 (en) | 2022-08-09 |
TWI717146B (zh) | 2021-01-21 |
JP2021515945A (ja) | 2021-06-24 |
CN109658572B (zh) | 2020-09-15 |
KR102450330B1 (ko) | 2022-10-04 |
US20200401857A1 (en) | 2020-12-24 |
TW202036472A (zh) | 2020-10-01 |
CN109658572A (zh) | 2019-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020124984A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
US11232288B2 (en) | Image clustering method and apparatus, electronic device, and storage medium | |
WO2020073505A1 (zh) | Image processing method, apparatus, device and storage medium based on image recognition | |
WO2020029966A1 (zh) | Video processing method and apparatus, electronic device and storage medium | |
US20220067379A1 (en) | Category labelling method and device, and storage medium | |
WO2019214201A1 (zh) | Living-body detection method, apparatus and system, electronic device and storage medium | |
WO2021031645A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
TW202105199A (zh) | Data update method, electronic device and storage medium | |
US20180151199A1 (en) | Method, Device and Computer-Readable Medium for Adjusting Video Playing Progress | |
TW202029055A (zh) | Pedestrian recognition method and apparatus, electronic device and non-transitory computer-readable storage medium | |
WO2020019760A1 (zh) | Living-body detection method, apparatus and system, electronic device and storage medium | |
WO2021093375A1 (zh) | Method, apparatus and system for detecting companions, electronic device and storage medium | |
TW202105202A (zh) | Video processing method and apparatus, electronic device, storage medium and computer program | |
WO2020010927A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
WO2020181728A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
WO2021036382A9 (zh) | Image processing method and apparatus, electronic device and storage medium | |
WO2021103423A1 (zh) | Pedestrian event detection method and apparatus, electronic device and storage medium | |
WO2022099989A1 (zh) | Living-body recognition and access control device control methods and apparatuses, electronic device, storage medium, and computer program | |
TWI766458B (zh) | Information recognition method and apparatus, electronic device, and storage medium | |
WO2021164100A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
WO2023094894A1 (zh) | Target tracking and event detection methods and apparatuses, electronic device and storage medium | |
CN112101216A (zh) | Face recognition method, apparatus, device and storage medium | |
CN109101542B (zh) | Image recognition result output method and apparatus, electronic device and storage medium | |
CN111062407B (zh) | Image processing method and apparatus, electronic device and storage medium | |
CN111209769B (zh) | Identity verification system and method, electronic device and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19900885 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020547077 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20207026016 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19900885 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/09/2021) |
|