WO2020124984A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents
Info

Publication number
WO2020124984A1
WO2020124984A1 (PCT/CN2019/093388)
Authority
WO
WIPO (PCT)
Prior art keywords
image
verification
feature data
verified
stranger
Prior art date
Application number
PCT/CN2019/093388
Other languages
English (en)
French (fr)
Inventor
卢屹
曹理
洪春蕾
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to KR1020207026016A (patent KR102450330B1)
Priority to JP2020547077A (patent JP7043619B2)
Priority to SG11202008779VA
Publication of WO2020124984A1
Priority to US17/015,189 (patent US11410001B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • G06V40/173Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/00174Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voicepatterns
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/30Individual registration on entry or exit not involving the use of a pass
    • G07C9/32Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present disclosure relates to the field of intelligent monitoring, and in particular to image processing methods and devices, electronic equipment, and storage media.
  • Embodiments of the present disclosure provide an image processing method and apparatus, electronic device, and storage medium capable of jointly determining the identity of an object to be verified in a corresponding area through image information collected by multiple camera modules, with high determination accuracy and a low false alarm rate.
  • an image processing method which includes:
  • the first image and the second image are used for joint verification, and the identity of the object to be verified is determined according to the second verification result of the joint verification.
  • the target library includes a white/blacklist library
  • the comparing the first image with the image data in the target library to perform identity verification to obtain the first verification result includes:
  • the object to be verified corresponding to the first image is determined to be a blacklist object or a whitelist object.
  • the target library includes a marked stranger library
  • the comparison between the first image and the image data in the target library to perform identity verification to obtain the first verification result includes:
  • the first verification result is that the verification is successful, and the object to be verified corresponding to the first image is determined to be a marked stranger.
  • the method when there is feature data matching the first feature data in the marked stranger library, the method further includes:
  • the method further includes:
  • the first image and the associated information of the first image are added to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: time information of the first camera module collecting the first image, identification information of the first camera module, and position information of the first camera module.
  • the method before the first image and the second image are used for joint verification, the method further includes:
  • the first image and the second image are used for joint verification, and the second verification result of the joint verification is used to determine the identity of the object to be verified, including:
  • the determining the similarity of each image in the image set to other images includes:
  • the similarity between each image and the remaining images is determined based on the sum value and the number of feature data in the image set.
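The similarity computation described in the two bullets above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name and the assumption that feature vectors are L2-normalised (so that inner products behave like cosine similarities) are ours.

```python
import numpy as np

def per_image_similarity(features):
    """For each image in an image set, sum the products (inner products)
    of its feature vector with the feature vectors of all images in the
    set, then divide by the number of feature data in the set.

    `features` is an (N, D) array-like of per-image feature vectors,
    assumed L2-normalised."""
    f = np.asarray(features, dtype=float)
    # Sum of products of each feature vector with all feature vectors:
    # f @ f.sum(axis=0) gives, per row i, sum_j <f_i, f_j>.
    sums = f @ f.sum(axis=0)
    # Divide by the number of feature data in the image set.
    return sums / len(f)
```

With identical feature vectors the per-image similarity is 1, so tight clusters score high and outliers score low.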
  • the first image whose first verification result is a verification failure and the second image whose first verification result is a verification failure within a second time range are clustered to obtain an image set for each object to be verified, including:
  • the determining whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set includes at least one of the following ways:
  • the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold
  • the proportion of similarities corresponding to the images in the image set that are greater than the second similarity threshold exceeds a preset ratio
  • the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
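The three alternative criteria above can be sketched as a single check. This is an illustrative sketch only: the threshold values and the function name are assumptions, not taken from the patent.

```python
def meets_second_preset_condition(similarities,
                                  first_threshold=0.8,
                                  second_threshold=0.6,
                                  preset_ratio=0.5,
                                  third_threshold=0.4):
    """Return True if the image set's per-image similarities satisfy
    at least one of the three criteria described above.

    `similarities` holds one similarity score per image in the set;
    all threshold values are illustrative assumptions."""
    if not similarities:
        return False
    # Criterion 1: maximum similarity exceeds the first threshold.
    if max(similarities) > first_threshold:
        return True
    # Criterion 2: the proportion of similarities above the second
    # threshold exceeds a preset ratio.
    above = sum(1 for s in similarities if s > second_threshold)
    if above / len(similarities) > preset_ratio:
        return True
    # Criterion 3: minimum similarity exceeds the third threshold.
    return min(similarities) > third_threshold
```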
  • the first image and the second image are used for joint verification, and the second verification result of the joint verification is used to determine the identity of the object to be verified, further including:
  • the determination that the object to be verified corresponding to the image set is a stranger when the image set meets the second preset condition includes:
  • when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges, the object to be verified corresponding to the feature data set is determined to be a stranger.
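The multi-camera, multi-time-range rule above reduces to a small check. A minimal sketch, assuming each image's provenance is recorded as a (camera id, time-range id) pair; the record shape is our assumption.

```python
def is_stranger(records):
    """Return True when the images behind a feature data set were
    collected by different camera modules in different time ranges.

    `records` is a list of (camera_id, time_range_id) pairs, one per
    image corresponding to the feature data set."""
    cameras = {camera for camera, _ in records}
    time_ranges = {time_range for _, time_range in records}
    # Both the cameras and the time ranges must differ.
    return len(cameras) > 1 and len(time_ranges) > 1
```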
  • the acquiring the first image and the second image of the object to be verified includes:
  • the image satisfying the quality requirement in the third image is determined as the first image, and the image satisfying the quality requirement in the fourth image is determined as the second image.
  • the method further includes:
  • the first image and/or the second image contain a predetermined feature
  • the first image and/or the second image containing the predetermined feature are marked, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
  • the method further includes:
  • a prompt of the first verification result or the second verification result is output.
  • the output prompting the first verification result or the second verification result includes:
  • the identity and associated information of the object to be verified are output in a preset manner, and when it is determined that the object to be verified is a marked stranger, the number of times the object has been marked as a stranger is output; or
  • the second verification result is output.
  • the method further includes:
  • the second verification result is that the object to be verified is a stranger
  • the verification result, statistical information, and prompt information for the object determined to be a stranger are displayed through the user interaction interface.
  • an image processing apparatus including:
  • An obtaining module configured to obtain a first image and a second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module;
  • a first verification module configured to compare the first image with the image data in the target library to perform identity verification and obtain a first verification result
  • a second verification module configured to, in response to the case where the first verification result is a verification failure, perform joint verification using the first image and the second image, and determine the identity of the object to be verified according to the second verification result of the joint verification.
  • the target library includes a white/blacklist library
  • the first verification module is also used to compare the first feature data of the first image with the feature data of each image in the white/blacklist library;
  • the object to be verified corresponding to the first image is determined to be a blacklist object or a whitelist object.
  • the target library includes a marked stranger library
  • the first verification module is further used to compare the acquired first characteristic data of the first image with the characteristic data of the image in the marked stranger library;
  • the first verification result is that the verification is successful, and the object to be verified corresponding to the first image is determined to be a marked stranger.
  • the device further includes a statistics module configured to count the first image when there is feature data matching the first feature data in the marked stranger library The number of times the corresponding object to be verified is marked as a stranger.
  • the first verification module is further configured to add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: time information of the first camera module collecting the first image, identification information of the first camera module, and position information of the first camera module.
  • the device further includes a deduplication module configured to, before the joint verification of the first image and the second image, perform deduplication processing on the first image and/or the second image that failed verification within a first time range, to obtain the first image and/or the second image satisfying the first preset condition for each object to be verified within the first time range.
  • the second verification module is further configured to cluster the first image whose first verification result is a verification failure and the second image whose first verification result is a verification failure within a second time range, to obtain an image set for each object to be verified, and
  • the second verification module is further used to obtain a sum value of the product of the feature data of each image in each image set and the feature data of all the images, and
  • the similarity between each image and the remaining images is determined based on the sum value and the number of feature data in the image set.
  • the second verification module is further configured to obtain the first feature data and the second feature data corresponding to the first image and the second image that failed verification within the second time range, and
  • the second verification module is further configured to determine whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set, in at least one of the following ways:
  • the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold
  • the proportion of similarities corresponding to the images in the image set that are greater than the second similarity threshold exceeds a preset ratio
  • the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
  • the second verification module is further configured to delete all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
  • the second verification module is further used to determine, when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges, that the object to be verified corresponding to the feature data set is a stranger.
  • the acquisition module is further configured to separately acquire the first video collected by the first camera module and the second video collected by at least one second camera module, preprocess the first video to obtain a third image and preprocess the second video to obtain a fourth image, or receive the third image and the fourth image, and
  • the image satisfying the quality requirement in the third image is determined as the first image, and the image satisfying the quality requirement in the fourth image is determined as the second image.
  • the acquiring module is further configured to, after acquiring the first image and the second image of the object to be verified and before comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, detect whether the first image and/or the second image contains predetermined features, and
  • the first image and/or the second image contain a predetermined feature
  • the first image and/or the second image containing the predetermined feature are marked, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
  • the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
  • the prompting module is further configured to output the identity and associated information of the object to be verified in a preset manner in response to the case where the first verification result is that the verification is successful, and, when the object to be verified is determined to be a marked stranger, to output the number of times it has been marked as a stranger; or
  • the second verification result is output.
  • the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image, and the associated information corresponding to the object to be verified in the target library, and control the user interaction interface to display the verification result, statistical information, and prompt information for the object determined to be a stranger.
  • an electronic device including:
  • a memory for storing processor-executable instructions
  • the processor is configured to: execute the method of any one of the first aspect.
  • a computer-readable storage medium having computer program instructions stored thereon, which when executed by a processor implements the method of any one of the first aspects.
  • the embodiments of the present disclosure can determine the identity authority of the object to be verified based on the image information collected by multiple camera modules, which can effectively reduce the false alarm rate and greatly improve the recognition accuracy of strangers.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of step S100 in an image processing method according to an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of step S300 in the image processing method according to an embodiment of the present disclosure
  • FIG. 6 shows a flowchart of step S301 in the image processing method according to an embodiment of the present disclosure
  • FIG. 7 shows a flowchart of step S302 in the image processing method according to an embodiment of the present disclosure
  • FIG. 8 shows a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 10 shows a block diagram of an image processing device according to an embodiment of the present disclosure
  • FIG. 11 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure
  • FIG. 12 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method of the embodiment of the present disclosure can be applied to places that require management of entering personnel, such as government buildings, enterprise parks, hotels, communities, and office buildings. The identity of the object to be verified can be determined jointly through the image information collected by camera modules installed in different areas, so as to determine whether the object to be verified is a stranger or a person registered in the library.
  • the image processing method of the embodiment of the present disclosure may include:
  • S100: Acquire a first image and a second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module.
  • the image processing method of the embodiment of the present disclosure may be applied to electronic devices with image processing functions such as terminal devices or servers, and the terminal devices may be, for example, mobile phones, computer devices, and the like. These electronic devices are electrically connected to camera devices installed in various corners of the area to be inspected.
  • the camera devices include but are not limited to cameras, snap cameras, and the like. In other embodiments, these electronic devices include display screens.
  • the object to be verified refers to a person who enters the area to be verified.
  • the first image and the second image may be facial images of the object to be verified that need to be identified, or full-body images. In the embodiments of the present disclosure, facial images are used for explanation, but this is not intended to limit the disclosure.
  • the first image and the second image here come from different video sources.
  • the first image can be collected by the first camera module
  • the second image can be collected by at least one second camera module.
  • Different camera modules are set on different location areas, that is, the first camera module and the second camera module may be camera modules installed in different positions.
  • the camera modules other than the first camera module are collectively referred to as the second camera module.
  • the positions of the second camera modules can also be different. In this way, images can be collected in real time at different locations.
  • the acquisition times of the first image and the second image may be the same or different, which is not limited in this disclosure.
  • a neural network can be used to obtain the first feature data of each first image, and the first feature data is compared with the pre-stored feature data of the image data in the target library
  • the target library can include registered blacklists and whitelists, as well as objects that have been marked as strangers.
  • the second image collected by at least one second camera module may be combined for joint verification to verify the identity of the object to be verified.
  • the identity of the object to be verified can be jointly verified for the first image and the second image that fail to be verified, so that the verification success rate of the object to be verified can be improved.
  • the image processing method of the embodiment of the present disclosure can be applied in a place managed by personnel, and cameras can be installed at different locations of the place, any of which can serve as the first camera module of the embodiment of the present disclosure. For convenience of description below, camera modules other than the first camera module are referred to as second camera modules, and images collected by a second camera module may be referred to as second images.
  • the first image and the second image that need to be authenticated obtained in step S100 in the embodiment of the present disclosure may be images directly obtained from the first camera module and the second camera module, or may be images that have been analyzed and filtered; this disclosure does not limit this.
  • FIG. 2 shows a flowchart of step S100 in an image processing method according to an embodiment of the present disclosure, where the acquiring the first image to be identified may include:
  • S101: Obtain a first video collected by the first camera module and a second video collected by at least one second camera module, preprocess the first video to obtain multiple third images and preprocess the second video to obtain fourth images, or directly receive the third images and the fourth images including facial information of the object to be verified.
  • the received information may be information in the form of video or information in the form of picture.
  • the video information may be preprocessed to obtain from it the third image and the fourth image to be processed, where the preprocessing operations may include video decoding, image sampling, face detection, and other processing operations, through which the corresponding third image and fourth image including a facial image can be obtained.
  • the obtained third and fourth images may be in the form of pictures.
  • the third and fourth images may be processed directly, that is, a face detection method may be used to obtain the third and fourth images including the face image of the subject.
  • the first camera module can directly collect the third image including the facial image, and the second camera module can directly collect the fourth image including the facial image; for example, the first camera module and the second camera module can be face-capture cameras, so that the obtained third image and fourth image are face images. This disclosure does not specifically limit this: as long as the obtained third image and fourth image include the face area of the object to be verified, they can serve for the embodiments of the present disclosure.
  • S102: Determine an image that meets the quality requirements in the obtained third image as the first image, and determine an image that meets the quality requirements in the fourth image as the second image.
  • after the third image and the fourth image collected by the camera modules are obtained, images that meet the quality requirements need to be selected from the third image and the fourth image to perform the detection and determination of the user's identity. The third image and the fourth image can be jointly judged by angle and quality score, and pictures below a certain quality will be discarded.
  • the image quality of the third image and the fourth image may be determined through a neural network, or through a preset algorithm in which the third and fourth images are scored according to image clarity and the angle of the face. If the score is lower than a preset score, for example lower than 80 points, the third image and the fourth image may be deleted. If the score is higher than the preset score, the quality of the image satisfies the quality requirements, and the third image and the fourth image can be used to determine the identity of the person; that is, a third image that meets the quality requirements is used as the first image to be authenticated, and a fourth image that meets the quality requirements is used as the second image to be authenticated. The preset score can be set according to different needs and application scenarios, and this disclosure does not specifically limit it.
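The score-based filtering above can be sketched in a few lines. This is a hypothetical illustration: the function name and the mapping of image ids to scores are assumptions, and the 80-point default merely follows the example in the text.

```python
def select_quality_images(scored_images, preset_score=80):
    """Keep only images whose quality score (e.g. derived from clarity
    and face-angle scoring) meets the preset score; discard the rest.

    `scored_images` maps an image identifier to its quality score.
    Images scoring below `preset_score` are dropped, mirroring the
    "delete if lower than 80 points" example above."""
    return {img: score for img, score in scored_images.items()
            if score >= preset_score}
```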
  • before the first feature data is compared with the feature data in the target library to perform identity verification and the first verification result is obtained, it is also possible to detect whether the first image and/or the second image contains predetermined features, and when the third image and/or the fourth image is detected to contain predetermined features, the third image and/or the fourth image containing the predetermined features may be marked.
  • the mark here means that the third image and/or the fourth image containing predetermined characteristics can be assigned an identifier, and the identifier is used to indicate that the corresponding image can be directly used as the first image and the second image to be authenticated.
  • the predetermined characteristic may include at least one characteristic of a mask, a hat, and sunglasses.
  • if the object to be verified in the third image obtained from the first video collected by the first camera module is wearing a hat and a mask (that is, the feature data corresponding to the first image includes features such as a hat and a mask), the object to be verified can be directly listed as a suspicious person, that is, the third image can be used as the first image.
• similarly, if the object to be verified in the fourth image obtained from the second video collected by the second camera module is wearing a hat and sunglasses (that is, the feature data corresponding to the fourth image includes features such as a hat and sunglasses), the fourth image can be used directly as the second image.
• the feature data of the third image and the fourth image can be detected by a neural network to determine whether they contain the above-mentioned predetermined features.
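The marking of images containing predetermined features can be sketched as follows. The detector output is stubbed as a list of feature labels, and the `suspicious_mark` identifier field is a hypothetical name; in the disclosure the detection itself is performed by a neural network.

```python
# Predetermined features per the disclosure: mask, hat, sunglasses.
PREDETERMINED = {"mask", "hat", "sunglasses"}

def mark_if_suspicious(image: dict) -> dict:
    """Attach an identifier when any predetermined feature is detected,
    so the image can be used directly as a first/second image."""
    detected = set(image.get("features", []))
    if detected & PREDETERMINED:
        image["suspicious_mark"] = True  # identifier indicating direct use
    return image
```

An image whose detected features include a hat and mask is marked; an image with only ordinary glasses is not.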
• in this way, the first image and the second image to be processed can be conveniently obtained from different types of received images, and since the obtained first image and second image are images that meet the quality requirements, they can be used to accurately perform identity verification of the object to be verified.
  • the embodiment of the present disclosure may include a target library, where the blacklist and whitelist, and the marked stranger information are recorded in the target library.
  • the blacklist refers to the information of the objects that cannot enter the place
  • the whitelist refers to the information of the objects that can be allowed to enter the place.
• that is, the information stored in the target library is the information of objects with known identities and of marked stranger objects.
• the embodiment of the present disclosure may match the first feature data of the first image against the feature data of the image data in the target library.
  • the target database stores facial images and facial feature data of each first object, or may also include other information, such as name, age, etc., which is not specifically limited in the present disclosure.
• the first feature data of the first image can be compared with the feature data of each object in the target library. If there is feature data in the target library whose matching value with the first feature data exceeds the first matching threshold, it can be determined that the object to be verified corresponding to the first image is an object in the target library, which indicates that the first verification result is that the verification is successful. Conversely, if no feature data matching the first feature data can be found, it can be determined that the first verification result is verification failure.
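A minimal sketch of this comparison, assuming the feature data are L2-normalized vectors so that a dot product serves as the matching value; the default threshold value, the dictionary layout of the target library, and the function name are illustrative assumptions only.

```python
def match_against_library(first_feature, library, first_matching_threshold=0.8):
    """Return (verified, matched_id). Verification succeeds when some library
    entry's matching value with the first feature data exceeds the threshold."""
    best_id, best_sim = None, -1.0
    for obj_id, feat in library.items():
        # Dot product of normalized vectors = cosine similarity (matching value).
        sim = sum(a * b for a, b in zip(first_feature, feat))
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    if best_sim > first_matching_threshold:
        return True, best_id   # first verification result: verification success
    return False, None         # first verification result: verification failure
```

With a library containing one registered object, a matching feature vector verifies successfully while an orthogonal one fails.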
  • the second image collected by the second camera module may be used for further determination.
• since the embodiment of the present disclosure can perform identity verification of a human object based on the image collected by the camera module or the received image, it can compare the input image with the image data in the target library, that is, it can find an image in the target library that matches the input image.
  • the target library in the embodiments of the present disclosure may include a white/blacklist library and a marked stranger library.
  • the white/blacklist database includes registered blacklist objects and whitelist objects.
  • the blacklist objects are the people who restrict access to the corresponding places, and the whitelist objects are the people who are allowed to enter the corresponding places.
• in addition to the facial images of the whitelist objects and blacklist objects, the white/blacklist library may also include corresponding information such as name, age, and position.
  • identity verification of the object to be verified can be performed, and the verification result can indicate whether the object to be verified is a blacklist object or a whitelist object.
  • FIG. 3 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure, where the first image is compared with the image data in the target library to perform identity verification to obtain the first verification result, including:
  • the target library includes a white/blacklist library
  • the white/blacklist library may include facial images of whitelisted objects and blacklisted objects or may directly include feature data of facial images.
  • the first image and the associated information of the first image may be loaded into the matching record of the matched object,
  • the associated information may be the time when the first camera module collected the first image, the identifier of the first camera module, and the corresponding location information.
• the associated information of each image may be obtained at the same time as the image.
• a preset prompt operation may also be performed at this time, for example, prompting the entry of the blacklist object by means of voice or display output.
• information such as the number of times blacklist objects have entered may also be counted, and the number of entries output as a prompt for the convenience of management personnel.
  • the above information can be transmitted to the user interaction interface of the above electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
  • the identity verification of the blacklist object and the whitelist object can be performed, and when there is feature data matching the first feature data in the white/blacklist library, the first verification result is determined to be a successful verification .
  • the target library may also include a marked stranger library
• the objects in the marked stranger library are objects marked as strangers; the library may include the facial image of each object or directly include facial feature data, and may also include related information such as the collection time and location of each facial image, as well as the number of times each stranger has been marked.
  • the identity verification of the object to be verified can be performed against the marked stranger library, and the verification result can indicate whether the object to be verified is a marked stranger object.
  • FIG. 4 shows a flowchart of step S200 in the image processing method according to an embodiment of the present disclosure, in which the first image is compared with the image data in the target library to perform identity verification to obtain the first verification result, include:
• in this case, the first verification result is that the verification is successful, and the object to be verified corresponding to the first image is identified as a stranger who has already been marked.
  • the target library includes the marked stranger library.
• the marked stranger library may include facial images of objects marked as strangers or may directly include feature data of the facial images.
  • the first image and the associated information of the first image may be loaded into the matching record of the matching object
  • the associated information may be the time when the first camera module collected the first image, the identifier of the first camera module, and the corresponding location information.
• the associated information of each image may be obtained at the same time as the image.
• a preset prompt operation may also be performed at this time, for example, prompting the entry of the stranger by means of voice or display output.
• information such as the number of times the stranger was marked in the corresponding place, the stranger's stay time in the corresponding place, and the frequency of occurrence may also be counted, and this information may be output as a prompt for the convenience of management personnel.
• the stay time can be determined according to the times when the object was marked as a stranger; for example, the time difference between the last time and the first time the object was marked as a stranger can be used as the stay time, and the frequency of occurrence can be the ratio of the number of times the stranger was identified to the stay time.
• other information may also be counted, for example, the stranger's location information, where the stranger's location may be determined according to the identifier or location of the camera module that collected the stranger's image; in this way the movement track of the stranger can be obtained. The statistical information is not exhaustively listed in this disclosure.
  • the above information can be transmitted to the user interaction interface of the electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
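The stay-time and frequency statistics described above can be sketched as follows, following the stated formulas (stay time as the difference between the last and first marking times; frequency as the number of identifications divided by the stay time). Representing the timestamps in seconds is an assumption for illustration.

```python
def stranger_stats(mark_times):
    """mark_times: sorted list of times (in seconds) the object was marked a stranger.
    Returns (stay_time, frequency) per the formulas in the disclosure."""
    stay_time = mark_times[-1] - mark_times[0]        # last mark minus first mark
    if stay_time > 0:
        frequency = len(mark_times) / stay_time       # identifications per second
    else:
        frequency = float(len(mark_times))            # degenerate single-mark case
    return stay_time, frequency
```

For an object marked at 0 s, 30 s, and 60 s, the stay time is 60 s and the frequency is 3/60 = 0.05 identifications per second.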
• through the above, identity verification against marked stranger objects can be performed, and if there is feature data matching the first feature data in the marked stranger library, the first verification result is determined to be a successful verification.
  • the first matching threshold and the second matching threshold may be the same threshold, or may be different thresholds, and those skilled in the art can set it according to requirements.
• the verification order of the white/blacklist library and the marked stranger library in the target library can be set by a person skilled in the art according to requirements: the first feature data may be verified first against the white/blacklist library and, when there is no matching feature data there, verified against the marked stranger library; or the first feature data may be verified first against the marked stranger library and, when there is no matching feature data there, verified against the white/blacklist library; or the white/blacklist library and the marked stranger library may both be used for verification at the same time. That is to say, the embodiment of the present disclosure does not specifically limit the order in which the two libraries are used; any order capable of performing the verification described above can serve as an embodiment of the present disclosure.
  • the first verification result is verification failure
  • the first image may be saved.
  • joint verification may be performed based on the second image acquired by the second camera module other than the first camera module and the first image, based on the second verification result of the joint verification Determine the identity of the object to be verified.
• the process of the first verification operation on the second image in the embodiment of the present disclosure is the same as for the first image, and the first verification result of the second image can also be obtained; this will not be repeated here.
  • the first image may be temporarily stored.
• the first image within a preset time range may be deduplicated, thereby reducing the excessive temporary storage of images of the same object to be verified.
• the embodiment of the present disclosure may perform deduplication processing on the first image and/or the second image that failed verification within the first time range, to obtain, for each object to be verified within the first time range, the first image and/or the second image that satisfies the first preset condition.
• the first time range can be an adjustable time window (rolling window), for example set to 2-5 seconds; the first images and second images waiting to be archived (temporarily stored) can be deduplicated once per first time range, in which the first images of the same object to be verified are merged and deduplicated, and likewise the second images of the same object to be verified are merged and deduplicated.
• the temporarily stored first images may be images of different objects to be verified or multiple images of one object to be verified. Images of the same object to be verified among the first images can be recognized by comparing the feature data of each image, for example, determining images with a similarity greater than a similarity threshold to be images of the same object to be verified, and then, according to the first preset condition, only one image is retained for each object to be verified.
• the first preset condition may be that, according to the temporary storage time, the earliest temporarily stored image is retained and the remaining temporarily stored images of the same object to be verified are deleted.
  • the first preset condition may be to compare the score values of the images of the same object to be verified, retain the image with the highest score value, and delete the remaining images.
  • the acquisition of the score value is the same as the above embodiment.
  • the image can be analyzed according to a preset algorithm to obtain a score value, or the image can be scored using a neural network.
• the principle of scoring is determined based on the clarity of the image, the angle of the face, and the occlusion situation. A person skilled in the art can select a corresponding scoring method according to needs, which is not specifically limited in this disclosure.
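A sketch of the deduplication under the first preset condition that retains the highest-scoring image of each object to be verified. The `same_object` predicate stands in for the feature-similarity comparison described above and is supplied by the caller; the field names are illustrative assumptions.

```python
def deduplicate(images, same_object):
    """images: list of dicts with a 'score' key; same_object(a, b) decides whether
    two images belong to the same object to be verified."""
    kept = []
    for img in images:
        for i, rep in enumerate(kept):
            if same_object(img, rep):
                if img["score"] > rep["score"]:
                    kept[i] = img      # retain the higher-scoring image
                break                  # already represented; keep only one image
        else:
            kept.append(img)           # first image of a new object to be verified
    return kept
```

Given three temporarily stored images of which two belong to one object, only the higher-scoring one of the pair survives.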
  • FIG. 5 shows a flowchart of step S300 in the image processing method according to an embodiment of the present disclosure, wherein, in response to the case where the first verification result is a verification failure, the first image is combined with the second image Verification, determining the identity of the object to be verified according to the second verification result of the joint verification may include:
  • S301 Perform clustering processing on the first image whose first verification result is the verification failure and the second image whose first verification result is the verification failure in the second time range to obtain an image set for each object to be verified;
• the device performing the image processing method of the embodiment of the present disclosure may merge the first images and second images from each camera module whose feature data found no match within the second time range, and perform clustering processing to obtain an image set for each object to be verified, where the images included in each image set are images of the same object to be verified.
  • each image set can be conveniently processed.
  • S302 Determine the similarity between each image in the image set and other images in the image set;
  • the similarity analysis can be performed on the images of the image set of the same object to be verified to determine the similarity between each image and other images, so that it can be further judged whether each image in the image set is the same to be verified The image of the object.
  • S303 Determine whether the image set meets the second preset condition based on the similarity corresponding to each image in the image set;
• after obtaining the similarity between each image in each image set and the other images, it can be determined whether the image set meets the second preset condition according to the obtained similarity values. When the second preset condition is met, the probability that the images in the set belong to the same object is high, and the image set can be retained. If the similarity does not satisfy the second preset condition, the clustering of the images in the image set is not credible, the probability that they belong to the same object is low, and the image set can be deleted. The image sets satisfying the preset condition can then be used to determine whether the object to be verified is an unregistered object.
• in an embodiment of the present disclosure, step S301, in which the first images whose first verification result is verification failure and the second images whose first verification result is verification failure within the second time range are clustered to obtain an image set for each object to be verified, may include:
  • S3012 Compare and match the first feature data with the second feature data to determine whether each of the first feature data and each of the second feature data corresponds to the same object to be verified;
  • S3013 Cluster the first characteristic data of the first image and the second characteristic data of the second image of the same object to be verified to form an image set corresponding to the object to be verified.
  • the second time range is a time range greater than the first time range.
  • the first time range may be 2-5s and the second time range may be 10 minutes, but it is not a specific limitation of the embodiment of the present disclosure.
• within the second time range, which is greater than the first time range, the first images and second images that failed verification and underwent deduplication processing in each first time range can be obtained, and the first images and second images obtained by each camera module within the second time range can be used to obtain images of the different objects to be verified. For example, the deduplicated first images and second images obtained by the first camera module and the at least one second camera module in each first time range within the second time range can be examined, and the features of repeatedly appearing objects to be verified can be found and merged.
  • images with facial features greater than the similarity threshold can be combined into one category, that is, an image of the object to be verified.
  • an image set for multiple objects to be verified can be obtained, and each image set is an image of the same object to be verified.
• each processed image in the embodiment of the present disclosure may include the identification information of the camera module associated with it, so that it can be determined which camera module collected each image and, correspondingly, the location of the object to be verified at the time of collection.
  • the image may also be associated with the time information that the camera module collects the image, so that the time that each image is collected can be determined, and the time at which the object to be verified is located at each position can be correspondingly determined.
• the feature data can be identified through a neural network, which is not specifically limited in this disclosure.
  • the first feature data and the second feature data can be compared and matched to determine whether each of the first feature data and the second feature data corresponds to the same object to be verified.
  • Feature data corresponding to the same to-be-verified object are combined into one class to form an image set for each to-be-verified object.
• the image set may include each image and the feature data corresponding to each image, or may include only the feature data of each image, which is not specifically limited in this disclosure.
• the method for determining whether two pieces of feature data correspond to the same object to be verified may include using a neural network: if the identified probability that the two pieces of feature data belong to the same object to be verified is higher than a preset threshold, they may be determined to belong to the same object; if it is lower than the preset threshold, they may be determined to belong to different objects. In this way, it can be determined which pieces of feature data belong to the same object to be verified, and the image set corresponding to each object to be verified can be determined.
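The clustering of feature data into per-object image sets can be sketched with a simple greedy threshold scheme. This is one possible implementation only, since the disclosure leaves the clustering method open (for example, a neural network may be used); the similarity function and the threshold value are assumptions.

```python
def cluster_features(features, similarity, threshold=0.8):
    """Group feature vectors whose similarity to a cluster's representative
    exceeds the threshold; each resulting cluster models one image set."""
    image_sets = []                     # each entry: feature data of one object
    for feat in features:
        for cluster in image_sets:
            # Compare against the cluster's first (representative) feature.
            if similarity(feat, cluster[0]) > threshold:
                cluster.append(feat)    # same object to be verified
                break
        else:
            image_sets.append([feat])   # start a new image set
    return image_sets
```

With normalized vectors and a dot-product similarity, two identical vectors cluster together while an orthogonal one forms its own image set.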
• in an embodiment of the present disclosure, step S302, the determination of the similarity of each image in the image set to the other images in the image set, may include:
• through step S200, feature data for each image in the image set, such as the first feature data, can be obtained, which can be expressed in the form of a feature vector.
• the feature data of each image in the image set can be subjected to dot-product (inner-product) operations with the feature data of all images, and the results added.
• an image set may include n images, where n is an integer greater than 1, and for each image the sum of the facial-feature-data products between that image and all images can be acquired.
• the feature data of each image obtained by the embodiment of the present disclosure is a normalized feature vector; that is, the first feature data of each first image and the second feature data of each second image are feature vectors of the same dimension and the same length, so that calculations on the feature data can be performed easily.
  • S3022 Determine the similarity between each image and the remaining images based on the sum value and the number of feature data in the image set.
  • the similarity between each image and other images is determined according to the number of images in the image set.
• for example, the similarity between each image and the remaining images may be obtained by dividing the obtained sum value by n-1.
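Steps S3021-S3022 can be sketched as follows, assuming L2-normalized feature vectors. One reading of the text divides the sum of dot products by n-1; whether the self-similarity term (equal to 1 for a normalized vector) is subtracted first is an assumption made here so that the result is the average similarity to the remaining images.

```python
def per_image_similarity(feats):
    """feats: list of n normalized feature vectors (n > 1).
    Returns, for each image, its average similarity to the remaining images."""
    n = len(feats)
    sims = []
    for f in feats:
        # S3021: sum of dot products between this image and all n images.
        total = sum(sum(a * b for a, b in zip(f, g)) for g in feats)
        # S3022: remove the self-term (assumed) and average over n - 1 others.
        sims.append((total - 1.0) / (n - 1))
    return sims
```

For a set of three vectors where two are identical and one is orthogonal, the two identical images each score 0.5 against the rest and the outlier scores 0.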
• before determining whether the object to be verified is a stranger based on the image set, it may be further determined whether the image set satisfies the second preset condition based on the similarity corresponding to each image in the image set; when the similarity satisfies any one of the following cases, it is determined that the image set satisfies the second preset condition:
• the maximum similarity among the images can be compared with the first similarity threshold. If the maximum similarity is greater than the first similarity threshold, the similarity between the images in the image set is high, and it can be determined that the image set satisfies the preset condition. If the maximum similarity is less than the first similarity threshold, the clustering effect of the image set is not ideal, the probability that the images in the set belong to different objects to be verified is high, and the image set can be deleted.
• alternatively, if the proportion of similarities greater than the second similarity threshold exceeds a preset ratio, for example if 50% of the images' similarities are greater than the second similarity threshold, the similarity between the images in the image set is determined to be high, and it can be determined that the image set satisfies the preset condition. If the proportion of images with similarity greater than the second similarity threshold is less than the preset ratio, the clustering effect of the image set is not ideal, the probability that the images belong to different objects to be verified is high, and the image set can be deleted.
• if the smallest similarity in the image set is greater than the third similarity threshold, the similarity between the images in the image set is high, and it can be determined that the image set satisfies the preset condition. If the minimum similarity is less than the third similarity threshold, the clustering effect of the image set is not ideal, the probability that the images belong to different objects to be verified is high, and the image set can be deleted.
  • the selection of the first similarity threshold, the second similarity threshold, and the third similarity threshold can be set according to different requirements, and this disclosure does not specifically limit this.
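The three cases above can be sketched as a single check; the image set is retained when any one case holds. All threshold values and the preset ratio below are illustrative assumptions, since the disclosure leaves them to be set according to requirements.

```python
def satisfies_second_condition(sims, t1=0.9, t2=0.8, t3=0.7, ratio=0.5):
    """sims: per-image similarities to the rest of the image set.
    t1/t2/t3: first/second/third similarity thresholds (assumed values)."""
    if max(sims) > t1:                                   # case 1: maximum similarity
        return True
    if sum(s > t2 for s in sims) / len(sims) > ratio:    # case 2: proportion
        return True
    if min(sims) > t3:                                   # case 3: minimum similarity
        return True
    return False
```

A set with one very high similarity passes via case 1; a uniformly moderate set can still pass via case 3; a uniformly low set is deleted.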
• determining whether the object to be verified is a stranger may include determining that the object to be verified is a stranger in the case where the images in the image set were collected by different camera modules in different time ranges.
• for example, if the image set includes 2 images collected respectively by the first camera module and the second camera module, with collection times in different time ranges, it can be determined that the object to be verified corresponding to the image set is a stranger. That is, if the identity of the object to be verified cannot be recognized from the first image collected by the first camera module, nor from the second image collected by the second camera module, and the first image and the second image were collected in different time ranges (for example, in different first time ranges), then, provided the image set composed of the first image and the second image satisfies the preset condition, the corresponding object to be verified is determined to be a stranger.
  • the identity of the suspicious person can be jointly determined by the images collected by multiple camera modules, so that the identity of the object to be verified can be determined more accurately.
  • a preset prompt operation is performed.
• the relevant person may be reminded of the stranger's information through audio or display output. That is, in the embodiment of the present disclosure, in the case where the object to be verified corresponding to the first image is a stranger, performing the preset prompt operation includes: displaying on the display device the image of the stranger, the stranger's current location information, and statistical information on the number of appearances; and/or prompting, by means of audio, the presence of the stranger, the stranger's current location information, and the statistical information on the number of appearances.
• the stay time can be determined according to the times when the object was marked as a stranger; for example, the time difference between the last time and the first time the object was marked as a stranger can be used as the stay time, and the frequency of occurrence can be the ratio of the number of times the stranger was identified to the stay time.
• other information may also be counted, for example, the stranger's location information, where the stranger's location may be determined according to the identifier or location of the camera module that collected the stranger's image; in this way the movement track of the stranger can be obtained. The statistical information is not exhaustively listed in this disclosure.
  • the above information can be transmitted to the user interaction interface of the electronic device, and displayed through the user interaction interface, which is convenient for viewing various prompt information.
• in addition, the image set can be stored in the marked stranger library, where information such as the acquisition time and acquisition position of each image and the identifier of the camera module that collected it can also be stored in association.
  • the number of times marked as a stranger may be output; or the second verification result may be output.
• the second verification result is the result confirmed after the joint determination of the object to be verified, such as information identifying the object as a stranger or indicating that the object cannot be identified.
  • FIG. 8 shows a flowchart of an image processing method according to an embodiment of the present disclosure
• FIG. 9 shows a flowchart of stranger comparison in an image processing method according to an embodiment of the present disclosure.
  • the whitelist/blacklist personnel information is first entered into the system to form a white/blacklist library.
• the first objects in the white/blacklist library are collectively referred to as in-library persons, and persons not in the library are strangers.
  • the object information that has been marked as a stranger may constitute a marked stranger library, and the above two libraries may form a target library.
  • the method of acquiring the image collected by the camera module may include using a front-end camera to collect portrait information, wherein the high-definition network camera collects video and streams it back to the back-end server, or it may also collect a face image through a face capture machine and directly pass it back to the server.
• when the server receives the video stream, it decodes the returned video stream and extracts face pictures and feature values (face features) through a face detection algorithm or neural network. If the server instead receives returned face pictures, it can skip the video-stream decoding and directly detect the feature values of the face images. While performing face detection, it can also detect whether the face picture contains the feature of wearing a mask, and a picture matching the mask-wearing feature can be stored directly in the suspicious-person picture library; at the same time, the face picture is given an angle and quality score, and face images that do not meet the quality requirements are discarded.
• the facial feature values of the acquired facial image can be compared with the white/blacklist library to determine whether the object is a blacklist object or a whitelist object; if the matching succeeds, the face image can be stored in the comparison record of the white/blacklist library.
• the feature values can also be compared with the marked stranger library; if the match exceeds the second matching threshold (adjustable), the matching is considered successful and the stranger is recognized again.
• otherwise, the feature value of the face image is temporarily stored for processing. Deduplication can be performed once per first time window (for example, 2-5 seconds). The second time range can consist of multiple first time ranges and can be set, for example, to 10 minutes; the repeated portrait features in the face images retained by different camera devices within the second time range are found and merged, for which the similarity threshold Lv3 (adjustable) can be used for clustering, and the shooting device corresponding to each image can be recorded.
• feature values whose similarity exceeds the Lv2 and Lv3 similarity thresholds are grouped into the same category and regarded as different picture features of the same person.
• the judgment then involves the following two conditions: (i) whether the object appears in n time windows (rolling windows), where n is usually set to 1 or 2; (ii) whether the number of recording devices is greater than m, where m is usually set to 2. If both are met, the criteria for determining a stranger are satisfied and the object is inserted into the stranger library. That is, it can be judged whether the images in the image set were taken by different camera devices in different time ranges; if the image set meets the stranger judgment conditions, it can be added to the second database, and otherwise the image set is discarded.
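Conditions (i) and (ii) can be sketched as follows. The interpretation that condition (i) means appearing in at least n time windows is an assumption, as is the record layout with `window` (time-window index) and `device` fields.

```python
def is_stranger(image_set, n=1, m=2):
    """image_set: list of dicts with 'window' (time-window index) and 'device'.
    Returns True when both stranger-judgment conditions hold:
    (i) the object appears in at least n time windows;
    (ii) the number of recording devices is greater than m."""
    windows = {img["window"] for img in image_set}
    devices = {img["device"] for img in image_set}
    return len(windows) >= n and len(devices) > m
```

An image set spanning two windows and three devices satisfies both conditions with the typical settings, while a single capture from one device does not.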
• All the feature values saved in the above steps can correspond one-to-one with their original face pictures, and all carry time and address (device number) information. Based on this information, the system supports applications such as querying stranger pictures, searching by picture, tracking strangers, and situation statistics.
  • the embodiments of the present disclosure can determine the identity authority of the object to be verified based on the image information collected by multiple camera modules, which can effectively reduce the false alarm rate and greatly increase the recognition accuracy of strangers.
• the embodiments of the present disclosure support recording persons wearing masks and hats directly in the list of suspicious persons, while recording the time and location, which is convenient for later inquiries; business logic that raises an alarm when a masked person appears can also be configured according to requirements.
• stranger information recording and statistics are also supported, including querying stranger pictures according to time and place, searching by picture, track queries, stay-time queries, stranger appearance frequency, and other operations.
  • the information of strangers entering and exiting can be effectively recorded, and the accuracy rate can meet the practical application requirements, which solves the problem that the strangers cannot be effectively identified in public places.
  • it can help management personnel and security personnel to control strangers from entering and leaving closed places such as government buildings, enterprise parks, hotels, communities, office buildings, etc., and improve the safety and order of the places.
  • the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image processing methods provided by the present disclosure.
  • FIG. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 10, the apparatus includes:
  • the obtaining module 10 is configured to obtain the first image and the second image of the object to be verified, wherein the first image is collected by a first camera module, and the second image is collected by at least one second camera module;
  • the first verification module 20 is configured to compare the first image with the image data in the target library to perform identity verification and obtain a first verification result;
  • the second verification module 30 is configured to, in response to the first verification result being a verification failure, perform joint verification using the first image and the second image, and determine the identity of the object to be verified according to the second verification result of the joint verification.
  • the target library includes a white/blacklist library
  • the first verification module is also used to compare the first feature data of the first image with the feature data of each image in the white/blacklist library; and
  • when feature data matching the first feature data exists in the white/blacklist library, determine that the first verification result is a verification success, and determine that the object to be verified corresponding to the first image is a blacklist object or a whitelist object.
  • the target library includes a marked stranger library
  • the first verification module is further used to compare the acquired first feature data of the first image with the feature data of the images in the marked stranger library; and
  • when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success, and the object to be verified corresponding to the first image is determined to be a marked stranger.
  • the device further includes a statistics module configured to count, when feature data matching the first feature data exists in the marked stranger library, the number of times the object to be verified corresponding to the first image has been marked as a stranger.
  • the first verification module is further configured to, when the first verification result is a verification success, add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: the time at which the first camera module collected the first image, the identification information of the first camera module, and the location information of the first camera module.
  • the device further includes a deduplication module configured to, before joint verification is performed on the first image and the second image, deduplicate the first images and/or second images that failed verification within a first time range, obtaining, for each object to be verified, the first image and/or second image satisfying a first preset condition within the first time range.
  • the second verification module is further configured to cluster, within a second time range, the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure, to obtain an image set for each object to be verified, and
  • the second verification module is further used to obtain the sum of the inner products of the feature data of each image in each image set with the feature data of all the images, and
  • determine the similarity between each image and the remaining images based on that sum and the number of feature data items in the image set.
  • the second verification module is further configured to obtain the first feature data and second feature data corresponding, respectively, to the first images and second images that failed verification within the second time range, and
  • the second verification module is further configured to determine, in at least one of the following ways, whether the image set satisfies the second preset condition based on the similarity corresponding to each image in the image set:
  • the maximum similarity among the similarities corresponding to the images in the image set is greater than the first similarity threshold;
  • the number of feature data items whose similarity, among the similarities corresponding to the images in the image set, is greater than the second similarity threshold exceeds a preset ratio;
  • the minimum similarity among the similarities corresponding to the images in the image set is greater than the third similarity threshold.
  • the second verification module is further configured to delete all images corresponding to the image set when the similarity between the images in the image set does not satisfy the preset condition.
  • the second verification module is also used to determine that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges.
  • the acquisition module is further configured to separately acquire the first video collected by the first camera module and the second video collected by at least one second camera module, preprocess the first video to obtain a third image and preprocess the second video to obtain a fourth image, or receive the third image and the fourth image, and
  • determine the images satisfying the quality requirement in the third image as the first image, and the images satisfying the quality requirement in the fourth image as the second image.
  • the acquiring module is further configured to detect whether the first image and/or the second image contains a predetermined feature after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, and
  • in response to the first image and/or the second image containing a predetermined feature, mark the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
  • the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
  • the prompting module is further configured to, in response to the first verification result being a verification success, output the identity and associated information of the object to be verified in a preset manner, and, when the object to be verified is determined to be a marked stranger, output the number of times it has been marked as a stranger; or
  • the second verification result is output.
  • the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image, and the associated information corresponding to the object to be verified in the target library, and control the user interaction interface to display the verification result, statistical information, and prompt information for the object determined to be a stranger.
  • the functions provided by the apparatus provided by the embodiments of the present disclosure or the modules contained therein may be used to perform the methods described in the above method embodiments.
  • An embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
  • the electronic device may be provided as a terminal, server, or other form of device.
  • Fig. 11 is a block diagram of an electronic device 800 according to an exemplary embodiment.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, and a personal digital assistant.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps in the above method.
  • the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of these data include instructions for any application or method for operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power to various components of the electronic device 800.
  • the power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with status assessment in various aspects.
  • the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800); the sensor component 814 can also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • the sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above method.
  • a non-volatile computer-readable storage medium is also provided, for example, a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
  • Fig. 12 is a block diagram of an electronic device 1900 according to an exemplary embodiment.
  • the electronic device 1900 may be provided as a server.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
  • the application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • the electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-volatile computer-readable storage medium is also provided, for example, a memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
  • the present disclosure may be a system, method, and/or computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for causing the processor to implement various aspects of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, a mechanical encoding device such as a punch card or an in-groove raised structure on which instructions are stored, and any suitable combination of the foregoing.
  • the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device through a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
  • the computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
  • the programming languages include object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (eg, using an Internet service provider to pass the Internet connection).
  • electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that, when executed by the processor of the computer or other programmable data processing apparatus, the instructions produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • the computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause the computer, programmable data processing apparatus, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • the computer-readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment, so that a series of operating steps is performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other equipment implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction that contains one or more executable instructions for implementing the specified logical functions.
  • the functions marked in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, and sometimes in reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or with a combination of dedicated hardware and computer instructions.


Abstract

An image processing method and apparatus, an electronic device, and a storage medium. The image processing method includes: acquiring a first image and a second image of an object to be verified, wherein the first image is collected by a first camera module and the second image is collected by at least one second camera module (S100); comparing the first image with image data in a target library to perform identity verification, and obtaining a first verification result (S200); and, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image, and determining the identity of the object to be verified according to a second verification result of the joint verification (S300). The method features high determination accuracy and a low false alarm rate.

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 201811574840.3, filed on December 21, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of intelligent monitoring, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, places such as government buildings, enterprise parks, hotels, residential communities, and office buildings usually manage people entering through traditional physical security measures, but such methods cannot identify whether a visitor has permission to enter the area. To address this, turnstiles and card-swiping access control, or face recognition, are commonly used. However, turnstiles and card swiping cannot prevent private card exchanges or tailgating. Moreover, for stranger recognition based on face recognition, people in real scenes often appear in front of the camera with occluded faces, side faces, or lowered heads, so the captured photos differ significantly from the photos in the target library, resulting in a very high false alarm rate for strangers.
Summary
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium capable of jointly determining the identity of an object to be verified within a corresponding area through image information collected by multiple camera modules, with high determination accuracy and a low false alarm rate.
According to one aspect of the present disclosure, an image processing method is provided, which includes:
acquiring a first image and a second image of an object to be verified, wherein the first image is collected by a first camera module and the second image is collected by at least one second camera module;
comparing the first image with image data in a target library to perform identity verification, and obtaining a first verification result;
in response to the first verification result being a verification failure, performing joint verification using the first image and the second image, and determining the identity of the object to be verified according to a second verification result of the joint verification.
In some possible implementations, the target library includes a white/blacklist library;
comparing the first image with the image data in the target library to perform identity verification and obtaining the first verification result includes:
comparing first feature data of the first image with feature data of each image in the white/blacklist library;
when feature data matching the first feature data exists in the white/blacklist library, determining that the first verification result is a verification success, and determining that the object to be verified corresponding to the first image is a blacklist object or a whitelist object.
In some possible implementations, the target library includes a marked stranger library;
comparing the first image with the image data in the target library to perform identity verification and obtaining the first verification result includes:
comparing the acquired first feature data of the first image with feature data of the images in the marked stranger library;
when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success, and the object to be verified corresponding to the first image is determined to be a marked stranger.
In some possible implementations, when feature data matching the first feature data exists in the marked stranger library, the method further includes:
counting the number of times the object to be verified corresponding to the first image has been marked as a stranger.
In some possible implementations, the method further includes:
when the first verification result is a verification success, adding the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: the time at which the first camera module collected the first image, identification information of the first camera module, and location information of the first camera module.
In some possible implementations, before performing joint verification using the first image and the second image, the method further includes:
deduplicating the first images and/or second images that failed verification within a first time range, obtaining, for each object to be verified, the first image and/or second image satisfying a first preset condition within the first time range.
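The deduplication step above can be sketched as follows; keeping the single highest-quality capture per object per time window is only one plausible "first preset condition", and all names here are illustrative placeholders.

```python
def deduplicate(failed_records, window_seconds=60):
    """Keep one representative record per object per time window.

    failed_records: iterable of (object_id, timestamp, quality) tuples
        for captures that failed the first verification.
    """
    best = {}
    for obj_id, ts, quality in failed_records:
        key = (obj_id, int(ts // window_seconds))  # object within one window
        # assumed first preset condition: highest-quality capture wins
        if key not in best or quality > best[key][2]:
            best[key] = (obj_id, ts, quality)
    return sorted(best.values())
```

The surviving records would then feed the clustering-based joint verification described below.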
In some possible implementations, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image, and determining the identity of the object to be verified according to the second verification result of the joint verification, includes:
clustering, within a second time range, the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure, to obtain an image set for each object to be verified;
determining the similarity between each image in the image set and the other images in the image set;
determining, based on the similarity corresponding to each image in the image set, whether the image set satisfies a second preset condition;
when the image set satisfies the second preset condition, determining that the object to be verified corresponding to the image set is a stranger.
In some possible implementations, determining the similarity between each image in the image set and the other images includes:
obtaining the sum of the inner products of the feature data of each image in each image set with the feature data of all the images;
determining the similarity between each image and the remaining images based on that sum and the number of feature data items in the image set.
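Read literally, the two clauses above compute, for each image, the sum of the inner products of its feature vector with every vector in the set, normalized by the set size. A minimal pure-Python sketch, assuming L2-normalized feature vectors (whether the self inner product is included is not specified in the text; this sketch includes it):

```python
def per_image_similarity(features):
    """For each feature vector, the mean inner product with all
    vectors in the image set (sum of inner products / count)."""
    count = len(features)
    sims = []
    for f in features:
        total = sum(sum(a * b for a, b in zip(f, g)) for g in features)
        sims.append(total / count)
    return sims
```

The resulting per-image similarities are what the second-preset-condition checks below operate on.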
In some possible implementations, clustering, within the second time range, the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure, to obtain an image set for each object to be verified, includes:
obtaining the first feature data and second feature data corresponding, respectively, to the first images and second images that failed verification within the second time range;
comparing and matching the first feature data with the second feature data to determine whether each piece of first feature data and each piece of second feature data correspond to the same object to be verified;
clustering the first feature data and second feature data of the same object to be verified to form an image set of that object.
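The comparison-and-clustering step can be sketched as a greedy threshold clustering over the pooled first and second feature data; the threshold value and the use of the first member as a cluster representative are illustrative simplifications of the Lv2/Lv3 grouping mentioned elsewhere in the document, not the exact algorithm of the embodiments.

```python
def cluster_features(features, threshold=0.75):
    """Greedy clustering: a feature joins the first cluster whose
    representative's inner product exceeds `threshold` (vectors
    assumed L2-normalized), otherwise it starts a new cluster."""
    clusters = []
    for f in features:
        for cluster in clusters:
            rep = cluster[0]  # first member as cluster representative
            if sum(a * b for a, b in zip(f, rep)) > threshold:
                cluster.append(f)  # same object to be verified
                break
        else:
            clusters.append([f])  # a new object to be verified
    return clusters
```

Each resulting cluster plays the role of one image set for a single object to be verified.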
In some possible implementations, determining whether the image set satisfies the second preset condition based on the similarity corresponding to each image in the image set includes at least one of the following:
the maximum similarity among the similarities corresponding to the images in the image set is greater than a first similarity threshold;
the number of feature data items whose similarity, among the similarities corresponding to the images in the image set, is greater than a second similarity threshold exceeds a preset ratio;
the minimum similarity among the similarities corresponding to the images in the image set is greater than a third similarity threshold.
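Given the per-image similarities, the three alternative conditions can be checked as below; combining them with a logical OR is an assumption, since the text only says "at least one of the following", and the threshold values are illustrative placeholders.

```python
def image_set_passes(sims, t1=0.9, t2=0.8, ratio=0.5, t3=0.7):
    """Second-preset-condition check over a list of per-image
    similarities (threshold values are illustrative)."""
    cond_max = max(sims) > t1                                   # max > 1st threshold
    cond_ratio = sum(s > t2 for s in sims) / len(sims) > ratio  # share above 2nd
    cond_min = min(sims) > t3                                   # min > 3rd threshold
    return cond_max or cond_ratio or cond_min
```

An image set passing this check would then be tested against the cross-device, cross-time-range stranger condition.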
In some possible implementations, in response to the first verification result being a verification failure, performing joint verification using the first image and the second image, and determining the identity of the object to be verified according to the second verification result of the joint verification, further includes:
when the similarity between the images in the image set does not satisfy the preset condition, deleting all images in the image set.
In some possible implementations, when the image set satisfies the second preset condition, determining that the object to be verified corresponding to the image set is a stranger includes:
when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges, determining that the object to be verified corresponding to the feature data set is a stranger.
In some possible implementations, acquiring the first image and the second image of the object to be verified includes:
separately acquiring a first video collected by the first camera module and a second video collected by at least one second camera module, preprocessing the first video to obtain a third image and preprocessing the second video to obtain a fourth image, or receiving the third image and the fourth image;
determining the images satisfying a quality requirement in the third image as the first image, and determining the images satisfying the quality requirement in the fourth image as the second image.
In some possible implementations, after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, the method further includes:
detecting whether the first image and/or the second image contains a predetermined feature;
in response to the first image and/or the second image containing a predetermined feature, marking the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
In some possible implementations, the method further includes:
outputting a prompt of the first verification result or the second verification result.
In some possible implementations, outputting the prompt of the first verification result or the second verification result includes:
in response to the first verification result being a verification success, outputting the identity and associated information of the object to be verified in a preset manner, and, when the object to be verified is determined to be a marked stranger, outputting the number of times it has been marked as a stranger; or
outputting the second verification result.
In some possible implementations, the method further includes:
in response to the second verification result being that the object to be verified is a stranger, storing the first image, the second image, and the associated information corresponding to the object to be verified in the target library;
displaying, through a user interaction interface, the verification result, statistical information, and prompt information for the object determined to be a stranger.
According to a second aspect of the present disclosure, an image processing apparatus is provided, which includes:
an acquisition module configured to acquire a first image and a second image of an object to be verified, wherein the first image is collected by a first camera module and the second image is collected by at least one second camera module;
a first verification module configured to compare the first image with image data in a target library to perform identity verification, and obtain a first verification result;
a second verification module configured to, in response to the first verification result being a verification failure, perform joint verification using the first image and the second image, and determine the identity of the object to be verified according to a second verification result of the joint verification.
In some possible implementations, the target library includes a white/blacklist library;
the first verification module is further configured to compare first feature data of the first image with feature data of each image in the white/blacklist library; and
when feature data matching the first feature data exists in the white/blacklist library, determine that the first verification result is a verification success, and determine that the object to be verified corresponding to the first image is a blacklist object or a whitelist object.
In some possible implementations, the target library includes a marked stranger library;
the first verification module is further configured to compare the acquired first feature data of the first image with feature data of the images in the marked stranger library; and
when feature data matching the first feature data exists in the marked stranger library, the first verification result is a verification success, and the object to be verified corresponding to the first image is determined to be a marked stranger.
In some possible implementations, the apparatus further includes a statistics module configured to count, when feature data matching the first feature data exists in the marked stranger library, the number of times the object to be verified corresponding to the first image has been marked as a stranger.
In some possible implementations, the first verification module is further configured to, when the first verification result is a verification success, add the first image and the associated information of the first image to the matching record corresponding to the matched feature data, wherein the associated information of the first image includes at least one of: the time at which the first camera module collected the first image, identification information of the first camera module, and location information of the first camera module.
In some possible implementations, the apparatus further includes a deduplication module configured to, before joint verification is performed using the first image and the second image, deduplicate the first images and/or second images that failed verification within a first time range, obtaining, for each object to be verified, the first image and/or second image satisfying a first preset condition within the first time range.
In some possible implementations, the second verification module is further configured to cluster, within a second time range, the first images whose first verification result is a verification failure and the second images whose first verification result is a verification failure, to obtain an image set for each object to be verified, and
determine the similarity between each image in the image set and the other images in the image set, and
determine, based on the similarity corresponding to each image in the image set, whether the image set satisfies a second preset condition, and
when the image set satisfies the second preset condition, determine that the object to be verified corresponding to the image set is a stranger.
In some possible implementations, the second verification module is further configured to obtain the sum of the inner products of the feature data of each image in each image set with the feature data of all the images, and
determine the similarity between each image and the remaining images based on that sum and the number of feature data items in the image set.
In some possible implementations, the second verification module is further configured to obtain the first feature data and second feature data corresponding, respectively, to the first images and second images that failed verification within the second time range, and
compare and match the first feature data with the second feature data to determine whether each piece of first feature data and each piece of second feature data correspond to the same object to be verified; and
cluster the first feature data and second feature data of the same object to be verified to form an image set of that object.
In some possible implementations, the second verification module is further configured to determine, in at least one of the following ways, whether the image set satisfies the second preset condition based on the similarity corresponding to each image in the image set:
the maximum similarity among the similarities corresponding to the images in the image set is greater than a first similarity threshold;
the number of feature data items whose similarity, among the similarities corresponding to the images in the image set, is greater than a second similarity threshold exceeds a preset ratio;
the minimum similarity among the similarities corresponding to the images in the image set is greater than a third similarity threshold.
In some possible implementations, the second verification module is further configured to delete all images in the image set when the similarity between the images in the image set does not satisfy the preset condition.
In some possible implementations, the second verification module is further configured to determine that the object to be verified corresponding to the feature data set is a stranger when the images corresponding to the feature data in the feature data set were collected by different camera modules in different time ranges.
In some possible implementations, the acquisition module is further configured to separately acquire a first video collected by the first camera module and a second video collected by at least one second camera module, preprocess the first video to obtain a third image and preprocess the second video to obtain a fourth image, or receive the third image and the fourth image, and
determine the images satisfying a quality requirement in the third image as the first image, and the images satisfying the quality requirement in the fourth image as the second image.
In some possible implementations, the acquisition module is further configured to detect whether the first image and/or the second image contains a predetermined feature after acquiring the first image and the second image of the object to be verified, and before acquiring the first feature data of the first image and comparing the first feature data with the feature data in the target library to perform identity verification and obtain the first verification result, and
in response to the first image and/or the second image containing a predetermined feature, mark the first image and/or the second image containing the predetermined feature, wherein the predetermined feature includes at least one of a mask, a hat, and sunglasses.
In some possible implementations, the apparatus further includes a prompt module configured to output a prompt of the first verification result or the second verification result.
In some possible implementations, the prompt module is further configured to, in response to the first verification result being a verification success, output the identity and associated information of the object to be verified in a preset manner, and, when the object to be verified is determined to be a marked stranger, output the number of times it has been marked as a stranger; or
output the second verification result.
In some possible implementations, the second verification module is further configured to, in response to the second verification result being that the object to be verified is a stranger, store the first image, the second image, and the associated information corresponding to the object to be verified in the target library, and control the user interaction interface to display the verification result, statistical information, and prompt information for the object determined to be a stranger.
According to a third aspect of the present disclosure, an electronic device is provided, which includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of the first aspect.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the method of any one of the first aspect.
Embodiments of the present disclosure can determine the identity authority of the object to be verified based on image information collected by multiple camera modules, which can effectively reduce the false alarm rate and greatly improve the recognition accuracy for strangers.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1示出根据本公开实施例的一种图像处理方法的流程图;
图2示出根据本公开实施例的一种图像处理方法中步骤S100的流程图;
图3示出根据本公开实施例的图像处理方法中步骤S200的流程图;
图4示出根据本公开实施例的图像处理方法中步骤S200的流程图;
图5示出根据本公开实施例的图像处理方法中的步骤S300的流程图;
图6示出根据本公开实施例的图像处理方法中步骤S301的流程图;
图7示出根据本公开实施例的图像处理方法中步骤S302的流程图;
图8示出根据本公开实施例的图像处理方法的流程图;
图9示出根据本公开实施例的图像处理方法陌生人比对的流程图;
图10示出根据本公开实施例的一种图像处理装置的框图;
图11示出根据本公开实施例的一种电子设备800的框图;
图12示出根据本公开实施例的一种电子设备1900的框图。
具体实施方式
以下将参考附图详细说明本公开的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本公开,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本公开同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本公开的主旨。
图1示出根据本公开实施例的一种图像处理方法的流程图,其中本公开实施例的图像处理方法可以应用在政府大楼、企业园区、酒店、小区、写字楼等需要对进入人员进行管理的场所,其可以通过设置在不同区域位置的摄像模组采集的图像信息对待验证对象的身份进行联合判定,从而可以确定该待验证对象是否为陌生人,或者是否为已登记的在库人员。
如图1所示,本公开实施例的图像处理方法可以包括:
S100:获取待验证对象的第一图像和第二图像,其中,所述第一图像由第一摄像模组采集,所述第二图像由至少一个第二摄像模组采集。
本公开实施例的图像处理方法可以应用在终端设备或者服务器等具有图像处理功能的电子设备中,终端设备可以为手机、计算机设备等。这些电子设备与安装在待检测区域的各个角落的摄像装置电连接,所述的摄像装置包括但不限于摄像机、抓拍机等。在其他实施例中,这些电子设备包含有显示屏幕。
其中,待验证对象是指进入待验证区域的人员,第一图像和第二图像可以为需要被确定身份的待验证对象的面部图像,也可以为全身图像,在本公开的实施例中以面部图像解释说明,但不能理解为对本公开的限制。这里的第一图像和第二图像来自于不同的视频源,例如第一图像可以由第一摄像模组采集,第二图像可以由至少一个第二摄像模组采集,本公开实施例可以在不同的位置区域上设置不同的摄像模组,即第一摄像模组和第二摄像模组可以为设置在不同位置上的摄像模组。同时为了方便描述,将第一摄像模组以外的摄像模组统称为第二摄像模组,各第二摄像模组设置的位置也可以不同,通过该种方式可以实时地采集不同位置区域内的图像信息。另外,上述第一图像和第二图像的采集时间可以相同也可以不同,本公开对此不进行限定。
S200:将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果;
其中,可以利用神经网络获得各第一图像的第一特征数据,并将该第一特征数据与预先存储的目标库中的图像数据的特征数据进行对比,该目标库中可以包括已登记的黑名单和白名单,以及已被标注为陌生人的对象。通过将第一特征数据与目标库中的特征数据进行对比匹配,可以方便地确定该第一特征数据对应的对象是否为目标库中的人员对象。其中,如果目标库中不存在与第一特征数据匹配的特征数据,则表明针对第一图像的第一验证结果为验证失败,如果目标库中存在与第一特征数据匹配的特征数据,则表明针对第一图像的第一验证结果为验证成功。
S300:响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份。
基于步骤S200的身份验证,如果目标库中不存在与第一特征数据匹配的特征数据时,可以结合至少一个第二摄像模组采集的第二图像进行联合验证,来验证待验证对象的身份。
本公开实施例中可以利用第一验证结果为验证失败的第一图像以及第二图像对待验证对象的身份进行联合验证,从而可以提高对待验证对象的验证成功率。
下面对本公开实施例的具体过程进行详细说明。在执行本公开实施例时,首先通过步骤S100获取待验证对象的第一图像和第二图像,该第一图像为通过第一摄像模组采集的图像。其中,如上所述,本公开实施例的图像处理方法可以应用在需要对进入人员进行管理的场所中,该场所的不同位置处可以安装摄像头,其中任一摄像头都可以作为本公开实施例的第一摄像模组。为了方便描述,下文中该第一摄像模组以外的摄像模组被称为第二摄像模组,并且第二摄像模组采集的图像可以被称为第二图像。
本公开实施例中步骤S100中获得的需要进行身份验证的第一图像和第二图像可以为直接从第一摄像模组和第二摄像模组获取的图像,也可以为经过分析处理、筛选后的图像。本公开对此不进行限定。图2示出根据本公开实施例的一种图像处理方法中步骤S100的流程图,其中所述获取待进行身份判定的第一图像,可以包括:
S101:获取第一摄像模组采集的第一视频以及至少一个第二摄像模组采集的第二视频,并对所述第一视频进行预处理获得多个第三图像以及对所述第二视频进行预处理得到第四图像,或者直接接收包括待验证对象的面部信息的所述第三图像和第四图像。
本公开实施例中,接收的信息可以为视频形式的信息,也可以为图片形式的信息。在接收的为视频形式的信息时,可以对该视频信息进行预处理操作,以从视频信息中获得需要处理的第三图像以及第四图像,其中预处理操作可以包括视频解码、图像的采样以及人脸检测等处理操作,通过上述预处理操作可以获取相应的包括面部图像的第三图像以及第四图像。
在另一些可能的实施例中,获得的可以为图片形式的第三图像和第四图像,此时可以直接对第三图像和第四图像进行处理,即可以通过人脸检测方式获得包括待验证对象的面部图像的第三图像和第四图像。或者第一摄像模组可以直接采集包括面部图像的第三图像以及第二摄像模组可以直接采集包括面部图像的第四图像,如第一摄像模组和第二摄像模组可以为人脸抓拍机,获得的第三图像和第四图像即为面部图像,本公开对此不进行具体限定,只要在获得的第三图像和第四图像中包括待判定的待验证对象的面部区域的情况下,即可以作为本公开的实施例。
S102:将获得的第三图像中满足质量要求的图像确定为所述第一图像,以及将第四图像中满足质量要求的图像确定为第二图像。
由于在实际场景下,采集的图像的角度、清晰度、是否佩戴帽饰、口罩、眼镜等配饰以及有无其他物体或者人物的遮挡都具有随机性,因此在获得从摄像模组采集的第三图像和第四图像之后,还需要从第三图像和第四图像中筛选出符合质量要求的图像,执行用户身份的检测和判定。其中,可以同时对第三图像和第四图像进行角度、质量分数联合判定,低于一定质量的图片将会被丢弃。
本公开实施例中,可以通过神经网络对第三图像和第四图像的图像质量进行确定,或者也可以通过预设的算法对第三图像和第四图像的图像质量进行确定,其中可以结合图像的清晰度、人脸的角度对第三图像和第四图像进行评分。如果该评分值低于预设分值,如低于80分,则可以将该第三图像和第四图像删除;如果评分值高于预设分值,则说明该图像的质量满足质量要求,此时可以利用该第三图像和第四图像执行人员身份的判定,即可以将满足质量要求的第三图像作为待进行身份验证的第一图像,将满足质量要求的第四图像作为待进行身份验证的第二图像。其中,预设分值可以根据不同的需求和应用场景自行设定,本公开不作具体限定。
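上述按预设分值筛选图像的流程可以用如下示意性 Python 片段表达。其中假设每个图像的质量评分已由神经网络或预设算法得到(此处以字典字段 score 模拟,函数名 filter_by_quality 为说明而设的假设),阈值 80 与正文示例一致,均可按需调整;该片段仅为示例草图,并非本公开的具体实现。

```python
def filter_by_quality(images, threshold=80):
    """返回质量分数不低于 threshold 的图像列表,其余图像被丢弃。"""
    return [img for img in images if img["score"] >= threshold]

candidates = [
    {"id": "a", "score": 92},  # 清晰、角度合适,保留
    {"id": "b", "score": 65},  # 模糊或遮挡严重,丢弃
    {"id": "c", "score": 80},  # 恰好达到预设分值,保留
]
kept = filter_by_quality(candidates)
print([img["id"] for img in kept])  # ['a', 'c']
```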
在另一些可能的实施方式中,在所述获取待验证对象的第一图像和第二图像之后,且在所述获取所述第一图像的第一特征数据,并将所述第一特征数据与目标库中的特征数据进行对比以执行身份验证,获取第一验证结果之前,还可以检测第一图像和/或第二图像中是否包含预定特征,在检测到第三图像和/或第四图像中包含预定特征时,可以对包含预定特征的第三图像和/或第四图像进行标记。这里的标记是指,可以为包含预定特征的第三图像和/或第四图像分配标识符,该标识符用于表示对应的图像可以直接作为待身份验证的第一图像和第二图像。其中,所述预定特征可以包括口罩、帽子、墨镜中的至少一种特征。例如,在检测到从第一摄像模组采集的第一视频获得的第三图像中的待验证对象为戴帽子、戴口罩的待验证对象(即第一图像对应的特征数据包括帽子、口罩等特征)时,可以直接将该待验证对象列入可疑人员,即其第三图像可以作为第一图像。或者,在检测到从第二摄像模组采集的第二视频获得的第四图像中的待验证对象为戴帽子、戴墨镜的待验证对象(即第二图像对应的特征数据包括帽子、墨镜等特征)时,可以直接将该待验证对象列入可疑人员,即其第四图像可以作为第二图像。其中,可以通过神经网络检测第三图像和第四图像的特征数据来确定是否具有上述预定特征。
通过上述方式,可以方便地针对接收到的不同类型的图像获得待处理的第一图像和第二图像。其中,由于获得的第一图像和第二图像为满足质量要求的图像,因此可以用于精确地进行待验证对象的身份验证。
在获得第一图像和第二图像之后,则可以将该第一图像和第二图像与目标库中的对象的特征数据进行对比匹配,即可以执行步骤S200。其中,本公开实施例可以包括目标库,目标库中记录有黑名单和白名单,以及被标记的陌生人信息。其中黑名单是指不能进入该场所的对象的信息,白名单是指能够允许进入该场所的对象的信息。本公开实施例目标库中存储的是具有已知身份的对象的信息以及被标记为陌生人的对象的信息。
例如,针对第一摄像模组获得的第一图像,本公开实施例在通过步骤S100获得第一图像之后,可以将第一图像的第一特征数据与目标库中的图像数据的特征数据进行对比匹配。例如,目标库中存储有各第一对象的面部图像及其面部特征数据,或者也可以包括其他信息,如姓名、年龄等等,本公开不作具体限定。
本公开实施例可以将第一图像的第一特征数据与目标库中各对象的特征数据进行对比,如果在目标库中存在与第一特征数据的匹配值超过第一匹配阈值的特征数据,则可以确定第一图像对应的待验证对象为目标库中的对象,此时表明第一验证结果为验证成功。进一步地,如果查询不到与第一特征数据对应的特征数据,即可以确定第一验证结果为验证失败。另外,在目标库中不存在与第一图像的第一特征数据匹配的特征数据时,如目标库中全部对象的面部特征与第一特征数据的匹配值都低于第一匹配阈值,此时可以确定目标库中不存在与第一特征数据匹配的特征数据,即第一图像对应的待验证对象并非目标库中的人员,此时可以结合第二摄像模组采集的第二图像进行进一步的判定。其中,由于本公开实施例可以根据摄像模组采集的图像或者接收的图像进行人物对象的身份验证,其可以实现以输入的图像与目标库中的图像数据进行对比的效果,即具有以图搜图的效果,可以搜寻到与输入图像匹配的目标库中的图像。
其中,在此需要说明的是,本公开实施例的目标库可以包括白/黑名单库以及已标记的陌生人库。其中,白/黑名单库中包括已登记的黑名单对象和白名单对象,黑名单对象即为限制进入相应场所的人员,白名单对象为准许进入相应场所的人员。白/黑名单库中包括白名单对象和黑名单对象的面部图像,或者还可以包括相应的姓名、年龄、职位等信息。针对白/黑名单库可以执行待验证对象的身份验证,通过验证结果可以表明该待验证对象是否为黑名单对象或者白名单对象。
图3示出根据本公开实施例的图像处理方法中步骤S200的流程图,其中,将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果,包括:
S201:将所述第一图像的第一特征数据与所述白/黑名单库中的各图像的特征数据进行对比;
S202:在所述白/黑名单库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为黑名单对象或白名单对象。
如上所述,目标库包括白/黑名单库,白/黑名单库中可以包括白名单对象和黑名单对象的面部图像或者也可以直接包括面部图像的特征数据。通过将第一特征数据与白/黑名单库中各对象的图像数据中的特征数据进行匹配,如果存在与第一特征数据匹配度高于第一匹配阈值的特征数据,则可以确定该待验证对象为白/黑名单库中的对象,并可以将匹配度最高的特征数据对应的身份信息确定为该待验证对象的身份信息,此时可以确认该待验证对象的身份,并且表明第一验证结果为验证成功。否则,如果白/黑名单库中的全部的特征数据与第一特征数据的匹配度都低于第一匹配阈值,则表明白/黑名单库中不存在与该待验证对象匹配的对象。
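上述与白/黑名单库的对比匹配过程可以用如下示意性 Python 片段表达。其中假设各特征均为已归一化的特征向量,以数量积作为匹配值并与第一匹配阈值比较;函数名 match_identity、阈值取值与示例库内容均为说明而设的假设,并非本公开的具体实现。

```python
import numpy as np

def match_identity(query, gallery, threshold=0.8):
    """在白/黑名单库 gallery({身份: 归一化特征向量})中,
    查找与 query(第一特征数据)匹配值最高且超过阈值的身份;
    查询不到时返回 None,对应第一验证结果为验证失败。"""
    best_id, best_sim = None, threshold
    for identity, feat in gallery.items():
        sim = float(np.dot(query, feat))  # 归一化向量的数量积即余弦相似度
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

gallery = {
    "白名单-张三": np.array([1.0, 0.0]),
    "黑名单-李四": np.array([0.0, 1.0]),
}
print(match_identity(np.array([0.98, 0.199]), gallery))  # 白名单-张三
```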
在一些可能的实施方式中,在白/黑名单库中查询到与第一特征数据匹配的特征数据后,可以将第一图像以及第一图像的关联信息加载到匹配的对象的匹配记录中,其中关联信息可以为第一摄像模组采集第一图像的时间、第一摄像模组的标识、以及对应的位置信息等。本公开实施例在获取各图像时,可以同时获得与各图像的关联信息。通过将验证成功的第一图像及其关联信息添加至对应的匹配记录中可以方便对该对象的轨迹、出行时间等等进行分析。
在另一些实施例中,如果在白/黑名单库中查询到与第一特征匹配的特征数据对应的对象为黑名单对象,此时还可以执行预设的提示操作,例如可以通过语音或者显示输出的方式提示该黑名单人员的进入情况。或者,也可以统计该黑名单对象的进入次数等信息,同时提示输出该进入次数,方便管理人员进行查看。本公开实施例中,可以将上述信息传输至上述电子设备的用户交互界面,并通过该用户交互界面进行显示,方便查看各项提示信息。
通过上述即可以执行黑名单对象和白名单对象的身份验证,并且在白/黑名单库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功。
另外,如上所述,目标库还可以包括已标记的陌生人库,该已标记的陌生人库中的对象为被标记为陌生人的对象,其中也可以包括各对象的面部图像或者直接包括面部特征数据,同时也可以包括各面部图像的采集时间、位置等关联信息,还可以包括被标记陌生人的次数等。
针对已标记的陌生人库可以执行待验证对象的身份验证,通过验证结果可以表明该待验证对象是否为被标记的陌生人对象。
图4示出根据本公开实施例的图像处理方法中步骤S200的流程图,其中,所述将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果,包括:
S203:将获取的所述第一图像的第一特征数据与所述已标记的陌生人库中的图像的特征数据进行对比;
S204:在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为已被标记的陌生人。
如上所述,目标库包括已标记的陌生人库,已标记的陌生人库中可以包括被标记为陌生人的对象的面部图像,或者也可以直接包括面部图像的特征数据。通过将第一特征数据与已标记的陌生人库中各对象的特征数据进行匹配,如果存在与第一特征数据匹配度高于第二匹配阈值的特征数据,则可以确定该待验证对象为已标记的陌生人库中的对象,并可以将匹配度最高的特征数据对应的对象的身份信息确定为该待验证对象的身份信息,此时可以确认该待验证对象的身份为陌生人,并且表明第一验证结果为验证成功。否则,如果全部的特征数据与第一特征数据的匹配度都低于第二匹配阈值,则表明已标记的陌生人库中不存在与该待验证对象匹配的对象。
在一些可能的实施方式中,在已标记的陌生人库中查询到与第一特征数据匹配的特征数据后,可以将第一图像以及第一图像的关联信息加载到匹配的对象的匹配记录中,其中关联信息可以为第一摄像模组采集第一图像的时间、第一摄像模组的标识、以及对应的位置信息等。本公开实施例在获取各图像时,可以同时获得与各图像的关联信息。通过将验证成功的第一图像及其关联信息添加至对应的匹配记录中可以方便对该对象的轨迹、出行时间等等进行分析。
在另一些实施例中,如果在已标记的陌生人库中查询到与第一特征数据匹配的特征数据,此时还可以执行预设的提示操作,例如可以通过语音或者显示输出的方式提示该陌生人员的进入情况。或者,也可以统计该陌生人员在相应场所内被标记的次数、该陌生人在相应场所内的停留时间、出现的频率等信息,同时提示输出上述信息,方便管理人员进行查看。其中,停留时间可以根据检测到对象被标记为陌生人的时间来确定,如可以将最后一次被标记为陌生人的时间与第一次被标记为陌生人的时间之间的时间差作为停留时间,出现的频率可以为该陌生人被识别到的次数与上述停留时间的比值。在本公开的其他实施例中,也可以统计其他信息,例如该陌生人出现的位置信息,其中可以根据采集到该陌生人的图像的摄像模组的标识或位置确定陌生人所在的位置,从而可以获取陌生人的运行轨迹,对于统计的信息本公开在此不一一列举。本公开实施例中,可以将上述信息传输至电子设备的用户交互界面,并通过该用户交互界面进行显示,方便查看各项提示信息。
通过上述即可以执行已被标记的陌生人对象的身份验证,并且在已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功。
其中,第一匹配阈值和第二匹配阈值可以为相同的阈值,也可以为不同的阈值,本领域技术人员可以根据需求自行设定。
另外,本公开实施例中,对于目标库中的白/黑名单库和已标记的陌生人库的验证顺序,本领域技术人员可以根据需求设定,其中,可以先通过白/黑名单库对第一特征数据进行验证,在白/黑名单库中不存在匹配的特征数据时,再利用已标记的陌生人库进行验证,也可以先通过已标记的陌生人库对第一特征数据进行验证,在已标记的陌生人库中不存在匹配的特征数据时,再利用白/黑名单库进行验证,或者也可以同时利用白/黑名单库和已标记的陌生人库进行验证。也就是说,本公开实施例对利用两个库的执行验证操作的时间顺序不作具体限定,只要能够执行上述验证即可以作为本公开实施例。
另外,本公开实施例中,在所述目标库中不存在与所述第一图像的第一特征数据匹配的特征数据的情况下(即已标记的陌生人库和白/黑名单库都不存在匹配的特征数据),可以确定第一验证结果为验证失败,并可以保存所述第一图像。例如在目标库中的全部对象的特征数据与第一图像的第一特征数据都不匹配时,可以保存该第一图像。同时响应于第一验证结果为验证失败的情况,可以基于第一摄像模组以外的第二摄像模组所获取的第二图像与第一图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份。
本公开实施例对第二图像的第一验证操作的过程与第一图像相同,同样的也可以获得第二图像的第一验证结果。本公开在此不再重复说明。
其中,在目标库中不存在与第一特征数据匹配的对象的情况下,可以暂存该第一图像。并且为了减小图像的冗余以及减小暂存的第一图像占用的存储空间,可以对预设时间范围内的第一图像进行去重处理,从而可以减少针对同一待验证对象暂存过多的图像。其中,本公开实施例可以对第一时间范围内验证失败的第一图像和/或第二图像进行去重处理,获得在第一时间范围内针对每个待验证对象满足第一预设条件的第一图像和/或第二图像。其中,第一时间范围可以为可调的时间窗口(rolling window),例如可以设置为2-5秒,可以按照第一时间范围对等待归档(暂存)的第一图像以及第二图像进行一次批量处理,此时可以对相同的待验证对象的第一图像进行合并以及去重处理,以及对相同的待验证对象的第二图像进行合并以及去重处理。由于第一时间范围内可以获得不同待验证对象的第一图像,因此暂存的第一图像也可以为不同待验证对象的图像,也可以为一个待验证对象的多个图像,此时可以识别第一图像中相同的待验证对象的图像,例如可以根据各图像的特征数据进行对比,将相似度大于相似度阈值的图像确定为相同待验证对象的图像,并可以进一步按照第一预设条件在相同待验证对象的各图像中仅保留一个图像。其中第一预设条件可以为按照暂存时间,将最先暂存的图像保留,删除相同待验证对象的其余暂存图像。或者第一预设条件也可以为对比对于相同待验证对象的各图像的评分值,将评分值最高的图像保留,删除其余图像。该评分值的获取与上述实施例相同,例如可以根据预设算法对图像进行分析,得到评分值,或者利用神经网络对图像进行评分,其中评分的原理根据图像的清晰度、面部的角度、被遮挡情况等确定。本领域技术人员可以根据需求选择相应的评分方法,本公开对此不进行具体限定。
通过上述方式则可以获得第一时间范围内第一摄像模组所采集的第一图像中的可疑人员(未匹配到目标库中的对象),并为每个待验证对象仅保留一个第一图像,从而可以减少存储空间的使用。上述仅以第一摄像模组为例说明针对第一图像的处理,对于其余摄像模组的处理方式相同,在此不再重复说明。
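上述第一时间范围内的合并与去重过程可以用如下示意性 Python 片段表达。其中假设同一待验证对象的图像已通过特征相似度比对归为一组(此处以 person 字段模拟该分组结果),按第一预设条件保留最早暂存或评分最高的一张;函数名 deduplicate 与各字段名均为说明而设的假设,并非本公开的具体实现。

```python
def deduplicate(records, keep="earliest"):
    """records: [{'person': 对象标识, 'time': 暂存时间, 'score': 评分}, ...]
    对每个待验证对象仅保留一条记录:
    keep='earliest'   保留最早暂存的图像;
    keep='best_score' 保留评分最高的图像。"""
    kept = {}
    for r in records:
        cur = kept.get(r["person"])
        if cur is None:
            kept[r["person"]] = r
        elif keep == "earliest" and r["time"] < cur["time"]:
            kept[r["person"]] = r
        elif keep == "best_score" and r["score"] > cur["score"]:
            kept[r["person"]] = r
    return list(kept.values())

window = [
    {"person": "p1", "time": 1, "score": 85},
    {"person": "p1", "time": 3, "score": 95},  # 同一对象的重复抓拍
    {"person": "p2", "time": 2, "score": 90},
]
print(len(deduplicate(window)))  # 2,每个待验证对象仅保留一张
```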
在对各第一图像进行合并和去重处理之后,则可以结合其余的第二摄像模组采集的第二图像对待验证对象的身份进行判定。图5示出根据本公开实施例的图像处理方法中步骤S300的流程图,其中,所述响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份,可以包括:
S301:将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集;
执行本公开实施例图像处理方法的设备可以将每个摄像模组在第二时间范围内未匹配到特征数据的第一图像和第二图像进行合并,并进行聚类处理,得到针对每个待验证对象的图像集,每个图像集包括的图像为相同待验证对象的图像。从而可以方便的针对每个图像集进行处理。
S302:确定所述图像集中每个图像与所述图像集中其他图像的相似度;
本公开实施例中,可以对同一待验证对象的图像集的图像进行相似度分析,即可以确定每个图像与其他图像之间的相似度,从而可以进一步判断图像集中各图像是否为同一待验证对象的图像。
S303:基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件;
S304:在确定所述图像集满足第二预设条件的情况下,确定该图像集对应的待验证对象为陌生人。
在获得每个图像集中各图像与其他图像之间的相似度之后,可以根据获得的相似度值确定图像集是否满足第二预设条件,在满足第二预设条件时即可以确定该图像集为相同对象的图像的概率较高,可以保留图像集,如果判断出各相似度不满足第二预设条件,则可以判断该图像集中各图像的聚类并不可信,为相同对象的图像的概率较低,此时可以删除该图像集。并可以进一步利用满足预设条件的图像集确定该待验证对象是否为未登记的对象。
下面对各过程进行详细说明。图6示出根据本公开实施例的图像处理方法中步骤S301的流程图,其中,所述将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集,可以包括:
S3011:获取所述第二时间范围内验证失败的第一图像和第二图像数据分别对应的第一特征数据和第二特征数据;
S3012:将所述第一特征数据与所述第二特征数据进行对比匹配,确定各第一特征数据和各第二特征数据是否对应于同一待验证对象;
S3013:将所述同一待验证对象的第一图像的第一特征数据和第二图像的第二特征数据进行聚类形成对应于该待验证对象的图像集。
步骤S3011中,第二时间范围为大于第一时间范围的时间范围,例如,第一时间范围可以为2-5s,第二时间范围可以为10min,但不作为本公开实施例的具体限定。通过第二时间范围大于第一时间范围的限定,能够得到每个第一时间范围内验证失败且经过去重处理得到的第一图像和第二图像,并利用第二时间范围内各个摄像模组得到的第一图像和第二图像获得不同待验证对象的不同图像。例如可以利用第一摄像模组以及至少一个第二摄像模组在第二时间范围所含的各第一时间范围内经去重处理得到的第一图像和第二图像,并从中找出重复的待验证对象的特征进行合并,例如可以将面部特征相似度大于相似度阈值的图像合并为一类,即可以作为一个待验证对象的图像。通过该步骤可以得到针对多个待验证对象的图像集,每个图像集为相同待验证对象的图像。
其中,在此需要说明的是,本公开实施例中各处理的图像都可以包括与其关联的摄像模组的标识信息,从而可以确定每个图像都是由哪个摄像模组采集的,对应的获取待验证对象所在的位置。另外,图像也可以关联有摄像模组采集该图像的时间信息,从而可以确定每个图像被采集到的时间,对应的确定待验证对象在各位置的时间。
在对各图像进行聚类时可以首先获得第二时间范围内验证失败的第一图像的第一特征数据,以及验证失败的第二图像的第二特征数据,其中可以通过神经网络识别各图像的特征数据,本公开不作具体限定。在得到各第一特征数据以及第二特征数据之后,可以将第一特征数据和第二特征数据进行对比匹配,确定各第一特征数据和第二特征数据是否对应于相同的待验证对象,将对应于相同的待验证对象的特征数据组合到一类中,形成针对每个待验证对象的图像集,该图像集中可以包括各图像以及各图像对应的特征数据,也可以仅包括各图像的特征数据,本公开不作具体限定。其中,确定各特征数据是否为对应于同一待验证对象的方式可以包括利用神经网络确定,如果识别出的两个特征数据为同一待验证对象的概率高于预设阈值,则可以确定二者为同一待验证对象,如果低于预设阈值,则可以确定为不同的待验证对象。通过该种方式即可以确定各特征数据是否为相同的待验证对象的特征数据,进一步确定对应于不同待验证对象的图像集。
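上述将对应同一待验证对象的特征数据归为一类的过程,可以用如下极简的贪心聚类示意(阈值与函数名均为示例假设;实际实现亦可如正文所述采用神经网络判断两特征是否属于同一待验证对象):

```python
import numpy as np

def cluster_features(features, threshold=0.8):
    """features: 归一化特征向量列表(第一特征数据与第二特征数据合并后)。
    依次将每个特征并入与其相似度超过阈值的已有簇,否则新建簇;
    返回各簇成员下标,每个簇对应一个待验证对象的图像集。"""
    clusters = []  # 每簇记录一个代表特征 rep 与成员下标 members
    for i, f in enumerate(features):
        for c in clusters:
            if float(np.dot(f, c["rep"])) > threshold:
                c["members"].append(i)
                break
        else:
            clusters.append({"rep": f, "members": [i]})
    return [c["members"] for c in clusters]

feats = [np.array([1.0, 0.0]), np.array([0.99, 0.141]),
         np.array([0.0, 1.0])]
print(cluster_features(feats))  # [[0, 1], [2]]:前两张为同一对象
```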
在获得针对每个待验证对象的图像集之后,可以确定每个图像集中各图像之间的相似度。图7示出根据本公开实施例的图像处理方法中步骤S302的流程图,其中,所述确定所述图像集中每个图像与所述图像集中其他图像的相似度,包括:
S3021:获取每个图像集中的各图像的特征数据与全部的图像的特征数据的数量积的加和值;
其中,通过步骤S200,可以获得图像集中每个图像的特征数据,如第一特征数据,其可以表示为特征向量的形式。在此基础上,可以将图像集中每个图像的特征数据与全部图像的特征数据进行数量积运算并加和处理。例如,图像集中可以包括n个图像,n为大于1的整数,则可以对应地获取每个图像与全部图像之间的面部特征数据的数量积的加和值。例如对于第i个图像,加和值可以为S_i=N_i·N_1+N_i·N_2+…+N_i·N_n,其中N_i为第i个图像的面部特征数据。通过上述方式即可以获得每个图像的面部特征数据与全部图像的面部特征数据的数量积的加和值。
在此需要说明的是,本公开实施例获得的各图像的特征数据为归一化处理的特征向量,即通过本公开实施例得到的各第一图像的第一特征数据以及第二图像的第二特征数据均为维度相同且长度相同的特征向量,从而可以方便的对各特征数据进行运算。
S3022:基于所述加和值与该图像集中的特征数据的个数确定每个图像与其余图像的相似度。
在获得各加和值之后,再根据图像集中图像的数量确定每个图像与其他图像的相似度。本公开实施例中,第i个图像对应的相似度可以为p_i=S_i/(n-1),即可以将获得的加和值除以n-1,从而获得每个图像与其余图像之间的相似度。
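上述加和值 S_i 与相似度 p_i 的计算可以用如下示意性 Python 片段表达。其中假设各特征为已归一化的特征向量,并按正文公式将含自身项的加和值除以 n-1;函数名 image_similarities 为说明而设的假设。

```python
import numpy as np

def image_similarities(features):
    """features: (n, d) 的归一化特征矩阵。
    先计算每个图像特征与全部图像特征数量积的加和值
    S_i = N_i·N_1 + N_i·N_2 + ... + N_i·N_n,
    再按 p_i = S_i / (n - 1) 得到每个图像与其余图像的相似度。"""
    F = np.asarray(features, dtype=float)
    n = F.shape[0]
    S = (F @ F.T).sum(axis=1)  # 第 i 行之和即 S_i
    return S / (n - 1)

feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(image_similarities(feats).tolist())  # [1.0, 1.0, 0.5]
```

其中第三个图像与前两个图像方向正交,其相似度明显低于其余图像,便于后续按阈值判断聚类是否可信。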
在获得各图像与其余图像之间的相似度之后,则可以根据获得的各相似度值确定图像集是否满足预设条件。并利用满足预设条件的图像集判断对应的待验证对象是否为未登记人员。
在一些可能的实施方式中,在基于各图像之间的相似度确定所述图像集满足预设条件、进而基于所述图像集确定所述待验证对象是否为陌生人之前,还可以进一步确定图像集是否满足第二预设条件。在所述图像集中各图像对应的相似度满足以下情况中的任意一种时,确定所述图像集满足第二预设条件:
a)所述图像集中各图像之间的相似度中的最大相似度大于第一相似度阈值;
在本公开实施例中,可以利用与其余图像之间的相似度最大的相似度和第一相似度阈值进行比较,如果该最大的相似度大于第一相似度阈值,则说明该图像集中各图像之间的相似度较大,则可以确定该图像集满足预设条件。如果最大的相似度小于第一相似度阈值,则说明该图像集的聚类效果不理想,该图像集中的各图像为不同待验证对象的概率较大,此时可以删除该图像集。
b)所述图像集中各图像之间的相似度中大于第二相似度阈值的相似度的图像数量超过预设比例;
类似的,如果图像集中各图像之间的相似度大于第二相似度阈值的相似度的比例大于预设比例,如有50%的图像的相似度都大于第二相似度阈值,则此时可以确定该图像集中各图像之间的相似度较大,则可以确定该图像集满足预设条件。如果大于第二相似度阈值的图像的比例小于该预设比例,则说明该图像集的聚类效果不理想,该图像集中的各图像为不同待验证对象的概率较大,此时可以删除该图像集。
c)所述图像集中各图像之间的相似度中的最小相似度大于第三相似度阈值。
类似的,如果图像集中最小的相似度大于第三相似度阈值,则说明该图像集中各图像之间的相似度较大,则可以确定该图像集满足预设条件。如果最小的相似度小于第三相似度阈值,则说明该图像集的聚类效果不理想,该图像集中的各图像为不同待验证对象的概率较大,此时可以删除该图像集。其中,第一相似度阈值、第二相似度阈值以及第三相似度阈值的选择可以根据不同的需求进行设定,本公开对此不进行具体限定。
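上述 a)、b)、c) 三种判定方式可以用如下示意性 Python 片段表达,任一方式成立即认为满足第二预设条件。各阈值与预设比例取值均为可调的示例假设,函数名 meets_second_condition 亦为说明而设。

```python
def meets_second_condition(sims, t1=0.9, t2=0.8, ratio=0.5, t3=0.7):
    """sims: 图像集中各图像对应的相似度 p_i 列表。
    a) 最大相似度大于第一相似度阈值 t1;
    b) 大于第二相似度阈值 t2 的相似度占比超过预设比例 ratio;
    c) 最小相似度大于第三相似度阈值 t3。"""
    if not sims:
        return False
    cond_a = max(sims) > t1
    cond_b = sum(s > t2 for s in sims) / len(sims) > ratio
    cond_c = min(sims) > t3
    return cond_a or cond_b or cond_c

print(meets_second_condition([0.95, 0.6, 0.5]))  # True:方式 a 成立
print(meets_second_condition([0.3, 0.2]))        # False:聚类不可信,删除图像集
```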
通过上述方式即可以确定图像集是否满足预设条件,并进一步利用满足预设条件的图像集执行待验证对象的身份判定。其中,在所述图像集中各图像之间的相似度满足预设条件的情况下,确定所述待验证对象是否为陌生人,可以包括:在所述图像集中的图像为在不同的时间范围内通过不同的摄像模组采集的图像的情况下,确定所述待验证对象为陌生人。
即如果图像集中包括2个图像,其中两个图像分别由第一摄像模组以及第二摄像模组采集,并且采集的时间分别位于不同的时间范围,此时可以确定该图像集对应的待验证对象为陌生人。即第一摄像模组采集的第一图像中未识别出该待验证对象的身份,以及第二摄像模组采集的第二图像也没有识别出该待验证对象的身份,并且第一图像和第二图像采集的时间位于不同的时间范围,例如在不同的第一时间范围内,则在由该第一图像和第二图像构成的图像集满足预设条件的情况下,可以确定图像集对应的待验证对象为陌生人,即为陌生人员。
通过上述方式,可以利用多个摄像模组采集的图像对可疑人员的身份进行联合判定,从而可以更加精确地确定待验证对象的身份。
在确定第一图像对应的待验证对象为陌生人的情况下,执行预设的提示操作。如上述实施例所述,可以通过音频或者显示输出的方式提示相关人员该陌生人的信息。即本公开实施例中,在所述第一图像对应的待验证对象为陌生人的情况下,执行预设的提示操作包括:在显示设备中显示该陌生人的图像、该陌生人当前的位置信息,以及出现次数的统计信息;和/或通过音频提示的方式提示出现陌生人、该陌生人当前的位置信息,以及出现次数的统计信息。其中,停留时间可以根据检测到对象被标记为陌生人的时间来确定,如可以将最后一次被标记为陌生人的时间与第一次被标记为陌生人的时间之间的时间差作为停留时间,出现的频率可以为该陌生人被识别到的次数与上述停留时间的比值。在本公开的其他实施例中,也可以统计其他信息,例如该陌生人出现的位置信息,其中可以根据采集到该陌生人的图像的摄像模组的标识或位置确定陌生人所在的位置,从而可以获取陌生人的运行轨迹,对于统计的信息本公开在此不一一列举。本公开实施例中,可以将上述信息传输至电子设备的用户交互界面,并通过该用户交互界面进行显示,方便查看各项提示信息。
另外,在确定图像集对应的待验证对象为陌生人的情况下,可以将该图像集存入至已标记的陌生人库,其中还可以关联的存储各图像的采集时间、采集位置以及采集图像的摄像模组的标识等信息。
在另一些可能的实施例中,在确定该待验证对象为已标记的陌生人时,可以输出被标记为陌生人的次数;或者输出所述第二验证结果。该第二验证结果为对待验证对象进行联合判定后确认的结果,如可以识别为陌生人或者无法识别该对象的信息等。
为了更加详细的说明本公开实施例,下面举例说明本公开实施例的具体过程。图8示出根据本公开实施例的图像处理方法的流程图,图9示出根据本公开实施例的图像处理方法陌生人比对的流程图。
其中,首先将白名单/黑名单人员信息录入系统形成白/黑名单库,白/黑名单库内的第一对象统称为在库人员,非在库人员即为陌生人。已被标记为陌生人的对象信息可以构成已标记的陌生人库,上述两个库可以形成目标库。获取摄像模组采集的图像的方式可以包括利用前端摄像机采集人像信息,其中高清网络摄像机采集视频流传回到后端服务器,或者也可以通过人脸抓拍机采集人脸图片直接传回给服务器。服务器在接收到视频流时,对传回的视频流进行解码,通过人脸检测算法或者神经网络提取其中的人脸图片及特征值(面部特征);如服务器接收的为传回的人脸图片,则可以跳过视频流解码,直接检测人脸图像的特征值。其中,在执行人脸检测的同时还可以检测该人脸图片是否含有戴口罩的特征,符合戴口罩特征的图片可以直接存入可疑人员图片库中保存;同时对人脸图片进行角度、质量分数联合判定,将不符合质量要求的人脸图像丢弃。
接着,可以将获取的人脸图像的人脸特征值和陌生人识别系统中的白/黑名单库进行比对,超过第一匹配阈值(可调),则视为比中黑名单对象或者白名单对象,此时可以将该人脸图像存入白/黑名单库的比对记录中。在未比中白/黑名单库中的特征时,可以与已标记的陌生人库进行比对,超过第二匹配阈值(可调)则认为匹配成功,即该陌生人再次被识别到。
如果既未比中白/黑名单库,又未比中已标记的陌生人库,将人脸图像的特征值暂存以等待处理。此时可以设置第一时间窗口(rolling window),例如为2-5秒,对等待归档的特征值进行一次批量处理,对当前时间窗口的所有特征遍历,若相似度超过阈值threshold Lv2(可调),则认为是同一人在一个场景被多拍,此时可以进行合并、去重(例如可以保留符合要求的最早的特征),并记录拍摄设备的标识。
利用不同的摄像设备在多个第一时间范围后保留的人脸图像,进行合并和聚类分析,例如第二时间范围可以为该多个第一时间范围,第二时间范围可以被设置为10分钟,找出该第二时间范围内不同摄像设备保留的人脸图像中重复的人像特征进行合并,其中可以利用相似度阈值threshold Lv3(可调)进行聚类,并可以记录图像对应的拍摄设备的标识,此步骤的人脸特征原始值不丢弃,合并在一个类中存储。
经过上述两步之后,相似度超过Lv2和Lv3的特征值被归纳到同一类中,视为同一人的不同图片特征。针对这个人的所有特征值N1,N2,N3,…,Nk,计算从1到k的每个特征值与其余特征值之间的相似度p_i=(N_i*N_1+N_i*N_2+…+N_i*N_(i-1)+N_i*N_(i+1)+…+N_i*N_k)/(k-1),取其中最大值。若其大于等于阈值threshold Lv4(可调,该Lv4大于Lv3),认为前期聚类未发生扩散,N_i对应的特征及人脸图片有效保留,作为待展示人脸图片;如果小于threshold Lv4,则认为前期聚类不可信,丢弃所有特征及对应图片。
针对验证后的每个类别(即每个不同来访人员)的所有特征值,判断是否满足以下两个条件:i)是否出现在n个时间窗口(rolling window)内,n通常设置为1或者2;ii)记录的设备数量是否大于m,m通常设置为2。如果都满足,则符合陌生人判定条件,插入到陌生人库中。即可以判断图像集中的图像是否为在不同时间范围内被不同的摄像设备拍摄:如是,则符合陌生人判定条件,可以将满足陌生人判定条件的图像集加入至陌生人库中;否则,丢弃该图像集。
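上述两个判定条件可以用如下示意性 Python 片段表达。其中 n、m 取正文中的常用设置;字段名 window、device 与函数名 is_stranger 均为说明而设的假设,此处按"时间窗口数与设备数均不少于相应门限"的理解实现。

```python
def is_stranger(image_set, n=1, m=2):
    """image_set: 同一待验证对象的图像集,
    每项为 {'window': 时间窗口编号, 'device': 采集设备标识}。
    条件 i):图像出现的时间窗口数不少于 n(n 通常为 1 或 2);
    条件 ii):采集图像的设备数量不少于 m(m 通常为 2)。
    两个条件都满足时,判定该图像集对应的待验证对象为陌生人。"""
    windows = {img["window"] for img in image_set}
    devices = {img["device"] for img in image_set}
    return len(windows) >= n and len(devices) >= m

# 不同时间窗口、不同设备采集到的同一对象:符合陌生人判定条件
print(is_stranger([{"window": 1, "device": "cam-A"},
                   {"window": 2, "device": "cam-B"}]))  # True
# 仅被单一设备在单一窗口内拍到:不符合条件,丢弃
print(is_stranger([{"window": 1, "device": "cam-A"}]))  # False
```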
以上步骤所有保存下来的特征值与其原始人脸图片均可一一对应,且均带有时间、地址(设备号)信息,系统根据这些信息进行陌生人图片查询、以图搜图、陌生人轨迹查询、态势统计等应用。
综上所述,本公开实施例能够基于多个摄像模组采集的图像信息对待验证对象的身份权限进行判定,可以有效地降低误报率,大幅提升陌生人员的识别准确率。另外,本公开实施例支持将戴口罩、戴帽子人员直接记录在可疑人员列表中,同时记录时间地点,方便后期查询;也可以根据需求设定出现戴口罩人员即告警的业务逻辑。支持陌生人信息记录与统计:根据时间地点查询陌生人图片、以图搜图、轨迹查询、滞留时间查询、陌生人出现频率等操作。即根据本公开实施例可有效记录进出陌生人信息,且准确率达到实际应用要求,解决了公共场所无法有效识别陌生人的问题。在实际应用中,可帮助管理人员、安保人员管控陌生人进出政府大楼、企业园区、酒店、小区、写字楼等封闭场所,提高场所的安全性及秩序感。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。
此外,本公开还提供了图像处理装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开提供的任一种图像处理方法,相应技术方案和描述和参见方法部分的相应记载,不再赘述。
图10示出根据本公开实施例的一种图像处理装置的框图,如图10所示,所述装置包括:
获取模块10,配置为获取待验证对象的第一图像和第二图像,其中,所述第一图像由第一摄像模组采集,所述第二图像由至少一个第二摄像模组采集;
第一验证模块20,配置为将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果;
第二验证模块30,配置为响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份。
在一些可能的实施方式中,所述目标库包括白/黑名单库;
所述第一验证模块还用于将所述第一图像的第一特征数据与所述白/黑名单库中的各图像的特征数据进行对比;以及
在所述白/黑名单库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为黑名单对象或白名单对象。
在一些可能的实施方式中,所述目标库包括已标记的陌生人库;
所述第一验证模块还用于将获取的所述第一图像的第一特征数据与所述已标记的陌生人库中的图像的特征数据进行对比;以及
在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为已被标记的陌生人。
在一些可能的实施方式中,所述装置还包括统计模块,配置为在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,统计所述第一图像对应的待验证对象被标记为陌生人的次数。
在一些可能的实施方式中,所述第一验证模块还用于在所述第一验证结果为验证成功的情况下,将所述第一图像以及所述第一图像的关联信息添加至匹配的特征数据对应的匹配记录中,其中,所述第一图像的关联信息包括所述第一摄像模组采集所述第一图像的时间信息、所述第一摄像模组的标识信息以及所述第一摄像模组的位置信息中至少一种。
在一些可能的实施方式中,所述装置还包括去重模块,配置为在利用所述第一图像与所述第二图像进行联合验证之前,对第一时间范围内验证失败的第一图像和/或第二图像进行去重处理,获得在第一时间范围内针对每个待验证对象满足第一预设条件的第一图像和/或第二图像。
在一些可能的实施方式中,所述第二验证模块还用于将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集,并
确定所述图像集中每个图像与所述图像集中其他图像的相似度,以及
基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件,以及
在所述图像集满足第二预设条件的情况下,确定该图像集对应的待验证对象为陌生人。
在一些可能的实施方式中,所述第二验证模块还用于获取每个图像集中的各图像的特征数据与全部的图像的特征数据的数量积的加和值,以及
基于所述加和值与该图像集中的特征数据的个数确定每个图像与其余图像的相似度。
在一些可能的实施方式中,所述第二验证模块还用于获取所述第二时间范围内验证失败的第一图像和第二图像数据分别对应的第一特征数据和第二特征数据,并
将所述第一特征数据与所述第二特征数据进行对比匹配,确定各第一特征数据和各第二特征数据是否对应于同一待验证对象;以及
将所述同一待验证对象的第一特征数据和第二特征数据进行聚类形成所述同一待验证对象的图像集。
在一些可能的实施方式中,所述第二验证模块还用于通过以下方式中的至少一种执行所述基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件:
所述图像集中各图像对应的相似度中的最大相似度大于第一相似度阈值;
所述图像集中各图像对应的相似度中大于第二相似度阈值的相似度的特征数据数量超过预设比例;
所述图像集中各图像对应的相似度中的最小相似度大于第三相似度阈值。
在一些可能的实施方式中,所述第二验证模块还用于在所述图像集中图像之间的相似度不满足预设条件的情况下,删除该图像集对应的全部图像。
在一些可能的实施方式中,所述第二验证模块还用于在所述特征数据集中的特征数据对应的图像为在不同的时间范围内通过不同的摄像模组采集的图像的情况下,则确定所述特征数据集对应的待验证对象为陌生人。
在一些可能的实施方式中,所述获取模块还用于分别获取第一摄像模组采集的第一视频和至少一个第二摄像模组采集的第二视频,对所述第一视频进行预处理获得第三图像以及对所述第二视频进行预处理得到第四图像,或者接收第三图像和第四图像,以及
将第三图像中满足质量要求的图像确定为所述第一图像,以及将第四图像中满足质量要求的图像确定为第二图像。
在一些可能的实施方式中,所述获取模块还用于在所述获取待验证对象的第一图像和第二图像之后,且在所述获取所述第一图像的第一特征数据,并将所述第一特征数据与目标库中的特征数据进行对比以执行身份验证,获取第一验证结果之前,检测所述第一图像和/或第二图像中是否包含预定特征,并
响应于所述第一图像和/或第二图像中包含预定特征的情况,对包含所述预定特征的第一图像和/或第二图像进行标记,其中,所述预定特征包括口罩、帽子、墨镜中的至少一种。
在一些可能的实施方式中,所述装置还包括提示模块,配置为输出提示所述第一验证结果或者第二验证结果。
在一些可能的实施方式中,所述提示模块还用于响应于第一验证结果为验证成功的情况,通过预设的方式输出所述待验证对象的身份及其关联信息,以及在确定该待验证对象为已标记的陌生人时,输出被标记为陌生人的次数;或者
输出所述第二验证结果。
在一些可能的实施方式中,所述第二验证模块还用于响应于第二验证结果为待验证对象为陌生人的情况,将该待验证对象对应的第一图像、第二图像以及关联信息存储至所述目标库,以及控制通过用户交互界面显示被判定为陌生人的验证结果,统计信息和提示信息。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。计算机可读存储介质可以是非易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为执行上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
图11是根据一示例性实施例示出的一种电子设备800的框图。例如,电子设备800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等终端。
参照图11,电子设备800可以包括以下一个或多个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件814,以及通信组件816。
处理组件802通常控制电子设备800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在电子设备800的操作。这些数据的示例包括用于在电子设备800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为电子设备800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述电子设备800和用户之间提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为电子设备800提供各个方面的状态评估。例如,传感器组件814可以检测到电子设备800的打开/关闭状态,组件的相对定位,例如所述组件为电子设备800的显示器和小键盘,传感器组件814还可以检测电子设备800或电子设备800一个组件的位置改变,用户与电子设备800接触的存在或不存在,电子设备800方位或加速/减速和电子设备800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于电子设备800和其他设备之间有线或无线方式的通信。电子设备800可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,电子设备800可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述方法。
图12是根据一示例性实施例示出的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图12,电子设备1900包括处理组件1922,其进一步包括一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理组件1922执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件1922被配置为执行指令,以执行上述方法。
电子设备1900还可以包括一个电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。

Claims (36)

  1. 一种图像处理方法,包括:
    获取待验证对象的第一图像和第二图像,其中,所述第一图像由第一摄像模组采集,所述第二图像由至少一个第二摄像模组采集;
    将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果;
    响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份。
  2. 根据权利要求1所述的方法,其中,所述目标库包括白/黑名单库;
    所述将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果,包括:
    将所述第一图像的第一特征数据与所述白/黑名单库中的各图像的特征数据进行对比;
    在所述白/黑名单库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为黑名单对象或白名单对象。
  3. 根据权利要求1或2所述的方法,其中,所述目标库包括已标记的陌生人库;
    所述将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果,包括:
    将获取的所述第一图像的第一特征数据与所述已标记的陌生人库中的图像的特征数据进行对比;
    在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为已被标记的陌生人。
  4. 根据权利要求3所述的方法,其中,在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,所述方法还包括:
    统计所述第一图像对应的待验证对象被标记为陌生人的次数。
  5. 根据权利要求1-4中任意一项所述的方法,其中,所述方法还包括:
    在所述第一验证结果为验证成功的情况下,将所述第一图像以及所述第一图像的关联信息添加至匹配的特征数据对应的匹配记录中,其中,所述第一图像的关联信息包括所述第一摄像模组采集所述第一图像的时间信息、所述第一摄像模组的标识信息以及所述第一摄像模组的位置信息中至少一种。
  6. 根据权利要求1-5中任意一项所述的方法,其中,在利用所述第一图像与所述第二图像进行联合验证之前,所述方法还包括:
    对第一时间范围内验证失败的第一图像和/或第二图像进行去重处理,获得在第一时间范围内针对每个待验证对象满足第一预设条件的第一图像和/或第二图像。
  7. 根据权利要求1-6中任意一项所述的方法,其中,所述响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份,包括:
    将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集;
    确定所述图像集中每个图像与所述图像集中其他图像的相似度;
    基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件;
    在所述图像集满足第二预设条件的情况下,确定该图像集对应的待验证对象为陌生人。
  8. 根据权利要求7所述的方法,其中,所述确定所述图像集中每个图像与其他图像的相似度,包括:
    获取每个图像集中的各图像的特征数据与全部的图像的特征数据的数量积的加和值;
    基于所述加和值与该图像集中的特征数据的个数确定每个图像与其余图像的相似度。
  9. 根据权利要求7或8所述的方法,其中,所述将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集,包括:
    获取所述第二时间范围内验证失败的第一图像和第二图像数据分别对应的第一特征数据和第二特征数据;
    将所述第一特征数据与所述第二特征数据进行对比匹配,确定各第一特征数据和各第二特征数据是否对应于同一待验证对象;
    将所述同一待验证对象的第一特征数据和第二特征数据进行聚类形成所述同一待验证对象的图像集。
  10. 根据权利要求7-9中任意一项所述的方法,其中,所述基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件,包括以下方式中的至少一种:
    所述图像集中各图像对应的相似度中的最大相似度大于第一相似度阈值;
    所述图像集中各图像对应的相似度中大于第二相似度阈值的相似度的特征数据数量超过预设比例;
    所述图像集中各图像对应的相似度中的最小相似度大于第三相似度阈值。
  11. 根据权利要求7-10中任意一项所述的方法,其中,所述响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份,还包括:
    在所述图像集中图像之间的相似度不满足预设条件的情况下,删除该图像集对应的全部图像。
  12. 根据权利要求7-11中任意一项所述的方法,其中,所述在所述图像集满足第二预设条件的情况下,确定该图像集对应的待验证对象为陌生人,包括:
    在所述特征数据集中的特征数据对应的图像为在不同的时间范围内通过不同的摄像模组采集的图像的情况下,则确定所述特征数据集对应的待验证对象为陌生人。
  13. 根据权利要求1-12中任意一项所述的方法,其中,所述获取待验证对象的第一图像和第二图像,包括:
    分别获取第一摄像模组采集的第一视频和至少一个第二摄像模组采集的第二视频,对所述第一视频进行预处理获得第三图像以及对所述第二视频进行预处理得到第四图像,或者接收第三图像和第四图像;
    将第三图像中满足质量要求的图像确定为所述第一图像,以及将第四图像中满足质量要求的图像确定为第二图像。
  14. 根据权利要求13所述的方法,其中,在所述获取待验证对象的第一图像和第二图像之后,且在所述获取所述第一图像的第一特征数据,并将所述第一特征数据与目标库中的特征数据进行对比以执行身份验证,获取第一验证结果之前,还包括:
    检测所述第一图像和/或第二图像中是否包含预定特征;
    响应于所述第一图像和/或第二图像中包含预定特征的情况,对包含所述预定特征的第一图像和/或第二图像进行标记,其中,所述预定特征包括口罩、帽子、墨镜中的至少一种。
  15. 根据权利要求1-14中任意一项所述的方法,其中,所述方法还包括:
    输出提示所述第一验证结果或者第二验证结果。
  16. 根据权利要求15所述的方法,其中,所述输出提示所述第一验证结果或第二验证结果包括:
    响应于第一验证结果为验证成功的情况,通过预设的方式输出所述待验证对象的身份及其关联信息,以及在确定该待验证对象为已标记的陌生人时,输出被标记为陌生人的次数;或者
    输出所述第二验证结果。
  17. 根据权利要求1-16中任意一项所述的方法,其中,所述方法还包括:
    响应于第二验证结果为待验证对象为陌生人的情况,将该待验证对象对应的第一图像、第二图像以及关联信息存储至所述目标库;
    通过用户交互界面显示被判定为陌生人的验证结果,统计信息和提示信息。
  18. 一种图像处理装置,包括:
    获取模块,配置为获取待验证对象的第一图像和第二图像,其中,所述第一图像由第一摄像模组采集,所述第二图像由至少一个第二摄像模组采集;
    第一验证模块,配置为将所述第一图像与目标库中的图像数据进行对比以执行身份验证,获取第一验证结果;
    第二验证模块,配置为响应于第一验证结果为验证失败的情况,利用所述第一图像与所述第二图像进行联合验证,根据该联合验证的第二验证结果确定所述待验证对象的身份。
  19. 根据权利要求18所述的装置,其中,所述目标库包括白/黑名单库;
    所述第一验证模块还用于将所述第一图像的第一特征数据与所述白/黑名单库中的各图像的特征数据进行对比;以及
    在所述白/黑名单库中存在与所述第一特征数据匹配的特征数据的情况下,确定所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为黑名单对象或白名单对象。
  20. 根据权利要求18或19所述的装置,其中,所述目标库包括已标记的陌生人库;
    所述第一验证模块还用于将获取的所述第一图像的第一特征数据与所述已标记的陌生人库中的图像的特征数据进行对比;以及
    在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,所述第一验证结果为验证成功,并将所述第一图像对应的待验证对象确定为已被标记的陌生人。
  21. 根据权利要求20所述的装置,其中,所述装置还包括统计模块,配置为在所述已标记的陌生人库中存在与所述第一特征数据匹配的特征数据的情况下,统计所述第一图像对应的待验证对象被标记为陌生人的次数。
  22. 根据权利要求18-21中任意一项所述的装置,其中,所述第一验证模块还用于在所述第一验证结果为验证成功的情况下,将所述第一图像以及所述第一图像的关联信息添加至匹配的特征数据对应的匹配记录中,其中,所述第一图像的关联信息包括所述第一摄像模组采集所述第一图像的时间信息、所述第一摄像模组的标识信息以及所述第一摄像模组的位置信息中至少一种。
  23. 根据权利要求18-22中任意一项所述的装置,其中,所述装置还包括去重模块,配置为在利用所述第一图像与所述第二图像进行联合验证之前,对第一时间范围内验证失败的第一图像和/或第二图像进行去重处理,获得在第一时间范围内针对每个待验证对象满足第一预设条件的第一图像和/或第二图像。
  24. 根据权利要求18-23中任意一项所述的装置,其中,所述第二验证模块还用于将第二时间范围内第一验证结果为验证失败的第一图像与第一验证结果为验证失败的第二图像进行聚类处理,获得针对每个待验证对象的图像集,并
    确定所述图像集中每个图像与所述图像集中其他图像的相似度,以及
    基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件,以及
    在所述图像集满足第二预设条件的情况下,确定该图像集对应的待验证对象为陌生人。
  25. 根据权利要求24所述的装置,其中,所述第二验证模块还用于获取每个图像集中的各图像的特征数据与全部的图像的特征数据的数量积的加和值,以及
    基于所述加和值与该图像集中的特征数据的个数确定每个图像与其余图像的相似度。
  26. 根据权利要求24或25所述的装置,其中,所述第二验证模块还用于获取所述第二时间范围内验证失败的第一图像和第二图像数据分别对应的第一特征数据和第二特征数据,并
    将所述第一特征数据与所述第二特征数据进行对比匹配,确定各第一特征数据和各第二特征数据是否对应于同一待验证对象;以及
    将所述同一待验证对象的第一特征数据和第二特征数据进行聚类形成所述同一待验证对象的图像集。
  27. 根据权利要求24-26中任意一项所述的装置,其中,所述第二验证模块还用于通过以下方式中的至少一种执行所述基于所述图像集中每个图像对应的相似度确定所述图像集是否满足第二预设条件:
    所述图像集中各图像对应的相似度中的最大相似度大于第一相似度阈值;
    所述图像集中各图像对应的相似度中大于第二相似度阈值的相似度的特征数据数量超过预设比例;
    所述图像集中各图像对应的相似度中的最小相似度大于第三相似度阈值。
  28. 根据权利要求24-27中任意一项所述的装置,其中,所述第二验证模块还用于在所述图像集中图像之间的相似度不满足预设条件的情况下,删除该图像集对应的全部图像。
  29. 根据权利要求24-28中任意一项所述的装置,其中,所述第二验证模块还用于在所述特征数据集中的特征数据对应的图像为在不同的时间范围内通过不同的摄像模组采集的图像的情况下,则确定所述特征数据集对应的待验证对象为陌生人。
  30. 根据权利要求18-29中任意一项所述的装置,其中,所述获取模块还用于分别获取第一摄像模组采集的第一视频和至少一个第二摄像模组采集的第二视频,对所述第一视频进行预处理获得第三图像以及对所述第二视频进行预处理得到第四图像,或者接收第三图像和第四图像,以及
    将第三图像中满足质量要求的图像确定为所述第一图像,以及将第四图像中满足质量要求的图像确定为第二图像。
  31. 根据权利要求30所述的装置,其中,所述获取模块还用于在所述获取待验证对象的第一图像和第二图像之后,且在所述获取所述第一图像的第一特征数据,并将所述第一特征数据与目标库中的特征数据进行对比以执行身份验证,获取第一验证结果之前,检测所述第一图像和/或第二图像中是否包含预定特征,并
    响应于所述第一图像和/或第二图像中包含预定特征的情况,对包含所述预定特征的第一图像和/或第二图像进行标记,其中,所述预定特征包括口罩、帽子、墨镜中的至少一种。
  32. 根据权利要求18-31中任意一项所述的装置,其中,所述装置还包括提示模块,配置为输出提示所述第一验证结果或者第二验证结果。
  33. 根据权利要求32所述的装置,其中,所述提示模块还用于响应于第一验证结果为验证成功的情况,通过预设的方式输出所述待验证对象的身份及其关联信息,以及在确定该待验证对象为已标记的陌生人时,输出被标记为陌生人的次数;或者
    输出所述第二验证结果。
  34. 根据权利要求18-33中任意一项所述的装置,其中,所述第二验证模块还用于响应于第二验证结果为待验证对象为陌生人的情况,将该待验证对象对应的第一图像、第二图像以及关联信息存储至所述目标库,以及控制通过用户交互界面显示被判定为陌生人的验证结果,统计信息和提示信息。
  35. 一种电子设备,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:执行权利要求1至17中任意一项所述的方法。
  36. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至17中任意一项所述的方法。
PCT/CN2019/093388 2018-12-21 2019-06-27 图像处理方法及装置、电子设备和存储介质 WO2020124984A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020207026016A KR102450330B1 (ko) 2018-12-21 2019-06-27 이미지 처리 방법 및 장치, 전자 기기 및 저장 매체
JP2020547077A JP7043619B2 (ja) 2018-12-21 2019-06-27 画像処理方法及び装置、電子機器並びに記憶媒体
SG11202008779VA SG11202008779VA (en) 2018-12-21 2019-06-27 Image processing method and apparatus, electronic device, and storage medium
US17/015,189 US11410001B2 (en) 2018-12-21 2020-09-09 Method and apparatus for object authentication using images, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811574840.3 2018-12-21
CN201811574840.3A CN109658572B (zh) 2018-12-21 2018-12-21 图像处理方法及装置、电子设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/015,189 Continuation US11410001B2 (en) 2018-12-21 2020-09-09 Method and apparatus for object authentication using images, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020124984A1 true WO2020124984A1 (zh) 2020-06-25


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611572A (zh) * 2020-06-28 2020-09-01 支付宝(杭州)信息技术有限公司 一种基于人脸认证的实名认证方法及装置

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897777B (zh) * 2018-06-01 2022-06-17 深圳市商汤科技有限公司 Target object tracking method and apparatus, electronic device and storage medium
JP7257765B2 (ja) * 2018-09-27 2023-04-14 キヤノン株式会社 Information processing apparatus, authentication system, control methods therefor, and program
CN109658572B (zh) 2018-12-21 2020-09-15 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device and storage medium
US11502843B2 (en) * 2018-12-31 2022-11-15 Nxp B.V. Enabling secure internet transactions in an unsecure home using immobile token
CN110443014A (zh) * 2019-07-31 2019-11-12 成都商汤科技有限公司 Identity verification method, and electronic device, server and system for identity verification
CN112446395B (zh) 2019-08-29 2023-07-25 杭州海康威视数字技术股份有限公司 Network camera, video surveillance system and method
CN111027374B (zh) * 2019-10-28 2023-06-30 华为终端有限公司 Image recognition method and electronic device
EP3839904A1 (de) * 2019-12-17 2021-06-23 Wincor Nixdorf International GmbH Self-service terminal and method for operating a self-service terminal
CN111159445A (zh) * 2019-12-30 2020-05-15 深圳云天励飞技术有限公司 Picture filtering method and apparatus, electronic device and storage medium
CN111382410B (zh) * 2020-03-23 2022-04-29 支付宝(杭州)信息技术有限公司 Face-swiping verification method and system
US12105973B2 (en) * 2020-03-25 2024-10-01 Samsung Electronics Co., Ltd. Dynamic quantization in storage devices using machine learning
CN111914781B (zh) * 2020-08-10 2024-03-19 杭州海康威视数字技术股份有限公司 Face image processing method and apparatus
CN113095289A (zh) * 2020-10-28 2021-07-09 重庆电政信息科技有限公司 Massive-image preprocessing network method for complex urban scenes
CN112597886A (zh) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Fare evasion detection method and apparatus, electronic device and storage medium
CN113344132A (zh) * 2021-06-30 2021-09-03 成都商汤科技有限公司 Identity recognition method, system, apparatus, computer device and storage medium
CN113688278A (zh) * 2021-07-13 2021-11-23 北京旷视科技有限公司 Information processing method and apparatus, electronic device and computer-readable medium
CN113569676B (zh) * 2021-07-16 2024-06-11 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device and storage medium
CN113609931B (zh) * 2021-07-20 2024-06-21 上海德衡数据科技有限公司 Neural-network-based face recognition method and system
CN114792451B (zh) * 2022-06-22 2022-11-25 深圳市海清视讯科技有限公司 Information processing method, device, and storage medium


Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6111517A (en) * 1996-12-30 2000-08-29 Visionics Corporation Continuous video monitoring using face recognition for access control
US20020136433A1 (en) * 2001-03-26 2002-09-26 Koninklijke Philips Electronics N.V. Adaptive facial recognition system and method
US20080080748A1 (en) * 2006-09-28 2008-04-03 Kabushiki Kaisha Toshiba Person recognition apparatus and person recognition method
JP2009294955A (ja) 2008-06-05 2009-12-17 Nippon Telegr &amp; Teleph Corp <Ntt> Image processing apparatus, image processing method, image processing program, and recording medium storing the program
CN102609729B (zh) * 2012-02-14 2014-08-13 中国船舶重工集团公司第七二六研究所 Multi-camera-position face recognition method and system
KR20130133676A (ko) * 2012-05-29 2013-12-09 주식회사 코아로직 User authentication method and apparatus using face recognition via a camera
US9245276B2 (en) * 2012-12-12 2016-01-26 Verint Systems Ltd. Time-in-store estimation using facial recognition
KR101316805B1 (ko) * 2013-05-22 2013-10-11 주식회사 파이브지티 Automatic face position tracking and face recognition method and system
CN103530652B (zh) * 2013-10-23 2016-09-14 北京中视广信科技有限公司 Face-clustering-based video cataloging method, retrieval method, and system thereof
CN105809096A (zh) * 2014-12-31 2016-07-27 中兴通讯股份有限公司 Person labeling method and terminal
US20160364609A1 (en) * 2015-06-12 2016-12-15 Delta ID Inc. Apparatuses and methods for iris based biometric recognition
CN205080692U (zh) * 2015-11-09 2016-03-09 舒畅 Police building security device
CN105426485A (zh) * 2015-11-20 2016-03-23 小米科技有限责任公司 Image merging method and apparatus, intelligent terminal and server
CN106250821A (zh) * 2016-07-20 2016-12-21 南京邮电大学 Face recognition method using clustering and reclassification
CN106228188B (zh) * 2016-07-22 2020-09-08 北京市商汤科技开发有限公司 Clustering method, apparatus and electronic device
JP2018018324A (ja) 2016-07-28 2018-02-01 株式会社東芝 IC card and portable electronic device
JP6708047B2 (ja) 2016-08-05 2020-06-10 富士通株式会社 Authentication apparatus, authentication method and authentication program
JP6809114B2 (ja) 2016-10-12 2021-01-06 株式会社リコー Information processing apparatus, image processing system, and program
CN106778470A (zh) * 2016-11-15 2017-05-31 东软集团股份有限公司 Face recognition method and apparatus
CN108228872A (zh) * 2017-07-21 2018-06-29 北京市商汤科技开发有限公司 Face image deduplication method and apparatus, electronic device, storage medium, and program
CN107729815B (zh) * 2017-09-15 2020-01-14 Oppo广东移动通信有限公司 Image processing method and apparatus, mobile terminal and computer-readable storage medium
CN107480658B (zh) * 2017-09-19 2020-11-06 苏州大学 Multi-angle-video-based face recognition apparatus and method
CN108229297B (zh) * 2017-09-30 2020-06-05 深圳市商汤科技有限公司 Face recognition method and apparatus, electronic device, and computer storage medium
CN107729928B (zh) * 2017-09-30 2021-10-22 百度在线网络技术(北京)有限公司 Information acquisition method and apparatus
CN108875522B (zh) * 2017-12-21 2022-06-10 北京旷视科技有限公司 Face clustering method, apparatus, system, and storage medium
KR102495796B1 (ko) * 2018-02-23 2023-02-06 삼성전자주식회사 Method for performing biometric authentication using a plurality of cameras with different fields of view, and electronic device therefor
CN108446681B (zh) * 2018-05-10 2020-12-15 深圳云天励飞技术有限公司 Pedestrian analysis method, apparatus, terminal and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010005215A2 (ko) * 2008-07-07 2010-01-14 주식회사 미래인식 Access control method and system using biometric recognition
CN105023005A (zh) * 2015-08-05 2015-11-04 王丽婷 Face recognition apparatus and recognition method thereof
CN105956520A (zh) * 2016-04-20 2016-09-21 东莞市中控电子技术有限公司 Personal identification apparatus and method based on multimodal biometric information
CN107305624A (zh) * 2016-04-20 2017-10-31 厦门中控智慧信息技术有限公司 Personal identification method and apparatus based on multimodal biometric information
CN206541317U (zh) * 2017-03-03 2017-10-03 北京国承万通信息科技有限公司 User identification system
CN109658572A (zh) * 2018-12-21 2019-04-19 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611572A (zh) * 2020-06-28 2020-09-01 支付宝(杭州)信息技术有限公司 Real-name authentication method and apparatus based on face authentication
CN111611572B (zh) * 2020-06-28 2022-11-22 支付宝(杭州)信息技术有限公司 Real-name authentication method and apparatus based on face authentication

Also Published As

Publication number Publication date
JP7043619B2 (ja) 2022-03-29
KR20200116158A (ko) 2020-10-08
SG11202008779VA (en) 2020-10-29
US11410001B2 (en) 2022-08-09
TWI717146B (zh) 2021-01-21
JP2021515945A (ja) 2021-06-24
CN109658572B (zh) 2020-09-15
KR102450330B1 (ko) 2022-10-04
US20200401857A1 (en) 2020-12-24
TW202036472A (zh) 2020-10-01
CN109658572A (zh) 2019-04-19

Similar Documents

Publication Publication Date Title
WO2020124984A1 (zh) Image processing method and apparatus, electronic device and storage medium
US11232288B2 (en) Image clustering method and apparatus, electronic device, and storage medium
WO2020073505A1 (zh) Image processing method, apparatus, device and storage medium based on image recognition
WO2020029966A1 (zh) Video processing method and apparatus, electronic device and storage medium
US20220067379A1 (en) Category labelling method and device, and storage medium
WO2019214201A1 (zh) Liveness detection method and apparatus, system, electronic device, and storage medium
WO2021031645A1 (zh) Image processing method and apparatus, electronic device and storage medium
TW202105199A Data updating method, electronic device and storage medium
US20180151199A1 (en) Method, Device and Computer-Readable Medium for Adjusting Video Playing Progress
TW202029055A Pedestrian recognition method and apparatus, electronic device and non-transitory computer-readable storage medium
WO2020019760A1 (zh) Liveness detection method, apparatus and system, electronic device and storage medium
WO2021093375A1 (zh) Method and apparatus for detecting companions, system, electronic device and storage medium
TW202105202A Video processing method and apparatus, electronic device, storage medium and computer program
WO2020010927A1 (zh) Image processing method and apparatus, electronic device and storage medium
WO2020181728A1 (zh) Image processing method and apparatus, electronic device and storage medium
WO2021036382A9 (zh) Image processing method and apparatus, electronic device and storage medium
WO2021103423A1 (zh) Pedestrian event detection method and apparatus, electronic device and storage medium
WO2022099989A1 (zh) Liveness recognition and access control device control methods and apparatus, electronic device, storage medium, and computer program
TWI766458B Information recognition method and apparatus, electronic device, and storage medium
WO2021164100A1 (zh) Image processing method and apparatus, electronic device and storage medium
WO2023094894A1 (zh) Target tracking and event detection methods and apparatus, electronic device and storage medium
CN112101216A (zh) Face recognition method, apparatus, device and storage medium
CN109101542B (zh) Image recognition result output method and apparatus, electronic device and storage medium
CN111062407B (zh) Image processing method and apparatus, electronic device and storage medium
CN111209769B (zh) Identity verification system and method, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19900885

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020547077

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20207026016

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19900885

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/09/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19900885

Country of ref document: EP

Kind code of ref document: A1