WO2021159750A1 - Method, device, computer equipment and storage medium for authenticating certificates - Google Patents

Method, device, computer equipment and storage medium for authenticating certificates

Info

Publication number
WO2021159750A1
WO2021159750A1 (application PCT/CN2020/124885; CN2020124885W)
Authority
WO
WIPO (PCT)
Prior art keywords
parameters
shooting
cameras
pictures
camera
Prior art date
Application number
PCT/CN2020/124885
Other languages
English (en)
French (fr)
Inventor
张国辉
盛建达
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021159750A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Definitions

  • the present invention relates to the technical field of data processing, in particular to a method, device, computer equipment and storage medium for authenticating a certificate.
  • the main purpose of the present invention is to provide a method, a device, a computer device and a storage medium for authenticating a certificate, aiming to solve the prior-art problem that certificate images are difficult to collect during forgery authentication.
  • the present invention proposes a method for authenticating a certificate, including:
  • acquiring picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, where the shooting parameters are the parameters that can be adjusted when the camera is shooting, and the environmental parameters are the parameters of the environment in which the certificate is located when it is photographed;
  • pairing each piece of picture data with its corresponding shooting parameters to obtain a plurality of pieces of pairing information;
  • sequentially inputting each piece of pairing information into a preset neural network for calculation to obtain a feature value corresponding to each piece of picture data, the neural network being a convolutional neural network for extracting image features;
  • inputting each feature value and the environmental parameters into a preset classification network for calculation to obtain a classification result corresponding to each picture; and
  • determining the authenticity of the certificate according to the classification results.
  • the present invention also provides a device for authenticating certificates, including:
  • the picture acquisition unit is used to acquire picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and to acquire the shooting parameters and environmental parameters of each camera, where the shooting parameters are parameters that can be adjusted when the camera is shooting,
  • the environmental parameter is the parameter of the environment where the certificate is located when the camera is photographed;
  • the pairing picture unit is used to pair each of the picture data with the shooting parameters corresponding to each of the picture data to obtain a plurality of pairing information
  • the feature extraction unit is used to input each piece of pairing information into a preset neural network for calculation to obtain the feature value corresponding to each piece of picture data, the neural network being a convolutional neural network for extracting image features;
  • a calculation result unit configured to input each of the characteristic values and the environmental parameters into a preset classification network for calculation, and obtain a classification result corresponding to each of the pictures;
  • the authenticity determining unit is used to determine the authenticity of the certificate according to the classification results.
  • the present invention also provides a computer device, including a memory and a processor, where the memory stores a computer program and the processor implements a method for authenticating a certificate when executing the computer program;
  • the method for authenticating a certificate includes:
  • acquiring picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, where the shooting parameters are the parameters that can be adjusted when the camera is shooting, and the environmental parameters are the parameters of the environment in which the certificate is located when it is photographed;
  • pairing each piece of picture data with its corresponding shooting parameters to obtain a plurality of pieces of pairing information;
  • sequentially inputting each piece of pairing information into a preset neural network for calculation to obtain a feature value corresponding to each piece of picture data, the neural network being a convolutional neural network for extracting image features;
  • inputting each feature value and the environmental parameters into a preset classification network for calculation to obtain a classification result corresponding to each picture; and
  • determining the authenticity of the certificate according to the classification results.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, a method for authenticating a certificate is realized;
  • the method for authenticating a certificate includes:
  • acquiring picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, where the shooting parameters are the parameters that can be adjusted when the camera is shooting, and the environmental parameters are the parameters of the environment in which the certificate is located when it is photographed;
  • pairing each piece of picture data with its corresponding shooting parameters to obtain a plurality of pieces of pairing information;
  • sequentially inputting each piece of pairing information into a preset neural network for calculation to obtain a feature value corresponding to each piece of picture data, the neural network being a convolutional neural network for extracting image features;
  • inputting each feature value and the environmental parameters into a preset classification network for calculation to obtain a classification result corresponding to each picture; and
  • determining the authenticity of the certificate according to the classification results.
  • FIG. 1 is a schematic diagram of the steps of a method for authenticating a certificate in an embodiment of the present invention;
  • FIG. 2 is a schematic block diagram of the structure of a device for authenticating a certificate in an embodiment of the present invention;
  • FIG. 3 is a schematic block diagram of the structure of a computer device according to an embodiment of the present invention.
  • the method for authenticating a certificate in this embodiment includes:
  • Step S1: acquire picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and acquire the shooting parameters and environmental parameters of each camera.
  • the shooting parameters are the parameters that can be adjusted when the camera is shooting;
  • the environmental parameter is the parameter of the environment in which the certificate is located when it is photographed;
  • Step S2: pair each piece of picture data with its corresponding shooting parameters to obtain a plurality of pieces of pairing information;
  • Step S3: input each piece of pairing information in turn into a preset neural network for calculation to obtain a feature value corresponding to each piece of picture data; the neural network is a convolutional neural network for extracting image features;
  • Step S4: input each of the feature values and the environmental parameters into a preset classification network for calculation to obtain a classification result corresponding to each picture;
  • Step S5: determine the authenticity of the certificate according to the classification results.
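The five steps above can be sketched as a small pipeline. The Python sketch below uses toy stand-ins for the patent's unspecified CNN and classification network; every function name, the toy arithmetic, and the 0.9 threshold are illustrative assumptions, not the patent's actual models:

```python
# Sketch of steps S1-S5; all names and formulas are hypothetical placeholders.

def extract_features(picture, shooting_params):
    """Stand-in for the preset CNN (S3): maps one (picture, shooting
    parameters) pairing to a feature value (here, a toy scalar)."""
    return sum(picture) / len(picture) + 0.01 * shooting_params["exposure_ms"]

def classify(feature, env_params):
    """Stand-in for the classification network (S4): returns the
    probability that the certificate is genuine."""
    return 1.0 if feature * env_params["illuminance"] > 0 else 0.0

def authenticate(captures, env_params, threshold=0.9):
    """captures: one (picture, shooting_params) pair per camera (S1-S2).
    Decides authenticity by voting over the per-picture results (S5)."""
    features = [extract_features(pic, sp) for pic, sp in captures]   # S3
    results = [classify(f, env_params) > 0.5 for f in features]      # S4
    return sum(results) / len(results) >= threshold                  # S5
```

For example, two cameras whose pictures both classify as genuine yield a 100% vote, which clears the 90% threshold used later in this description.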
  • the above method of authenticating a certificate can be implemented on a smart device, such as a mobile phone with cameras, a tablet, or a camera device, where the smart device is provided with multiple cameras of different types; for example, a mobile phone with three cameras (a wide-angle camera, a macro camera, and an ordinary camera), or a mobile phone with two cameras (an ordinary camera and a black-and-white camera).
  • the spacing between different cameras can be set according to actual needs. Different smart devices have different camera spacings, that is, different types of smart devices have different camera types, spacing, and arrangement.
  • each of the above cameras may be arranged in an array, with a relatively large spacing maintained between different cameras, giving a better capability of detecting forgeries of gratings whose patterns vary with viewing angle.
  • the certificate is a document with a grating anti-counterfeiting feature printed with different inks, such as a Hong Kong ID card.
  • the picture data in this embodiment refers to the above pictures, which may include pictures of the certificate taken by the wide-angle camera, pictures taken by the ordinary camera, and so on.
  • the shooting parameters and environmental parameters of each camera can be obtained.
  • the shooting parameters are the parameters that can be adjusted when the camera is shooting, such as aperture information, exposure time, etc.
  • the environmental parameters are obtained through the light sensor, photosensitive element, etc. on the device; since the positions and types of the cameras differ, shooting simultaneously yields pictures of the certificate from different angles.
  • each picture is closely related to the shooting parameters of the camera that took the picture.
  • the picture data is paired with its corresponding shooting parameters to obtain multiple pieces of pairing information; that is, each piece of pairing information includes one piece of picture data and the shooting parameters used to take that picture. Each piece of pairing information is then input into a preset neural network for calculation to obtain the feature value corresponding to each piece of picture data.
  • the above neural network is a convolutional neural network used to extract image features.
  • the above neural network can be implemented with existing technology, for example a ResNet network structure, which will not be repeated here.
  • each feature value and the above environmental parameters are input into the classification network for calculation, and the classification result corresponding to each picture is obtained.
  • the above classification network is a preset binary classification network or a time-series neural network; after the feature value is calculated, it outputs the probability that the certificate is genuine, from which the classification result (genuine certificate or fake certificate) is obtained.
  • in step S5, since multiple feature values are calculated through the above classification network, multiple classification results are obtained, each corresponding to a picture taken by one camera; the authenticity of the certificate is then determined from these results. For example, if more than 90% of the classification results indicate a genuine certificate, the certificate is determined to be genuine; if fewer than 90% do, it is determined to be fake. As another example, when the smart device is a mobile phone with four cameras, it can be set so that the certificate is determined to be genuine when more than 75% of the classification results indicate a genuine certificate.
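The voting rules in this example (90% generally, 75% for a four-camera phone) reduce to a single threshold check; the helper below is a hypothetical sketch of that rule, using `>=` so that exactly 3 of 4 votes passes a 75% threshold (the text's "more than 75%" is ambiguous at exactly 75%):

```python
def is_genuine(classification_results, threshold=0.75):
    """classification_results: one boolean per camera's picture
    (True = classified as a genuine certificate). With a four-camera
    phone and threshold 0.75, at least 3 of 4 votes must be 'genuine'."""
    return sum(classification_results) / len(classification_results) >= threshold

# Four cameras, three 'genuine' votes: 3/4 = 0.75, meets the 75% threshold.
assert is_genuine([True, True, True, False]) is True
# Only two 'genuine' votes: 2/4 = 0.50, below the threshold.
assert is_genuine([True, True, False, False]) is False
```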
  • the method for authenticating certificates provided in this application takes pictures simultaneously with multiple cameras of different types set on the smart device and inputs them, together with the shooting parameters, into the corresponding neural network for processing, so that multiple cameras at a certain spacing shoot directly.
  • Shooting this way achieves multi-angle capture and improves detection of forgeries of gratings whose patterns vary with viewing angle; it not only avoids the problem that the shooting angle in traditional forgery detection is hard to control and requires repeated operations, but also makes collection convenient and the identification result more accurate.
  • step S3 includes:
  • Step S31: input the picture in each piece of pairing information into the hidden layer for calculation to obtain corresponding feature information;
  • Step S32: calculate each piece of feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  • the above-mentioned neural network includes a hidden layer.
  • the picture in it is first calculated, that is, the picture is calculated through the hidden layer to obtain the corresponding feature information.
  • the feature information can then be calculated with the shooting parameters in the pairing information to obtain the final feature value of the picture; for example, when the wide-angle camera and the ordinary camera photograph the same certificate, the feature information extracted through the hidden layer is calculated with the corresponding shooting parameters and thereby corrected to obtain the final feature value.
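One plausible reading of steps S31 and S32 (hidden-layer features, then correction with the shooting parameters) is the following toy sketch; the particular correction (rescaling by exposure time) is an assumption, since the patent leaves the exact calculation open:

```python
def hidden_layer(picture):
    """S31 stand-in: extract a toy feature vector from one picture."""
    mean = sum(picture) / len(picture)
    return [mean, max(picture) - min(picture)]

def fuse(features, shooting_params):
    """S32 stand-in: correct the hidden-layer features with the shooting
    parameters; here exposure time rescales each feature (hypothetical)."""
    scale = 1.0 / max(shooting_params["exposure_ms"], 1e-6)
    return [f * scale for f in features]

feats = hidden_layer([10, 20, 30])        # toy feature vector [20.0, 20]
value = fuse(feats, {"exposure_ms": 10})  # exposure-corrected feature value
```

This mirrors the wide-angle vs. ordinary camera example: both photograph the same certificate, but their differing shooting parameters correct the raw features differently.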
  • before the above step S1, the method includes:
  • Step S01: receive an instruction to turn on a specified type of light source;
  • Step S02: turn on the specified type of light source according to the instruction.
  • step S1 includes:
  • Step S11 Collect the illuminance corresponding to the specified type of light source to obtain the environmental parameter.
  • the above documents are documents with raster anti-counterfeiting function printed with different inks.
  • under different light sources, the grating displayed may also differ; therefore, a variety of light sources can be provided on the above smart device, and the certificate is photographed under illumination from the different light sources.
  • the above-mentioned light sources may be visible light, ultraviolet light, infrared light, etc., or polarized light.
  • the specified type can be any of the above examples.
  • the instruction can be entered by the user or triggered automatically by a preset condition; the specified type of light source is then turned on according to the instruction.
  • the environmental parameters are acquired while shooting, that is, the light sensor collects the illuminance corresponding to the specified type of light source while the picture data is acquired.
  • the various light sources can be turned on in turn, with each camera shooting once under each light source; the classification results under the various light sources are then obtained, and the authenticity of the certificate is determined according to them.
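The cycle described above (turn on each light source in turn, shoot once per camera under it, and collect a classification result per light/camera pair) can be sketched as follows; the light-source and camera lists and both helper functions are hypothetical stand-ins:

```python
LIGHT_SOURCES = ["visible", "ultraviolet", "infrared", "polarized"]
CAMERAS = ["wide_angle", "macro", "normal"]

def shoot(camera, light):
    """Stand-in for capturing picture data plus the measured illuminance."""
    return {"camera": camera, "light": light, "picture": [0, 1, 2]}

def classify(capture):
    """Stand-in for the feature-extraction and classification pipeline."""
    return True  # placeholder 'genuine' result

def results_under_all_lights():
    """One classification result per (light source, camera) pair."""
    results = {}
    for light in LIGHT_SOURCES:      # turn on each light source in turn
        for cam in CAMERAS:          # each camera shoots once under it
            results[(light, cam)] = classify(shoot(cam, light))
    return results
```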
  • step S4 includes:
  • Step S41: when the picture data is multiple pictures taken by the cameras at the same time, input each feature value in turn into the first network for calculation to obtain the classification result corresponding to each picture;
  • Step S42: when the picture data is multiple frames per camera obtained by the cameras shooting continuously at the same time, input the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  • the above classification network may be a first network of the binary classification type or a second network of the time-series type.
  • the first network of the binary classification type is a binary classification network used to calculate the feature value and output the probability that the certificate is genuine;
  • the second network of the time-series type is used to calculate time-series data, for example multiple feature values of multiple frames of pictures obtained by continuous shooting; these feature values are time-series data, and the network likewise outputs the probability that the certificate is genuine.
  • both the above first network and second network can be constructed and trained using existing network structures, which will not be repeated here.
  • each feature value can be input in turn into the first network for calculation to obtain the classification result corresponding to each picture.
  • if the picture data is multiple frames per camera obtained by the cameras shooting continuously at the same time, the feature values corresponding to each camera are input into the second network for calculation to obtain the classification result of the pictures corresponding to each camera. Because alternating current indoors causes the light source to flicker while the certificate is being shot, the captured pictures are degraded and subsequent calculations are prone to error.
  • this embodiment therefore uses a multi-frame method to compensate for the light source:
  • each camera continuously captures multiple frames of pictures, which are then processed through the above steps S1-S4 to obtain multiple feature values corresponding to each camera's frames.
  • the sampling frequency of continuous shooting and the frequency of the local alternating current at the shooting location are coprime.
  • for example, where the frequency of the alternating current is 50 Hz, the frame rate of shooting can be 17 Hz.
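The anti-flicker condition (frame rate coprime with the mains frequency, e.g. 17 Hz against 50 Hz) is a simple gcd test; when it holds, successive frames sample different phases of the flicker cycle instead of locking onto one:

```python
from math import gcd

def is_coprime(frame_rate_hz, mains_hz):
    """Frame rate and AC frequency are coprime iff their gcd is 1."""
    return gcd(frame_rate_hz, mains_hz) == 1

assert is_coprime(17, 50)        # the example in the text
assert not is_coprime(25, 50)    # 25 fps would lock onto 50 Hz mains

# At 17 fps against 50 Hz, the 17 frames in one repeat period all land
# on distinct phases of the mains cycle.
phases = {round((50 * k / 17) % 1.0, 6) for k in range(17)}
assert len(phases) == 17
```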
  • the RNN then performs forgery authentication.
  • each time, all feature values corresponding to the multiple frames of pictures taken by one camera are input; the feature values of the multiple frames of all cameras are calculated in this way to obtain multiple classification results.
  • This method can reduce the adverse effects caused by light flicker and improve the recognition rate and detection speed for the grating.
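For the multi-frame branch, the time-series second network consumes one camera's per-frame feature values and emits a single genuine-probability. The single-unit recurrent cell below is a toy stand-in with arbitrary weights, not the patent's RNN:

```python
import math

def rnn_genuine_probability(frame_features, w_in=0.5, w_rec=0.3, w_out=2.0):
    """Toy one-unit RNN: fold the frame-by-frame feature values of one
    camera into a hidden state, then squash it to a probability."""
    h = 0.0
    for x in frame_features:                   # one feature value per frame
        h = math.tanh(w_in * x + w_rec * h)    # recurrent update
    return 1.0 / (1.0 + math.exp(-w_out * h))  # sigmoid readout

# One classification result per camera's frame sequence, as in the text.
per_camera_frames = {"wide_angle": [0.9, 1.1, 1.0], "normal": [0.8, 1.2, 0.9]}
probs = {cam: rnn_genuine_probability(f) for cam, f in per_camera_frames.items()}
```

Each camera's frame sequence yields one probability, so multiple cameras give the multiple classification results the text describes.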
  • This method provides a new form of grating authentication capability, makes higher-strength anti-counterfeiting measures feasible, and is highly adaptable: it can perform anti-counterfeiting identification on certificates with color-changing ink while reducing the deployment cost of anti-counterfeiting requirements.
  • different light sources can also be used during shooting; that is, in continuous multi-frame shooting, one type of light source can be used for the first frame and another type for the second frame, so that different frames are shot under different light sources during continuous shooting.
  • for example, the first frame uses visible light and the second frame uses ultraviolet light; with pictures obtained under different environments, angles, and shooting conditions, the authenticity determination obtained by processing them is more accurate.
  • before step S1, the method further includes:
  • Step S04 Obtain the model of the current device and the physical parameters of each of the cameras, where the physical parameters are non-adjustable parameters;
  • Step S05 call each of the cameras according to the model and the physical parameters to complete shooting.
  • the current device is the aforementioned smart device, such as a mobile phone.
  • the aforementioned model can be the model of the mobile phone; since the cameras are set at fixed positions, the spacing and arrangement of the cameras can be determined from the model.
  • the aforementioned physical parameters are non-adjustable, inherent parameters of the camera, such as the shutter speed.
  • a device for authenticating a certificate corresponds to the above-mentioned method for authenticating a certificate.
  • the device includes:
  • the picture acquisition unit 1 is used to acquire picture data of the same certificate photographed simultaneously from different angles by multiple cameras of different types, and to acquire the shooting parameters and environmental parameters of each camera.
  • the shooting parameters are parameters that can be adjusted when the camera is shooting.
  • the environmental parameter is the parameter of the environment where the certificate is located when the camera is photographed;
  • the pairing picture unit 2 is used to pair each of the picture data with the shooting parameters corresponding to each of the picture data to obtain a plurality of pairing information;
  • the feature extraction unit 3 is used to input each piece of pairing information into a preset neural network for calculation to obtain feature values corresponding to each piece of picture data; the neural network is a convolutional neural network for extracting image features;
  • the calculation result unit 4 is configured to input each of the characteristic values and the environmental parameters into a preset classification network for calculation to obtain a classification result corresponding to each of the pictures;
  • the authenticity determining unit 5 is used to determine the authenticity of the certificate according to the classification results.
  • the above method of authenticating a certificate can be implemented on a smart device, such as a mobile phone with cameras, a tablet, or a camera device, where the smart device is provided with multiple cameras of different types; for example, a mobile phone with three cameras (a wide-angle camera, a macro camera, and an ordinary camera), or a mobile phone with two cameras (an ordinary camera and a black-and-white camera).
  • the spacing between different cameras can be set according to actual needs. Different smart devices have different camera spacings, that is, different types of smart devices have different camera types, spacing, and arrangement.
  • each of the above cameras may be arranged in an array, with a relatively large spacing maintained between different cameras, giving a better capability of detecting forgeries of gratings whose patterns vary with viewing angle.
  • as mentioned for the picture acquisition unit 1, different types of cameras first photograph the same certificate simultaneously from different angles to obtain multiple pictures of the certificate.
  • the certificate is a document with a grating anti-counterfeiting feature printed with different inks, such as a Hong Kong ID card.
  • the picture data in this embodiment refers to the above pictures, which may include pictures of the certificate taken by the wide-angle camera, pictures taken by the ordinary camera, and so on.
  • the shooting parameters and environmental parameters of each camera can be obtained.
  • the shooting parameters are the parameters that can be adjusted when the camera is shooting, such as aperture information, exposure time, etc.
  • the environmental parameters are obtained through the light sensor, photosensitive element, etc. on the device; since the positions and types of the cameras differ, shooting simultaneously yields pictures of the certificate from different angles.
  • each picture data is paired with the shooting parameters corresponding to each picture data to obtain multiple pairing information, that is, each pairing information includes a picture data and the shooting parameters used to shoot the picture;
  • Each piece of pairing information is sequentially input into a preset neural network for calculation, and the feature value corresponding to each piece of picture data is obtained.
  • the above neural network is a convolutional neural network for extracting image features and can be implemented with existing technology, for example a ResNet network structure, which will not be repeated here.
  • each feature value and the above environmental parameters are input into the classification network for calculation, and the classification result corresponding to each picture is obtained.
  • the above classification network is a preset binary classification network or a time-series neural network; after the feature value is calculated, it outputs the probability that the certificate is genuine, from which the classification result (genuine certificate or fake certificate) is obtained.
  • in the authenticity determination unit 5, since multiple feature values are calculated through the above classification network, multiple classification results are obtained, each corresponding to a picture taken by one camera; the authenticity of the certificate is then determined from these results. For example, if more than 90% of the classification results indicate a genuine certificate, the certificate is determined to be genuine. As another example, when the smart device is a mobile phone with four cameras, it can be set so that the certificate is determined to be genuine when more than 75% of the classification results indicate a genuine certificate.
  • the method for authenticating certificates provided in this application takes pictures simultaneously with multiple cameras of different types set on the smart device and inputs them, together with the shooting parameters, into the corresponding neural network for processing, so that multiple cameras at a certain spacing shoot directly.
  • Shooting this way achieves multi-angle capture and improves detection of forgeries of gratings whose patterns vary with viewing angle; it not only avoids the problem that the shooting angle in traditional forgery detection is hard to control and requires repeated operations, but also makes collection convenient and the identification result more accurate.
  • the above-mentioned feature extraction unit 3 includes:
  • the calculation feature subunit is used to input the pictures in each pair of information into the hidden layer for calculation to obtain corresponding feature information
  • the calculation information subunit is configured to calculate each of the feature information and the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each of the pictures.
  • the above-mentioned neural network includes a hidden layer.
  • the picture in it is first calculated, that is, the picture is calculated through the hidden layer to obtain the corresponding feature information.
  • the feature information can then be calculated with the shooting parameters in the pairing information to obtain the final feature value of the picture; for example, when the wide-angle camera and the ordinary camera photograph the same certificate, the feature information extracted through the hidden layer is calculated with the corresponding shooting parameters and thereby corrected to obtain the final feature value.
  • the aforementioned device for authenticating documents includes:
  • the instruction receiving unit is used to receive an instruction for turning on a specified type of light source
  • the picture acquisition unit 1 includes:
  • the illuminance collection subunit is used to collect the illuminance corresponding to the specified type of light source to obtain the environmental parameter.
  • the above documents are documents with raster anti-counterfeiting function printed with different inks.
  • under different light sources, the grating displayed may also differ; therefore, a variety of light sources can be provided on the above smart device, and the certificate is photographed under illumination from the different light sources.
  • the above-mentioned light sources may be visible light, ultraviolet light, infrared light, etc., or polarized light.
  • the specified type in the received turn-on instruction can be any of the above examples.
  • the instruction can be entered by the user or triggered automatically by a preset condition; the specified type of light source is then turned on according to the instruction.
  • the environmental parameters are acquired while shooting, that is, the light sensor collects the illuminance corresponding to the specified type of light source while the picture data is acquired.
  • the various light sources can be turned on in turn, with each camera shooting once under each light source; the classification results under the various light sources are then obtained, and the authenticity of the certificate is determined according to them.
  • the above calculation result unit 4 includes:
  • the first result subunit is used to, when the picture data is multiple pictures taken by the cameras at the same time, input each feature value in turn into the first network for calculation to obtain the classification result corresponding to each picture;
  • the second result subunit is used to, when the picture data is multiple frames per camera obtained by the cameras shooting continuously at the same time, input the feature values corresponding to each camera into the second network for calculation to obtain the classification results of the pictures corresponding to each camera.
  • the above classification network may be a first network of the binary classification type or a second network of the time-series type.
  • the first network of the binary classification type is a binary classification network used to calculate the feature value and output the probability that the certificate is genuine;
  • the second network of the time-series type is used to calculate time-series data, for example multiple feature values of multiple frames of pictures obtained by continuous shooting; these feature values are time-series data, and the network likewise outputs the probability that the certificate is genuine.
  • both the above first network and second network can be constructed and trained using existing network structures, which will not be repeated here.
  • each feature value can be input into the first network for calculation in turn to obtain each picture The corresponding classification result.
  • when the picture data are multiple frames, corresponding to multiple cameras, obtained by each camera shooting continuously and simultaneously, the feature values corresponding to each camera are input into the second network for calculation to obtain the classification result of the pictures corresponding to each camera. When certificates are shot indoors, alternating current causes the light source to flicker, so the pictures come out poorly and subsequent calculation is error-prone.
  • this embodiment therefore compensates for the light source with a multi-frame approach.
  • the operation is to shoot multiple frames continuously with each camera, then process them according to the above picture acquisition unit 1, picture pairing unit 2 and feature extraction unit 3 to obtain the multiple feature values corresponding to each camera's frames.
  • the sampling frequency of the continuous shooting and the frequency of the local alternating current at the shooting location are coprime.
  • for example, the frequency of the alternating current is 50 Hz
  • and the frame rate of shooting is 17 Hz.
  • the feature values are then input into the second network, for example an RNN, for counterfeit detection.
  • each time, all the feature values corresponding to the frames shot by one camera are input; once the feature values of all cameras' frames have been computed, multiple classification results are obtained.
  • this method reduces the adverse effects of light-source flicker and improves the recognition rate and detection speed of grating counterfeit detection.
  • this method embodies a new form of grating authentication capability, makes it feasible to use stronger anti-counterfeiting measures, and is also highly adaptable: it can perform anti-counterfeit recognition on certificates with color-changing ink and reduces the deployment cost of anti-counterfeiting requirements.
  • different light sources can also be used during shooting; that is, within the continuously shot frames, one type of light source can be used when shooting the first frame and another type when shooting the second frame, so that different frames in the continuous sequence are shot under different light sources.
  • for example, the first frame uses visible light and the second frame uses ultraviolet light. With pictures obtained under different environments, different angles and different shooting conditions, the authenticity determination obtained after processing is more accurate.
  • the apparatus for authenticating a certificate includes:
  • a parameter acquisition unit, configured to acquire the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
  • a camera invocation unit, used to invoke each of the cameras according to the model and the physical parameters to complete shooting.
  • the current device is the aforementioned smart device, such as a mobile phone.
  • the aforementioned model can be the model of the mobile phone; since the cameras are mounted at fixed positions, the spacing and arrangement of the cameras can be known from the model.
  • the aforementioned physical parameters are non-adjustable parameters, that is, parameters inherent to the camera, such as shutter speed.
  • an embodiment of the present invention also provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 3.
  • the computer device includes a processor, a memory, a network interface and a database connected through a system bus, the processor of the computer device being used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program and a database.
  • the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium.
  • the database of the computer device is used to store all the data required for the above certificate authentication.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the computer program, when executed by the processor, implements a method for authenticating a certificate.
  • the above processor executes the steps of the above method for authenticating a certificate: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots; pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information; inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture.
  • the neural network is a convolutional neural network for extracting picture features;
  • the feature values and the environmental parameters are input into a preset classification network for calculation to obtain the classification result corresponding to each picture; the authenticity of the certificate is determined according to each of the classification results.
  • the aforementioned neural network includes a hidden layer.
  • the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture includes: inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information; and calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  • before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: receiving an instruction for turning on a light source of a specified type; and turning on the light source of the specified type according to the instruction. The step of acquiring the environmental parameters includes: collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
  • the above classification network is a first network of the binary classification type or a second network of the time-series type, and each of the feature values and the environmental parameters are input into a preset classification network for calculation.
  • the step of obtaining the classification result corresponding to each picture includes: when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture; when the picture data are multiple frames of pictures, corresponding to multiple cameras, obtained by the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  • before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters; and invoking each of the cameras according to the model and the physical parameters to complete shooting.
  • the cameras of different types are arranged in an array when shooting.
  • FIG. 3 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • An embodiment of the present invention also provides a computer-readable storage medium.
  • the computer storage medium may be non-volatile or volatile.
  • a computer program is stored on it, and the computer program is executed by a processor to implement
  • a method for authenticating a certificate, specifically: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots; pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
  • inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture.
  • the neural network is a convolutional neural network for extracting picture features; each of the feature values and the environmental parameters are input into a preset classification network for calculation to obtain the classification result corresponding to each picture; the authenticity of the certificate is determined according to each of the classification results.
  • for the aforementioned computer-readable storage medium, the aforementioned neural network includes a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture includes: inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information; and calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  • before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: receiving an instruction for turning on a light source of a specified type; and turning on the light source of the specified type according to the instruction. The step of acquiring the environmental parameters includes: collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
  • the above classification network is a first network of the binary classification type or a second network of the time-series type, and each of the feature values and the environmental parameters are input into a preset classification network for calculation.
  • the step of obtaining the classification result corresponding to each picture includes: when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture; when the picture data are multiple frames of pictures, corresponding to multiple cameras, obtained by the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  • before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters; and invoking each of the cameras according to the model and the physical parameters to complete shooting.
  • the cameras of different types are arranged in an array when shooting.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Abstract

A method, apparatus, computer device and storage medium for authenticating a certificate, the method including: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera; pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information; inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture; inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture; and determining the authenticity of the certificate according to each of the classification results. In this way, shooting is carried out directly and simultaneously through multiple cameras spaced a certain distance apart, realizing multi-angle shooting; this not only removes the problems of conventional anti-counterfeit shooting, namely that the shooting angle is hard to control and the operation must be repeated many times, but also makes acquisition convenient.

Description

Method, apparatus, computer device and storage medium for authenticating a certificate
This application claims priority to the Chinese patent application filed with the China Patent Office on September 4, 2020, with application number 202010923910.2 and invention title "Method, apparatus, computer device and storage medium for authenticating a certificate", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of data processing, and in particular to a method, apparatus, computer device and storage medium for authenticating a certificate.
Background
At present, some certificates rely on gratings for anti-counterfeiting, for example the Hong Kong identity card. Image-based counterfeit detection for gratings is generally performed on certificate pictures obtained by shooting multiple times from multiple angles. The inventors found that when shooting multiple times, anti-substitution detection is needed to prevent the certificate from being swapped, which increases development, maintenance and operating costs. The inventors further found that when shooting from multiple angles, the same grating picture looks different from different angles, and the human eye has low resolving power for grating differences, so pictures of the same certificate taken from different angles may be judged differently by eye, leading to large discrepancies between the recognition results for different angles. Although cameras and algorithms can reduce these discrepancies, they impose strict requirements on how pictures are acquired, and pictures acquired in multiple passes easily fail to align accurately, so the differences introduced by the input pictures far outweigh the differences that the grating itself exhibits across viewing angles, lowering the accuracy of the result. Moreover, multiple acquisition passes are needed, making acquisition costly and difficult and hard to meet practical needs.
Technical Problem
The main purpose of the present invention is to provide a method, apparatus, computer device and storage medium for authenticating a certificate, aiming to solve the technical problem in the prior art that acquisition is difficult when authenticating certificates.
Technical Solution
Based on the above purpose, the present invention proposes a method for authenticating a certificate, including:
acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
determining the authenticity of the certificate according to each of the classification results.
The present invention further provides an apparatus for authenticating a certificate, including:
a picture acquisition unit, configured to acquire picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and to acquire the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
a picture pairing unit, configured to pair each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
a feature extraction unit, configured to input each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
a result calculation unit, configured to input each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
an authenticity determination unit, configured to determine the authenticity of the certificate according to each of the classification results.
The present invention further provides a computer device, including a memory and a processor, the memory storing a computer program, and the processor implementing a method for authenticating a certificate when executing the computer program;
wherein the method for authenticating a certificate includes:
acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
determining the authenticity of the certificate according to each of the classification results.
The present invention further provides a computer-readable storage medium on which a computer program is stored, the computer program implementing a method for authenticating a certificate when executed by a processor;
wherein the method for authenticating a certificate includes:
acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
determining the authenticity of the certificate according to each of the classification results.
Beneficial Effects
Multiple cameras of different types provided on a smart device shoot simultaneously, and the pictures, together with the shooting parameters and other data, are input into the corresponding neural networks for processing. In this way, shooting is carried out directly through multiple cameras spaced a certain distance apart, realizing multi-angle shooting and improving the ability to authenticate gratings whose patterns vary with viewing angle. This not only removes the problems of conventional anti-counterfeit shooting, namely that the shooting angle is hard to control and the operation must be repeated many times, but also makes acquisition convenient and yields highly accurate authentication results.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the steps of a method for authenticating a certificate in an embodiment of the present invention;
FIG. 2 is a schematic structural block diagram of an apparatus for authenticating a certificate in an embodiment of the present invention;
FIG. 3 is a schematic structural block diagram of a computer device in an embodiment of the present invention.
Best Mode for Carrying Out the Invention
It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
Referring to FIG. 1, the method for authenticating a certificate in this embodiment includes:
Step S1: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
Step S2: pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
Step S3: inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
Step S4: inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
Step S5: determining the authenticity of the certificate according to each of the classification results.
In this embodiment, the above method for authenticating a certificate can be implemented on a smart device, for example a mobile phone, tablet or camera device equipped with cameras. The smart device is provided with multiple cameras of different types; for example, a mobile phone may have three cameras (a wide-angle camera, a macro camera and an ordinary camera) or two cameras (an ordinary camera and a monochrome camera). The spacing between cameras can be set according to actual needs, and different smart devices have different camera spacings; that is, smart devices of different models may differ in camera type, spacing and arrangement. Preferably, the cameras are arranged in an array with a relatively large spacing between them, giving them good counterfeit-detection capability for gratings whose patterns vary with viewing angle.
As described in step S1, the same certificate is first shot simultaneously from different angles by cameras of different types to obtain multiple pictures of the certificate. The certificate is one printed with different inks and carrying a grating anti-counterfeiting feature, for example a Hong Kong identity card. The picture data in this embodiment are these pictures, which may include a picture of the certificate taken by the wide-angle camera, a picture taken by the ordinary camera, and so on. At the same time, the shooting parameters and environmental parameters of each camera can be acquired. The shooting parameters are parameters adjustable when the camera shoots, such as aperture information and exposure time; the environmental parameters are parameters of the environment in which the certificate is located when shooting, such as illuminance, which can be acquired through the light sensor or photosensitive element on the smart device. Since the cameras differ in position and type, shooting simultaneously yields pictures of the certificate from different angles.
As described in steps S2-S3, different cameras shooting the same certificate produce different photos, and each picture is closely tied to the shooting parameters of the camera that took it. To eliminate these differences, before feature extraction each piece of picture data is first paired with its corresponding shooting parameters to obtain multiple pieces of pairing information; that is, each piece of pairing information includes one piece of picture data and the shooting parameters used to take that picture. Each piece of pairing information is then input in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture. The neural network is a convolutional neural network for extracting picture features and can be implemented with existing techniques, for example the ResNet network structure, which will not be described further here.
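As a minimal sketch of the pairing step (not taken from the patent; the parameter names `aperture` and `exposure_time` are illustrative examples of the adjustable shooting parameters mentioned above):

```python
def pair_pictures(pictures, shooting_params):
    """Pair each captured picture with the shooting parameters of the
    camera that took it, producing one pairing-information record each.

    pictures: list of (camera_id, image) tuples
    shooting_params: dict mapping camera_id -> parameter dict
    """
    pairs = []
    for camera_id, image in pictures:
        pairs.append({
            "camera_id": camera_id,
            "image": image,
            "params": shooting_params[camera_id],
        })
    return pairs

# Two cameras shooting the same certificate once each.
pictures = [("wide", "img_wide.jpg"), ("macro", "img_macro.jpg")]
params = {"wide": {"aperture": 1.8, "exposure_time": 0.01},
          "macro": {"aperture": 2.4, "exposure_time": 0.02}}
paired = pair_pictures(pictures, params)
print(len(paired))  # 2
```

Each record would then be fed to the convolutional network in turn.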
As described in step S4, each feature value and the environmental parameters are input into a classification network for calculation to obtain the classification result corresponding to each picture. The classification network is a preset binary classification network or a time-series neural network; after computing on the feature values it outputs the probability that the certificate is genuine, from which the classification result (genuine certificate or fake certificate) is obtained.
As described in step S5, after the multiple feature values are computed through the classification network, multiple classification results are obtained, each corresponding to a picture taken by one camera, and the authenticity of the certificate is then determined from these results. For example, it may be set that the certificate is determined to be genuine when 90% or more of the classification results judge it genuine, and fake when fewer than 90% do. In another example, when the smart device is a mobile phone with four cameras, it may be set that the certificate is determined to be genuine when more than 75% of the classification results judge it genuine.
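The thresholded vote described in step S5 can be sketched as follows (an illustrative implementation, with the thresholds taken from the examples in the text):

```python
def decide_authenticity(results, threshold=0.9):
    """Decide genuine/fake from per-picture classification results.

    results: list of booleans (True = picture classified as genuine)
    threshold: fraction of genuine votes required to call it genuine
    """
    genuine_ratio = sum(results) / len(results)
    return "genuine" if genuine_ratio >= threshold else "fake"

# Three of four cameras vote genuine: 75% < 90%, so fake at default threshold.
print(decide_authenticity([True, True, True, False]))        # fake
# With the four-camera phone example's 75% threshold, the same votes pass.
print(decide_authenticity([True, True, True, False], 0.75))  # genuine
```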
In the method for authenticating a certificate provided by this application, multiple cameras of different types on a smart device shoot simultaneously, and the pictures, together with the shooting parameters and other data, are input into the corresponding neural networks for processing. Shooting directly through multiple cameras spaced a certain distance apart realizes multi-angle shooting and improves the ability to authenticate gratings whose patterns vary with viewing angle. This not only removes the problems of conventional anti-counterfeit shooting, namely that the shooting angle is hard to control and the operation must be repeated many times, but also makes acquisition convenient and yields highly accurate authentication results.
In one embodiment, the above step S3 includes:
Step S31: inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
Step S32: calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
In this embodiment, the neural network includes a hidden layer. When pairing information is input into the neural network, the picture in it is computed first; that is, the hidden layer computes on the picture to obtain the corresponding feature information. To reduce the influence that each camera type's own parameters have on the feature values, this feature information can then be computed together with the shooting parameters in the pairing information to obtain the picture's final feature value. For example, when a wide-angle camera and an ordinary camera shoot the same certificate, their different shooting parameters may produce one blurry picture and one sharp picture; so that the final feature value accurately reflects the features of the certificate, the feature information extracted by the hidden layer is computed again with the corresponding shooting parameters and thereby corrected, giving the final feature value.
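The correction in step S32 can be sketched as below. The patent does not fix the exact operation that combines the feature information with the shooting parameters, so plain concatenation is used here as one plausible realization, and the parameter names are invented for illustration:

```python
def correct_feature(feature_info, shooting_params):
    """Combine the hidden-layer feature vector with the shooting
    parameters to form the final feature value for one picture.

    feature_info: list of floats from the hidden layer
    shooting_params: dict of adjustable parameters for the camera
    """
    # Concatenate the parameters onto the feature vector so the
    # classifier can account for per-camera shooting differences.
    param_vec = [shooting_params["aperture"], shooting_params["exposure_time"]]
    return list(feature_info) + param_vec

feat = correct_feature([0.1, 0.2, 0.3, 0.4],
                       {"aperture": 1.8, "exposure_time": 0.01})
print(len(feat))  # 6
```

In practice the combination could equally be a learned projection; concatenation simply makes the dependence on the shooting parameters explicit.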
In one embodiment, before the above step S1, the method includes:
Step S01: receiving an instruction for turning on a light source of a specified type;
Step S02: turning on the light source of the specified type according to the instruction;
The step of acquiring the environmental parameters in step S1 includes:
Step S11: collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
It should be noted that the certificate is one printed with different inks and carrying a grating anti-counterfeiting feature, and the grating it displays may differ under light sources of different types. Multiple light sources of different types, such as visible light, ultraviolet, infrared or polarized light, can therefore be provided on the smart device, and pictures of the certificate are shot under the illumination of the different light sources. First, an instruction for turning on a light source of a specified type is received; the specified type can be any of the above examples, and the instruction can be input through user settings or input automatically when preset trigger conditions are met. The specified light source is then turned on according to the instruction, and the environmental parameters are acquired while shooting; that is, while the picture data are acquired, the light sensor collects the illuminance corresponding to the specified light source to obtain the environmental parameters.
In another embodiment, to improve accuracy, the various light sources can be turned on in turn, each light source being shot once by each camera, finally obtaining classification results under the various light sources, from which the authenticity of the certificate is then determined.
In one embodiment, the above step S4 includes:
Step S41: when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture;
Step S42: when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
The classification network can be a first network of the binary classification type or a second network of the time-series type. The first network of the binary classification type is a binary classification network used to compute on the feature values and output the probability that the certificate is genuine. The second network of the time-series type is used to compute on time-series data; for example, the multiple feature values of multiple frames obtained by continuous shooting are input, these feature values being time-series data, and it likewise outputs the probability that the certificate is genuine. Both the first network and the second network can be built and trained from existing network structures, which will not be described further here.
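The routing between the two classifier types can be sketched as follows; `first_net` and `second_net` are stand-ins for the trained binary and time-series networks (here replaced by trivial lambdas so the sketch runs):

```python
def classify_features(features_by_camera, env, first_net, second_net,
                      continuous):
    """Return one genuine-probability per camera.

    features_by_camera: dict mapping camera id -> list of feature values
                        (one value for a single shot, several for frames)
    env: environmental parameters passed alongside the features
    continuous: True when the cameras shot continuous multi-frame bursts
    """
    results = {}
    for cam, feats in features_by_camera.items():
        if continuous:
            # Multi-frame sequence: the time-series network takes the
            # whole feature sequence of one camera per call.
            results[cam] = second_net(feats, env)
        else:
            # Single shot per camera: the binary network takes one value.
            results[cam] = first_net(feats[0], env)
    return results

probs = classify_features(
    {"wide": [0.2], "normal": [0.7]}, env=1.0,
    first_net=lambda f, e: f,
    second_net=lambda fs, e: sum(fs) / len(fs),
    continuous=False)
print(probs)  # {'wide': 0.2, 'normal': 0.7}
```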
When the picture data are multiple pictures obtained by each camera shooting once simultaneously, that is, each camera shoots only once and yields one corresponding picture, each feature value can be input in turn into the first network for calculation to obtain the classification result of each picture. When the picture data are multiple frames, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, the feature values corresponding to each camera are input into the second network for calculation to obtain the classification result of the pictures corresponding to each camera. When certificates are shot indoors, alternating current causes the light source to flicker, so the pictures come out poorly and subsequent calculation is error-prone. To address this, this embodiment compensates for the light source with a multi-frame approach: each camera shoots continuously to obtain multiple frames, which are then processed according to steps S1-S4 above to obtain the multiple feature values corresponding to each camera's frames. The sampling frequency of the continuous shooting and the frequency of the local alternating current at the shooting location are coprime; for example, the AC frequency is 50 Hz and the shooting frame rate is 17 Hz. The multiple feature values are then input into the second network, for example an RNN, for counterfeit detection; each time, all the feature values corresponding to the frames shot by one camera are input, and once the feature values of all cameras' frames have been computed, multiple classification results are obtained.
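The coprimality constraint between the frame rate and the mains frequency can be checked as below. Choosing coprime rates means the flicker phase drifts across successive frames instead of repeating, which is what lets the multi-frame sequence average out the flicker:

```python
from math import gcd

def is_coprime_rate(frame_rate_hz, mains_hz):
    """True when the continuous-shooting frame rate and the local AC
    mains frequency share no common factor greater than 1."""
    return gcd(int(frame_rate_hz), int(mains_hz)) == 1

# The example from the text: 50 Hz mains, 17 Hz frame rate.
print(is_coprime_rate(17, 50))  # True: 17 and 50 are coprime
# A 25 Hz frame rate would lock onto the same flicker phase every frame.
print(is_coprime_rate(25, 50))  # False: gcd(25, 50) = 25
```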
This reduces the adverse effects of light-source flicker and improves the recognition rate and detection speed of grating counterfeit detection. The method embodies a new form of grating authentication capability, makes it feasible to use stronger anti-counterfeiting measures, and is also highly adaptable: it can perform anti-counterfeit recognition on certificates with color-changing ink and reduces the deployment cost of anti-counterfeiting requirements.
In another embodiment, continuous shooting can be combined with different light sources; that is, within the continuously shot frames, one type of light source can be used when shooting the first frame and another type when shooting the second frame, so that different frames in the continuous sequence are shot under different light sources, for example visible light for the first frame and ultraviolet for the second. With pictures obtained under different environments, different angles and different shooting conditions, the authenticity determination obtained after processing is more accurate.
In one embodiment, before step S1, the method includes:
Step S04: acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
Step S05: invoking each of the cameras according to the model and the physical parameters to complete shooting.
In this embodiment, the current device is the above smart device, for example a mobile phone, and the model can be the mobile phone's model. Since the cameras are mounted at fixed positions, the spacing and arrangement of the cameras can be known from the model. The physical parameters are non-adjustable parameters, that is, parameters inherent to the camera, such as shutter speed. Before the picture data are acquired, the cameras are invoked to shoot. Specifically, the model and the physical parameters of the current device can be acquired first; since the model reveals the camera spacing and arrangement as well as the cameras' physical parameters, the cameras are invoked according to actual needs based on the model and physical parameters, and shooting is completed through the invoked cameras so that the pictures can be acquired.
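A hypothetical model-to-camera lookup for steps S04-S05 might look like the following; the model name, layout strings and spacing figures are all invented for illustration:

```python
# Fixed per-model camera specifications: spacing/arrangement plus the
# non-adjustable physical parameters (e.g. fastest shutter speed).
CAMERA_SPECS = {
    "phone_x": {
        "layout": "vertical_array",
        "cameras": {
            "wide":   {"spacing_mm": 12, "min_shutter_s": 1 / 4000},
            "normal": {"spacing_mm": 0,  "min_shutter_s": 1 / 8000},
        },
    },
}

def cameras_for(model):
    """Return the camera identifiers to invoke for a device model."""
    spec = CAMERA_SPECS.get(model)
    if spec is None:
        raise ValueError(f"unknown device model: {model}")
    return sorted(spec["cameras"])

print(cameras_for("phone_x"))  # ['normal', 'wide']
```

The actual invocation of each camera would then go through the platform's camera API using these identifiers.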
Referring to FIG. 2, this embodiment provides an apparatus for authenticating a certificate, corresponding to the above method for authenticating a certificate, the apparatus including:
a picture acquisition unit 1, configured to acquire picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and to acquire the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
a picture pairing unit 2, configured to pair each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
a feature extraction unit 3, configured to input each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
a result calculation unit 4, configured to input each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
an authenticity determination unit 5, configured to determine the authenticity of the certificate according to each of the classification results.
In this embodiment, the above method for authenticating a certificate can be implemented on a smart device, for example a mobile phone, tablet or camera device equipped with cameras. The smart device is provided with multiple cameras of different types; for example, a mobile phone may have three cameras (a wide-angle camera, a macro camera and an ordinary camera) or two cameras (an ordinary camera and a monochrome camera). The spacing between cameras can be set according to actual needs, and different smart devices have different camera spacings; that is, smart devices of different models may differ in camera type, spacing and arrangement. Preferably, the cameras are arranged in an array with a relatively large spacing between them, giving them good counterfeit-detection capability for gratings whose patterns vary with viewing angle.
As described for the picture acquisition unit 1, the same certificate is first shot simultaneously from different angles by cameras of different types to obtain multiple pictures of the certificate. The certificate is one printed with different inks and carrying a grating anti-counterfeiting feature, for example a Hong Kong identity card. The picture data in this embodiment are these pictures, which may include a picture of the certificate taken by the wide-angle camera, a picture taken by the ordinary camera, and so on. At the same time, the shooting parameters and environmental parameters of each camera can be acquired. The shooting parameters are parameters adjustable when the camera shoots, such as aperture information and exposure time; the environmental parameters are parameters of the environment in which the certificate is located when shooting, such as illuminance, which can be acquired through the light sensor or photosensitive element on the smart device. Since the cameras differ in position and type, shooting simultaneously yields pictures of the certificate from different angles.
As described for the picture pairing unit 2 and the feature extraction unit 3, different cameras shooting the same certificate produce different photos, and each picture is closely tied to the shooting parameters of the camera that took it. To eliminate these differences, before feature extraction each piece of picture data is first paired with its corresponding shooting parameters to obtain multiple pieces of pairing information; that is, each piece of pairing information includes one piece of picture data and the shooting parameters used to take that picture. Each piece of pairing information is then input in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture. The neural network is a convolutional neural network for extracting picture features and can be implemented with existing techniques, for example the ResNet network structure, which will not be described further here.
As described for the result calculation unit 4, each feature value and the environmental parameters are input into a classification network for calculation to obtain the classification result corresponding to each picture. The classification network is a preset binary classification network or a time-series neural network; after computing on the feature values it outputs the probability that the certificate is genuine, from which the classification result (genuine certificate or fake certificate) is obtained.
As described for the authenticity determination unit 5, after the multiple feature values are computed through the classification network, multiple classification results are obtained, each corresponding to a picture taken by one camera, and the authenticity of the certificate is then determined from these results. For example, it may be set that the certificate is determined to be genuine when 90% or more of the classification results judge it genuine, and fake when fewer than 90% do. In another example, when the smart device is a mobile phone with four cameras, it may be set that the certificate is determined to be genuine when more than 75% of the classification results judge it genuine.
In the method for authenticating a certificate provided by this application, multiple cameras of different types on a smart device shoot simultaneously, and the pictures, together with the shooting parameters and other data, are input into the corresponding neural networks for processing. Shooting directly through multiple cameras spaced a certain distance apart realizes multi-angle shooting and improves the ability to authenticate gratings whose patterns vary with viewing angle. This not only removes the problems of conventional anti-counterfeit shooting, namely that the shooting angle is hard to control and the operation must be repeated many times, but also makes acquisition convenient and yields highly accurate authentication results.
In one embodiment, the above feature extraction unit 3 includes:
a feature calculation subunit, configured to input the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
an information calculation subunit, configured to calculate each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
In this embodiment, the neural network includes a hidden layer. When pairing information is input into the neural network, the picture in it is computed first; that is, the hidden layer computes on the picture to obtain the corresponding feature information. To reduce the influence that each camera type's own parameters have on the feature values, this feature information can then be computed together with the shooting parameters in the pairing information to obtain the picture's final feature value. For example, when a wide-angle camera and an ordinary camera shoot the same certificate, their different shooting parameters may produce one blurry picture and one sharp picture; so that the final feature value accurately reflects the features of the certificate, the feature information extracted by the hidden layer is computed again with the corresponding shooting parameters and thereby corrected, giving the final feature value.
In one embodiment, the above apparatus for authenticating a certificate includes:
an instruction receiving unit, configured to receive an instruction for turning on a light source of a specified type;
a light source unit, configured to turn on the light source of the specified type according to the instruction;
the picture acquisition unit 1 includes:
an illuminance collection subunit, configured to collect the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
It should be noted that the certificate is one printed with different inks and carrying a grating anti-counterfeiting feature, and the grating it displays may differ under light sources of different types. Multiple light sources of different types, such as visible light, ultraviolet, infrared or polarized light, can therefore be provided on the smart device, and pictures of the certificate are shot under the illumination of the different light sources. First, an instruction for turning on a light source of a specified type is received; the specified type can be any of the above examples, and the instruction can be input through user settings or input automatically when preset trigger conditions are met. The specified light source is then turned on according to the instruction, and the environmental parameters are acquired while shooting; that is, while the picture data are acquired, the light sensor collects the illuminance corresponding to the specified light source to obtain the environmental parameters.
In another embodiment, to improve accuracy, the various light sources can be turned on in turn, each light source being shot once by each camera, finally obtaining classification results under the various light sources, from which the authenticity of the certificate is then determined.
In one embodiment, the above result calculation unit 4 includes:
a first result subunit, configured to, when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, input each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture;
a second result subunit, configured to, when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, input the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
The classification network can be a first network of the binary classification type or a second network of the time-series type. The first network of the binary classification type is a binary classification network used to compute on the feature values and output the probability that the certificate is genuine. The second network of the time-series type is used to compute on time-series data; for example, the multiple feature values of multiple frames obtained by continuous shooting are input, these feature values being time-series data, and it likewise outputs the probability that the certificate is genuine. Both the first network and the second network can be built and trained from existing network structures, which will not be described further here.
When the picture data are multiple pictures obtained by each camera shooting once simultaneously, that is, each camera shoots only once and yields one corresponding picture, each feature value can be input in turn into the first network for calculation to obtain the classification result of each picture. When the picture data are multiple frames, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, the feature values corresponding to each camera are input into the second network for calculation to obtain the classification result of the pictures corresponding to each camera. When certificates are shot indoors, alternating current causes the light source to flicker, so the pictures come out poorly and subsequent calculation is error-prone. To address this, this embodiment compensates for the light source with a multi-frame approach: each camera shoots continuously to obtain multiple frames, which are then processed according to the above picture acquisition unit 1, picture pairing unit 2 and feature extraction unit 3 to obtain the multiple feature values corresponding to each camera's frames. The sampling frequency of the continuous shooting and the frequency of the local alternating current at the shooting location are coprime; for example, the AC frequency is 50 Hz and the shooting frame rate is 17 Hz. The multiple feature values are then input into the second network, for example an RNN, for counterfeit detection; each time, all the feature values corresponding to the frames shot by one camera are input, and once the feature values of all cameras' frames have been computed, multiple classification results are obtained.
This reduces the adverse effects of light-source flicker and improves the recognition rate and detection speed of grating counterfeit detection. The method embodies a new form of grating authentication capability, makes it feasible to use stronger anti-counterfeiting measures, and is also highly adaptable: it can perform anti-counterfeit recognition on certificates with color-changing ink and reduces the deployment cost of anti-counterfeiting requirements.
In another embodiment, continuous shooting can be combined with different light sources; that is, within the continuously shot frames, one type of light source can be used when shooting the first frame and another type when shooting the second frame, so that different frames in the continuous sequence are shot under different light sources, for example visible light for the first frame and ultraviolet for the second. With pictures obtained under different environments, different angles and different shooting conditions, the authenticity determination obtained after processing is more accurate.
In one embodiment, the apparatus for authenticating a certificate includes:
a parameter acquisition unit, configured to acquire the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
a camera invocation unit, configured to invoke each of the cameras according to the model and the physical parameters to complete shooting.
In this embodiment, the current device is the above smart device, for example a mobile phone, and the model can be the mobile phone's model. Since the cameras are mounted at fixed positions, the spacing and arrangement of the cameras can be known from the model. The physical parameters are non-adjustable parameters, that is, parameters inherent to the camera, such as shutter speed. Before the picture data are acquired, the cameras are invoked to shoot. Specifically, the model and the physical parameters of the current device can be acquired first; since the model reveals the camera spacing and arrangement as well as the cameras' physical parameters, the cameras are invoked according to actual needs based on the model and physical parameters, and shooting is completed through the invoked cameras so that the pictures can be acquired.
Referring to FIG. 3, an embodiment of the present invention further provides a computer device, which may be a server and whose internal structure may be as shown in FIG. 3. The computer device includes a processor, a memory, a network interface and a database connected through a system bus, the processor of the computer device being used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store all the data required for the above certificate authentication. The network interface of the computer device is used to communicate with external terminals through a network connection. When executed by the processor, the computer program implements a method for authenticating a certificate.
The above processor executes the steps of the above method for authenticating a certificate: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots; pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information; inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features; inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture; and determining the authenticity of the certificate according to each of the classification results.
In one embodiment, the above neural network includes a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture includes: inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information; and calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
In one embodiment, before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: receiving an instruction for turning on a light source of a specified type; and turning on the light source of the specified type according to the instruction. The step of acquiring the environmental parameters includes: collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
In one embodiment, the above classification network is a first network of the binary classification type or a second network of the time-series type, and the step of inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture includes: when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture; and when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
In one embodiment, before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters; and invoking each of the cameras according to the model and the physical parameters to complete shooting.
In one embodiment, the cameras of different types are arranged in an array when shooting.
Those skilled in the art can understand that the structure shown in FIG. 3 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied.
An embodiment of the present invention further provides a computer-readable storage medium, which may be non-volatile or volatile and on which a computer program is stored. When executed by a processor, the computer program implements a method for authenticating a certificate, specifically: acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots; pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information; inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features; inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture; and determining the authenticity of the certificate according to each of the classification results.
For the above computer-readable storage medium, the above neural network includes a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture includes: inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information; and calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
In one embodiment, before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: receiving an instruction for turning on a light source of a specified type; and turning on the light source of the specified type according to the instruction. The step of acquiring the environmental parameters includes: collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
In one embodiment, the above classification network is a first network of the binary classification type or a second network of the time-series type, and the step of inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture includes: when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture; and when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
In one embodiment, before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method includes: acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters; and invoking each of the cameras according to the model and the physical parameters to complete shooting.
In one embodiment, the cameras of different types are arranged in an array when shooting.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media provided in this application and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (20)

  1. A method for authenticating a certificate, comprising:
    acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
    pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
    inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
    inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
    determining the authenticity of the certificate according to each of the classification results.
  2. The method for authenticating a certificate according to claim 1, wherein the neural network comprises a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture comprises:
    inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
    calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  3. The method for authenticating a certificate according to claim 1, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    receiving an instruction for turning on a light source of a specified type;
    turning on the light source of the specified type according to the instruction;
    and the step of acquiring the environmental parameters comprises:
    collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
  4. The method for authenticating a certificate according to claim 1, wherein the classification network is a first network of a binary classification type or a second network of a time-series type, and the step of inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture comprises:
    when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture;
    when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by each of the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  5. The method for authenticating a certificate according to claim 1, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
    invoking each of the cameras according to the model and the physical parameters to complete shooting.
  6. The method for authenticating a certificate according to claim 1, wherein the cameras of different types are arranged in an array when shooting.
  7. An apparatus for authenticating a certificate, comprising:
    a picture acquisition unit, configured to acquire picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and to acquire the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
    a picture pairing unit, configured to pair each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
    a feature extraction unit, configured to input each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
    a result calculation unit, configured to input each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
    an authenticity determination unit, configured to determine the authenticity of the certificate according to each of the classification results.
  8. The apparatus for authenticating a certificate according to claim 7, wherein the neural network comprises a hidden layer, and the feature extraction unit comprises:
    a feature calculation subunit, configured to input the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
    an information calculation subunit, configured to calculate each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements a method for authenticating a certificate when executing the computer program;
    wherein the method for authenticating a certificate comprises:
    acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
    pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
    inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
    inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
    determining the authenticity of the certificate according to each of the classification results.
  10. The computer device according to claim 9, wherein the neural network comprises a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture comprises:
    inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
    calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  11. The computer device according to claim 9, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    receiving an instruction for turning on a light source of a specified type;
    turning on the light source of the specified type according to the instruction;
    and the step of acquiring the environmental parameters comprises:
    collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
  12. The computer device according to claim 9, wherein the classification network is a first network of a binary classification type or a second network of a time-series type, and the step of inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture comprises:
    when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture;
    when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by each of the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  13. The computer device according to claim 9, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
    invoking each of the cameras according to the model and the physical parameters to complete shooting.
  14. The computer device according to claim 9, wherein the cameras of different types are arranged in an array when shooting.
  15. A computer-readable storage medium on which a computer program is stored, wherein the computer program implements a method for authenticating a certificate when executed by a processor;
    wherein the method for authenticating a certificate comprises:
    acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types, and acquiring the shooting parameters and environmental parameters of each camera, the shooting parameters being parameters adjustable when the camera shoots, and the environmental parameters being parameters of the environment in which the certificate is located when the camera shoots;
    pairing each piece of the picture data with the shooting parameters corresponding to that picture data to obtain multiple pieces of pairing information;
    inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture, the neural network being a convolutional neural network for extracting picture features;
    inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture;
    determining the authenticity of the certificate according to each of the classification results.
  16. The computer-readable storage medium according to claim 15, wherein the neural network comprises a hidden layer, and the step of inputting each piece of the pairing information in turn into a preset neural network for calculation to obtain the feature value corresponding to each picture comprises:
    inputting the picture in each piece of the pairing information into the hidden layer for calculation to obtain the corresponding feature information;
    calculating each piece of the feature information with the shooting parameters in the corresponding pairing information to obtain the feature value corresponding to each picture.
  17. The computer-readable storage medium according to claim 15, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    receiving an instruction for turning on a light source of a specified type;
    turning on the light source of the specified type according to the instruction;
    and the step of acquiring the environmental parameters comprises:
    collecting the illuminance corresponding to the light source of the specified type to obtain the environmental parameters.
  18. The computer-readable storage medium according to claim 15, wherein the classification network is a first network of a binary classification type or a second network of a time-series type, and the step of inputting each of the feature values together with the environmental parameters into a preset classification network for calculation to obtain the classification result corresponding to each picture comprises:
    when the picture data are multiple pictures obtained by each of the cameras shooting once simultaneously, inputting each of the feature values in turn into the first network for calculation to obtain the classification result corresponding to each picture;
    when the picture data are multiple frames of pictures, corresponding to the multiple cameras, obtained by each of the cameras shooting continuously and simultaneously, inputting the feature values corresponding to each camera into the second network for calculation to obtain the classification result of the pictures corresponding to each camera.
  19. The computer-readable storage medium according to claim 15, wherein before the step of acquiring picture data of the same certificate shot simultaneously from different angles by multiple cameras of different types and acquiring the shooting parameters and environmental parameters of each camera, the method comprises:
    acquiring the model of the current device and the physical parameters of each of the cameras, the physical parameters being non-adjustable parameters;
    invoking each of the cameras according to the model and the physical parameters to complete shooting.
  20. The computer-readable storage medium according to claim 15, wherein the cameras of different types are arranged in an array when shooting.
PCT/CN2020/124885 2020-09-04 2020-10-29 Method, apparatus, computer device and storage medium for authenticating a certificate WO2021159750A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010923910.2A CN112016629B (zh) 2020-09-04 2020-09-04 Method, apparatus, computer device and storage medium for authenticating a certificate
CN202010923910.2 2020-09-04

Publications (1)

Publication Number Publication Date
WO2021159750A1 true WO2021159750A1 (zh) 2021-08-19

Family

ID=73516615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124885 WO2021159750A1 (zh) 2020-09-04 2020-10-29 鉴别证件的方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN112016629B (zh)
WO (1) WO2021159750A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516739A (zh) * 2019-08-27 2019-11-29 阿里巴巴集团控股有限公司 一种证件识别方法、装置及设备
CN111046899A (zh) * 2019-10-09 2020-04-21 京东数字科技控股有限公司 身份证真伪识别方法、装置、设备及存储介质
CN111324874A (zh) * 2020-01-21 2020-06-23 支付宝实验室(新加坡)有限公司 一种证件真伪识别方法及装置
CN112200136A (zh) * 2020-10-29 2021-01-08 腾讯科技(深圳)有限公司 证件真伪识别方法、装置、计算机可读介质及电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325933B (zh) * 2017-07-28 2022-06-21 阿里巴巴集团控股有限公司 一种翻拍图像识别方法及装置
CN111191539B (zh) * 2019-12-20 2021-01-29 江苏常熟农村商业银行股份有限公司 证件真伪验证方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN112016629A (zh) 2020-12-01
CN112016629B (zh) 2023-07-28

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20919266; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 20919266; Country of ref document: EP; Kind code of ref document: A1)