CN113111807B - Target identification method and system
- Publication number
- CN113111807B (application CN202110423614.0A)
- Authority
- CN
- China
- Prior art keywords
- color
- verification
- image
- illumination
- target
- Prior art date
- Legal status: Active (the legal status is an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
Abstract
Embodiments of this specification disclose a target identification method and system. The target identification method includes: acquiring a plurality of target images, wherein the capture times of the plurality of target images correspond to the illumination times of a plurality of illuminations in an illumination sequence directed at a target object, the plurality of illuminations having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a target identification method and system.
Background
Target identification is a technique for performing biometric recognition on a target object captured by an image acquisition device; for example, face recognition technology, which takes the human face as the target, is widely applied in scenarios such as permission verification and identity verification. To secure target recognition, it is necessary to determine the authenticity of the target image.
It is therefore desirable to provide a method and system for target recognition that can determine the authenticity of a target image.
Disclosure of Invention
One embodiment of the present specification provides a target recognition method, including: acquiring a plurality of target images, wherein the capture times of the plurality of target images correspond to the illumination times of a plurality of illuminations in an illumination sequence directed at a target object, the plurality of illuminations having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One embodiment of the present specification provides a target recognition system, including: an acquisition module configured to acquire a plurality of target images whose capture times correspond to the illumination times of a plurality of illuminations in an illumination sequence directed at a target object, the plurality of illuminations having a plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color; and a verification module configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One of the embodiments of the present specification provides an object recognition apparatus including a processor for performing the object recognition method disclosed in the present specification.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions that, when executed by a computer, cause the computer to perform the target recognition method disclosed in the present specification.
Drawings
The present specification will be further elucidated by way of example embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals represent like structures, wherein:
FIG. 1 is a schematic illustration of an application scenario of an object recognition system according to some embodiments of the present description;
FIG. 2 is an exemplary flow chart of a method of object identification according to some embodiments of the present description;
FIG. 3 is a schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flow chart for determining the authenticity of a plurality of target images based on a sequence of illumination and the plurality of target images, according to some embodiments of the present disclosure;
FIG. 5 is a schematic diagram of the structure of a color verification model shown in accordance with some embodiments of the present description;
FIG. 6 is another exemplary flow chart for determining the authenticity of a plurality of target images based on a sequence of illumination and the plurality of target images, according to some embodiments of the present disclosure.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Target identification is a technique for performing biometric identification on a target object captured by an image acquisition device. In some embodiments, the target object may be a human face, a fingerprint, a palmprint, a pupil, and the like. In some embodiments, target identification may be applied to permission verification, such as access permission verification and account payment permission verification. In some embodiments, target recognition may also be used for identity verification, such as employee attendance verification and identity verification at registration. For example only, target identification may match the target image acquired in real time by the image acquisition device against pre-acquired biometric features to verify the identity of the target.
However, the image acquisition device may be attacked or hijacked, and an attacker may upload false target images for verification. For example, after attacking or hijacking the image acquisition device, attacker A may directly upload a face image of user B. The target recognition system then performs face recognition by matching the uploaded face image of user B against the pre-acquired facial biometric features of user B, and verification passes as if user B were present.
Therefore, in order to ensure the safety of the target recognition, it is necessary to determine the authenticity of the target image, that is, to determine that the target image is acquired by the image acquisition device in real time during the target recognition process.
Fig. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present disclosure. As shown in fig. 1, the target recognition system 100 may include a processing device 110, a network 120, a terminal 130, and a storage device 140.
The processing device 110 may be used to process data and/or information from at least one component of the target recognition system 100 and/or an external data source (e.g., a cloud data center). For example, the processing device 110 may acquire a plurality of target images, determine the authenticity of the plurality of target images, and the like. During processing, the processing device 110 may obtain data (e.g., instructions) from other components of the object recognition system 100 (e.g., the storage device 140 and/or the terminal 130) and/or send the processed data to the other components for storage or display, either directly or via the network 120.
In some embodiments, the processing device 110 may be a single server or a group of servers. The server farm may be centralized or distributed (e.g., the processing device 110 may be a distributed system). In some embodiments, the processing device 110 may be local or remote. In some embodiments, the processing device 110 may be implemented on a cloud platform or provided in a virtual manner. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof.
The network 120 may connect components of the system and/or connect the system with external components. The network 120 enables communication between components of the object recognition system 100, and between the object recognition system 100 and external components, facilitating the exchange of data and/or information. In some embodiments, network 120 may be any one or more of a wired network or a wireless network. For example, the network 120 may include a cable network, a fiber optic network, a telecommunications network, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a ZigBee network, a Near Field Communication (NFC), an intra-device bus, an intra-device line, a cable connection, and the like, or any combination thereof. In some embodiments, the network connection between the portions of the object recognition system 100 may be in one of the manners described above, or in a variety of manners. In some embodiments, network 120 may be a point-to-point, shared, centralized, etc. variety of topologies or combinations of topologies. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, …, through which one or more components of the target recognition system 100 may connect to the network 120 to exchange data and/or information.
Terminal 130 refers to one or more terminal devices or software used by a user. In some embodiments, the terminal 130 may include an image capturing device 131 (e.g., a video camera, a still camera), and the image capturing device 131 may capture a target object and obtain a plurality of target images. In some embodiments, the image capturing device 131 may sequentially emit light of a plurality of colors in a light sequence to illuminate the target object by the terminal 130 (e.g., a screen of the terminal 130 and/or other light emitting elements) when the target object is photographed. In some embodiments, the terminal 130 may communicate with the processing device 110 over the network 120 and transmit the captured plurality of target images to the processing device 110. In some embodiments, the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other input and/or output enabled devices, etc., or any combination thereof. The above examples are only intended to illustrate the broad nature of the terminal 130 and not to limit its scope.
The storage device 140 may be used to store data (e.g., a sequence of illuminations, a plurality of target images, etc.) and/or instructions. Storage device 140 may include one or more storage components, each of which may be a separate device or may be part of another device. In some embodiments, the storage device 140 may include Random Access Memory (RAM), read Only Memory (ROM), mass storage, removable memory, volatile read-write memory, and the like, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state disks, and the like. In some embodiments, storage device 140 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof. In some embodiments, the storage device 140 may be integrated or included in one or more other components of the target recognition system 100 (e.g., the processing device 110, the terminal 130, or other possible components).
In some embodiments, the target recognition system 100 may include an acquisition module, a verification module, and a training module.
The acquisition module may be configured to acquire a plurality of target images whose capture times correspond to the illumination times of a plurality of illuminations in an illumination sequence directed at a target object, the plurality of illuminations having a plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color.
The verification module may be configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images. In some embodiments, the plurality of target images includes at least one verification image and at least one reference image, each of the at least one verification image corresponding to one of the at least one verification color and each of the at least one reference image corresponding to one of the at least one reference color. For each of the at least one verification image, the verification module may determine the color of the illumination when the verification image was captured based on the at least one reference image and the verification image, and determine the authenticity of the plurality of target images based on the illumination sequence and the colors of the illumination when the at least one verification image was captured.
In some embodiments, the verification module may further extract verification color features of the at least one verification image and reference color features of the at least one reference image; generating, for each of the at least one verification image, a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature; and determining authenticity of the plurality of target images based on the target color features and the verification color features for each of the at least one verification image.
In some embodiments, the verification module may process the at least one reference image and the verification image with a color verification model to determine the color of the illumination when the verification image was captured. In some embodiments, the color verification model is a machine learning model with preset parameters. The preset parameters are the model parameters learned during training of the machine learning model; taking a neural network as an example, the model parameters include weights, biases, etc. In some embodiments, the color verification model includes a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer. The reference color feature extraction layer processes the at least one reference image to determine its reference color features. The verification color feature extraction layer processes the verification image to determine its verification color features. The color classification layer processes the reference color features of the at least one reference image and the verification color features of the verification image to determine the color of the illumination when the verification image was captured.
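For illustration only, a minimal sketch of such a three-part model in PyTorch follows; the patent does not publish an architecture, so the layer shapes, module names, and pooling strategy here are all assumptions:

```python
import torch
import torch.nn as nn

class ColorVerificationModel(nn.Module):
    """Illustrative sketch: reference/verification feature extractors plus a color classifier."""
    def __init__(self, num_colors: int, feat_dim: int = 64):
        super().__init__()
        # Small CNN backbones; the real extraction layers are not specified in the patent.
        self.ref_extractor = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.ver_extractor = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        # Color classification layer over concatenated reference + verification features.
        self.classifier = nn.Linear(feat_dim * 2, num_colors)

    def forward(self, ref_imgs: torch.Tensor, ver_img: torch.Tensor) -> torch.Tensor:
        # ref_imgs: (N_ref, 3, H, W); pool the reference features into one vector.
        ref_feat = self.ref_extractor(ref_imgs).mean(dim=0)
        ver_feat = self.ver_extractor(ver_img.unsqueeze(0)).squeeze(0)
        return self.classifier(torch.cat([ref_feat, ver_feat]))  # logits over colors
```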
In some embodiments, the preset parameters of the color verification model are obtained by an end-to-end training approach. The training module may be configured to obtain a plurality of training samples, each of the plurality of training samples including at least one sample reference image, at least one sample verification image, and a sample label representing a color of illumination when each of the at least one sample verification image is captured, the at least one reference color being the same as the color of illumination when the at least one sample reference image is captured. The training module may further train an initial color verification model based on the plurality of training samples, determining the preset parameters of the color verification model. In some embodiments, the training module may be omitted.
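A minimal end-to-end training step consistent with this description might look as follows; this is a sketch only, the loss and optimizer choices are assumptions, and ColorVerificationModel refers to the illustrative class above:

```python
import torch
import torch.nn.functional as F

def train_step(model, ref_imgs, ver_img, label, optimizer):
    """One end-to-end update on a single training sample.
    ref_imgs: (N_ref, 3, H, W) sample reference images; ver_img: (3, H, W) sample
    verification image; label: scalar long tensor, the illumination color at capture."""
    optimizer.zero_grad()
    logits = model(ref_imgs, ver_img)                      # unnormalized color scores
    loss = F.cross_entropy(logits.unsqueeze(0), label.view(1))
    loss.backward()                                        # gradients flow through all layers
    optimizer.step()
    return loss.item()
```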
For more detailed description of the acquisition module, the verification module, and the training module, reference may be made to fig. 2-6, which are not repeated here.
It should be noted that the above description of the object recognition system and its modules is for convenience of description only and is not intended to limit the present description to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily or a subsystem may be constructed in connection with other modules without departing from such principles. In some embodiments, the acquisition module, the verification module, and the training module disclosed in fig. 1 may be different modules in one system, or may be one module to implement the functions of two or more modules described above. For example, each module may share one memory module, or each module may have a respective memory module. Such variations are within the scope of the present description.
FIG. 2 is an exemplary flow chart of a method of object recognition shown in accordance with some embodiments of the present description. As shown in fig. 2, the process 200 includes the steps of:
Step 210: acquire a plurality of target images. The capture times of the plurality of target images correspond to the illumination times of a plurality of illuminations in the illumination sequence that the terminal directs at the target object.
In some embodiments, step 210 may be performed by the acquisition module.
The target object is an object on which target recognition needs to be performed. For example, the target object may be a specific body part of the user, such as a face, fingerprint, palmprint, or pupil. In some embodiments, the target object is the face of a user that requires authentication and/or authorization. For example, in a ride-hailing scenario, the platform needs to verify whether the driver accepting an order is a registered driver that the platform has vetted; the target object is the driver's face. As another example, in a face-payment scenario, where the payment system needs to verify the payment permission of the payer, the target object is the payer's face.
For target recognition of the target object, the terminal is instructed to emit the illumination sequence. The illumination sequence includes a plurality of illuminations for illuminating the target object. The colors of different lights in the lighting sequence can be the same or different. In some embodiments, the plurality of illuminations comprises at least two illuminations of different colors, i.e., the plurality of illuminations has a plurality of colors.
In some embodiments, the plurality of colors includes at least one reference color and at least one verification color. The verification color is a color among the plurality of colors that is directly used for verifying image authenticity; the reference color is a color among the plurality of colors used to assist in determining the authenticity of the target image. In some embodiments, each of the at least one verification color is determined based on at least a portion of the at least one reference color. For more details on the reference and verification colors, see fig. 3 and its related description, which are not repeated here.
The illumination sequence includes information of each of the plurality of illuminations, for example, color information and illumination time. The color information of the plurality of illuminations may be represented in the same or different ways. For example, it may be represented by color category: the colors of the plurality of illuminations in the illumination sequence may be represented as red, yellow, green, violet, cyan, blue, red. As another example, it may be represented by color parameters: the colors may be represented as RGB(255, 0, 0), RGB(255, 255, 0), RGB(0, 255, 0), RGB(255, 0, 255), RGB(0, 255, 255). In some embodiments, the illumination sequence may also be referred to as a color sequence containing the color information of the plurality of illuminations.
The illumination times of the plurality of illuminations in the illumination sequence may include the planned start time, end time, duration, etc., or any combination thereof, of each illumination directed at the target object. For example, the start time of the red illumination of the target object may be 14:00 and the start time of the green illumination may be 14:02. As another example, the red and green illuminations may each last 0.1 seconds. The durations of different illuminations may be the same or different. The illumination time may also be represented in other ways, which are not detailed here.
In some embodiments, the terminal may sequentially emit the plurality of illuminations in a particular order. In some embodiments, the terminal may emit illumination through a light emitting element. The light emitting element may be built into the terminal, for example, a screen or an LED lamp, or may be external, such as an external LED lamp or light emitting diode. Note that a hijacked or attacked terminal may receive the instruction to emit illumination without actually emitting it. For more details on the illumination sequence, see fig. 3 and its related description, which are not repeated here.
In some embodiments, the terminal or processing device (e.g., acquisition module) may generate the illumination sequence randomly or based on preset rules. For example, the terminal or processing device may randomly extract a plurality of colors from a color library to generate a lighting sequence. In some embodiments, the sequence of lights may be set at the terminal by a user, determined according to default settings of the target recognition system 100, or determined by the processing device through data analysis, etc. In some embodiments, the terminal or the storage device may store the illumination sequence. Correspondingly, the acquisition module can acquire the illumination sequence from the terminal or the storage device through the network.
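As a sketch of one way to generate and store such a sequence (the color palette, timings, and field names are illustrative assumptions, not values from the patent):

```python
import random
from dataclasses import dataclass

# Hypothetical palette; the patent fixes neither concrete colors nor timings.
PALETTE = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "violet": (255, 0, 255), "cyan": (0, 255, 255),
}

@dataclass
class Illumination:
    color: str        # color category, e.g. "red"
    rgb: tuple        # color parameter, e.g. (255, 0, 0)
    start: float      # planned start time offset, seconds
    duration: float   # planned irradiation duration, seconds

def random_illumination_sequence(n: int, duration: float = 0.5) -> list:
    """Randomly draw n colors from the palette and schedule them back to back."""
    names = random.choices(list(PALETTE), k=n)
    return [Illumination(c, PALETTE[c], i * duration, duration)
            for i, c in enumerate(names)]
```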
The plurality of target images are images used for target recognition. Their format may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak FlashPix (FPX), Digital Imaging and Communications in Medicine (DICOM), and the like. The plurality of target images may be two-dimensional (2D) images or three-dimensional (3D) images.
In some embodiments, the acquisition module may acquire the plurality of target images. For example, the acquisition module may send an acquisition instruction to the terminal through the network and then receive the plurality of target images returned by the terminal through the network. Alternatively, the terminal may send the plurality of target images to a storage device for storage, and the acquisition module may acquire them from the storage device. A target image may or may not contain the target object.
A target image may be captured by the image acquisition device of the terminal or determined based on data (e.g., video or images) uploaded by a user. For example, during target object verification, the target recognition system 100 may send the illumination sequence to the terminal. When the terminal is not hijacked or attacked, it can emit the plurality of illuminations in order according to the illumination sequence. While the terminal emits one of the illuminations, its image acquisition device may be instructed to capture one or more images during that illumination's irradiation time. Alternatively, the image acquisition device may be instructed to record video throughout the emission of the plurality of illuminations, and the terminal or another computing device (e.g., the processing device 110) may extract from the video one or more images acquired during each illumination's irradiation time. The images acquired during the irradiation times of the illuminations can serve as the plurality of target images. In this case, the plurality of target images are real images of the target object captured while irradiated by the plurality of illuminations. It can be understood that there is a correspondence between the irradiation times of the plurality of illuminations and the capture times of the plurality of target images: if one image is acquired within the irradiation time of a single illumination, the correspondence is one-to-one; if multiple images are acquired within the irradiation time of a single illumination, the correspondence is one-to-many.
When the terminal is hijacked, the hijacker can upload images or video through the terminal device. The uploaded images or video may contain a specific body part of the target object or of another user, and/or other objects, and may be historical images or video taken by this or another terminal, or synthesized images or video. The terminal or another computing device (e.g., the processing device 110) may determine the plurality of target images based on the uploaded images or video. For example, the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded material according to the illumination sequence and/or the duration of each illumination, as shown in the sketch below. By way of example only, if the illumination sequence includes five illuminations arranged in order, the hijacker may upload five images through the terminal device, and the terminal or another computing device assigns each of the five images to the corresponding illumination according to their upload order. As another example, if each of the five illuminations lasts 0.5 seconds, the hijacker may upload a 2.5-second video; the terminal or another computing device may divide it into five segments of 0-0.5, 0.5-1, 1-1.5, 1.5-2, and 2-2.5 seconds and extract one image from each segment, the five extracted images corresponding to the five illuminations in order. In this case, the plurality of images are false images uploaded by the hijacker, not real images of the target object captured under the plurality of illuminations. In some embodiments, if an image is uploaded by a hijacker through the terminal, its upload time or its timestamp within the video may be regarded as its capture time. It can be understood that even when the terminal is hijacked, there is still a correspondence between the irradiation times of the plurality of illuminations and the capture times of the plurality of images.
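The time-bucketing described above can be sketched as follows, reusing the Illumination records from the earlier sketch; the interval logic is an assumption consistent with the five 0.5-second illuminations in the example:

```python
def frames_per_illumination(frame_times, sequence):
    """Bucket frame capture times by the illumination whose interval contains them.
    frame_times: capture timestamps in seconds from the start of the sequence;
    sequence: list of Illumination records (see the earlier sketch)."""
    buckets = {i: [] for i in range(len(sequence))}
    for t in frame_times:
        for i, ill in enumerate(sequence):
            if ill.start <= t < ill.start + ill.duration:
                buckets[i].append(t)
                break
    return buckets

# e.g. five 0.5 s illuminations and a 2.5 s video sampled every 0.25 s:
# frames_per_illumination([0.0, 0.25, 0.5, 0.75, 1.0], random_illumination_sequence(5))
```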
As previously described, the plurality of colors corresponding to the plurality of illuminations in the sequence of illuminations includes at least one reference color and at least one verification color. In some embodiments, each of the at least one verification color is determined based on at least a portion of the at least one reference color. The plurality of target images includes at least one reference image and at least one verification image, each of the at least one reference image corresponding to one of the at least one reference color, each of the at least one verification image corresponding to one of the at least one verification color.
For each of the plurality of images, the acquisition module may take the color of the illumination whose irradiation time corresponds to the image's capture time as the color corresponding to that image. Specifically, if the irradiation time of an illumination corresponds to the capture times of one or more images, the color of that illumination is taken as the color corresponding to those images. It can be appreciated that when the terminal is not hijacked or attacked, the colors corresponding to the plurality of images should be the same as the colors of the plurality of illuminations in the illumination sequence. For example, if the colors of the illumination sequence are "red, yellow, blue, green, purple, red", then when the terminal is not hijacked or attacked, the colors corresponding to the images acquired by the terminal should also be "red, yellow, blue, green, purple, red". When the terminal is hijacked or attacked, the colors corresponding to the images may differ from the colors of the illuminations in the sequence.
Step 220, determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images. In some embodiments, step 220 may be performed by a verification module.
The authenticity of the plurality of target images reflects whether they are images of the target object captured under the illuminations of the plurality of colors. For example, when the terminal is not hijacked or attacked, its light emitting element emits the illuminations of the plurality of colors while its image acquisition device records or photographs the target object to obtain the target images; in that case, the target images are authentic. Conversely, when the terminal is hijacked or attacked, the target images are obtained from images or video uploaded by the attacker and are not authentic.
The authenticity of the target image may be used to determine whether the image capturing device of the terminal is hijacked by an attacker. For example, if at least one target image in the plurality of target images does not have authenticity, the image acquisition device is hijacked. For another example, if more than a preset number of target images in the plurality of target images do not have authenticity, the image acquisition device is hijacked.
In some embodiments, for each of the at least one verification image, a verification module may determine a color of illumination when the verification image is captured based on the at least one reference image and the verification image. The verification module may further determine the authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image was captured. For a detailed description of determining the color of illumination when the verification image is captured, and determining the authenticity of the plurality of target images based on the sequence of illumination and the color of illumination when the verification image is captured, see fig. 4 and its associated description.
In some embodiments, the verification module may further extract verification color features of the at least one verification image and reference color features of the at least one reference image. For each of the at least one verification image, a verification module may generate a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature. Based on the target color feature and the verification color feature for each of the at least one verification image, a verification module may determine authenticity of the plurality of target images. For a detailed description of generating the target color features and determining the authenticity of the plurality of target images based on the target color features and the verification color features, reference may be made to fig. 6 and its associated description.
Fig. 3 is a schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description.
In some embodiments, the plurality of colors of illumination in the illumination sequence may comprise at least one reference color and at least one verification color. The verification color is a color among the plurality of colors that is directly used for verifying image authenticity. The reference color is a color among the plurality of colors used to assist in determining the authenticity of the target image. For example, a target image corresponding to a reference color (also referred to as a reference image) may be used to determine the color of the illumination when a target image corresponding to a verification color (also referred to as a verification image) was captured. Further, the verification module may determine the authenticity of the plurality of target images based on the colors of the illumination when the verification images were captured. As shown in fig. 3, illumination sequence e includes illuminations of the reference colors "red, green, blue" and illuminations of the verification colors "yellow, violet … cyan"; illumination sequence f contains illuminations of the reference colors "red, white … blue" and illuminations of the verification colors "red … green".
In some embodiments, there are multiple verification colors. The plurality of verification colors may be identical. For example, the verification color may be red, red. Or the multiple verification colors may be completely different. For example, the verification color may be red, yellow, blue, green, violet. Or the verification colors may also be partially identical. For example, the verification color may be yellow, green, purple, yellow, red. Similar to the verification color, in some embodiments, there are multiple reference colors, which may be identical, completely different, or partially identical. In some embodiments, the verification color may contain only one color, such as green.
In some embodiments, the at least one reference color and the at least one verification color may be determined according to a default setting of the target recognition system 100, manually set by a user, or determined by a verification module. For example, the verification module may randomly select the reference color and the verification color. For example only, the verification module may randomly select a partial color from the plurality of colors as the at least one reference color and the remaining colors as the at least one verification color. In some embodiments, the verification module may determine the at least one reference color and the at least one verification color based on preset rules. The preset rule may be a rule regarding a relationship between verification colors, a relationship between reference colors, and/or a relationship between verification colors and reference colors, etc. For example, the preset rule may be that the verification color is generated based on a reference color fusion, or the like.
In some embodiments, each of the at least one verification color may be determined based on at least a portion of the at least one reference color. For example, the verification color may be obtained by fusing at least a portion of the at least one reference color. In some embodiments, the at least one reference color may comprise the primary colors of a color space; for example, the three primaries of the RGB space, namely red, green, and blue. As shown in fig. 3, the verification colors "yellow, violet … cyan" in illumination sequence e may be determined based on the three reference colors "red, green, blue". For example, "yellow" may be obtained by fusing the reference colors red, green, and blue in a first ratio, and "violet" by fusing them in a second ratio, as sketched below.
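A sketch of such ratio-based fusion in RGB; the weights (the "first ratio" and "second ratio") are unspecified in the patent and are assumed here:

```python
def fuse_colors(reference_rgbs, weights):
    """Mix reference colors channel-wise with the given proportions.
    reference_rgbs: list of RGB tuples; weights: mixing ratio summing to 1."""
    return tuple(round(sum(w * c[i] for w, c in zip(weights, reference_rgbs)))
                 for i in range(3))

# A hypothetical "first ratio": equal parts red and green give a yellow-ish hue.
# fuse_colors([(255, 0, 0), (0, 255, 0), (0, 0, 255)], [0.5, 0.5, 0.0]) -> (128, 128, 0)
```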
In some embodiments, one or more of the at least one reference color is the same as one or more of the at least one verification color. The at least one reference color and the at least one verification color may be all or partially identical. For example, one of the at least one verification color may be the same as a particular one of the at least one reference color. It will be appreciated that the verification color may also be determined based on at least one reference color, i.e. the specific reference color may be taken as the verification color. As shown in fig. 3, in the illumination sequence f, the plurality of reference colors "red, white … blue" and the plurality of verification colors "red..green" each contain red.
In some embodiments, there may be other relationships between the at least one reference color and the at least one verification color, which are not limited here. For example, the color families of the at least one reference color and the at least one verification color may be the same or different; illustratively, at least one reference color may belong to a warm color family (e.g., red, yellow) while at least one verification color belongs to a cool color family (e.g., gray).
In some embodiments, in the illumination sequence, the illumination corresponding to the at least one reference color may be arranged before or after the illumination corresponding to the at least one verification color. As shown in fig. 3, in the illumination sequence e, the illumination of a plurality of reference colors "red light, green light, blue light" is arranged in front of the illumination of a plurality of verification colors "yellow light, violet light … cyan light". In the illumination sequence f, illumination of a plurality of reference colors "red light, white light … blue light" is arranged behind a plurality of verification colors "red light..green light". In some embodiments, the illumination corresponding to the at least one reference color may also be arranged at intervals with the illumination corresponding to the at least one verification color, which is not limited herein.
FIG. 4 is an exemplary flow chart for determining the authenticity of a plurality of target images based on the illumination sequence and the plurality of target images, according to some embodiments of the present disclosure. In some embodiments, process 400 may be performed by the verification module. As shown in fig. 4, process 400 may include the following steps:
step 410, for each of the at least one verification image, determining a color of illumination when the verification image is captured based on the at least one reference image and the verification image.
In some embodiments, the verification module may determine the color of the illumination when the verification image is captured based on the verification color features of the verification image and the reference color features of the at least one reference image.
The reference color feature refers to a color feature of the reference image. Verifying the color features refers to verifying the color features of the image.
The color features of an image are information related to the color of the image, including the color of the illumination when the image was captured, the color of the subject in the image, the color of the background in the image, and so on. In some embodiments, the color features may include deep and/or composite features extracted by a neural network.
Color characteristics can be represented in a variety of ways. In some embodiments, the color features may be represented based on color values of pixels in the image in a color space. The color space is a mathematical model that describes colors using a set of values, each of which may represent a color value of a color feature on each color channel of the color space. In some embodiments, the color space may be represented as a vector space, with each dimension of the vector space representing one color channel of the color space. Color features may be represented by vectors in the vector space. In some embodiments, the color space may include, but is not limited to, an RGB color space, an lαβ color space, an LMS color space, an HSV color space, a YCrCb color space, an HSL color space, and the like. It will be appreciated that different color spaces contain different color channels. For example, the RGB color space includes a red channel R, a green channel G, and a blue channel B, and the color characteristic may be represented by color values of each pixel point in the image on the red channel R, the green channel G, and the blue channel B, respectively.
In some embodiments, the color features may be represented by other means (e.g., color histogram, color moment, color set, etc.). For example, histogram statistics is performed on color values of each pixel point in the image in a color space, and a histogram representing color features is generated. For another example, a specific operation (such as a mean value, a variance value, etc.) is performed on the color values of each pixel point in the image in the color space, and the result of the specific operation represents the color feature of the image.
In some embodiments, the verification module may extract color features of the plurality of target images by a color feature extraction algorithm and/or a color verification model (or portions thereof). The color feature extraction algorithm includes: color histogram, color moment, color set, etc. For example, the verification module may calculate a gradient histogram based on color values of each pixel point in the image in each color channel of the color space, thereby obtaining a color histogram. For another example, the verification module may divide the image into a plurality of regions, and determine a color set of the image using a set of binary indices of the plurality of regions established by color values of respective pixels in the image at each color channel of the color space.
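For illustration, minimal NumPy versions of two of the named extractors, color moments and a per-channel color histogram, might look like this (the bin count and moment order are assumptions):

```python
import numpy as np

def color_moments(img: np.ndarray) -> np.ndarray:
    """First two color moments: per-channel mean and standard deviation.
    img: H x W x 3 array of RGB pixel values."""
    px = img.reshape(-1, 3).astype(np.float64)
    return np.concatenate([px.mean(axis=0), px.std(axis=0)])   # 6-dim feature

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized per-channel histogram concatenated into one feature vector."""
    px = img.reshape(-1, 3)
    hists = [np.histogram(px[:, c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()
```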
In some embodiments, reference color features of at least one reference image may be used to construct a reference color space. The reference color space has the at least one reference color as its color channel. Specifically, the reference color feature corresponding to each reference image may be used as a reference value of the corresponding color channel in the reference color space.
In some embodiments, the color space corresponding to the plurality of target images (also referred to as the original color space) may be the same as or different from the reference color space. For example, if the plurality of target images correspond to the RGB color space and the at least one reference color is red, green, and blue, then the original color space and the reference color space constructed from the reference colors belong to the same color space. In this context, two color spaces may be considered the same if their primary colors are the same.
As described above, the verification color may be fused based on one or more reference colors. Thus, the verification module may determine the color to which the verification color feature corresponds based on the reference color feature and/or the reference color space it constructs. In some embodiments, the verification module may map verification color features of the verification image based on the reference color space, determining a color of illumination when the verification image is captured. For example, the verification module may determine parameters of the verification color feature on each color channel based on a relationship between the verification color feature and a reference value of each color channel in the reference color space, and determine a color corresponding to the verification color feature, that is, a color of illumination when the verification image is captured, based on the parameters.
For example, the verification module may extract reference color features f_a, f_b, and f_c from the reference images a, b, and c, and use them respectively as the reference values of color channel I, color channel II, and color channel III, the three color channels of the reference color space. The verification module may extract a verification color feature f_d from the verification image d and, based on the relationship between f_d and the reference values f_a, f_b, and f_c (for example, f_d = δ1·f_a + δ2·f_b + δ3·f_c), determine the parameters δ1, δ2, and δ3 of f_d on color channels I, II, and III respectively. The verification module may then determine the color corresponding to the verification color feature, i.e., the color of the illumination when the verification image was captured, based on the parameters δ1, δ2, and δ3. In some embodiments, the correspondence between parameters and color categories may be preset, or may be learned through a model.
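One plausible reading of this mapping is an ordinary least-squares decomposition followed by a nearest-neighbor lookup over preset parameter/color pairs; the sketch below assumes color features are fixed-length vectors:

```python
import numpy as np

def illumination_parameters(ref_feats, ver_feat):
    """Solve ver_feat ≈ d1*f_a + d2*f_b + d3*f_c for (d1, d2, d3) by least squares.
    ref_feats: the three reference color features f_a, f_b, f_c as 1-D vectors;
    ver_feat: the verification color feature, a 1-D vector of the same length."""
    A = np.stack(ref_feats, axis=1)            # columns = reference channel values
    delta, *_ = np.linalg.lstsq(A, ver_feat, rcond=None)
    return delta

def nearest_color(delta, preset):
    """Map solved parameters to the preset color whose parameters are closest.
    preset: dict of color name -> preset parameter vector (assumed given)."""
    return min(preset, key=lambda name: np.linalg.norm(delta - preset[name]))
```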
In some embodiments, the reference color space may be the same color as the color channels of the original color space. For example, the original spatial color may be an RGB space and the at least one reference color may be red, green, blue. The verification module may construct a new RGB color space (i.e., reference color space) based on the reference color features of the three reference images corresponding to red, green, and blue, and determine RGB values of the verification color features of each verification image in the new RGB color space, thereby determining the color of the illumination when the verification image is photographed.
In some embodiments, the verification module may determine the color of illumination when the verification image was captured by processing the reference color features and the verification color feature with the color classification layer of the color verification model; see FIG. 5 and its related description for details, which are not repeated herein.
Step 420, determining the authenticity of the plurality of target images based on the illumination sequence and the color of the illumination when the at least one verification image was captured.
In some embodiments, for each of the at least one verification image, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence, and then determine the authenticity of the verification image based on that verification color. For example, the verification module determines the authenticity of the verification image based on a first determination of whether the verification color corresponding to the verification image is consistent with the color of illumination when it was captured: if the two colors are the same, the verification image is authentic; if they differ, it is not. As another example, the verification module determines the authenticity of the verification images based on whether the relationship among the verification colors corresponding to a plurality of verification images (e.g., whether they are the same) is consistent with the relationship among the colors of illumination when those verification images were captured.
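The two determinations just described can be sketched as follows, with colors represented as category strings; the function names are hypothetical:

```python
def first_determination(expected_color: str, detected_color: str) -> bool:
    """Authentic only if the verification color assigned by the illumination
    sequence matches the detected illumination color at capture time."""
    return expected_color == detected_color

def relationship_determination(expected_colors, detected_colors) -> bool:
    """Compare pairwise same/different relationships between the expected
    verification colors and the detected illumination colors."""
    n = len(expected_colors)
    return all(
        (expected_colors[i] == expected_colors[j])
        == (detected_colors[i] == detected_colors[j])
        for i in range(n) for j in range(i + 1, n)
    )
```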
In some embodiments, the verification module may determine whether the image capture device of the terminal has been hijacked based on the authenticity of the at least one verification image. For example, if the number of verification images determined to be authentic exceeds a first threshold, the image capture device of the terminal is considered not hijacked. As another example, if the number of verification images determined not to be authentic exceeds a second threshold (e.g., 1), the image capture device of the terminal is considered hijacked.
In some embodiments, the preset thresholds used for image authenticity determination (e.g., the first threshold and the second threshold) may be related to the degree of shooting stability, i.e., how stable the image capture device of the terminal is while acquiring the target images. In some embodiments, a preset threshold is positively correlated with the shooting stability: the higher the stability, the higher the quality of the acquired target images, the more truly the color features extracted from them reflect the color of illumination at capture time, and thus the larger the preset threshold can be. In some embodiments, the shooting stability may be measured from motion parameters of the terminal (e.g., an in-vehicle terminal or a user terminal) detected by its motion sensor, such as the detected motion speed or vibration frequency. For example, the larger the motion parameter, or the larger its rate of change, the lower the shooting stability. The motion sensor may be a sensor that detects the driving condition of a vehicle, and the vehicle may be one used by the target user, i.e., the user to whom the target object belongs. For example, if the target user is a ride-hailing driver, the motion sensor may be that of the driver's terminal or of the in-vehicle terminal.
In some embodiments, the preset thresholds may also be related to the shooting distance and the rotation angle. The shooting distance is the distance between the image capture device and the target object when the target image is captured. The rotation angle is the angle between the front of the target object and the terminal screen when the target image is captured. In some embodiments, both the shooting distance and the rotation angle are negatively correlated with a preset threshold: the shorter the shooting distance, the higher the quality of the acquired target image and the more truly the extracted color features reflect the color of illumination at capture time, so the larger the preset threshold can be; likewise, the smaller the rotation angle, the higher the image quality and the larger the preset threshold. In some embodiments, the shooting distance and the rotation angle may be determined from the target image by image recognition techniques.
In some embodiments, the verification module may apply a specific operation (e.g., taking the mean or the standard deviation) to the shooting stability, shooting distance, and rotation angle of each target image, and determine the preset threshold based on the fused stability, distance, and angle that result, as sketched after the examples below.
For example, the verification module obtaining the degree of stability of the terminal when the plurality of target images are obtained includes obtaining a sub-degree of stability of the terminal when each of the plurality of target images is captured; and fusing the plurality of sub-stability degrees, and determining the stability degree.
For another example, the verification module obtaining a shooting distance between the target object and the terminal when the plurality of target images are shot includes: acquiring sub shooting distances between a target object and the terminal when each of the plurality of target images is shot; and fusing the plurality of sub shooting distances to determine the shooting distance.
For another example, the verification module obtaining a rotation angle of the target object relative to the terminal when the plurality of target images are captured includes obtaining a sub-rotation angle of the target object relative to the terminal when each of the plurality of target images are captured; and fusing the plurality of sub-rotation angles to determine the rotation angle.
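Putting the fusion of sub-values and the threshold correlations together, one illustrative sketch follows; the normalization to [0, 1], the linear weights, the fallback decision when neither count threshold is exceeded, and the scaling of the [0, 1] threshold to an image count are all assumptions:

```python
import numpy as np

def fuse(sub_values):
    """Fuse per-image sub-values (sub-stability degrees, sub shooting
    distances, or sub-rotation angles) into one value; the mean is one of
    the specific operations mentioned above."""
    return float(np.mean(sub_values))

def preset_threshold(stability, distance, angle,
                     base=0.5, w_s=0.3, w_d=0.2, w_a=0.2):
    """Monotone mapping: positively correlated with shooting stability,
    negatively correlated with shooting distance and rotation angle.
    Inputs are assumed normalized to [0, 1]; weights are hypothetical.
    The result would be scaled to an image count before use below."""
    t = base + w_s * stability - w_d * distance - w_a * angle
    return min(max(t, 0.0), 1.0)

def camera_hijacked(n_authentic, n_not_authentic,
                    first_threshold, second_threshold=1):
    """Decision rule from the embodiments above: hijacked when too many
    verification images lack authenticity; not hijacked when enough are
    authentic. Treating the ambiguous middle case as hijacked is a
    conservative assumption."""
    if n_not_authentic > second_threshold:
        return True
    return not (n_authentic > first_threshold)
```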
Because the reference images and the verification images are captured under the same external ambient light, establishing the reference color space from the reference images and determining the illumination color at capture time of a verification image based on that space makes the determination more accurate, and hence the determination of the authenticity of the target images more accurate as well. For example, when the illumination in the illumination sequence is weaker than the ambient light, the light cast on the target object may be difficult to detect; or, when the ambient light is colored, the illumination reaching the target object may be disturbed. When the terminal is not hijacked, the reference images and the verification images are captured under the same (or substantially the same) ambient light, so the reference color space constructed from the reference images incorporates the influence of the ambient light and can therefore identify the color of illumination at capture time more accurately than the original color space. Furthermore, the methods disclosed herein can avoid interference from the light-emitting elements of terminals: when the terminal is not hijacked, the reference images and the verification images are captured under the illumination of the same light-emitting element, and the reference color space can eliminate or weaken the element's influence, improving the accuracy of illumination color identification.
FIG. 5 is a schematic diagram of a color verification model according to some embodiments of the present disclosure.
In some embodiments, the verification module may process the at least one reference image and the verification image based on a color verification model to determine a color of illumination when the verification image is captured.
The color verification model may include a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer; as shown in FIG. 5, these are the reference color feature extraction layer 530, the verification color feature extraction layer 540, and the color classification layer 570. The color verification model may be used to implement step 410. Further, the verification module determines the authenticity of the verification image based on the color of illumination when it was captured and the illumination sequence.
The color feature extraction layers (e.g., the reference color feature extraction layer 530 and the verification color feature extraction layer 540) extract color features of target images. In some embodiments, a color feature extraction layer may be a convolutional neural network model such as ResNet, DenseNet, MobileNet, ShuffleNet, or EfficientNet, or a recurrent neural network model such as a long short-term memory (LSTM) network. In some embodiments, the reference color feature extraction layer 530 and the verification color feature extraction layer 540 may be of the same or different types.
The reference color feature extraction layer 530 extracts the reference color features 550 of the at least one reference image 510. In some embodiments, the at least one reference image 510 may include a plurality of reference images, and the reference color feature 550 may be a fusion of their color features. For example, the plurality of reference images 510 may be stitched together and input into the reference color feature extraction layer 530, which outputs the reference color feature 550; the reference color feature 550 may be, for example, a feature vector formed by concatenating the color feature vectors of the reference images 510-1, 510-2, and 510-3.
The verification color feature extraction layer 540 extracts verification color features 560 of at least one verification image 520. In some embodiments, the verification module may make a color determination for each of the at least one verification image 520 separately. For example, as shown in FIG. 5, the verification module may input at least one reference image 510 into the reference color feature extraction layer 530 and a verification image 520-2 into the verification color feature extraction layer 540. The verification color feature extraction layer 540 may output verification color features 560 of the verification image 520-2. The color classification layer 570 may determine the color of the illumination when the verification image 520-2 was captured based on the reference color feature 550 and the verification color feature 560 of the verification image 520-2.
In some embodiments, the verification module may make color determinations for multiple verification images 520 simultaneously. For example, the verification module may input the at least one reference image 510 into the reference color feature extraction layer 530 and a plurality of verification images 520 (including verification images 520-1, 520-2, …, 520-n) into the verification color feature extraction layer 540. The verification color feature extraction layer 540 may output the verification color features 560 of the plurality of verification images 520 simultaneously, and the color classification layer 570 may determine the color of illumination at capture time for each of them simultaneously.
For each of the at least one verification image, the color classification layer 570 may determine a color of illumination when the verification image is captured based on the reference color features and the verification color features of the verification image. For example, the color classification layer 570 may determine a value or probability based on the reference color feature and the verification color feature of the verification image, and determine the color of illumination when the verification image is captured based on the value or probability. The corresponding value or probability of the verification image may reflect the likelihood that the color of the illumination belongs to each color when the verification image is captured. In some embodiments, the color classification layer may include, but is not limited to, a fully connected layer, a deep neural network, and the like.
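The three-layer structure of FIG. 5 could be sketched in PyTorch as below; the small CNN backbone, feature sizes, number of color categories, and the averaging used to fuse the reference branch outputs are illustrative assumptions rather than the design prescribed here:

```python
import torch
import torch.nn as nn

class ColorVerificationModel(nn.Module):
    """Sketch of FIG. 5: two color feature extraction branches and a color
    classification head producing per-color scores (values/probabilities)."""

    def __init__(self, feat_dim=128, num_colors=7):
        super().__init__()
        def backbone():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
        self.reference_extractor = backbone()  # reference color feature layer 530
        self.verify_extractor = backbone()     # verification color feature layer 540
        self.classifier = nn.Sequential(       # color classification layer 570
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_colors),
        )

    def forward(self, reference_images, verify_image):
        # Fuse the reference branch by encoding each reference image and
        # averaging; stitching the images before encoding, as described
        # above, is an equally valid alternative.
        ref_feat = torch.stack(
            [self.reference_extractor(img) for img in reference_images]).mean(0)
        ver_feat = self.verify_extractor(verify_image)
        logits = self.classifier(torch.cat([ref_feat, ver_feat], dim=-1))
        return logits  # per-color scores for the illumination color
```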
The color verification model is a machine learning model with preset parameters, which may be determined during training. For example, a training module may train an initial color verification model on a plurality of training samples to determine the preset parameters. Each training sample includes at least one sample reference image, at least one sample verification image, and a sample label representing the color of illumination when each sample verification image was captured; the at least one reference color is the same as the color of illumination when the at least one sample reference image was captured. For example, if the at least one reference color includes red, green, and blue, the at least one sample reference image includes three target images of the sample target object captured under red, green, and blue illumination, respectively.
In some embodiments, the training module may input the plurality of training samples into the initial color verification model and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer through training until the updated model meets a preset condition. The updated model may then be designated as the color verification model with preset parameters, in other words, as the trained color verification model. The preset condition may be that the loss function of the updated model falls below a threshold or converges, or that the number of training iterations reaches a threshold.
In some embodiments, the verification module may train the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer in the initial color verification model in an end-to-end training manner. The end-to-end training mode refers to inputting a training sample into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value. The initial model may include multiple sub-models or modules for performing different data processing operations, which may be considered as a whole for concurrent updating during training. For example, in the training of the initial color verification model, at least one sample reference image may be input to the initial reference color feature extraction layer, at least one sample verification image may be input to the initial verification color feature extraction layer, a loss function may be established based on the output result of the initial color classification layer and the sample label, and parameters of each initial layer in the initial color verification model may be updated simultaneously based on the loss function.
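A minimal end-to-end training loop consistent with this description might look as follows; the optimizer, loss function, and data layout are assumptions:

```python
import torch
import torch.nn as nn

def train_color_verification(model, loader, epochs=10, lr=1e-4):
    """End-to-end training sketch: both extraction layers and the
    classification layer are updated from a single loss, as described
    above. `loader` is assumed to yield (reference_images, verify_image,
    color_label) tuples built from the sample reference/verification
    images and the sample labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for reference_images, verify_image, color_label in loader:
            logits = model(reference_images, verify_image)
            loss = criterion(logits, color_label)
            optimizer.zero_grad()
            loss.backward()   # gradients flow through all three layers,
            optimizer.step()  # so their parameters are updated together
    return model
```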
In some embodiments, the color verification model may be pre-trained by the processing device or a third party and stored in the storage device, from which the processing device may directly invoke the color verification model.
Some embodiments of the present disclosure determine the authenticity of verification images through a color verification model, which can improve the efficiency of target image authenticity verification. In addition, the color verification model can improve the reliability of the verification by reducing or removing the influence of performance differences among terminal devices. It will be appreciated that the hardware of different terminals differs to some extent; for example, lights of the same color emitted by the screens of terminals from different manufacturers may differ in saturation, brightness, and other parameters, resulting in a large intra-class gap for the same color. Because the training samples of the initial color verification model may be captured by terminals of different performance, the model learns these differences during training, so the trained color verification model can take terminal performance differences into account when judging the color of the target object and accurately determine the color of the target image. Moreover, when the terminal is not hijacked, the reference images and the verification images are both captured under the same external ambient light. In some embodiments, extracting reference color features from the reference images with the color verification model, establishing a reference color space, and determining the authenticity of the plurality of target images based on that space can eliminate or weaken the influence of ambient light.
FIG. 6 is another exemplary flowchart for determining the authenticity of a plurality of target images based on an illumination sequence and the plurality of target images, according to some embodiments of the present disclosure. In some embodiments, process 600 may be performed by the verification module. As shown in FIG. 6, process 600 includes the following steps:
Step 610, extracting verification color features of the at least one verification image and reference color features of the at least one reference image.
For a detailed description of extracting the verification color features and the reference color features, see step 410 and its related description.
Step 620, for each of the at least one verification image, generating a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature.
The target color feature is the representation, in the reference color space, of the verification color corresponding to the verification image. In some embodiments, for each of the at least one verification image, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence and generate the target color feature from that verification color and the reference color features, for example by fusing the color features of the verification color with the reference color features, as sketched below.
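For instance, if each verification color is defined by fusion weights over the reference colors, the target color feature can be generated as the corresponding weighted combination of the reference color features; weighted summation is only one possible fusion, and the weight table is a hypothetical input:

```python
import numpy as np

def target_color_feature(verify_color, ref_features, color_weights):
    """Generate the target color feature of the verification color assigned
    by the illumination sequence: combine the reference color features with
    the fusion weights that define that verification color, e.g., cyan as
    equal parts of the green and blue reference colors."""
    w = np.asarray(color_weights[verify_color])           # one weight per reference color
    F = np.stack([np.asarray(f) for f in ref_features])   # shape (n_ref, d)
    return w @ F                                          # (d,) target color feature
```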
Step 630, determining the authenticity of the plurality of target images based on the target color features and the verification color features of each of the at least one verification image.
In some embodiments, for each of the at least one verification image, the verification module may determine the authenticity of the verification image based on the similarity between its target color feature and its verification color feature. The similarity may be computed as a vector similarity, for example via Euclidean distance or Manhattan distance. Illustratively, when the similarity between the target color feature and the verification color feature is greater than a third threshold, the verification image is authentic; otherwise, it is not.
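A sketch of the similarity check; the embodiments above mention Euclidean and Manhattan distance, so the distance-to-similarity mapping and the threshold value below are illustrative placeholders:

```python
import numpy as np

def is_authentic(target_feat, verify_feat, third_threshold=0.9):
    """Determine authenticity from the similarity between the target color
    feature and the verification color feature. Similarity is derived here
    from the Euclidean distance; the mapping 1 / (1 + d) and the threshold
    value are assumptions."""
    d = float(np.linalg.norm(np.asarray(target_feat) - np.asarray(verify_feat)))
    similarity = 1.0 / (1.0 + d)
    return similarity > third_threshold
```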
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of this specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and are intended to fall within the spirit and scope of its exemplary embodiments.
Meanwhile, this specification uses specific words to describe its embodiments. References to "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is associated with at least one embodiment of this specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Furthermore, the order of processing elements and sequences, the use of alphanumeric designations, or other naming in this specification is not intended to limit the order of the processes and methods described herein unless explicitly recited in the claims. While the foregoing disclosure discusses, through various examples, certain embodiments currently considered useful, it should be understood that such detail is for illustration only and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements falling within the spirit and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, to simplify the presentation of this disclosure and thereby aid the understanding of one or more embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the subject matter of this specification requires more features than are recited in the claims; indeed, claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in describing the embodiments are, in some examples, qualified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification is hereby incorporated by reference in its entirety, excluding any prosecution history documents that are inconsistent with or conflict with the contents of this specification, and any documents (now or later associated with this specification) that would limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in material accompanying this specification is inconsistent with or conflicts with what is stated in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.
Claims (9)
1. A method of target identification, the method comprising:
acquiring a plurality of target images, wherein the shooting time of the plurality of target images has a corresponding relation with the irradiation time of a plurality of illuminations in an illumination sequence irradiated to a target object, the plurality of illuminations have a plurality of colors, the plurality of colors comprise at least one reference color and at least one verification color, and each of the at least one verification color is obtained by fusion based on at least one part of the at least one reference color;
the plurality of target images includes at least one verification image and at least one reference image, each of the at least one verification image corresponding to one of the at least one verification color, each of the at least one reference image corresponding to one of the at least one reference color; and
Determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images, comprising:
for each of the at least one verification image, determining a color of illumination when the verification image is captured based on the at least one reference image and the verification image;
Determining the authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image was captured.
2. The method of claim 1, the determining a color of illumination when the verification image is captured based on the at least one reference image and the verification image comprising:
And processing the at least one reference image and the verification image based on a color verification model, and determining the illumination color when the verification image is shot, wherein the color verification model is a machine learning model with preset parameters.
3. The method of claim 2, wherein the color verification model comprises a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer,
The reference color feature extraction layer processes the at least one reference image and determines reference color features of the at least one reference image;
the verification color feature extraction layer processes the verification image and determines verification color features of the verification image;
The color classification layer processes the reference color features of the at least one reference image and the verification color features of the verification image to determine the color of illumination when the verification image is captured.
4. A method according to claim 3, wherein the preset parameters of the color verification model are obtained by means of end-to-end training.
5. The method of claim 2, the preset parameters of the color verification model being generated by a training process comprising:
Obtaining a plurality of training samples, each of the plurality of training samples comprising at least one sample reference image, at least one sample verification image, and a sample label, the sample label representing a color of illumination when each of the at least one sample verification image is captured, the at least one reference color being the same as the color of illumination when the at least one sample reference image is captured; and
Training an initial color verification model based on the plurality of training samples, determining the preset parameters of the color verification model.
6. The method of claim 1, the determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprising:
Extracting verification color features of the at least one verification image and reference color features of the at least one reference image;
Generating, for each of the at least one verification image, a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature; and
Determining authenticity of the plurality of target images based on the target color features and the verification color features for each of the at least one verification image.
7. A target recognition system, the system comprising:
An acquisition module, configured to acquire a plurality of target images, where capturing times of the plurality of target images have a correspondence with illumination times of a plurality of illuminations in an illumination sequence that irradiates a target object, the plurality of illuminations having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being obtained by fusing at least a part of the at least one reference color;
the plurality of target images includes at least one verification image and at least one reference image, each of the at least one verification image corresponding to one of the at least one verification color, each of the at least one reference image corresponding to one of the at least one reference color; and
A verification module for determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images, comprising:
for each of the at least one verification image, determining a color of illumination when the verification image is captured based on the at least one reference image and the verification image;
Determining the authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image was captured.
8. A target identification device, comprising at least one processor and at least one memory;
The at least one memory is configured to store computer instructions;
The at least one processor is configured to execute at least some of the computer instructions to implement the method of any one of claims 1 to 6.
9. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110423614.0A CN113111807B (en) | 2021-04-20 | 2021-04-20 | Target identification method and system |
PCT/CN2022/076352 WO2022222585A1 (en) | 2021-04-20 | 2022-02-15 | Target identification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110423614.0A CN113111807B (en) | 2021-04-20 | 2021-04-20 | Target identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113111807A CN113111807A (en) | 2021-07-13 |
CN113111807B true CN113111807B (en) | 2024-06-07 |
Family
ID=76718856
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110423614.0A Active CN113111807B (en) | 2021-04-20 | 2021-04-20 | Target identification method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113111807B (en) |
WO (1) | WO2022222585A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022222575A1 (en) * | 2021-04-20 | 2022-10-27 | 北京嘀嘀无限科技发展有限公司 | Method and system for target recognition |
CN113111807B (en) * | 2021-04-20 | 2024-06-07 | 北京嘀嘀无限科技发展有限公司 | Target identification method and system |
CN113673643A (en) * | 2021-08-19 | 2021-11-19 | 江苏农牧人电子商务股份有限公司 | Method and system for supervising agricultural product supply |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529512A (en) * | 2016-12-15 | 2017-03-22 | 北京旷视科技有限公司 | Living body face verification method and device |
CN108256588A (en) * | 2018-02-12 | 2018-07-06 | 兰州工业学院 | A kind of several picture identification feature extracting method and system |
CN109376592A (en) * | 2018-09-10 | 2019-02-22 | 阿里巴巴集团控股有限公司 | Biopsy method, device and computer readable storage medium |
WO2020078229A1 (en) * | 2018-10-15 | 2020-04-23 | 腾讯科技(深圳)有限公司 | Target object identification method and apparatus, storage medium and electronic apparatus |
CN111160374A (en) * | 2019-12-28 | 2020-05-15 | 深圳市越疆科技有限公司 | Color identification method, system and device based on machine learning |
CN111523438A (en) * | 2020-04-20 | 2020-08-11 | 支付宝实验室(新加坡)有限公司 | Living body identification method, terminal device and electronic device |
CN111597938A (en) * | 2020-05-07 | 2020-08-28 | 马上消费金融股份有限公司 | Living body detection and model training method and device |
CN111881844A (en) * | 2020-07-30 | 2020-11-03 | 北京嘀嘀无限科技发展有限公司 | Method and system for judging image authenticity |
CN112507922A (en) * | 2020-12-16 | 2021-03-16 | 平安银行股份有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN112597810A (en) * | 2020-06-01 | 2021-04-02 | 支付宝实验室(新加坡)有限公司 | Identity document authentication method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102415509B1 (en) * | 2017-11-10 | 2022-07-01 | 삼성전자주식회사 | Face verifying method and apparatus |
CN109859117A (en) * | 2018-12-30 | 2019-06-07 | 南京航空航天大学 | A kind of image color correction method directly correcting rgb value using neural network |
CN111460964A (en) * | 2020-03-27 | 2020-07-28 | 浙江广播电视集团 | Moving target detection method under low-illumination condition of radio and television transmission machine room |
CN113111807B (en) * | 2021-04-20 | 2024-06-07 | 北京嘀嘀无限科技发展有限公司 | Target identification method and system |
Non-Patent Citations (2)
Title |
---|
Moving target tracking based on threshold segmentation in different color spaces; Shen Danfeng; Shen Yaxin; Ye Guoming; Wang Qing; Mechatronics (Issue 09); full text *
Research on target recognition methods for non-overlapping multi-camera systems; Fan Caixia; Zhu Hong; Journal of Xi'an University of Technology (Issue 02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113111807A (en) | 2021-07-13 |
WO2022222585A1 (en) | 2022-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113111807B (en) | Target identification method and system | |
US11972638B2 (en) | Face living body detection method and apparatus, device, and storage medium | |
Pomari et al. | Image splicing detection through illumination inconsistencies and deep learning | |
CN110163078B (en) | Living body detection method, living body detection device and service system applying living body detection method | |
CN103383723B (en) | Method and system for spoof detection for biometric authentication | |
US20160019420A1 (en) | Multispectral eye analysis for identity authentication | |
CN113111810B (en) | Target identification method and system | |
WO2022222569A1 (en) | Target discrimation method and system | |
WO2009107237A1 (en) | Biometrics device | |
CN110956114A (en) | Face living body detection method, device, detection system and storage medium | |
US20210256244A1 (en) | Method for authentication or identification of an individual | |
CN105141842A (en) | Tamper-proof license camera system and method | |
KR102145132B1 (en) | Surrogate Interview Prevention Method Using Deep Learning | |
KR102038576B1 (en) | Method of detecting fraud of an iris recognition system | |
CN109661668A (en) | Image processing method and system for iris recognition | |
Noyes et al. | Automatic recognition systems and human computer interaction in face matching | |
CN115147936A (en) | Living body detection method, electronic device, storage medium, and program product | |
WO2022222575A1 (en) | Method and system for target recognition | |
WO2021166289A1 (en) | Data registration device, biometric authentication device, and recording medium | |
CN108171205A (en) | For identifying the method and apparatus of face | |
WO2022222957A1 (en) | Method and system for identifying target | |
Hadwiger et al. | Towards learned color representations for image splicing detection | |
Hadiprakoso | Face anti-spoofing method with blinking eye and hsv texture analysis | |
CN113111806B (en) | Method and system for target recognition | |
CN110070062A (en) | A kind of system and method for the recognition of face based on binocular active infrared |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |