CN113111807A - Target identification method and system

Info

Publication number
CN113111807A
CN113111807A (application CN202110423614.0A)
Authority
CN
China
Prior art keywords: color, verification, image, target, illumination
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202110423614.0A
Other languages: Chinese (zh)
Inventors: 张明文 (Zhang Mingwen), 张天明 (Zhang Tianming), 赵宁宁 (Zhao Ningning)
Current Assignee: Beijing Didi Infinity Technology and Development Co Ltd (the listed assignee may be inaccurate)
Original Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority application: CN202110423614.0A
Publication: CN113111807A
PCT application: PCT/CN2022/076352 (published as WO2022222585A1)

Classifications

    • G06V40/168 (Human faces): Feature extraction; Face representation
    • G06F18/22 (Pattern recognition, Analysing): Matching criteria, e.g. proximity measures
    • G06F18/24 (Pattern recognition, Analysing): Classification techniques
    • G06N3/045 (Neural networks): Combinations of networks
    • G06N3/08 (Neural networks): Learning methods
    • G06V10/267 (Image preprocessing): Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/50 (Extraction of image or video features): Operations within image blocks; histograms, e.g. histogram of oriented gradients [HoG]; summing image-intensity values; projection analysis
    • G06V10/56 (Extraction of image or video features): Features relating to colour
    • G06V40/172 (Human faces): Classification, e.g. identification
    • G06V40/70 (Biometric patterns): Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06Q20/40145 (Transaction verification): Biometric identity checks

Abstract

Embodiments of this specification disclose a target identification method and system. The target identification method comprises: acquiring a plurality of target images, wherein the shooting times of the plurality of target images correspond to the irradiation times of a plurality of illuminations, in an illumination sequence, directed at a target object, the plurality of illuminations having a plurality of colors, the plurality of colors comprising at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a part of the at least one reference color; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.

Description

Target identification method and system
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a target identification method and system.
Background
Target identification is a technique for performing biometric recognition on a target captured by an image acquisition device. For example, face recognition, which takes a human face as the target, is widely applied in scenarios such as permission verification and identity verification. To ensure the security of target identification, the authenticity of the target image needs to be determined.
It is therefore desirable to provide a target identification method and system that can determine the authenticity of a target image.
Disclosure of Invention
One embodiment of the present specification provides a target identification method, including: acquiring a plurality of target images, wherein the shooting times of the plurality of target images correspond to the irradiation times of a plurality of illuminations, in an illumination sequence, directed at a target object, the plurality of illuminations having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a part of the at least one reference color; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One embodiment of the present specification provides a target identification system, including: an acquisition module configured to acquire a plurality of target images, the shooting times of which correspond to the irradiation times of a plurality of illuminations, in an illumination sequence, directed at a target object, the plurality of illuminations having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a part of the at least one reference color; and a verification module configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One embodiment of the present specification provides a target identification apparatus, which includes a processor configured to execute the target identification method disclosed in the present specification.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions; when the computer instructions in the storage medium are read by a computer, the computer executes the target identification method disclosed in the present specification.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a target recognition system in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a method of object recognition shown in accordance with some embodiments of the present description;
FIG. 3 is a schematic illustration of an illumination sequence shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flow diagram for determining the authenticity of a plurality of target images based on a lighting sequence and the plurality of target images, according to some embodiments of the present description;
FIG. 5 is a schematic structural diagram of a color verification model according to some embodiments of the present description;
FIG. 6 is another exemplary flow diagram for determining the authenticity of a plurality of target images based on an illumination sequence and the plurality of target images, according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural as well, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations need not be performed exactly in the order shown. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or one or more steps may be removed from them.
Target identification is a technique for performing biometric recognition on a target object captured by an image acquisition device. In some embodiments, the target object may be a human face, a fingerprint, a palm print, a pupil, and the like. In some embodiments, target identification may be applied to permission verification, for example, access authorization verification and account payment authorization verification. In some embodiments, target identification may also be used for identity verification, for example, employee attendance verification and registrant identity verification. For example only, target identification may verify the identity of a target by matching a target image captured in real time by an image capture device against pre-acquired biometric features.
However, the image capture device may be attacked or hijacked, allowing an attacker to pass authentication by uploading false target images. For example, after attacking or hijacking the image capture device, attacker A may directly upload a face image of user B. The target identification system then performs face recognition based on the face image of user B and the pre-acquired facial biometric features of user B, and thereby verifies the identity as user B.
Therefore, in order to ensure the security of target identification, the authenticity of the target image needs to be determined, i.e., it must be determined that the target image was acquired by the image acquisition device in real time during the target identification process.
FIG. 1 is a schematic diagram of an application scenario of an object recognition system according to some embodiments of the present description. As shown in FIG. 1, the object recognition system 100 may include a processing device 110, a network 120, a terminal 130, and a storage device 140.
The processing device 110 may be used to process data and/or information from at least one component of the target recognition system 100 and/or an external data source (e.g., a cloud data center). For example, the processing device 110 may acquire multiple target images, determine the authenticity of the multiple target images, and so on. During processing, the processing device 110 may retrieve data (e.g., instructions) from other components of the object recognition system 100 (e.g., the storage device 140 and/or the terminal 130) directly or via the network 120 and/or send the processed data to the other components for storage or display.
In some embodiments, the processing device 110 may be a single server or a group of servers. The set of servers may be centralized or distributed (e.g., processing device 110 may be a distributed system). In some embodiments, the processing device 110 may be local or remote. In some embodiments, the processing device 110 may be implemented on a cloud platform, or provided in a virtual manner. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
The network 120 may connect the various components of the system and/or connect the system with external portions. The network 120 enables communication between components of the object recognition system 100, and between the object recognition system 100 and external components, facilitating the exchange of data and/or information. In some embodiments, the network 120 may be any one or more of a wired network or a wireless network. For example, network 120 may include a cable network, a fiber optic network, a telecommunications network, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a ZigBee network (ZigBee), Near Field Communication (NFC), an in-device bus, an in-device line, a cable connection, and the like, or any combination thereof. In some embodiments, the network connections between the various components in the object recognition system 100 may be in one of the manners described above, or in multiple manners. In some embodiments, network 120 may be a point-to-point, shared, centralized, etc. variety of topologies or a combination of topologies. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, …, through which one or more components of the object identification system 100 may connect to the network 120 to exchange data and/or information.
Terminal 130 refers to one or more terminal devices or software used by a user. In some embodiments, the terminal 130 may include an image capture device 131 (e.g., a camera, a video camera), and the image capture device 131 may capture a target object and acquire a plurality of target images. In some embodiments, when image capture device 131 captures a target object, terminal 130 (e.g., a screen and/or other light-emitting elements of terminal 130) may sequentially emit light of multiple colors in an illumination sequence to illuminate the target object. In some embodiments, the terminal 130 may communicate with the processing device 110 through the network 120 and transmit the photographed plurality of target images to the processing device 110. In some embodiments, the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices having input and/or output capabilities, the like, or any combination thereof. The above examples are intended only to illustrate the breadth of the type of terminal 130 and not to limit its scope.
The storage device 140 may be used to store data (e.g., a sequence of illuminations, a plurality of target images, etc.) and/or instructions. Storage device 140 may include one or more storage components, each of which may be a separate device or part of another device. In some embodiments, storage device 140 may include Random Access Memory (RAM), Read Only Memory (ROM), mass storage, removable storage, volatile read and write memory, and the like, or any combination thereof. Illustratively, mass storage may include magnetic disks, optical disks, solid state disks, and the like. In some embodiments, the storage device 140 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof. In some embodiments, the storage device 140 may be integrated or included in one or more other components of the target recognition system 100 (e.g., the processing device 110, the terminal 130, or possibly other components).
In some embodiments, the object recognition system 100 may include an acquisition module, a validation module, and a training module.
The acquisition module may be configured to acquire a plurality of target images whose shooting times have a correspondence with irradiation times of a plurality of lights in a lighting sequence irradiated to a target object, the plurality of lights having a plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a part of the at least one reference color.
A verification module may be used to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images. In some embodiments, the plurality of target images includes at least one verification image, each corresponding to one of the at least one verification color, and at least one reference image, each corresponding to one of the at least one reference color. For each of the at least one verification image, the verification module may determine the color of illumination when the verification image was captured based on the at least one reference image and the verification image, and determine the authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image was captured.
In some embodiments, the verification module may further extract verification color features of the at least one verification image and reference color features of the at least one reference image; for each of the at least one verification image, generating a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature; and determining authenticity of the plurality of target images based on the target color feature and the verification color feature of each of the at least one verification image.
In some embodiments, the verification module may process the at least one reference image and the verification image based on a color verification model to determine the color of illumination when the verification image was captured. In some embodiments, the color verification model is a machine learning model with preset parameters, i.e., model parameters learned during training of the machine learning model. Taking a neural network as an example, the model parameters include weights and biases. In some embodiments, the color verification model includes a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer. The reference color feature extraction layer processes the at least one reference image and determines its reference color features. The verification color feature extraction layer processes the verification image and determines its verification color feature. The color classification layer processes the reference color features of the at least one reference image and the verification color feature of the verification image, and determines the color of illumination when the verification image was captured.
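For illustration only, a minimal sketch of such a three-part model is given below in PyTorch. The patent does not specify any architecture; every layer size, backbone choice, and name here is an assumption.

```python
import torch
import torch.nn as nn

class ColorVerificationModel(nn.Module):
    """Sketch of the structure described above: a reference color feature
    extraction layer, a verification color feature extraction layer, and a
    color classification layer. All dimensions are invented for illustration."""

    def __init__(self, num_colors: int, feat_dim: int = 64):
        super().__init__()
        self.ref_extractor = self._make_backbone(feat_dim)
        self.ver_extractor = self._make_backbone(feat_dim)
        # The classifier consumes concatenated reference and verification features.
        self.color_classifier = nn.Sequential(
            nn.Linear(feat_dim * 2, 128), nn.ReLU(),
            nn.Linear(128, num_colors),
        )

    @staticmethod
    def _make_backbone(feat_dim: int) -> nn.Module:
        # Small CNN stand-in; the patent does not disclose the real backbone.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, ref_images: torch.Tensor, ver_image: torch.Tensor) -> torch.Tensor:
        # ref_images: (N_ref, 3, H, W); ver_image: (1, 3, H, W)
        ref_feat = self.ref_extractor(ref_images).mean(dim=0)  # pool over reference images
        ver_feat = self.ver_extractor(ver_image).squeeze(0)
        logits = self.color_classifier(torch.cat([ref_feat, ver_feat]))
        return logits  # one logit per candidate illumination color
```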
In some embodiments, the preset parameters of the color verification model are obtained through end-to-end training. The training module may be configured to obtain a plurality of training samples, each including at least one sample reference image, at least one sample verification image, and a sample label, where the sample label indicates the color of illumination when each sample verification image was captured, and the colors of illumination when the sample reference images were captured are the same as the at least one reference color. The training module may further train an initial color verification model based on the plurality of training samples to determine the preset parameters of the color verification model. In some embodiments, the training module may be omitted.
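Continuing the sketch above, end-to-end training could look like the following. The loss function, optimizer, and sample layout are assumptions; the patent only states that an initial color verification model is trained on the samples to obtain the preset parameters.

```python
import torch
import torch.nn.functional as F

def train_color_verifier(model, samples, num_epochs: int = 10):
    """End-to-end training sketch. Each sample is assumed to be a tuple
    (ref_images, ver_image, color_label): the sample reference images,
    one sample verification image, and the index of the illumination
    color used when the verification image was captured (the sample label)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(num_epochs):
        for ref_images, ver_image, color_label in samples:
            logits = model(ref_images, ver_image)          # (num_colors,)
            loss = F.cross_entropy(logits.unsqueeze(0),
                                   torch.tensor([color_label]))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```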
For more details of the acquisition module, the verification module, and the training module, reference may be made to fig. 2-6, which are not repeated herein.
It should be noted that the above descriptions of the object recognition system and its modules are only for convenience of description, and should not be construed as limiting the present disclosure to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. In some embodiments, the acquisition module, the verification module, and the training module disclosed in fig. 1 may be different modules in a system, or may be a module that implements the functions of two or more of the above modules. For example, each module may share one memory module, and each module may have its own memory module. Such variations are within the scope of the present disclosure.
FIG. 2 is an exemplary flow diagram of a method of object recognition shown in accordance with some embodiments of the present description. As shown in fig. 2, the process 200 includes the following steps:
Step 210, acquiring a plurality of target images. The shooting times of the plurality of target images correspond to the irradiation times of a plurality of illuminations, in an illumination sequence emitted by the terminal, directed at the target object.
In some embodiments, step 210 may be performed by an acquisition module.
The target object refers to an object needing target identification. For example, the target object may be a specific body part of the user, such as a face, a fingerprint, a palm print, or a pupil. In some embodiments, the target object refers to a face of a user that needs authentication and/or authorization. For example, in a network appointment application scenario, the platform needs to verify whether the order taker driver is a registered driver user that the platform has reviewed, and the target object is the driver's face. For another example, in a face payment application scenario, the payment system needs to verify the payment authority of the payer, and the target object is the face of the payer.
For target identification of the target object, the terminal is instructed to emit the illumination sequence. The illumination sequence includes a plurality of illuminations for illuminating the target object. The colors of different lights in the light sequence can be the same or different. In some embodiments, the plurality of illuminations comprises at least two illuminations of different colors, i.e. the plurality of illuminations has a plurality of colors.
In some embodiments, the plurality of colors includes at least one reference color and at least one verification color. The verification color is a color directly used for verifying the authenticity of the image among the plurality of colors. The reference color is a color of the plurality of colors that is used to assist in verifying authenticity of the determination target image. In some embodiments, each of the at least one verification color is determined based on at least a portion of the at least one reference color. For more details on the reference color and the verification color, reference may be made to fig. 3 and its related description, which are not repeated herein.
The illumination sequence includes information for each of the plurality of illuminations, such as color information and irradiation time. The color information of the plurality of illuminations in the illumination sequence may be represented in the same or different ways. For example, the color information may be represented by color categories: the colors of the plurality of illuminations in the illumination sequence may be represented as red, yellow, green, purple, cyan, blue, red. Alternatively, the color information may be represented by color parameters: the colors of the plurality of illuminations in the illumination sequence may be represented as RGB (255, 0, 0), RGB (255, 255, 0), RGB (0, 255, 0), RGB (255, 0, 255), RGB (0, 255, 255), RGB (0, 0, 255), RGB (255, 0, 0). In some embodiments, the illumination sequence may also be referred to as a color sequence, which contains the color information of the plurality of illuminations.
The illumination times of the plurality of illuminations in the illumination sequence may include a start time, an end time, a duration, etc., or any combination thereof, at which each illumination is planned to illuminate the target object. For example, the start time at which red light illuminates the target object may be 14:00, and the start time at which green light illuminates the target object may be 14:02. For another example, the duration for which the target object is illuminated by red light and by green light may both be 0.1 seconds. In some embodiments, the durations for which different illuminations illuminate the target object may be the same or different. The irradiation time may also be expressed in other ways, which are not described in detail herein.
In some embodiments, the terminal may emit the plurality of illuminations in sequence in a particular order. In some embodiments, the terminal may emit illumination through a light-emitting element. The light-emitting element may be built into the terminal, for example, the screen or an LED lamp, or may be external, such as an external LED lamp or light-emitting diode. In some embodiments, when the terminal is hijacked or attacked, it may accept the instruction to emit illumination but not actually emit it. For more details on the illumination sequence, reference may be made to FIG. 3 and its related description, which are not repeated herein.
In some embodiments, the terminal or the processing device (e.g., the acquisition module) may generate the illumination sequence randomly or based on a preset rule. For example, the terminal or processing device may randomly draw a plurality of colors from a color library to generate an illumination sequence. In some embodiments, the illumination sequence may be set by a user at a terminal, determined from default settings of the target recognition system 100, or determined by the processing device through data analysis. In some embodiments, the terminal or the storage device may store the illumination sequence, and the acquisition module may accordingly obtain it from the terminal or the storage device through the network.
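For example only, a sketch of such random sequence generation, assuming the RGB primaries as the reference colors and ratio-blended verification colors; the `Illumination` record and all timing values are invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Illumination:
    color_name: str
    rgb: tuple          # color parameter, e.g. (255, 0, 0)
    start_time: float   # seconds from the start of the sequence
    duration: float     # seconds

def generate_illumination_sequence(duration: float = 0.5) -> list:
    """Build a sequence of reference illuminations (pure RGB primaries)
    followed by verification illuminations blended from them."""
    reference = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
    sequence, t = [], 0.0
    for name, rgb in reference.items():
        sequence.append(Illumination(name, rgb, t, duration))
        t += duration
    for _ in range(2):  # two verification colors, ratios chosen at random
        ratios = [random.random() for _ in range(3)]
        # Blend of the three primaries: (255*d1, 255*d2, 255*d3).
        rgb = tuple(int(255 * r) for r in ratios)
        sequence.append(Illumination("blend", rgb, t, duration))
        t += duration
    return sequence
```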
The plurality of target images are images for target recognition. The formats of the plurality of target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak Flash PiX (FPX), Digital Imaging and Communications in Medicine (DICOM), and the like. The plurality of target images may be two-dimensional (2D) images or three-dimensional (3D) images.
In some embodiments, the acquisition module may acquire the plurality of target images. For example, the acquisition module may send an acquisition instruction to the terminal through the network, and then receive the plurality of target images sent by the terminal through the network. Alternatively, the terminal may send the plurality of target images to a storage device for storage, and the acquisition module may acquire the plurality of target images from the storage device. A target image may or may not contain the target object.
The target image may be captured by an image capturing device of the terminal, or may be determined based on data (e.g., video or images) uploaded by a user. For example, in the process of target object verification, the target recognition system 100 may issue an illumination sequence to the terminal. When the terminal is not hijacked or attacked, it can emit the plurality of illuminations in sequence according to the illumination sequence. When the terminal emits one of the plurality of illuminations, its image capturing device may be instructed to capture one or more images during the irradiation time of that illumination. Alternatively, the image capturing device of the terminal may be instructed to record a video over the entire irradiation period of the plurality of illuminations; the terminal or another computing device (e.g., processing device 110) may then extract, for each illumination, one or more frames captured within its irradiation time. The one or more images acquired by the terminal within the irradiation time of each illumination can serve as the plurality of target images. In this case, the plurality of target images are real images of the target object taken while illuminated by the plurality of illuminations. It can be understood that there is a correspondence between the irradiation times of the plurality of illuminations and the shooting times of the plurality of target images: if one image is acquired within the irradiation time of a single illumination, the correspondence is one-to-one; if multiple images are acquired within the irradiation time of a single illumination, the correspondence is one-to-many.
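A sketch of this timestamp correspondence, reusing the `Illumination` records from the previous sketch; measuring frame timestamps in seconds from the start of the sequence is an assumption:

```python
def match_images_to_illuminations(sequence, frames):
    """Pair each captured frame with the illumination whose irradiation
    interval covers the frame's capture time. `frames` is a list of
    (capture_time, image) tuples."""
    pairs = []
    for capture_time, image in frames:
        for ill in sequence:
            if ill.start_time <= capture_time < ill.start_time + ill.duration:
                pairs.append((ill, image))  # one-to-one or one-to-many
                break
    return pairs
```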
When the terminal is hijacked, the hijacker may upload images or videos through the terminal device. The uploaded image or video may contain a specific body part of the target object or of another user, and/or other objects, and may be a historical image or video shot by this or another terminal, or a synthesized image or video. The terminal or another computing device (e.g., processing device 110) may determine the plurality of target images based on the uploaded images or videos. For example, the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded images or videos according to the order and/or duration of each illumination in the illumination sequence. For example only, if the illumination sequence includes five illuminations arranged in order, the hijacker can upload five images through the terminal device, and the terminal or other computing device determines the image corresponding to each of the five illuminations according to the upload order of the five images. For another example, if the irradiation time of each of the five illuminations in the illumination sequence is 0.5 seconds, the hijacker can upload a video 2.5 seconds long through the terminal. The terminal or other computing device may divide the uploaded video into five segments of 0-0.5 s, 0.5-1 s, 1-1.5 s, 1.5-2 s, and 2-2.5 s, and extract one image from each segment; the five extracted images then correspond to the five illuminations in order. In this case, the plurality of images are false images uploaded by the hijacker, not real images of the target object taken under the plurality of illuminations. In some embodiments, if an image is uploaded by a hijacker through a terminal, the upload time of the image, or its shooting time within the video, can be regarded as the shooting time of the image. It can be understood that, even when the terminal is hijacked, there is still a correspondence between the irradiation times of the plurality of illuminations and the shooting times of the plurality of images.
As previously mentioned, the plurality of colors corresponding to the plurality of illuminations in the illumination sequence includes at least one reference color and at least one verification color. In some embodiments, each of the at least one verification color is determined based on at least a portion of the at least one reference color. The plurality of target images includes at least one reference image each corresponding to one of the at least one reference color and at least one verification image each corresponding to one of the at least one verification color.
For each of the plurality of images, the obtaining module may use, as the color corresponding to the image, a color of illumination in the illumination sequence, the illumination time of which corresponds to the image capturing time. Specifically, if the illumination time of the illumination corresponds to the shooting time of one or more images, the color of the illumination is used as the color corresponding to the one or more images. It will be appreciated that when the terminal is not hijacked or attacked, the corresponding colors of the multiple images should be the same as the multiple colors of the multiple illuminations in the illumination sequence. For example, the multiple colors of the multiple lights in the lighting sequence are "red, yellow, blue, green, purple, red", and when the terminal is not hijacked or attacked, the corresponding colors of the multiple images acquired by the terminal should also be "red, yellow, blue, green, purple, red". When the terminal is hijacked or attacked, the corresponding colors of the multiple images and the multiple colors of the multiple lights in the lighting sequence may be different.
Step 220, determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images. In some embodiments, step 220 may be performed by a verification module.
The authenticity of the plurality of target images reflects whether they are images of the target object captured under the illuminations of the plurality of colors. For example, when the terminal is not hijacked or attacked, its light-emitting element emits the illuminations of the plurality of colors while its image capture device records or photographs the target object to acquire the target images; in this case, the target images are authentic. For another example, when the terminal is hijacked or attacked, the target images are acquired from images or videos uploaded by the attacker; in this case, the target images are not authentic.
The authenticity of the target images may be used to determine whether the image capture device of the terminal has been hijacked by an attacker. For example, if at least one of the plurality of target images is not authentic, the image acquisition device is considered hijacked. For another example, if more than a preset number of the target images are not authentic, the image capturing device is considered hijacked.
In some embodiments, for each of the at least one verification image, a verification module may determine a color of illumination at the time the verification image was captured based on the at least one reference image and the verification image. The verification module may further determine authenticity of the plurality of target images based on the illumination sequence and a color of illumination when the at least one verification image was captured. For a detailed description of determining the color of the illumination when the verification image is captured, and determining the authenticity of the plurality of target images based on the illumination sequence and the color of the illumination when the verification image is captured, reference is made to fig. 4 and its associated description.
In some embodiments, the verification module may further extract verification color features of the at least one verification image and reference color features of the at least one reference image. For each of the at least one verification image, the verification module may generate a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature. Based on the target color feature and the verification color feature of each of the at least one verification image, a verification module may determine authenticity of the plurality of target images. For a detailed description of generating the target color feature and determining the authenticity of the plurality of target images based on the target color feature and the verification color feature, reference may be made to fig. 6 and its associated description.
FIG. 3 is a schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description.
In some embodiments, the plurality of colors of illumination in the illumination sequence may comprise at least one reference color and at least one verification color. The verification color is a color, among the plurality of colors, directly used to verify the authenticity of the image. The reference color is a color, among the plurality of colors, that assists the verification color in determining the authenticity of the target image. For example, the target image corresponding to a reference color (also referred to as a reference image) may be used to determine the color of illumination when the target image corresponding to a verification color (also referred to as a verification image) was shot. Further, the verification module may determine the authenticity of the plurality of target images based on the color of illumination when the verification image was shot. As shown in FIG. 3, the illumination sequence e includes illuminations of a plurality of reference colors, "red light, green light, blue light", and illuminations of a plurality of verification colors, "yellow light, purple light … cyan light"; the illumination sequence f includes illuminations of a plurality of reference colors, "red light, white light … blue light", and illuminations of a plurality of verification colors, "red light … green light".
In some embodiments, there are multiple verification colors. The plurality of verification colors may be identical. For example, the verification color may be red, red. Alternatively, the plurality of verification colors may be completely different. For example, the verification color may be red, yellow, blue, green, violet. Still alternatively, the plurality of verification colors may also be partially identical. For example, the verification color may be yellow, green, purple, yellow, red. Similarly to the verification color, in some embodiments there are multiple reference colors, which may be identical, completely different, or partially identical. In some embodiments, the verification color may comprise only one color, such as green.
In some embodiments, the at least one reference color and the at least one verification color may be determined according to a default setting of the target recognition system 100, manually set by a user, or determined by a verification module. For example, the verification module may randomly choose a reference color and a verification color. For example only, the verification module may randomly select a part of the colors from the plurality of colors as the at least one reference color, and the remaining colors as the at least one verification color. In some embodiments, the verification module may determine the at least one reference color and the at least one verification color based on preset rules. The preset rule may be a rule regarding verifying a relationship between colors, a relationship between reference colors, and/or a relationship between a color and a reference color, and the like. For example, the preset rule is that the verification color can be generated based on reference color fusion, and the like.
In some embodiments, each of the at least one verification color may be determined based on at least a portion of the at least one reference color. For example, the verification color may be blended based on at least a portion of the at least one reference color. In some embodiments, the at least one reference color may comprise a primary color or a primary color of a color space. For example, the at least one reference color may include three primary colors of an RGB space, i.e., "red, green, and blue". As shown in fig. 3, a plurality of verification colors "yellow, purple … cyan" in the illumination sequence e may be determined based on 3 reference colors "red, green, blue". For example, "yellow" may be obtained by blending the reference colors "red, green, and blue" based on a first ratio, and "violet" may be obtained by blending the reference colors "red, green, and blue" based on a second ratio.
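For example only, such ratio-based blending could be sketched as follows; the function name and the ratio values are invented for illustration:

```python
def blend_verification_color(ref_rgbs, ratios):
    """Blend a verification color as a ratio-weighted mix of the
    reference colors, e.g. 'yellow' from red and green."""
    assert len(ref_rgbs) == len(ratios)
    mixed = [0.0, 0.0, 0.0]
    for rgb, w in zip(ref_rgbs, ratios):
        for i in range(3):
            mixed[i] += w * rgb[i]
    return tuple(min(255, int(round(c))) for c in mixed)

# e.g. yellow from equal parts red and green, no blue:
yellow = blend_verification_color(
    [(255, 0, 0), (0, 255, 0), (0, 0, 255)], (1.0, 1.0, 0.0))
# -> (255, 255, 0)
```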
In some embodiments, one or more of the at least one reference color is the same as one or more of the at least one verification color. The at least one reference color and the at least one verification color may be entirely or partially the same. For example, a certain one of the at least one verification color may be the same as a particular one of the at least one reference color. It will be appreciated that such a verification color is still determined based on at least one reference color, i.e., that particular reference color is itself the verification color. As shown in FIG. 3, in the illumination sequence f, the plurality of reference colors "red, white … blue" and the plurality of verification colors "red … green" both contain red.
In some embodiments, there may be other relationships between the at least one reference color and the at least one verification color, which are not limited herein. For example, the color families of the at least one reference color and the at least one verification color may be the same or different. Illustratively, the at least one reference color may belong to a warm color family (e.g., red, yellow, etc.) while the at least one verification color belongs to a cool color family (e.g., gray, etc.).
In some embodiments, in the illumination sequence, the illuminations corresponding to the at least one reference color may be arranged before or after the illuminations corresponding to the at least one verification color. As shown in FIG. 3, in the illumination sequence e, the illuminations of the plurality of reference colors, "red light, green light, blue light", are arranged before the illuminations of the plurality of verification colors, "yellow light, purple light … cyan light". In the illumination sequence f, the illuminations of the plurality of reference colors, "red light, white light … blue light", are arranged after the illuminations of the plurality of verification colors, "red light … green light". In some embodiments, the illuminations corresponding to the at least one reference color may also be interleaved with the illuminations corresponding to the at least one verification color, which is not limited herein.
Fig. 4 is an exemplary flow diagram for determining the authenticity of a plurality of target images based on a lighting sequence and the plurality of target images, according to some embodiments of the present description. In some embodiments, flowchart 400 may be performed by a verification module. As shown in fig. 4, the process 400 may include the following steps:
Step 410, for each of the at least one verification image, determining the color of illumination when the verification image was captured based on the at least one reference image and the verification image.
In some embodiments, the verification module may determine the color of the illumination when the verification image was captured based on the verification color feature of the verification image and the reference color feature of the at least one reference image.
The reference color feature refers to the color feature of a reference image. The verification color feature refers to the color feature of a verification image.
The color feature of an image refers to information related to the color of the image. The color of the image includes the color of illumination when the image was captured, the color of the subject in the image, the color of the background in the image, and the like. In some embodiments, the color features may include deep features and/or composite features extracted by a neural network.
The color characteristics may be represented in a variety of ways. In some embodiments, the color features may be based on a representation of color values of pixel points in the image in a color space. A color space is a mathematical model that describes color using a set of numerical values, each numerical value in the set of numerical values representing a color value of a color feature on each color channel of the color space. In some embodiments, the color space may be represented as a vector space, each dimension of which represents one color channel of the color space. Color features may be represented by vectors in the vector space. In some embodiments, the color space may include, but is not limited to, an RGB color space, an L α β color space, an LMS color space, an HSV color space, a YCrCb color space, an HSL color space, and the like. It is understood that different color spaces contain different color channels. For example, the RGB color space includes a red channel R, a green channel G, and a blue channel B, and the color feature can be represented by the color value of each pixel point in the image on the red channel R, the green channel G, and the blue channel B, respectively.
In some embodiments, the color features may be represented in other ways (e.g., color histograms, color moments, color sets, etc.). For example, histogram statistics is performed on color values of each pixel point in the image in the color space, and a histogram representing color features is generated. For another example, a specific operation (e.g., mean, square error, etc.) is performed on the color value of each pixel point in the image in the color space, and the result of the specific operation represents the color feature of the image.
In some embodiments, the verification module may extract the color features of the plurality of target images through a color feature extraction algorithm and/or the color verification model (or a portion thereof). Color feature extraction algorithms include color histograms, color moments, color sets, etc. For example, the verification module may compute a histogram of the color values of each pixel point of the image on each color channel of the color space to obtain a color histogram. For another example, the verification module may divide the image into a plurality of regions and determine the color set of the image from a set of binary indexes established for the color values of the pixels of each region on each color channel of the color space.
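As an illustration, two simple color features of the kinds mentioned above, sketched with NumPy; the function names and the bin count are assumptions:

```python
import numpy as np

def mean_color_feature(image: np.ndarray) -> np.ndarray:
    """Mean color value per channel; `image` is an (H, W, 3) RGB array."""
    return image.reshape(-1, 3).mean(axis=0)

def color_histogram_feature(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel color histogram, concatenated into one feature vector."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    return np.concatenate(feats).astype(np.float32)
```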
In some embodiments, the reference color features of at least one reference image may be used to construct a reference color space. The reference color space has the at least one reference color as its color channel. Specifically, the reference color feature corresponding to each reference image may be used as a reference value of the corresponding color channel in the reference color space.
In some embodiments, the color space (also referred to as the original color space) corresponding to the plurality of target images may be the same as or different from the reference color space. For example, the plurality of target images may correspond to an RGB color space, and the at least one reference color is red, blue, and green, so that the original color space corresponding to the plurality of target images and the reference color space constructed based on the reference color belong to the same color space. In this context, two color spaces may be considered to be the same color space if their primary colors or primaries are the same.
As described above, the verification color may be blended based on one or more reference colors. Accordingly, the verification module may determine a color corresponding to the verification color feature based on the reference color feature and/or the reference color space constructed by the reference color feature. In some embodiments, the verification module may map verification color features of the verification image based on a reference color space, determining a color of illumination when the verification image was captured. For example, the verification module may determine a parameter of the verification color feature on each color channel based on a relationship between the verification color feature and a reference value of each color channel in the reference color space, and then determine a color corresponding to the verification color feature based on the parameter, that is, a color illuminated when the verification image is captured.
For example, the verification module may extract reference color features F_a, F_b and F_c from the reference images a, b and c, and use them as the reference values of color channel I, color channel II and color channel III, respectively, where color channel I, color channel II and color channel III are the three color channels of the reference color space. The verification module may extract a verification color feature F_d from the verification image d, and, based on the relationship between the verification color feature F_d and the reference values F_a, F_b and F_c (for example, F_d = δ1·F_a + δ2·F_b + δ3·F_c), determine the parameters δ1, δ2 and δ3 of the verification color feature on color channel I, color channel II and color channel III, respectively. The verification module may then determine, based on the parameters δ1, δ2 and δ3, the color corresponding to the verification color feature, i.e., the color of illumination when the verification image was shot. In some embodiments, the correspondence between the parameters and the color categories may be preset or may be learned through a model.
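For example only, the parameters δ1, δ2 and δ3 could be estimated by least squares, as sketched below; the patent does not prescribe a solving method, so this is one plausible realization.

```python
import numpy as np

def illumination_parameters(ref_feats, ver_feat):
    """Express a verification color feature as a combination of the
    reference features (the channel reference values), solving
    F_d ≈ δ1·F_a + δ2·F_b + δ3·F_c by least squares."""
    A = np.stack(ref_feats, axis=1)            # (feat_dim, 3)
    deltas, *_ = np.linalg.lstsq(A, ver_feat, rcond=None)
    return deltas                              # (δ1, δ2, δ3)

# The (δ1, δ2, δ3) triple is then mapped to a color category, e.g. by
# nearest neighbor against the ratios used to blend each verification color.
```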
In some embodiments, the reference color space may have the same color channels as the original color space. For example, the original color space may be an RGB space and the at least one reference color may be red, green and blue. The verification module may construct a new RGB color space (i.e., a reference color space) based on the reference color features of the three reference images corresponding to red, green and blue, and determine the RGB values of the verification color features of each verification image in the new RGB color space, thereby determining the color of illumination when the verification image was photographed.
In some embodiments, the verification module may process the reference color feature and the verification color feature based on the color classification layer in the color verification model, and determine the color of illumination when the verification image is shot, which may specifically refer to fig. 5 and the related description thereof, and details are not repeated here.
Step 420, determining the authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image was captured.
In some embodiments, for each of the at least one verification image, the verification module may determine the corresponding verification color of the verification image based on the illumination sequence, and may then determine the authenticity of the verification image based on that verification color. For example, the verification module determines the authenticity of the verification image based on whether the verification color corresponding to the verification image is consistent with the color of illumination when it was shot: if they are the same, the verification image is authentic; if they differ, the verification image is not authentic. For another example, the verification module determines the authenticity of the verification images based on whether the relationship (e.g., sameness) between the verification colors corresponding to the plurality of verification images is consistent with the relationship between the colors of illumination when the plurality of verification images were shot.
In some embodiments, the verification module may determine whether the image capture device of the terminal is hijacked based on the authenticity of the at least one verification image. For example, if the number of verification images determined to be authentic exceeds a first threshold, the image capture device of the terminal is not hijacked. For another example, if the number of verification images determined not to be authentic exceeds a second threshold (e.g., 1), the image capture device of the terminal is hijacked.
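As a rough illustration of this decision logic, the following sketch compares the expected verification colors (from the illumination sequence) with the detected illumination colors and applies the two count thresholds; the function names, the threshold values, and the strictness of the comparisons are assumptions:

```python
def image_is_authentic(expected_color, detected_color):
    # First determination result: the verification color assigned to this
    # image by the illumination sequence must match the detected color.
    return expected_color == detected_color

def camera_hijacked(expected_colors, detected_colors,
                    first_threshold=3, second_threshold=1):
    """Decide whether the terminal's camera is hijacked from the
    authenticity of the verification images (thresholds illustrative)."""
    results = [image_is_authentic(e, d)
               for e, d in zip(expected_colors, detected_colors)]
    authentic = sum(results)
    fake = len(results) - authentic
    if fake >= second_threshold:       # any failed image is suspicious
        return True
    return authentic < first_threshold  # too few authentic images

# Example: the third verification image fails the color check.
print(camera_hijacked(["cyan", "magenta", "yellow"],
                      ["cyan", "magenta", "green"]))  # -> True
```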
In some embodiments, the preset thresholds (e.g., the first threshold and the second threshold) used for the image authenticity determination may be related to the shooting stability. The shooting stability is the degree of stability of the terminal's image capture device while it captures the target images. In some embodiments, a preset threshold is positively correlated with the shooting stability: the higher the shooting stability, the higher the quality of the captured target images, the more truly the color features extracted from the target images reflect the color of illumination at capture time, and thus the larger the preset threshold can be. In some embodiments, the shooting stability may be measured from motion parameters of the terminal (e.g., a vehicle-mounted terminal or a user terminal) detected by its motion sensor, such as the motion speed or vibration frequency. For example, the larger the motion parameter, or the larger its rate of change, the lower the shooting stability. The motion sensor may be a sensor that detects the running condition of a vehicle, and the vehicle may be the one used by the target user, i.e., the user to whom the target object belongs. For example, if the target user is a ride-hailing driver, the motion sensor may be that of the driver's terminal or of a vehicle-mounted terminal.
In some embodiments, the preset threshold may also be related to the shooting distance and the rotation angle. The shooting distance is the distance between the image capture device and the target object when the target image is captured. The rotation angle is the angle between the front of the target object and the terminal screen when the target image is captured. In some embodiments, both the shooting distance and the rotation angle are negatively correlated with the preset threshold: the shorter the shooting distance, the higher the quality of the captured target image, the more truly the extracted color features reflect the color of illumination at capture time, and thus the larger the preset threshold can be; likewise, the smaller the rotation angle, the higher the image quality and the larger the preset threshold. In some embodiments, the shooting distance and the rotation angle may be determined from the target image by image recognition techniques.
In some embodiments, the verification module may apply a specific operation (e.g., averaging, taking the standard deviation, etc.) to the shooting stability, shooting distance, and rotation angle of each target image, and determine the preset threshold based on the fused shooting stability, shooting distance, and rotation angle.
For example, to obtain the stability of the terminal while the plurality of target images are captured, the verification module may obtain a sub-stability of the terminal for each of the plurality of target images and fuse the sub-stabilities to determine the overall stability.

For another example, to obtain the shooting distance between the target object and the terminal, the verification module may obtain a sub-shooting distance for each of the plurality of target images and fuse the sub-shooting distances to determine the shooting distance.

For another example, to obtain the rotation angle of the target object relative to the terminal, the verification module may obtain a sub-rotation angle for each of the plurality of target images and fuse the sub-rotation angles to determine the rotation angle.
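The following sketch illustrates one way such an adaptive preset threshold could be derived, assuming mean fusion of the per-image measurements and a simple linear dependence with the correlation signs stated above; the coefficients and the formula itself are illustrative assumptions, not the patent's method:

```python
from statistics import mean

def preset_threshold(sub_stabilities, sub_distances, sub_angles,
                     base=0.5, k_s=0.3, k_d=0.1, k_a=0.2):
    """Fuse per-image measurements and derive a preset threshold that is
    positively correlated with shooting stability and negatively
    correlated with shooting distance and rotation angle."""
    stability = mean(sub_stabilities)  # fuse sub-stabilities
    distance = mean(sub_distances)     # fuse sub-shooting distances
    angle = mean(sub_angles)           # fuse sub-rotation angles
    t = base + k_s * stability - k_d * distance - k_a * angle
    return min(max(t, 0.0), 1.0)       # clamp to a usable range

# Example: stable, close-range, nearly frontal capture -> higher threshold.
print(preset_threshold([0.9, 0.8, 0.85], [0.4, 0.5, 0.45], [0.1, 0.2, 0.15]))
```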
Since the reference images and the verification image are captured under the same ambient light conditions, establishing the reference color space from the reference images and determining the color of illumination at capture time within that space makes the determination more accurate, and thus the authenticity of the target images is determined more accurately. For example, when the illumination in the illumination sequence is weaker than the ambient light, the illumination cast on the target object may be difficult to detect; or, when the ambient light is colored, the illumination of the target object may be disturbed. When the terminal is not hijacked, the reference images and the verification image are captured under the same (or substantially the same) ambient light. The reference color space constructed from the reference images absorbs the influence of the ambient light and can therefore identify the color of illumination at capture time more accurately than the original color space. Furthermore, the method disclosed herein may avoid interference from the light-emitting elements of the terminal: when the terminal is not hijacked, the reference images and the verification image are captured under the same light-emitting element, so the reference color space can eliminate or weaken the influence of that element and improve the accuracy of illumination color identification.
FIG. 5 is a schematic diagram of a color verification model according to some embodiments of the present description.
In some embodiments, the verification module may process the at least one reference image and the verification image based on a color verification model to determine the color of illumination when the verification image was captured.

As shown in fig. 5, the color verification model may include a reference color feature extraction layer 530, a verification color feature extraction layer 540, and a color classification layer 570. The color verification model may be used to implement step 410. Further, the verification module determines the authenticity of the verification image based on the color of illumination when the verification image was captured and the illumination sequence.
The color feature extraction layers (e.g., the reference color feature extraction layer 530 and the verification color feature extraction layer 540) extract color features of target images. In some embodiments, a color feature extraction layer may be a convolutional neural network model such as ResNet, DenseNet, MobileNet, ShuffleNet, or EfficientNet, or a recurrent neural network model such as a long short-term memory (LSTM) network. In some embodiments, the reference color feature extraction layer 530 and the verification color feature extraction layer 540 may be of the same or different types.
The reference color feature extraction layer 530 extracts the reference color feature 550 of the at least one reference image 510. In some embodiments, the at least one reference image 510 may include a plurality of reference images, and the reference color feature 550 may be a fusion of their color features. For example, the plurality of reference images 510 may be merged and the merged result input into the reference color feature extraction layer 530, which outputs the reference color feature 550. Illustratively, the reference color feature 550 is a feature vector formed by stitching together the color feature vectors of the reference images 510-1, 510-2, and 510-3.
The verification color feature extraction layer 540 extracts the verification color feature 560 of at least one verification image 520. In some embodiments, the verification module may perform the color determination separately for each of the at least one verification image 520. For example, as shown in fig. 5, the verification module may input the at least one reference image 510 into the reference color feature extraction layer 530 and the verification image 520-2 into the verification color feature extraction layer 540. The verification color feature extraction layer 540 may output the verification color feature 560 of the verification image 520-2. The color classification layer 570 may determine the color of illumination when the verification image 520-2 was captured based on the reference color feature 550 and the verification color feature 560 of the verification image 520-2.
In some embodiments, the verification module may make color determinations for multiple verification images 520 simultaneously. For example, the verification module may input at least one reference image 510 into the reference color feature extraction layer 530 and a plurality of verification images 520 (including verification images 520-1, 520-2 … 520-n) into the verification color feature extraction layer 540. The verification color feature extraction layer 540 may simultaneously output the verification color features 560 of the plurality of verification images 520. The color classification layer 570 may simultaneously determine the color of illumination when each of the plurality of verification images is captured.
For each of the at least one verification image, the color classification layer 570 may determine the color of illumination when the verification image was captured based on the reference color feature and the verification color feature of that image. For example, the color classification layer 570 may compute a value or probability from the reference color feature and the verification color feature, and then determine the illumination color from that value or probability; the value or probability reflects the likelihood that the illumination color at capture time belongs to each candidate color. In some embodiments, the color classification layer may include, but is not limited to, a fully connected layer, a deep neural network, and the like.
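A compact PyTorch sketch of the structure in fig. 5, with small stand-in CNNs for the two color feature extraction layers and a fully connected color classification layer over the concatenated features; the layer sizes, feature dimensions, number of color classes, and the concatenation-based fusion are assumptions (the patent permits ResNet, MobileNet, LSTM, etc.):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Stand-in for a color feature extraction layer."""
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class ColorVerificationModel(nn.Module):
    def __init__(self, num_reference=3, feat_dim=32, num_colors=7):
        super().__init__()
        self.reference_extractor = SmallCNN(feat_dim)     # layer 530
        self.verification_extractor = SmallCNN(feat_dim)  # layer 540
        # Color classification layer 570: a fully connected head over the
        # concatenated reference and verification color features.
        self.classifier = nn.Linear(feat_dim * (num_reference + 1), num_colors)

    def forward(self, reference_images, verification_image):
        # reference_images: (batch, num_reference, 3, H, W)
        b, n, c, h, w = reference_images.shape
        ref = self.reference_extractor(reference_images.reshape(b * n, c, h, w))
        ref = ref.reshape(b, -1)                               # stitched feature 550
        ver = self.verification_extractor(verification_image)  # feature 560
        return self.classifier(torch.cat([ref, ver], dim=1))   # color logits

# Example: 3 reference images and 1 verification image per sample.
model = ColorVerificationModel()
refs = torch.randn(2, 3, 3, 64, 64)
ver = torch.randn(2, 3, 64, 64)
print(model(refs, ver).shape)  # torch.Size([2, 7])
```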
The color verification model is a machine learning model with preset parameters. The preset parameters of the color verification model may be determined during training. For example, the training module may train an initial color verification model based on a plurality of training samples to determine the preset parameters of the color verification model. Each of the plurality of training samples includes at least one sample reference image, at least one sample verification image, and a sample label representing the color of illumination when each of the at least one sample verification image was captured. The at least one reference color is the same as the color of illumination when the at least one sample reference image was captured. For example, if the at least one reference color comprises red, green, and blue, the at least one sample reference image comprises three target images of a sample target object captured under red, green, and blue illumination.
In some embodiments, the training module may input the plurality of training samples into the initial color verification model and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer through training until the updated model satisfies a preset condition. The updated model may then be designated as the color verification model with preset parameters, i.e., the trained color verification model. The preset condition may be that the loss function of the updated model is smaller than a threshold, that the loss function converges, or that the number of training iterations reaches a threshold.
In some embodiments, the training module may train the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer of the initial color verification model in an end-to-end manner. In end-to-end training, a training sample is input into the initial model, a loss value is determined from the model's output, and the model is updated based on that loss value; the model may contain multiple sub-models or modules performing different data processing operations, which are treated as a whole during training and updated simultaneously. For example, when training the initial color verification model, the at least one sample reference image may be input into the initial reference color feature extraction layer and the at least one sample verification image into the initial verification color feature extraction layer; a loss function is established based on the output of the initial color classification layer and the sample label, and the parameters of all initial layers of the initial color verification model are updated simultaneously based on that loss function.
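A minimal end-to-end training step, reusing the ColorVerificationModel class from the sketch above; the optimizer, learning rate, batch construction, and integer encoding of the color labels are assumptions:

```python
import torch
import torch.nn as nn

# Assumes ColorVerificationModel from the previous sketch is in scope.
model = ColorVerificationModel(num_colors=7)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(sample_reference, sample_verification, sample_label):
    """One end-to-end update: both extraction layers and the color
    classification layer receive gradients from a single loss."""
    optimizer.zero_grad()
    logits = model(sample_reference, sample_verification)
    loss = loss_fn(logits, sample_label)  # loss on classifier output vs label
    loss.backward()                       # gradients flow through all layers
    optimizer.step()                      # all parameters updated together
    return loss.item()

# Toy batch: 4 samples, 3 sample reference images each, labels in [0, 7).
refs = torch.randn(4, 3, 3, 64, 64)
vers = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 7, (4,))
for epoch in range(3):
    print(train_step(refs, vers, labels))
```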
In some embodiments, the color verification model may be pre-trained by the processing device or a third party and saved in the storage device, and the processing device may invoke the color verification model directly from the storage device.
Some embodiments of the present description may improve the efficiency of target image authenticity verification by determining the authenticity of verification images through a color verification model. In addition, the color verification model can improve the reliability of target object authenticity verification by reducing or removing the influence of performance differences between terminal devices when determining the authenticity of target images. It can be understood that the hardware of different terminals differs; for example, the same color emitted by terminal screens from different manufacturers may differ in saturation, brightness, etc., leading to larger intra-class differences for the same color. The training samples of the initial color verification model may be captured by terminals with different performance characteristics, so that, through learning during training, the trained color verification model can account for terminal performance differences when determining the color of the target object and accurately determine the color of the target image. Moreover, when the terminal is not hijacked, the reference images and the verification images are captured under the same ambient light conditions. In some embodiments, extracting the reference color features of the reference images in the color verification model, establishing a reference color space, and determining the authenticity of the plurality of target images based on the reference color space may eliminate or reduce the influence of ambient light.
Fig. 6 is another exemplary flow diagram for determining the authenticity of a plurality of target images based on an illumination sequence and the plurality of target images, according to some embodiments of the present description. In some embodiments, process 600 may be performed by the verification module. As shown in fig. 6, the process 600 includes the following steps:
Step 610, extracting the verification color feature of the at least one verification image and the reference color feature of the at least one reference image.
For a detailed description of extracting the verification color features and the reference color features, see step 410 and its related description.
Step 620, for each of the at least one verification image, generating a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature.
The target color feature refers to the representation, in the reference color space, of the verification color corresponding to the verification image. In some embodiments, for each of the at least one verification image, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence and generate the target color feature of the verification image based on that verification color and the reference color features. For example, the verification module may fuse the color feature of the verification color with the reference color features to obtain the target color feature, as illustrated in the sketch below.
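A sketch of this fusion, assuming each verification color is composed of known reference colors (e.g., cyan from green and blue) and that fusion is the mean of the corresponding reference color features; the composition table and the mean fusion are assumptions, since the patent only states that verification colors are determined from the reference colors:

```python
import numpy as np

# Hypothetical composition of each verification color in terms of the
# reference colors.
COMPOSITION = {
    "yellow": ("red", "green"),
    "cyan": ("green", "blue"),
    "magenta": ("red", "blue"),
}

def target_color_feature(verification_color, reference_features):
    """Fuse the reference color features that compose the verification
    color into the target color feature."""
    parts = [reference_features[c] for c in COMPOSITION[verification_color]]
    return np.mean(parts, axis=0)

ref_feats = {
    "red": np.array([0.72, 0.31, 0.28]),
    "green": np.array([0.30, 0.68, 0.33]),
    "blue": np.array([0.27, 0.34, 0.75]),
}
print(target_color_feature("cyan", ref_feats))
```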
Step 630, determining authenticity of the plurality of target images based on the target color feature and the verification color feature of each of the at least one verification image.
In some embodiments, for each of the at least one verification image, the verification module may determine the authenticity of the verification image based on the similarity between its corresponding target color feature and verification color feature. The similarity between the two features may be computed as a vector similarity, for example via Euclidean distance or Manhattan distance. Illustratively, when the similarity between the target color feature and the verification color feature is greater than a third threshold, the verification image is authentic; otherwise, it is not.
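A minimal sketch of this similarity check, assuming Euclidean distance mapped into a (0, 1] similarity score; the mapping and the value of the third threshold are illustrative:

```python
import numpy as np

def is_authentic(target_feature, verification_feature, third_threshold=0.9):
    """Compare the target color feature with the extracted verification
    color feature; a small distance means a high similarity."""
    distance = np.linalg.norm(target_feature - verification_feature)
    similarity = 1.0 / (1.0 + distance)  # map distance to (0, 1]
    return similarity > third_threshold

target = np.array([0.285, 0.51, 0.54])   # e.g., the fused "cyan" feature
observed = np.array([0.30, 0.50, 0.55])  # feature from the verification image
print(is_authentic(target, observed))    # small distance -> True
```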
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed embodiments may lie in less than all features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification, the entire contents are hereby incorporated by reference, except for any application-history document that is inconsistent with or conflicts with the contents of this specification, and except for any document that would limit the broadest scope of the claims of this specification (whether currently or later appended). It is to be understood that if the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent or in conflict with those in this specification, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of object recognition, the method comprising:
acquiring a plurality of target images, wherein shooting time of the plurality of target images has a corresponding relation with irradiation time of a plurality of lights in a lighting sequence irradiated to a target object, the plurality of lights are provided with a plurality of colors, the plurality of colors comprise at least one reference color and at least one verification color, and each of the at least one verification color is determined based on at least a part of the at least one reference color; and
determining authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
2. The method of claim 1, the plurality of target images comprising at least one verification image and at least one reference image, each of the at least one verification image corresponding to one of the at least one verification color, each of the at least one reference image corresponding to one of the at least one reference color,
the determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprises:
for each of the at least one verification image, determining a color of illumination at which the verification image was captured based on the at least one reference image and the verification image; and
determining authenticity of the plurality of target images based on the illumination sequence and a color of illumination at a time the at least one verification image was captured.
3. The method of claim 2, the determining, based on the at least one reference image and the verification image, a color of illumination at a time the verification image was captured comprising:
and processing the at least one reference image and the verification image based on a color verification model, and determining the color of illumination when the verification image is shot, wherein the color verification model is a machine learning model with preset parameters.
4. The method of claim 3, the color verification model comprising a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer,
the reference color feature extraction layer processes the at least one reference image to determine a reference color feature of the at least one reference image;
the verification color feature extraction layer processes the verification image and determines the verification color feature of the verification image;
and the color classification layer processes the reference color characteristic of the at least one reference image and the verification color characteristic of the verification image and determines the color of illumination when the verification image is shot.
5. The method of claim 4, wherein the preset parameters of the color verification model are obtained by an end-to-end training mode.
6. The method of claim 3, the preset parameters of the color verification model being generated by a training process comprising:
obtaining a plurality of training samples, wherein each of the plurality of training samples comprises at least one sample reference image, at least one sample verification image and a sample label, the sample label represents the color of light irradiated when each of the at least one sample verification image is shot, and the at least one reference color is the same as the color of light irradiated when the at least one sample reference image is shot; and
training an initial color verification model based on the plurality of training samples, determining the preset parameters of the color verification model.
7. The method of claim 1, the plurality of target images comprising at least one verification image and at least one reference image, each of the at least one verification image corresponding to one of the at least one verification color, each of the at least one reference image corresponding to one of the at least one reference color,
the determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprises:
extracting verification color features of the at least one verification image and reference color features of the at least one reference image;
for each of the at least one verification image, generating a target color feature of a verification color corresponding to the verification image based on the illumination sequence and the reference color feature; and
determining authenticity of the plurality of target images based on the target color feature and the verification color feature of each of the at least one verification image.
8. An object recognition system, the system comprising:
an obtaining module, configured to obtain a plurality of target images, shooting times of the plurality of target images having a corresponding relationship with irradiation times of a plurality of lights in an illumination sequence irradiated to a target object, the plurality of lights having a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a part of the at least one reference color; and
a verification module to determine authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
9. An object recognition apparatus, comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202110423614.0A 2021-04-20 2021-04-20 Target identification method and system Pending CN113111807A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110423614.0A CN113111807A (en) 2021-04-20 2021-04-20 Target identification method and system
PCT/CN2022/076352 WO2022222585A1 (en) 2021-04-20 2022-02-15 Target identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110423614.0A CN113111807A (en) 2021-04-20 2021-04-20 Target identification method and system

Publications (1)

Publication Number Publication Date
CN113111807A true CN113111807A (en) 2021-07-13

Family

ID=76718856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110423614.0A Pending CN113111807A (en) 2021-04-20 2021-04-20 Target identification method and system

Country Status (2)

Country Link
CN (1) CN113111807A (en)
WO (1) WO2022222585A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222585A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Target identification method and system
WO2022222575A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Method and system for target recognition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529512A (en) * 2016-12-15 2017-03-22 北京旷视科技有限公司 Living body face verification method and device
CN108256588A (en) * 2018-02-12 2018-07-06 兰州工业学院 A kind of several picture identification feature extracting method and system
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium
US20190147227A1 (en) * 2017-11-10 2019-05-16 Samsung Electronics Co., Ltd. Facial verification method and apparatus
WO2020078229A1 (en) * 2018-10-15 2020-04-23 腾讯科技(深圳)有限公司 Target object identification method and apparatus, storage medium and electronic apparatus
CN111160374A (en) * 2019-12-28 2020-05-15 深圳市越疆科技有限公司 Color identification method, system and device based on machine learning
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111597938A (en) * 2020-05-07 2020-08-28 马上消费金融股份有限公司 Living body detection and model training method and device
CN111881844A (en) * 2020-07-30 2020-11-03 北京嘀嘀无限科技发展有限公司 Method and system for judging image authenticity
CN112507922A (en) * 2020-12-16 2021-03-16 平安银行股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN112597810A (en) * 2020-06-01 2021-04-02 支付宝实验室(新加坡)有限公司 Identity document authentication method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859117A (en) * 2018-12-30 2019-06-07 南京航空航天大学 A kind of image color correction method directly correcting rgb value using neural network
CN111460964A (en) * 2020-03-27 2020-07-28 浙江广播电视集团 Moving target detection method under low-illumination condition of radio and television transmission machine room
CN113111807A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target identification method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHEN Danfeng; SHEN Yaxin; YE Guoming; WANG Qing: "Moving target tracking based on threshold segmentation tracking in different color spaces", Mechatronics, no. 09 *
FAN Caixia; ZHU Hong: "Research on target recognition methods for non-overlapping multiple cameras", Journal of Xi'an University of Technology, no. 02 *


Also Published As

Publication number Publication date
WO2022222585A1 (en) 2022-10-27

Similar Documents

Publication Publication Date Title
WO2022222575A1 (en) Method and system for target recognition
Pomari et al. Image splicing detection through illumination inconsistencies and deep learning
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
WO2022222569A1 (en) Target discrimation method and system
CN110163078A (en) The service system of biopsy method, device and application biopsy method
WO2022222585A1 (en) Target identification method and system
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
CN108664843B (en) Living object recognition method, living object recognition apparatus, and computer-readable storage medium
KR102145132B1 (en) Surrogate Interview Prevention Method Using Deep Learning
CN113111810B (en) Target identification method and system
JPWO2009107237A1 (en) Biometric authentication device
CN112801057A (en) Image processing method, image processing device, computer equipment and storage medium
US20210256244A1 (en) Method for authentication or identification of an individual
CN109871845A (en) Certificate image extracting method and terminal device
CN112232323A (en) Face verification method and device, computer equipment and storage medium
CN113312965A (en) Method and system for detecting unknown face spoofing attack living body
CN111767879A (en) Living body detection method
CN115147936A (en) Living body detection method, electronic device, storage medium, and program product
CN108171205A (en) For identifying the method and apparatus of face
WO2022222957A1 (en) Method and system for identifying target
Hadwiger et al. Towards learned color representations for image splicing detection
Hadiprakoso Face anti-spoofing method with blinking eye and hsv texture analysis
WO2022222904A1 (en) Image verification method and system, and storage medium
US11967184B2 (en) Counterfeit image detection
CN116152932A (en) Living body detection method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination