CN113111806A - Method and system for object recognition

Method and system for object recognition

Info

Publication number
CN113111806A
Authority
CN
China
Prior art keywords: color, image, verification, images, target
Legal status: Pending
Application number: CN202110423528.XA
Other languages: Chinese (zh)
Inventors: 张明文, 张天明, 赵宁宁
Current Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202110423528.XA
Publication of CN113111806A
Priority to PCT/CN2022/075531 (WO2022222575A1)

Classifications

    • G06V 40/168 Human faces: feature extraction; face representation
    • G06F 18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 18/24 Pattern recognition: classification techniques
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/08 Neural networks: learning methods
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06V 40/172 Human faces: classification, e.g. identification
    • G06V 40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06Q 20/40145 Transaction verification: biometric identity checks

Abstract

The embodiments of this specification disclose a target recognition method and system. The target recognition method comprises the following steps: determining an illumination sequence, the illumination sequence being used to determine a plurality of colors of a plurality of illuminations emitted by a terminal when illuminating a target object; acquiring a plurality of target images based on the terminal, wherein the capture times of the plurality of target images correspond to the illumination times of the plurality of illuminations; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.

Description

Method and system for object recognition
Technical Field
The present description relates to the field of image processing technology, and more particularly, to a method and system for object recognition.
Background
Target recognition is a technique for performing biometric recognition on a target object captured by an image acquisition device. For example, face recognition, which takes a human face as the target object, is widely used in application scenarios such as permission verification and identity verification. To ensure the security of target recognition, the authenticity of the target image needs to be determined.
It is therefore desirable to provide a target recognition method and system that can determine the authenticity of the target image.
Disclosure of Invention
One embodiment of the present specification provides a target recognition method, comprising: determining an illumination sequence used to determine a plurality of colors of a plurality of illuminations illuminating a target object; acquiring a plurality of target images, wherein the capture times of the plurality of target images correspond to the illumination times of the plurality of illuminations; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One embodiment of the present specification provides a target recognition system, comprising: a determination module configured to determine an illumination sequence used to determine a plurality of colors of a plurality of illuminations illuminating a target object; an acquisition module configured to acquire a plurality of target images, wherein the capture times of the plurality of target images correspond to the illumination times of the plurality of illuminations; and a verification module configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
One embodiment of the present specification provides a target recognition apparatus comprising a processor configured to execute the target recognition method disclosed in this specification.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions; when the computer instructions in the storage medium are read by a computer, the computer executes the target recognition method disclosed in this specification.
Drawings
The present specification is further illustrated by exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like reference numerals denote like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a target recognition system in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a method of object recognition shown in accordance with some embodiments of the present description;
FIG. 3 is a schematic illustration of an illumination sequence shown in accordance with some embodiments of the present description;
FIG. 4 is another schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description;
FIG. 5 is an exemplary flow diagram illustrating acquiring multiple target images according to some embodiments of the present description;
FIG. 6 is a schematic diagram of texture substitution shown in accordance with some embodiments of the present description;
FIG. 7 is an exemplary flow diagram illustrating determining authenticity of a plurality of target images according to some embodiments of the present description;
FIG. 8 is another exemplary flow chart for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description;
FIG. 9 is another exemplary flow diagram for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description;
FIG. 10 is a schematic structural diagram of a first verification model in accordance with certain embodiments of the present description;
FIG. 11 is another exemplary flow diagram for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description;
FIG. 12 is another exemplary flow chart for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description;
FIG. 13 is a schematic structural diagram of a second verification model in accordance with certain embodiments of the present description;
FIG. 14 is another exemplary flow chart for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description; and
FIG. 15 is a schematic diagram of a third verification model, shown in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some examples or embodiments of the present specification; for a person of ordinary skill in the art, the present specification can also be applied to other similar scenarios based on these drawings without inventive effort. Unless apparent from the context or otherwise indicated, like reference numerals in the figures denote the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" may include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by a system according to embodiments of the present specification. It should be understood that the operations are not necessarily performed exactly in the order shown. Instead, the steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
Target recognition is a technique of performing biometric recognition on a target object captured by an image acquisition device. In some embodiments, the target object may be a human face, a fingerprint, a palm print, a pupil, and the like. In some embodiments, target recognition may be applied to permission verification, for example, access authorization verification and account payment authorization verification. In some embodiments, target recognition may also be used for identity verification, for example, employee attendance verification and real-name registration verification. For example only, target recognition may verify the identity of the target by matching a target image captured in real time by the image acquisition device against pre-collected biometric features.
However, the image acquisition device may be attacked or hijacked, allowing an attacker to pass verification by uploading false target images. For example, after attacking or hijacking the image acquisition device, attacker A may directly upload a face image of user B. The target recognition system then performs face recognition by matching user B's face image against user B's pre-collected facial biometric features, and thus verifies the attacker as user B.
Therefore, to ensure the security of target recognition, the authenticity of the target images needs to be determined; that is, it must be confirmed that the target images were captured by the image acquisition device in real time during target recognition.
FIG. 1 is a schematic diagram of an application scenario of an object recognition system according to some embodiments of the present description. As shown in FIG. 1, the object recognition system 100 may include a processing device 110, a network 120, a terminal 130, and a storage device 140.
The processing device 110 may be used to process data and/or information from at least one component of the target recognition system 100 and/or an external data source (e.g., a cloud data center). For example, the processing device 110 may determine an illumination sequence, acquire multiple target images, determine the authenticity of multiple target images, and so on. For another example, the processing device 110 may pre-process (e.g., replace textures, etc.) multiple initial images obtained from the terminal 130, resulting in multiple target images. During processing, the processing device 110 may retrieve data (e.g., instructions) from other components of the object recognition system 100 (e.g., the storage device 140 and/or the terminal 130) directly or via the network 120 and/or send the processed data to the other components for storage or display.
In some embodiments, the processing device 110 may be a single server or a group of servers. The set of servers may be centralized or distributed (e.g., processing device 110 may be a distributed system). In some embodiments, the processing device 110 may be local or remote. In some embodiments, the processing device 110 may be implemented on a cloud platform, or provided in a virtual manner. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
The network 120 may connect the various components of the system and/or connect the system with external parts. The network 120 enables communication between the components of the object recognition system 100, and between the object recognition system 100 and external components, facilitating the exchange of data and/or information. In some embodiments, the network 120 may be any one or more of a wired network or a wireless network. For example, network 120 may include a cable network, a fiber optic network, a telecommunications network, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, Near Field Communication (NFC), an in-device bus, an in-device line, a cable connection, and the like, or any combination thereof. In some embodiments, the network connections between the components of the object recognition system 100 may use any one, or several, of the manners described above. In some embodiments, the network 120 may have any of various topologies, such as point-to-point, shared, or centralized, or a combination of topologies. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, …, through which one or more components of the object identification system 100 may connect to the network 120 to exchange data and/or information.
Terminal 130 refers to one or more terminal devices or software used by a user. In some embodiments, the terminal 130 may include an image capture device 131 (e.g., a camera, a video camera), and the image capture device 131 may capture a target object and acquire a plurality of target images. In some embodiments, when image capture device 131 captures a target object, terminal 130 (e.g., a screen and/or other light-emitting elements of terminal 130) may sequentially emit light of multiple colors in an illumination sequence to illuminate the target object. In some embodiments, the terminal 130 may communicate with the processing device 110 through the network 120 and transmit the photographed plurality of target images to the processing device 110. In some embodiments, the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices having input and/or output capabilities, the like, or any combination thereof. The above examples are intended only to illustrate the breadth of the type of terminal 130 and not to limit its scope.
The storage device 140 may be used to store data (e.g., lighting sequences, multiple initial images, or multiple target images, etc.) and/or instructions. Storage device 140 may include one or more storage components, each of which may be a separate device or part of another device. In some embodiments, storage device 140 may include Random Access Memory (RAM), Read Only Memory (ROM), mass storage, removable storage, volatile read and write memory, and the like, or any combination thereof. Illustratively, mass storage may include magnetic disks, optical disks, solid state disks, and the like. In some embodiments, the storage device 140 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof. In some embodiments, the storage device 140 may be integrated or included in one or more other components of the target recognition system 100 (e.g., the processing device 110, the terminal 130, or possibly other components).
In some embodiments, the object recognition system 100 may include a determination module, an acquisition module, and a verification module.
The determination module may be configured to determine an illumination sequence used to determine a plurality of colors of a plurality of illuminations illuminating the target object.
The acquisition module may be configured to acquire a plurality of target images whose capture times correspond to the illumination times of the plurality of illuminations.
In some embodiments, the acquisition module may be configured to acquire a plurality of initial images and preprocess the plurality of initial images to obtain the plurality of target images.
In some embodiments, the acquisition module may be configured to obtain a color verification model, which is a machine learning model with preset parameters; for example, the preset parameters of the color verification model are obtained through training.
The verification module may be configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
In some embodiments, the plurality of colors includes at least one reference color and at least one verification color. The relationship between the reference colors and the verification colors may take various forms. For example, each verification color may be determined based on at least a portion of the at least one reference color; as another example, one or more of the reference colors may be the same as one or more of the verification colors.
In some embodiments, the plurality of target images includes at least one verification image and at least one reference image, each verification image corresponding to one of the at least one verification color and each reference image corresponding to one of the at least one reference color.
In some embodiments, the verification module may be configured to determine a color of illumination when the at least one verification image is captured based on the at least one reference image, and determine authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image is captured.
In some embodiments, the verification module may be configured to determine a first image sequence based on the plurality of target images and a second image sequence based on the plurality of color template images, and determine authenticity of the plurality of target images based on the first image sequence and the second image sequence. Wherein the plurality of color template images are generated based on the illumination sequence.
In some embodiments, the verification module may be configured to determine a first color relationship between the at least one reference image and the at least one verification image and a second color relationship between the at least one reference color and the at least one verification color, and to determine the authenticity of the plurality of target images based on the first color relationship and the second color relationship.
In some embodiments, the verification module may be configured to determine the authenticity of the plurality of target images based on the illumination sequence and a color verification model. For example, the verification module processes the plurality of target images with the color verification model to obtain a processing result, and determines the authenticity of the target images by combining the processing result with the illumination sequence.
For a more detailed description of the determination module, the acquisition module, and the verification module, reference may be made to FIGS. 2-15; details are not repeated here.
It should be noted that the above description of the target recognition system and its modules is only for convenience of description and does not limit the present disclosure to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, modules may be combined arbitrarily or connected to other modules as sub-systems without departing from those principles. In some embodiments, the determination module, the acquisition module, and the verification module disclosed in FIG. 1 may be different modules in one system, or a single module may implement the functions of two or more of the modules described above. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
In some embodiments, a target recognition method performed by a processing device of the target recognition system 100 may include: determining an illumination sequence for determining a plurality of colors of a plurality of illuminations illuminating a target object; acquiring a plurality of target images, wherein the capture times of the plurality of target images correspond to the illumination times of the plurality of illuminations; and determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
In some embodiments, the processing device acquiring the plurality of target images may include: acquiring a plurality of initial images, wherein the plurality of initial images comprise a first initial image and a second initial image; replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image; and using the processed second initial image as one of the plurality of target images.
In some embodiments, the processing device replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image may comprise: transferring, based on a color transfer algorithm, the color of the illumination at the time the second initial image was captured onto the first initial image to obtain the processed second initial image.
In some embodiments, the plurality of colors includes at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color. In some embodiments, one or more of the at least one reference color is the same as one or more of the at least one verification color.
In some embodiments, the plurality of target images includes at least one verification image, each of the at least one verification image corresponding to one of the at least one verification color, and at least one reference image, each of the at least one reference image corresponding to one of the at least one reference color, the processing device determining authenticity of the plurality of target images based on the illumination sequence and the plurality of target images may include: extracting a reference color feature of the at least one reference image and a verification color feature of the at least one verification image; for each of the at least one verification image, determining the color of illumination when the verification image is shot based on the verification color features of the verification image and the reference color features of the at least one reference image; and determining authenticity of the plurality of target images based on the illumination sequence and a color of illumination at a time the at least one verification image was captured.
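To make this step concrete, the following NumPy sketch uses the reference images, whose illumination colors are known from the sequence, to estimate the scene's average color shift, and then classifies each verification image's illumination as the nearest color in a hypothetical color library after undoing that shift. Mean-RGB features and the nearest-color rule are assumptions for illustration; the specification does not fix a particular estimator:

```python
# A hedged sketch of this step: reference images, whose illumination
# colors are known from the sequence, calibrate the scene's average
# color shift; each verification image's illumination is then the
# nearest library color after undoing that shift. Mean-RGB features and
# the nearest-color rule are illustrative assumptions, and
# color_library (name -> RGB triple) is hypothetical.
import numpy as np

def mean_rgb(image):
    """Mean color of an (H, W, 3) image; a deliberately simple color feature."""
    return image.reshape(-1, 3).mean(axis=0)

def estimate_color_shift(ref_images, ref_colors):
    """Average difference between observed and emitted reference colors."""
    observed = np.array([mean_rgb(img) for img in ref_images])
    emitted = np.array(ref_colors, dtype=np.float32)
    return (observed - emitted).mean(axis=0)

def illumination_color(ver_image, shift, color_library):
    """Name of the nearest library color after undoing the scene's shift."""
    corrected = mean_rgb(ver_image) - shift
    return min(color_library.items(),
               key=lambda kv: np.linalg.norm(
                   corrected - np.asarray(kv[1], dtype=np.float32)))[0]
```

The predicted colors can then be compared against the verification colors at the corresponding positions in the illumination sequence.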
In some embodiments, the processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprises: determining a first image sequence based on the plurality of target images; determining a second image sequence based on a plurality of color template images, the color template images being generated based on the illumination sequence; processing the first image sequence with a first extraction layer to extract first feature information of the first image sequence; processing the second image sequence with a second extraction layer to extract second feature information of the second image sequence; and processing the first feature information and the second feature information with a discrimination layer to determine the authenticity of the plurality of target images, wherein the first extraction layer, the second extraction layer, and the discrimination layer are machine learning models with preset parameters, and the first extraction layer and the second extraction layer share parameters.
In some embodiments, the preset parameters of the first extraction layer, the second extraction layer and the discrimination layer are obtained by an end-to-end training mode.
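For illustration, the following is a minimal PyTorch sketch of such a shared-parameter ("Siamese") arrangement, in which one extraction layer processes both the target-image sequence and the color-template sequence and a discrimination layer fuses the two feature vectors. The backbone, layer sizes, frame pooling, and fusion strategy are all assumptions made for the sketch, not the architecture fixed by this specification:

```python
# A minimal PyTorch sketch of the shared-parameter extraction layers and
# discrimination layer described above. The CNN backbone, feature sizes,
# and fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class ExtractionLayer(nn.Module):
    """Extracts feature information from a sequence (stack) of images."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, seq):                 # seq: (batch, frames, 3, H, W)
        b, f, c, h, w = seq.shape
        x = self.conv(seq.view(b * f, c, h, w)).flatten(1)
        return self.fc(x).view(b, f, -1).mean(dim=1)   # pool over frames

class VerificationModel(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # One extraction-layer instance is applied to both sequences, so
        # the "first" and "second" extraction layers share parameters.
        self.extract = ExtractionLayer(feat_dim=feat_dim)
        self.discriminate = nn.Sequential(
            nn.Linear(feat_dim * 2, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # probability the images are real
        )

    def forward(self, target_seq, template_seq):
        f1 = self.extract(target_seq)        # first feature information
        f2 = self.extract(template_seq)      # second feature information
        return self.discriminate(torch.cat([f1, f2], dim=1))
```

Consistent with the end-to-end training mentioned above, the extraction and discrimination layers would be trained jointly, e.g., with a binary cross-entropy loss over real and forged sequences.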
In some embodiments, the plurality of colors includes at least one reference color and at least one verification color, and the plurality of target images includes at least one reference image and at least one verification image, each reference image corresponding to one of the at least one reference color and each verification image corresponding to one of the at least one verification color. The processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprises: extracting a reference color feature of each reference image and a verification color feature of each verification image; for each reference image, determining a first color relationship between the reference image and each verification image based on their color features; for each reference color, determining a second color relationship between the reference color and each verification color; and determining the authenticity of the plurality of target images based on the at least one first color relationship and the at least one second color relationship.
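As a concrete, hypothetical reading of this color-relationship check, the sketch below compares the pairwise similarities between reference-image and verification-image color features (the first color relationships) against the pairwise similarities between the corresponding reference and verification colors (the second color relationships), reusing mean_rgb from the earlier sketch. Cosine similarity and the tolerance threshold are illustrative assumptions:

```python
# A hypothetical instantiation of the color-relationship comparison.
# Reuses mean_rgb from the earlier sketch; cosine similarity and the
# tolerance are illustrative assumptions, not fixed by the specification.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def relationship_matrix(row_feats, col_feats):
    return np.array([[cosine(r, c) for c in col_feats] for r in row_feats])

def images_look_real(ref_images, ver_images, ref_colors, ver_colors, tol=0.15):
    first = relationship_matrix([mean_rgb(i) for i in ref_images],
                                [mean_rgb(i) for i in ver_images])
    second = relationship_matrix([np.asarray(c, dtype=np.float32) for c in ref_colors],
                                 [np.asarray(c, dtype=np.float32) for c in ver_colors])
    # Authentic captures should make the two relationship matrices agree.
    return bool(np.abs(first - second).max() < tol)
```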
In some embodiments, the processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprises: obtaining a color verification model, wherein the color verification model is a machine learning model with preset parameters; and processing the plurality of target images by using the color verification model based on the illumination sequence to determine the authenticity of the plurality of target images.
FIG. 2 is an exemplary flow diagram of a method of object recognition shown in accordance with some embodiments of the present description. As shown in fig. 2, the process 200 includes the following steps:
step 210, determine an illumination sequence. The illumination sequence is used to determine a plurality of colors of a plurality of illuminations illuminating a target object. In some embodiments, step 210 may be performed by the determination module.
The target object refers to an object on which target recognition needs to be performed. For example, the target object may be a specific body part of a user, such as a face, a fingerprint, a palm print, or a pupil. In some embodiments, the target object is the face of a user who needs to be verified and/or authorized. For example, in a ride-hailing scenario, the platform needs to verify whether the driver taking an order is a registered driver that the platform has vetted; the target object is then the driver's face. As another example, in a face-payment scenario, the payment system needs to verify the payment authority of the payer; the target object is then the payer's face.
For target identification of the target object, the terminal is instructed to emit the illumination sequence. The illumination sequence includes a plurality of illuminations for illuminating the target object. The colors of different lights in the light sequence can be the same or different. In some embodiments, the plurality of illuminations comprises at least two illuminations of different colors, i.e. the plurality of illuminations has a plurality of colors.
As used herein, "determining a lighting sequence" refers to determining information, e.g., color information, illumination time, etc., for each of a plurality of illuminations contained in the lighting sequence. The color information of the plurality of illuminations in the illumination sequence may be represented in the same or different ways. For example, the color information of the plurality of illuminations may be represented by a color category. For example, the colors of the plurality of lights in the light sequence may be represented as red, yellow, green, purple, cyan, blue, red. For another example, the color information of the plurality of illuminations may be represented by a color parameter. For example, the colors of the plurality of illuminations in the illumination sequence may be represented as RGB (255, 0, 0), RGB (255, 255, 0), RGB (0, 255, 0), RGB (255, 0, 255), RGB (0, 255, 255), RGB (0, 0, 255). In some embodiments, the illumination sequence may also be referred to as a color sequence, which contains color information of the plurality of illuminations.
The illumination times of the plurality of illuminations in the illumination sequence may include the start time, the end time, the duration, etc., or any combination thereof, at which each illumination is planned to illuminate the target object. For example, the start time at which red light illuminates the target object may be 14:00, and the start time at which green light illuminates the target object may be 14:02. As another example, the durations for which the red light and the green light illuminate the target object may both be 0.1 seconds. In some embodiments, the durations for which different illuminations illuminate the target object may be the same or different. The illumination time may also be expressed in other ways, which are not detailed here.
In some embodiments, the terminal may emit the plurality of illuminations sequentially in a particular order. In some embodiments, the terminal may emit illumination through a light-emitting element. The light-emitting element may be built into the terminal, for example, a screen or an LED lamp. The light-emitting element may also be external, such as an external LED lamp or light-emitting diode. In some embodiments, when the terminal is hijacked or attacked, the terminal may accept an instruction to emit illumination but not actually emit it. For more details on the illumination sequence, reference may be made to FIGS. 3 and 4 and their associated descriptions, which are not repeated here.
In some embodiments, the terminal or the processing device (e.g., the determination module) may generate the illumination sequence randomly or based on a preset rule. For example, the terminal or the processing device may randomly draw a plurality of colors from a color library to generate the illumination sequence. In some embodiments, the illumination sequence may be set by a user at a terminal, determined from default settings of the target recognition system 100, determined by the processing device through data analysis (e.g., with a deterministic model), or the like. In some embodiments, the terminal or the storage device may store the illumination sequence; accordingly, the acquisition module may obtain the illumination sequence from the terminal or the storage device through the network.
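By way of illustration only, the following sketch randomly draws colors from a small color library to build an illumination sequence of (RGB, start time, duration) entries; the library contents, the number of illuminations, and the 0.5-second duration are assumptions, not values fixed by this specification:

```python
# A minimal sketch of randomly drawing colors from a color library to
# build an illumination sequence. Library contents, illumination count,
# and duration are illustrative assumptions.
import random

COLOR_LIBRARY = {
    "red":    (255, 0, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 255, 0),
    "purple": (255, 0, 255),
    "cyan":   (0, 255, 255),
    "blue":   (0, 0, 255),
}

def make_illumination_sequence(n_lights=5, duration_s=0.5):
    """Return a list of (rgb, start_time_s, duration_s) entries."""
    names = random.choices(list(COLOR_LIBRARY), k=n_lights)  # repeats allowed
    return [(COLOR_LIBRARY[name], i * duration_s, duration_s)
            for i, name in enumerate(names)]

# Example: a five-illumination sequence starting at t = 0.
sequence = make_illumination_sequence()
```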
Step 220, a plurality of target images are acquired. In some embodiments, step 220 may be performed by the acquisition module.
The plurality of target images are images for target recognition. The formats of the plurality of target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak Flash PiX (FPX), Digital Imaging and Communications in Medicine (DICOM), and the like. The plurality of target images may be two-dimensional (2D) images or three-dimensional (3D) images.
In some embodiments, the acquisition module may acquire the plurality of target images based on the terminal. For example, the acquisition module may send an acquisition instruction to the terminal through the network and then receive the plurality of target images sent back by the terminal through the network. Alternatively, the terminal may send the plurality of target images to a storage device for storage, and the acquisition module may obtain them from the storage device. A target image may or may not contain the target object.
A target image may be captured by the image acquisition device of the terminal, or may be determined based on data (e.g., video or images) uploaded by a user. For example, during target object verification, the target recognition system 100 may issue an illumination sequence to the terminal. When the terminal is not hijacked or attacked, it can emit the plurality of illuminations sequentially according to the illumination sequence. While the terminal emits one of the plurality of illuminations, its image acquisition device may be instructed to capture one or more images during that illumination's time window. Alternatively, the image acquisition device may be instructed to record a video for the entire duration of the plurality of illuminations, and the terminal or another computing device (e.g., processing device 110) may extract from the video, according to each illumination's time window, one or more images captured within it. The one or more images acquired within the illumination time of each illumination may serve as the plurality of target images. In this case, the plurality of target images are real images of the target object captured while it was illuminated by the plurality of illuminations. It is understood that there is a correspondence between the illumination times of the plurality of illuminations and the capture times of the plurality of target images: if one image is acquired within the illumination time of a single illumination, the correspondence is one-to-one; if multiple images are acquired within the illumination time of a single illumination, the correspondence is one-to-many.
When the terminal is hijacked, the hijacker can upload images or video through the terminal device. The uploaded images or video may contain a specific body part of the target object or of another user, and/or other objects. The uploaded images or video may be historical material captured by this or another terminal, or synthesized material. The terminal or another computing device (e.g., processing device 110) may determine the plurality of target images based on the uploaded images or video. For example, the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded material according to the order and/or duration of each illumination in the illumination sequence. For example only, if the illumination sequence includes five illuminations arranged in order, the hijacker may upload five target images through the terminal device, and the terminal or another computing device assigns the five target images to the five illuminations according to their upload order. As another example, if each of the five illuminations in the sequence lasts 0.5 seconds, the hijacker may upload a 2.5-second video through the terminal. The terminal or another computing device may divide the uploaded video into five segments of 0-0.5 seconds, 0.5-1 second, 1-1.5 seconds, 1.5-2 seconds, and 2-2.5 seconds, and extract one target image from each segment; the five extracted target images then correspond to the five illuminations in order. In this case, the plurality of target images are false images uploaded by the hijacker, not real images of the target object captured under the plurality of illuminations. In some embodiments, if a target image was uploaded by the hijacker through the terminal, the upload time of the target image, or its capture time within the video, may be regarded as its capture time. It is understood that, even when the terminal is hijacked, there is still a correspondence between the illumination times of the plurality of illuminations and the capture times of the plurality of target images.
For each of the plurality of target images, the determination module may take, as the color corresponding to that target image, the color of the illumination in the illumination sequence whose illumination time corresponds to the image's capture time. Specifically, if the illumination time of an illumination corresponds to the capture times of one or more target images, the color of that illumination is used as the color corresponding to those images. It will be appreciated that, when the terminal is not hijacked or attacked, the colors corresponding to the plurality of target images should be the same as the plurality of colors of the illuminations in the illumination sequence. For example, if the colors of the illuminations in the sequence are "red, yellow, blue, green, purple, red", then, when the terminal is not hijacked or attacked, the colors corresponding to the target images acquired by the terminal should also be "red, yellow, blue, green, purple, red". When the terminal is hijacked or attacked, the colors corresponding to the target images may differ from the colors of the illuminations in the sequence.
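The correspondence just described can be made concrete with a small sketch that assigns to each target image the color of the illumination whose time window contains the image's capture time; several images may fall inside one window, giving the one-to-many case. It assumes the capture timestamps and the sequence from the previous sketch share one clock starting at t = 0:

```python
# A sketch of mapping capture times to illumination colors. Assumes the
# timestamps and the sequence (as built in the previous sketch) share
# one clock starting at t = 0.
def color_for_image(capture_time_s, sequence):
    """sequence: list of (rgb, start_time_s, duration_s) entries."""
    for rgb, start, duration in sequence:
        if start <= capture_time_s < start + duration:
            return rgb
    return None  # captured outside every illumination window

def colors_for_images(capture_times_s, sequence):
    # Several images may fall inside one window (the one-to-many case).
    return [color_for_image(t, sequence) for t in capture_times_s]
```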
In some embodiments, the obtaining module may obtain a plurality of initial images from the terminal and pre-process the plurality of initial images to obtain the plurality of target images. The plurality of initial images can be shot by the terminal or uploaded by a hijacker through the terminal.
It is understood that there is a correspondence between the capture times of the plurality of initial images and the illumination times of the plurality of illuminations. If the plurality of target images are obtained by preprocessing the plurality of initial images, then the correspondence between the capture times of the target images and the illumination times actually reflects the correspondence between the capture times of the corresponding initial images and the illumination times; likewise, the color of the illumination when a target image was captured actually reflects the color of the illumination when its corresponding initial image was captured.
In some embodiments, the preprocessing may include texture consistency processing. The texture of an image refers to the gray-scale distribution of the elements (e.g., pixels) in the image and their surrounding spatial neighborhoods. It is understood that, if the plurality of initial images are captured by the terminal, they may have different textures due to variations in the distance, angle, and background between the terminal and the target object. Texture consistency processing can make the textures of the initial images identical or substantially identical and reduce the interference of texture features, thereby improving the efficiency and accuracy of target recognition.
In some embodiments, the acquisition module may implement texture consistency processing by texture replacement, i.e., replacing the texture of every initial image with the texture of a designated image. In some embodiments, the designated image may be one of the plurality of initial images; that is, the acquisition module may replace the textures of the other initial images with the texture of one of them to achieve texture consistency. Alternatively, the designated image may be an image of the target object other than the plurality of initial images, for example, a previously captured image of the target object stored in a storage device. For a detailed description of texture replacement, reference may be made to FIG. 5 and its related description, which are not repeated here.
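As one hedged illustration of texture replacement via color transfer, the sketch below imposes the color statistics of one image, which carry its illumination color, onto the texture of a designated image, in the spirit of Reinhard-style statistics matching; operating in the L*a*b* space with OpenCV is an implementation assumption rather than the algorithm fixed by this specification:

```python
# A hedged sketch of texture replacement via color transfer: the color
# statistics of color_src (which carry its illumination color) are
# imposed on the texture of texture_src, so every processed image keeps
# one texture. Working in L*a*b* with OpenCV is an assumption.
import cv2
import numpy as np

def transfer_illumination(texture_src, color_src):
    """Return color_src's color statistics painted over texture_src's texture."""
    t = cv2.cvtColor(texture_src, cv2.COLOR_BGR2LAB).astype(np.float32)
    c = cv2.cvtColor(color_src, cv2.COLOR_BGR2LAB).astype(np.float32)
    for ch in range(3):
        t_mean, t_std = t[..., ch].mean(), t[..., ch].std() + 1e-6
        c_mean, c_std = c[..., ch].mean(), c[..., ch].std() + 1e-6
        # Match the texture image's channel statistics to the color image's.
        t[..., ch] = (t[..., ch] - t_mean) / t_std * c_std + c_mean
    return cv2.cvtColor(np.clip(t, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```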
In some embodiments, the acquisition module may implement texture consistency processing by background matting, shooting-angle correction, and the like. For example, taking the target object to be a target face, the portions of each initial image other than the face are matted out, and the angle of the face in the remaining portion is corrected to a preset angle (for example, facing the image acquisition device frontally). For example only, background matting may identify the face contour in each initial image using image recognition techniques and remove everything outside the contour; angle correction may be implemented by a correction algorithm (e.g., a face alignment algorithm) or a model. In some embodiments, the acquisition module may also implement texture consistency processing in other manners, which are not limited here.
In some embodiments, the pre-processing may also include image screening, image denoising, image enhancement, and the like.
Image screening may include filtering out images that do not include the target object or the specific body part of the user. The screening may operate on the initial images acquired by the terminal, or on images obtained by applying other preprocessing (for example, texture consistency processing) to the initial images. For example, the acquisition module may filter out images that do not contain the target object by matching features of each initial image against features of images containing the target object.
Image denoising may include removing interference information in an image. The interference information in the image not only degrades the quality of the image, but also affects the color features extracted based on the image. In some embodiments, the acquisition module may implement image denoising through a median filter, a machine learning model, or the like.
Image enhancement may supplement missing information in an image. Missing information can blur the image and also affect the color features extracted from it. For example, image enhancement may adjust the brightness, contrast, saturation, or hue of an image, increase its sharpness, or reduce noise. In some embodiments, the acquisition module may implement image enhancement through a smoothing filter, a median filter, or the like.
Similar to image screening, the object targeted by image denoising or image enhancement may be an initial image, or an image obtained by performing other preprocessing on the initial image.
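A small OpenCV sketch of the median-filter denoising and simple enhancement steps mentioned above follows; the kernel size and the contrast/brightness gains are illustrative assumptions:

```python
# Illustrative preprocessing: median-filter denoising followed by a mild
# contrast/brightness adjustment as enhancement. Kernel size and gains
# are assumptions, not values fixed by this specification.
import cv2

def preprocess(image):
    denoised = cv2.medianBlur(image, 3)  # suppress salt-and-pepper noise
    # Mild contrast (alpha) and brightness (beta) adjustment as enhancement.
    return cv2.convertScaleAbs(denoised, alpha=1.2, beta=5)
```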
In some embodiments, the pre-processing may also include other operations, not limiting herein. In some embodiments, the object recognition system 100 may further include a pre-processing module for pre-processing the initial image.
Step 230, determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images. In some embodiments, step 230 may be performed by the verification module.
The authenticity of the plurality of target images reflects whether they are images of the target object captured under the illumination of the plurality of colors. For example, when the terminal is not hijacked or attacked, its light-emitting element emits illumination of the plurality of colors while its image acquisition device records or photographs the target object to acquire the target images; in this case, the target images are authentic. As another example, when the terminal is hijacked or attacked, the target images are derived from images or video uploaded by the attacker; in this case, the target images are not authentic.
If the plurality of target images are obtained by preprocessing a plurality of initial images, the authenticity of the target images may also be referred to as the authenticity of the initial images, reflecting whether the initial images corresponding to the target images are images of the target object captured under the illumination of the plurality of colors. For example, when the terminal is not hijacked or attacked, the initial images and the target images are authentic; when the terminal is hijacked or attacked, they are not. For convenience of description, the authenticity of the plurality of target images and the authenticity of the plurality of initial images are hereinafter referred to collectively as the authenticity of the plurality of target images.
The authenticity of the target images may be used to determine whether the image acquisition device of the terminal has been hijacked by an attacker. For example, if at least one of the target images is not authentic, the image acquisition device is considered hijacked. As another example, if more than a preset number of the target images are not authentic, the image acquisition device is considered hijacked.
In some embodiments, the verification module may determine authenticity of the plurality of target images based on color characteristics and lighting sequences of the plurality of target images. For more details on determining the authenticity of the target image based on the color characteristics of the target image, reference may be made to fig. 7 and its associated description, which are not repeated herein.
The color features of an image are information related to the color of the image. The color of an image includes the color of the illumination under which the image was captured, the color of the subject in the image, the color of the background in the image, and the like. In some embodiments, the color features may include depth features and/or complex features extracted by a neural network.
The color characteristics may be represented in a variety of ways. In some embodiments, the color features may be based on a representation of color values of pixel points in the image in a color space. A color space is a mathematical model that describes color using a set of numerical values, each numerical value in the set of numerical values representing a color value of a color feature on each color channel of the color space. In some embodiments, the color space may be represented as a vector space, each dimension of which represents one color channel of the color space. Color features may be represented by vectors in the vector space. In some embodiments, the color space may include, but is not limited to, an RGB color space, an L α β color space, an LMS color space, an HSV color space, a YCrCb color space, an HSL color space, and the like. It is understood that different color spaces contain different color channels. For example, the RGB color space includes a red channel R, a green channel G, and a blue channel B, and the color feature can be represented by the color value of each pixel point in the image on the red channel R, the green channel G, and the blue channel B, respectively.
In some embodiments, the color features may be represented in other ways (e.g., color histograms, color moments, color sets, etc.). For example, histogram statistics is performed on color values of each pixel point in the image in the color space, and a histogram representing color features is generated. For another example, a specific operation (e.g., mean, square error, etc.) is performed on the color value of each pixel point in the image in the color space, and the result of the specific operation represents the color feature of the image.
In some embodiments, the verification module may extract the color features of the plurality of target images through a color feature extraction algorithm and/or a color verification model (or a portion thereof). Color feature extraction algorithms include color histograms, color moments, color sets, and the like. For example, the verification module may build a histogram over the color values of each pixel in each color channel of the color space to obtain the color histogram. As another example, the verification module may divide the image into a plurality of regions and determine the color set of the image from a set of binary indices over those regions, each established from the color values of the image's pixels in a color channel of the color space. See FIGS. 10, 13, and 15 and their associated descriptions for more detail on color feature extraction based on a color verification model.
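For concreteness, the sketch below computes two of the color-feature representations mentioned above, a per-channel color histogram and first- and second-order color moments, over an RGB image; the bin count and the choice of moments are illustrative assumptions:

```python
# A sketch of two color-feature representations: a per-channel color
# histogram and first/second-order color moments. Bin count and channel
# layout are illustrative assumptions.
import numpy as np

def color_histogram(image, bins=16):
    """image: (H, W, 3) uint8 array; returns concatenated channel histograms."""
    hists = [np.histogram(image[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    h = np.concatenate(hists).astype(np.float32)
    return h / h.sum()  # normalize so differently sized images are comparable

def color_moments(image):
    """Per-channel mean and standard deviation (first and second moments)."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```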
In some embodiments, the verification module may process the plurality of target images with a color verification model, based on the illumination sequence, to determine their authenticity; the color verification model is a machine learning model with preset parameters. For determining the authenticity of the target images based on the color verification model, refer to FIG. 8 and its related description, which are not repeated here.
In some embodiments of the present specification, the target recognition system 100 may issue an illumination sequence to the terminal and obtain from the terminal target images corresponding to the plurality of illuminations in the sequence. By identifying the color of the illumination at the time each target image was captured, the processing device can determine whether the target images (or the initial images corresponding to them) were captured while the target object was illuminated according to the illumination sequence, and thus whether the terminal has been hijacked or attacked. Understandably, without knowledge of the illumination sequence, it is difficult for an attacker to make the illumination colors under which the uploaded images (or the frames of an uploaded video) were captured match the colors of the plurality of illuminations in the sequence; even if the set of colors matches, the position of each color in the order is unlikely to match. The method disclosed in this specification therefore raises the difficulty of mounting an attack and ensures the security of target recognition.
FIG. 3 is a schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description.
As previously mentioned, the illumination sequence may include multiple illuminations. In some embodiments, the colors of the illuminations in the sequence may be all the same, all different, or partially the same. For example, the illuminations may all be red. As another example, at least two of the illuminations may have different colors, i.e., the illuminations have a plurality of colors. In some embodiments, the plurality of colors includes white. In some embodiments, the plurality of colors includes red, blue, and green.
The plurality of illuminations in the illumination sequence are arranged in a particular order. As shown in fig. 3, the illumination sequence a includes 4 illuminations sequentially arranged as red light, white light, blue light, and green light; the illumination sequence b includes 4 illuminations sequentially arranged as white light, blue light, red light, and green light; the illumination sequence c includes 4 illuminations sequentially arranged as red light, white light, blue light, and white light; and the illumination sequence d includes the same 4 illuminations as sequence c arranged in a different order. The colors of the plurality of illuminations in the illumination sequences a and b are the same, but their arrangement orders differ; similarly, the colors in the illumination sequences c and d are the same, but their arrangement orders differ. Furthermore, the colors of the 4 illuminations within each of the sequences a and b are completely different from one another, while within each of the sequences c and d two of the illuminations share the same color.
Fig. 4 is another schematic diagram of an illumination sequence shown in accordance with some embodiments of the present description.
In some embodiments, the plurality of colors of illumination in the illumination sequence may include at least one reference color and at least one verification color. The verification color is a color, among the plurality of colors, directly used for verifying the authenticity of the image. The reference color is a color, among the plurality of colors, that assists the verification color in determining the authenticity of the target image. For example, the target image corresponding to the reference color (also referred to as a reference image) may be used to determine the color of illumination when the target image corresponding to the verification color (also referred to as a verification image) was captured. Further, the verification module may determine the authenticity of the plurality of target images based on the color of illumination when the verification image was captured. For another example, the reference image and the verification image may together be used to determine a first color relationship, and the verification module may then determine the authenticity of the plurality of target images based on the first color relationship. As shown in fig. 4, the illumination sequence e includes illuminations of a plurality of reference colors "red light, green light, blue light" and illuminations of a plurality of verification colors "yellow light, purple light … cyan light"; the illumination sequence f includes illuminations of a plurality of reference colors "red light, white light … blue light" and illuminations of a plurality of verification colors "red light … green light".
In some embodiments, there are multiple verification colors. The multiple verification colors may be completely identical; for example, the verification colors may all be red. Alternatively, the multiple verification colors may be completely different; for example, the verification colors may be red, yellow, blue, green, and purple. Still alternatively, the multiple verification colors may be partially identical; for example, the verification colors may be yellow, green, purple, yellow, and red. Similarly to the verification colors, in some embodiments there are multiple reference colors, which may be completely identical, completely different, or partially identical. In some embodiments, the verification color may comprise only one color, such as green.
In some embodiments, the at least one reference color and the at least one verification color may be determined according to a default setting of the target recognition system 100, set manually by a user, or determined by a determination module. For example, the determination module may randomly choose the reference colors and the verification colors. For example only, the determination module may randomly select a portion of the plurality of colors as the at least one reference color and take the remaining colors as the at least one verification color. In some embodiments, the determination module may determine the at least one reference color and the at least one verification color based on a preset rule. The preset rule may concern a relationship between the verification colors, a relationship between the reference colors, and/or a relationship between a verification color and a reference color, etc. For example, the preset rule may be that each verification color can be generated by fusing the reference colors.
In some embodiments, each of the at least one verification color may be determined based on at least a portion of the at least one reference color. For example, the verification color may be blended from at least a portion of the at least one reference color. In some embodiments, the at least one reference color may include the primary colors of a color space. For example, the at least one reference color may include the three primary colors of the RGB space, i.e., red, green, and blue. As shown in fig. 4, the multiple verification colors "yellow, purple … cyan" in the illumination sequence e may be determined based on the 3 reference colors "red, green, blue". For example, "yellow" may be obtained by blending the reference colors "red, green, and blue" according to a first ratio, and "purple" may be obtained by blending them according to a second ratio.
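For illustration, a minimal sketch of such ratio-based fusion in RGB space follows; the reference colors and mixing ratios are assumed values used only as an example:

```python
import numpy as np

# Assumed reference colors: the three RGB primaries (rows: red, green, blue).
REFERENCE_RGB = np.array([[255, 0, 0],
                          [0, 255, 0],
                          [0, 0, 255]], dtype=float)

def blend(ratios, refs=REFERENCE_RGB):
    """Fuse a verification color from the reference colors with the given
    mixing ratios (a 'first ratio', 'second ratio', etc.)."""
    return np.clip(np.array(ratios) @ refs, 0, 255)

yellow = blend([0.5, 0.5, 0.0])   # red + green -> yellow
purple = blend([0.5, 0.0, 0.5])   # red + blue  -> purple
```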
In some embodiments, one or more of the at least one reference color are the same as one or more of the at least one verification color. The at least one reference color and the at least one verification color may be completely or partially the same. For example, a certain one of the at least one verification color may be the same as a particular one of the at least one reference color. It will be appreciated that since the verification color may be determined based on the at least one reference color, that particular reference color may itself serve as a verification color. As shown in fig. 4, in the illumination sequence f, the multiple reference colors "red, white … blue" and the multiple verification colors "red … green" both contain red.
In some embodiments, there may be other relationships between the at least one reference color and the at least one verification color, which are not limited herein. For example, the color families of the at least one reference color and the at least one verification color may be the same or different. Illustratively, at least one reference color may belong to a warm color family (e.g., red, yellow, etc.) while at least one verification color belongs to a cool color family (e.g., gray, etc.).
In some embodiments, in the illumination sequence, the illuminations corresponding to the at least one reference color may be arranged before or after the illuminations corresponding to the at least one verification color. As shown in fig. 4, in the illumination sequence e, the illuminations of the multiple reference colors "red light, green light, blue light" are arranged before the illuminations of the multiple verification colors "yellow light, purple light … cyan light". In the illumination sequence f, the illuminations of the multiple reference colors "red light, white light … blue light" are arranged after the multiple verification colors "red light … green light". In some embodiments, the illuminations corresponding to the at least one reference color may also be interleaved with the illuminations corresponding to the at least one verification color, which is not limited herein.
FIG. 5 is an exemplary flow diagram illustrating acquiring multiple target images according to some embodiments of the present description. In some embodiments, flow 500 may be performed by an acquisition module. As shown in fig. 5, the process 500 includes the following steps:
Step 510, acquiring a plurality of initial images.
As mentioned previously, the plurality of initial images are unprocessed images acquired from the terminal. In some embodiments, the plurality of initial images may be images shot by an image acquisition device of the terminal, or images determined by the hijacked terminal based on images or videos uploaded by the hijacker. In some embodiments, the plurality of initial images may include a first initial image and a second initial image.
As mentioned above, in step 220 the obtaining module may perform texture replacement on the plurality of initial images to generate the plurality of target images. For example, the acquisition module may replace the textures of the plurality of initial images with the texture of a designated image. The first initial image refers to the designated image among the plurality of initial images, i.e., the initial image that provides the texture used for replacement. In some embodiments, the first initial image needs to contain the target object. For example, the acquisition module may obtain a first initial image containing the target object from the plurality of initial images through image screening. Alternatively, the first initial image may be any one of the plurality of initial images. For another example, the first initial image may be the earliest captured one of the plurality of initial images. For another example, the first initial image may be the one of the plurality of initial images with the simplest background. For example only, the simplicity of the background may be judged by the number of colors in the background: the fewer the colors, the simpler the background. The simplicity of the background may also be judged by the complexity of the lines in the background: the fewer the lines, the simpler the background. In some embodiments, white light is present in the illumination sequence, and the first initial image may be the initial image whose acquisition time corresponds to the white-light irradiation time.
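As one hedged reading of the "fewest background colors" criterion, the sketch below quantizes each image's colors and picks the initial image with the fewest distinct quantized colors; the quantization step is an assumption for illustration:

```python
import numpy as np

def color_variety(img, bins=8):
    """Count distinct quantized colors in an image ((H, W, 3) uint8);
    fewer distinct colors is read here as a 'simpler' background."""
    quantized = (img // (256 // bins)).reshape(-1, 3)
    return len(np.unique(quantized, axis=0))

def pick_first_initial_image(initial_images):
    """Choose the candidate whose background appears simplest."""
    return min(initial_images, key=color_variety)
```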
The second initial images are the initial images, among the plurality of initial images, whose textures are to be replaced. In some embodiments, a second initial image may be any initial image other than the first initial image, and there may be one or more second initial images.
In some embodiments, the terminal may acquire a corresponding initial image according to the illumination time of each illumination in the illumination sequence, and the acquisition module may acquire the plurality of initial images from the terminal through a network. Alternatively, a hijacker may upload images or videos through the terminal device, and the acquisition module may determine the plurality of initial images based on the uploaded images or videos. For a detailed description of the terminal acquiring the initial images, reference may be made to step 220, which is not described herein again.
Step 520, replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image.
In some embodiments, the acquisition module may implement texture replacement based on a color migration algorithm. Specifically, the obtaining module may migrate the color of illumination when the second initial image is captured onto the first initial image based on a color migration algorithm to generate a processed second initial image. Color migration algorithms are methods of migrating colors of one image onto another image to create a new image. Color migration algorithms include, but are not limited to, Reinhard algorithm, Welsh algorithm, fuzzy clustering algorithm, adaptive migration algorithm, and the like. In some embodiments, the color migration algorithm may extract color features of the second initial image and then migrate the color features of the second initial image onto the first initial image to generate a processed second initial image. For more reference to color characteristics, see step 230 and its associated description. For a detailed description of the color migration algorithm, reference may be made to fig. 6 and its related description, which are not repeated herein.
It will be appreciated that replacing the texture of the second initial image with the texture of the first initial image makes the textures of all target images consistent while leaving the color of each target image unchanged. Thus, in some embodiments, the acquisition module may transfer the color features of the illumination when the second initial image was taken onto the first initial image based on a color transfer algorithm: the color features of the newly generated image remain the same as those of the second initial image, but its texture becomes the texture of the first initial image. For example, when the number of second initial images is N (N being an integer greater than or equal to 1), the color feature of the illumination when each of the N second initial images was shot is transferred onto the first initial image, yielding N newly generated images; the color features of the N newly generated images respectively represent the colors of illumination when the N second initial images were shot, but the textures of the N newly generated images are all the texture of the first initial image.
In some embodiments, the acquisition module may also implement texture replacement using a texture feature migration algorithm. Specifically, the texture feature migration algorithm may extract texture features of the first initial image and texture features of the second initial image, and replace the texture features of the second initial image with the texture features of the first initial image to generate a processed second initial image.
In some embodiments, the method of extracting texture features may include, but is not limited to, geometric methods, gray level co-occurrence matrix methods, model methods, signal processing methods, machine learning models, and the like. The machine learning model may include, but is not limited to, a deep neural network model, a recurrent neural network model, a customized model structure, and the like, which is not limited herein.
Step 530, using the processed second initial image as one of the plurality of target images.
It will be appreciated that the processed second initial image has the same illumination color as the second initial image but texture features derived from the first initial image. If the illumination colors when the first initial image and the second initial image were captured differ, the first initial image and the processed second initial image will be two images with the same content but different colors. In some embodiments, the plurality of initial images includes one or more second initial images. For each second initial image, the obtaining module may replace its texture with the texture of the first initial image to generate a corresponding processed second initial image. Optionally, the obtaining module may use the first initial image as one of the plurality of target images. In this case, the plurality of target images includes the first initial image and the one or more processed second initial images.
As described above, since the distance and angle between the image capturing device and the target object may vary, the texture of the plurality of initial images may be different. Therefore, some embodiments of the present disclosure make textures in multiple target images the same through a texture unification process, thereby reducing the influence of the textures in the target images on illumination color recognition and better determining the authenticity of the multiple target images.
FIG. 6 is a schematic diagram of texture substitution shown in accordance with some embodiments of the present description.
As shown in fig. 6, the 1st of the m initial images is acquired under white-light irradiation, and the other initial images are acquired under red, orange, cyan …, and blue light. The acquisition module may select the initial image 610-1 as the first initial image and the initial images 610-2, 610-3 …, 610-m as the second initial images. The second initial images differ from the first initial image in both color and texture. For example, the target object in the second initial image 610-m is located at a different position than in the first initial image 610-1. As another example, the photographic background of the target object in the second initial images 610-2, 610-3 …, 610-m differs from that in the first initial image 610-1. The texture differences among the initial images 610-1, 610-2, 610-3 …, 610-m may result in poor accuracy of image authenticity determination, increased data-processing overhead, and the like.
To solve the above problem, the second initial image may be preprocessed using a color migration algorithm. As shown in FIG. 6, the acquisition module extracts the color features (i.e., color features corresponding to red, orange, cyan, … blue, respectively) of m-1 second initial images 610-2, 610-3 … 610-m, respectively. The acquisition module respectively migrates the color characteristics of the m-1 second initial images onto the first initial image 610-1 to generate m-1 processed second initial images 620-2, 620-3 …, 620-m. It can be understood that the processed second initial image, in which the texture feature of the first initial image and the color feature of the second initial image are fused, is equivalent to an image obtained by replacing the texture of the second initial image with the texture of the first initial image.
In some embodiments, the first initial image and the second initial image are RGB images. To avoid the influence of the correlation between the color channels in the RGB color space, the obtaining module may first convert the first initial image and the second initial image from the RGB color space to the L α β color space. For example, the acquisition module may convert the target image (e.g., the first initial image or the second initial image) from an RGB color space to an L α β color space through a neural network. For another example, the obtaining module may convert the target image from the RGB color space to the LMS color space and then from the LMS color space to the L α β color space based on a plurality of transition matrices.
Further, the obtaining module may extract the color features of the converted second initial image and the converted first initial image in the Lαβ color space. In some embodiments, the obtaining module may calculate the average value μ_2j and the standard deviation σ_2j of all pixel points of the converted second initial image on each Lαβ channel, where j denotes the serial number of the color channel in the Lαβ color space, 0 ≤ j ≤ 2, and j = 0, 1, 2 corresponds to the luminance channel L, the yellow-blue channel α, and the red-green channel β, respectively. The obtaining module may likewise calculate the average value μ_1j and the standard deviation σ_1j of all pixel points of the converted first initial image on each Lαβ channel.
Further, the acquisition module may migrate the color features of the converted second initial image onto the converted first initial image. In some embodiments, the acquisition module may determine, for each Lαβ channel, a scaling factor λ_j = σ_2j / σ_1j based on the standard deviation σ_1j of the converted first initial image and the standard deviation σ_2j of the converted second initial image in that channel. For each pixel point of the converted first initial image, the obtaining module may subtract the average value μ_1j from the value v_j of the pixel point in each Lαβ channel to obtain an updated value, multiply the updated value by the scaling factor λ_j of that channel, and add the average value μ_2j of the converted second initial image in the corresponding channel, i.e., v′_j = λ_j (v_j − μ_1j) + μ_2j, to generate the processed second initial image.
In some embodiments, the acquisition module may further convert the processed second initial image from the L α β color space to the RGB color space.
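For illustration only, the following is a minimal sketch of this Reinhard-style color migration in Python; the RGB↔LMS↔Lαβ conversion matrices are the values commonly used in the color-transfer literature and are an assumption, since this specification does not fix them:

```python
import numpy as np

# RGB -> LMS matrix (assumed: values from the classic Reinhard color-transfer work).
_RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                     [0.1967, 0.7244, 0.0782],
                     [0.0241, 0.1288, 0.8444]])
_LMS2RGB = np.linalg.inv(_RGB2LMS)
# log-LMS -> L-alpha-beta decorrelating transform.
_LOGLMS2LAB = (np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)])
               @ np.array([[1.0, 1.0, 1.0], [1.0, 1.0, -2.0], [1.0, -1.0, 0.0]]))
_LAB2LOGLMS = np.linalg.inv(_LOGLMS2LAB)

def rgb_to_lab(img):
    """RGB image ((H, W, 3), floats in (0, 1]) -> L-alpha-beta."""
    lms = np.clip(img @ _RGB2LMS.T, 1e-6, None)   # avoid log(0)
    return np.log10(lms) @ _LOGLMS2LAB.T

def lab_to_rgb(lab):
    """L-alpha-beta -> RGB, clipped back into [0, 1]."""
    lms = 10.0 ** (lab @ _LAB2LOGLMS.T)
    return np.clip(lms @ _LMS2RGB.T, 0.0, 1.0)

def color_transfer(first_img, second_img):
    """Migrate the illumination color of `second_img` (color provider) onto
    the texture of `first_img` by per-channel mean/std matching in Lαβ."""
    src, ref = rgb_to_lab(first_img), rgb_to_lab(second_img)
    mu1, sigma1 = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    mu2, sigma2 = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    scale = sigma2 / np.maximum(sigma1, 1e-6)     # lambda_j = sigma_2j / sigma_1j
    return lab_to_rgb((src - mu1) * scale + mu2)
```

Each processed second initial image can then be produced as `color_transfer(first_initial_image, second_initial_image)`, keeping the first image's texture while matching the second image's per-channel Lαβ statistics.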
Some embodiments of the present description transfer the color features of the second initial image to the first initial image based on a color transfer algorithm, which not only avoids extracting complex texture features, but also enables the processed second initial image to contain more detailed and accurate color feature information, thereby improving the efficiency and accuracy of determining the authenticity of the target image.
FIG. 7 is a flow diagram illustrating determining authenticity of a target image based on color characteristics according to some embodiments of the present description. In some embodiments, flow 700 may be performed by a verification module. As shown in fig. 7, the process 700 may include the following steps:
Step 710, extracting color features of the multiple target images.
See step 230 and its associated description for more details regarding the color characteristics.
Step 720, determining the authenticity of the multiple target images based on the color features of the multiple target images and the illumination sequence.
In some embodiments, for each of the plurality of target images, the verification module may determine a color of illumination when the target image is captured based on the color feature of the target image, and then determine a corresponding color of the target image based on the illumination sequence. Further, the verification module may determine authenticity of the target image.
For example, when the verification color can be blended based on at least one reference color, the verification module can construct a new color space (i.e., the reference color space in fig. 9) based on the reference color feature of at least one reference image. Further, the verification module may determine a color of illumination when the verification image was captured based on the new color space and the verification color feature of the verification image. Further, the verification module may determine the authenticity of the verification image in conjunction with the corresponding color of the verification image. For determining the authenticity of the target image based on the reference color space, reference may be made to fig. 9 and 11 and the related description thereof, which are not repeated herein.
In some embodiments, the verification module may determine a color relationship between the plurality of target images based on color characteristics of the plurality of target images, and determine authenticity of the plurality of target images based on the color relationship between the plurality of target images and a color relationship between a plurality of colors of a plurality of illuminations in the sequence of illuminations. For determining the authenticity of the target image based on the color relationship, refer to fig. 12 and the related description thereof, which are not repeated herein.
In some embodiments, for each of the plurality of target images, the verification module may determine a degree of match between the color features of the target image and the color features of its corresponding illumination. Further, the verification module may determine the authenticity of the target image based on the degree of match. For example, if the degree of match between the color features of the target image and the color features of its corresponding illumination is greater than a preset threshold, the target image is authentic. The degree of match may be determined based on the similarity between the color features of the target image and the color features of the illumination, and the similarity may be measured by Euclidean distance, Manhattan distance, etc. In some embodiments, the verification module may determine a degree of match between first feature information and second feature information, where the first feature information is the color features of a first image sequence constructed from the plurality of target images (see fig. 14) and the second feature information is the color features of a second image sequence constructed from a plurality of color template images (see fig. 14). Further, the verification module may determine the authenticity of the plurality of target images based on this degree of match. For more details regarding determining the authenticity of the plurality of target images based on sequences, reference may be made to fig. 14 and its associated description.
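By way of example only, the degree of match can be realized as an inverse-distance similarity compared against the preset threshold; the feature layout and the threshold value below are assumptions:

```python
import numpy as np

def match_degree(image_feat, illumination_feat):
    """Degree of match as an inverse Euclidean-distance similarity in (0, 1]."""
    diff = np.asarray(image_feat) - np.asarray(illumination_feat)
    return 1.0 / (1.0 + np.linalg.norm(diff))

def target_image_is_authentic(image_feat, illumination_feat, preset_threshold=0.8):
    """The target image is treated as authentic when the degree of match
    exceeds the preset threshold (0.8 is an illustrative value)."""
    return match_degree(image_feat, illumination_feat) > preset_threshold
```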
In some embodiments, the preset threshold used for the image authenticity judgment in some embodiments of the present specification may be related to the shooting stability. The shooting stability refers to how stable the image acquisition device of the terminal is while acquiring the target image. In some embodiments, the preset threshold is positively correlated with the shooting stability. It can be understood that the higher the shooting stability, the higher the quality of the obtained target image and the more truly the color features extracted from the plurality of target images reflect the color of illumination when the target images were shot; accordingly, the larger the preset threshold may be. In some embodiments, the shooting stability may be measured based on motion parameters of the terminal detected by a motion sensor of the terminal (e.g., a vehicle-mounted terminal or a user terminal), such as the detected motion speed, vibration frequency, etc. For example, the larger the motion parameter or its rate of change, the lower the shooting stability. The motion sensor may be a sensor that detects the running condition of a vehicle, and the vehicle may be the vehicle used by the target user, i.e., the user to whom the target object belongs. For example, if the target user is a ride-hailing driver, the motion sensor may be a motion sensor of the driver's terminal or of a vehicle-mounted terminal.
In some embodiments, the preset threshold may also be related to a shooting distance and a shooting angle. The shooting distance is a distance between the image capturing apparatus and the target object when the image capturing apparatus captures the target image. The shooting angle is the angle between the front of the target object and the terminal screen when the image acquisition equipment acquires the target image. In some embodiments, both the shooting distance and the shooting angle are inversely related to the preset threshold. It can be understood that the shorter the shooting distance is, the higher the quality of the acquired target image is, and the more the color features extracted based on the plurality of target images can truly reflect the color of illumination when the target image is shot, the larger the preset threshold is. The smaller the shooting angle is, the higher the quality of the acquired target image is, and similarly, the larger the preset threshold is. In some embodiments, the shooting distance and shooting angle may be determined based on the target image by image recognition techniques.
In some embodiments, the verification module may perform a specific operation (e.g., averaging, standard deviation, etc.) on the shooting stability, the shooting distance, and the shooting angle of each target image, and determine the preset threshold based on the shooting stability, the shooting distance, and the shooting angle after the specific operation. For example, the verification module determines corresponding sub-thresholds based on the shooting stability, the shooting distance, and the shooting angle after the specific operation, and then determines the preset threshold based on the sub-threshold corresponding to the shooting stability, the sub-threshold corresponding to the shooting distance, and the sub-threshold corresponding to the shooting angle. For example, the three sub-thresholds may be averaged, weighted averaged, or the like.
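As a sketch of this combination step, the snippet below maps each factor to a sub-threshold and averages them with weights; every mapping and weight here is an illustrative assumption:

```python
def preset_threshold(stability, distance_m, angle_deg, weights=(0.4, 0.3, 0.3)):
    """Combine sub-thresholds for shooting stability (positively correlated)
    and shooting distance/angle (both inversely correlated) by a weighted
    average; the mappings below are placeholders for illustration."""
    t_stability = min(1.0, 0.5 + 0.5 * stability)   # stability assumed in [0, 1]
    t_distance = max(0.0, 1.0 - 0.1 * distance_m)   # farther -> smaller sub-threshold
    t_angle = max(0.0, 1.0 - angle_deg / 90.0)      # larger angle -> smaller sub-threshold
    subs = (t_stability, t_distance, t_angle)
    return sum(w * t for w, t in zip(weights, subs))
```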
FIG. 8 is a flow diagram illustrating determining authenticity of a target image based on a color verification model according to some embodiments of the present description. In some embodiments, flow 800 may be performed by a verification module. As shown in fig. 8, the process 800 may include the following steps:
Step 810, obtaining a color verification model.
The color verification model is a model for verifying whether an image is authentic. The color verification model is a machine learning model with preset parameters. The preset parameters refer to the model parameters learned during the training of the machine learning model. Taking a neural network as an example, the model parameters include weights, biases, etc. The preset parameters of the color verification model are determined during the training process. For example, the model acquisition module may train an initial color verification model based on a plurality of labeled training samples to obtain the color verification model.
In some embodiments, the color verification model may be stored in a storage device, and the verification module may retrieve the color verification model from the storage device over a network. In some embodiments, the color verification model may be obtained through a training process. The training process for the color verification model can be seen in fig. 10, 13 and 15 and their associated descriptions.
Step 820, processing the multiple target images with the color verification model based on the illumination sequence, and determining the authenticity of the multiple target images.
In some embodiments, the color verification model may include a first verification model. The first verification model may include a first color feature extraction layer and a color classification layer. The first color feature extraction layer extracts color features of the target image. The color classification layer determines the corresponding color of the target image based on the color characteristics of the target image. See fig. 10 and its associated description for determining the authenticity of a target image based on a first verification model.
In some embodiments, the color verification model may include a second verification model. The second verification model may include a second color feature extraction layer and a color relationship determination layer. The second color feature extraction layer extracts the color features of the target images. The color relationship determination layer determines the relationship (e.g., whether they are the same) between the colors corresponding to different target images based on the color features of the target images. For a detailed description of determining the authenticity of a target image based on the second verification model, reference may be made to fig. 13 and its associated description.
In some embodiments, the color verification model may include a third verification model. The third verification model may include a first extraction layer, a second extraction layer, and a discrimination layer. The first extraction layer extracts color features of a sequence constructed by a plurality of target images. The second extraction layer extracts color features of a sequence of multiple color template image constructions. The discrimination layer determines a relationship of the two sequences based on color characteristics of the two sequences. See fig. 14 and its associated description for determining the authenticity of a plurality of target images based on a third verification model.
FIG. 9 is an exemplary flow diagram illustrating determining authenticity of a plurality of target images according to some embodiments of the present description. In some embodiments, flowchart 900 may be performed by a verification module. As shown in fig. 9, the process 900 may include the following steps:
In some embodiments, the plurality of colors of illumination in the illumination sequence includes at least one reference color and at least one verification color. Each of the at least one verification color may be determined based on at least a portion of the at least one reference color; for example, each verification color may be generated by fusing one or more reference colors. The plurality of target images includes at least one reference image and at least one verification image. Each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color. As noted in step 220, a target image corresponding to a specific color means that the target image should exhibit that specific color if the terminal is not hijacked (i.e., if the target image is real).
Step 910, extracting a reference color feature of at least one reference image and a verification color feature of at least one verification image.
The reference color feature refers to the color feature of a reference image. The verification color feature refers to the color feature of a verification image. For color features and their extraction, see the description of step 230.
In some embodiments, the verification module may extract color features of the image based on a first color feature extraction layer included in the first verification model. For details about extracting color features based on the first color feature extraction layer, reference may be made to fig. 10 and its related description, which are not repeated herein.
Step 920, for each of the at least one verification image, determining a color of illumination when the verification image is captured based on the verification color feature of the verification image and the reference color feature of the at least one reference image.
In some embodiments, the reference color features of at least one reference image may be used to construct a reference color space. The reference color space has the at least one reference color as its color channel. Specifically, the reference color feature corresponding to each reference image may be used as a reference value of the corresponding color channel in the reference color space.
In some embodiments, the color space (also referred to as the original color space) corresponding to the plurality of target images may be the same as or different from the reference color space. For example, the plurality of target images may correspond to an RGB color space, and the at least one reference color is red, blue, and green, so that the original color space corresponding to the plurality of target images and the reference color space constructed based on the reference color belong to the same color space. In this context, two color spaces may be considered to be the same color space if their primary colors or primaries are the same.
As described above, the verification color may be blended based on one or more reference colors. Accordingly, the verification module may determine a color corresponding to the verification color feature based on the reference color feature and/or the reference color space constructed by the reference color feature. In some embodiments, the verification module may map verification color features of the verification image based on a reference color space, determining a color of illumination when the verification image was captured. For example, the verification module may determine a parameter of the verification color feature on each color channel based on a relationship between the verification color feature and a reference value of each color channel in the reference color space, and then determine a color corresponding to the verification color feature based on the parameter, that is, a color illuminated when the verification image is captured.
For example, the verification module may extract the reference color features F_a, F_b, and F_c from the reference images a, b, and c, respectively, as the reference values of color channel I, color channel II, and color channel III. Color channel I, color channel II, and color channel III are the three color channels of the reference color space. The verification module may extract the verification color feature F_d from the verification image d and, based on the relationship between F_d and the reference values F_a, F_b, and F_c (e.g., F_d = δ1·F_a + δ2·F_b + δ3·F_c), determine the parameters δ1, δ2, and δ3 of the verification color feature in color channel I, color channel II, and color channel III, respectively. The verification module may then determine, based on the parameters δ1, δ2, and δ3, the color corresponding to the verification color feature, i.e., the color of illumination when the verification image was captured. In some embodiments, the correspondence between the parameters and the color categories may be preset or may be learned through a model.
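By way of illustration, one hedged realization of this step solves for (δ1, δ2, δ3) by least squares and maps the result to the nearest preset color; the feature layout and the preset ratio table are assumptions:

```python
import numpy as np

def illumination_parameters(ref_feats, verify_feat):
    """Fit F_d ≈ δ1·F_a + δ2·F_b + δ3·F_c by least squares.
    ref_feats: (3, d) stack of the reference color features F_a, F_b, F_c;
    verify_feat: (d,) verification color feature F_d."""
    deltas, *_ = np.linalg.lstsq(np.asarray(ref_feats).T,
                                 np.asarray(verify_feat), rcond=None)
    return deltas

# Hypothetical preset correspondence between parameters and color categories.
PRESET_RATIOS = {"yellow": (0.5, 0.5, 0.0), "purple": (0.5, 0.0, 0.5)}

def classify_illumination(deltas):
    """Pick the preset color whose ratios lie nearest the fitted parameters."""
    return min(PRESET_RATIOS,
               key=lambda c: np.linalg.norm(np.array(PRESET_RATIOS[c]) - deltas))
```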
In some embodiments, the reference color space may be the same color as the color channels of the original color space. For example, the original spatial color may be an RGB space and the at least one reference color may be red, green, blue. The verification module may construct a new RGB color space (i.e., a reference color space) based on the reference color features of the three reference images corresponding to red, green, and blue, and determine the RGB values of the verification color features of each verification image in the new RGB color space, thereby determining the color of light illuminated when the verification image is photographed.
In some embodiments, the verification module may process the reference color feature and the verification color feature based on the color classification layer in the first verification model, and determine the color of illumination when the verification image is shot, which may be specifically referred to fig. 10 and the related description thereof, and is not described herein again.
Step 930, determining authenticity of the plurality of target images based on the illumination sequence and the color of illumination when the at least one verification image is captured.
In some embodiments, for each of the at least one verification image, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence. Further, the verification module may determine the authenticity of the verification image based on its corresponding verification color. For example, the verification module may determine the authenticity of the verification image based on a first determination result of whether the verification color corresponding to the verification image is consistent with the color of illumination when it was shot: if the verification color corresponding to the verification image is the same as the illumination color when the verification image was shot, the verification image is authentic; if they are different, the verification image is not authentic. For another example, the verification module may determine the authenticity of the verification images based on whether the relationship (e.g., whether they are the same) between the verification colors corresponding to the plurality of verification images is consistent with the relationship between the colors of illumination when the plurality of verification images were shot.
In some embodiments, the verification module may determine whether the image capturing device of the terminal is hijacked based on the authenticity of the at least one verification image. For example, if the number of verification images determined to be authentic exceeds a first threshold, the image capturing device of the terminal is not hijacked. For another example, if the number of verification images determined to be inauthentic exceeds a second threshold (e.g., 1), the image capturing device of the terminal is hijacked.
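A compact sketch of these two example decision rules follows; the threshold values are placeholders, not values prescribed by this specification:

```python
def terminal_is_hijacked(authentic_flags, first_threshold=3, second_threshold=1):
    """Apply the two example rules above to a list of per-verification-image
    authenticity booleans; returns True / False / None (undecided)."""
    n_real = sum(authentic_flags)
    n_fake = len(authentic_flags) - n_real
    if n_fake > second_threshold:
        return True          # hijacked
    if n_real > first_threshold:
        return False         # not hijacked
    return None              # undecided under these example rules
```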
In some embodiments, the verification module may determine the authenticity of the plurality of target images in combination with the reference color space and other verification approaches. In some embodiments, the verification module may determine updated color features for the verification image and each of the target images in the reference image based on the reference color space. The updated color feature is a feature obtained by converting the original color feature into the reference color space. Further, the verification module may replace the original color feature based on the updated color feature of each target image, and determine the authenticity of the plurality of target images in combination with other verification methods. For example, the verification module may determine a first color relationship between the plurality of target images based on the updated color features of the plurality of target images and determine authenticity of the plurality of target images based on the first color relationship. For another example, the verification module determines color features of a first image sequence constructed from the plurality of target images based on the updated color features of the plurality of target images and determines authenticity of the plurality of target images based on the color features of the first image sequence. For a description of other verification methods, reference may be made to other parts of the specification, such as fig. 12 to 15 and their related descriptions.
Since the reference image and the verification image are both photographed under the same ambient light condition, establishing the reference color space based on the reference image and determining the color of illumination when the verification image is photographed based on the reference color space can make the determination result more accurate. Further, the authenticity of the target image is more accurately determined. For example, when the illumination in the illumination sequence is weaker than the ambient light, the illumination irradiated to the target object may be difficult to detect. Alternatively, when the ambient light is colored light, the illumination of the target object may be disturbed. When the terminal is not hijacked, the reference image and the verification image are taken under the same (or substantially the same) ambient light. The reference color space constructed based on the reference image fuses the influence of the ambient light, and therefore, compared with the original color space, the color of illumination when the verification image is shot can be more accurately identified. Furthermore, the method disclosed herein may avoid interference of the light emitting elements of the terminal. When the terminal is not hijacked, the reference image and the verification image are shot under the irradiation of the same light-emitting element, the influence of the light-emitting element can be eliminated or weakened by utilizing the reference color space, and the accuracy rate of identifying the illumination color is improved.
FIG. 10 is a schematic diagram of a first verification model, shown in accordance with some embodiments of the present description.
In some embodiments, the verification module may determine authenticity of the plurality of target images based on the first verification model and the illumination sequence. The first verification model may include a first color feature extraction layer and a color classification layer. The first color feature extraction layer may include a reference color feature extraction layer and a verification color feature extraction layer. As shown in fig. 10, the first verification model may include a reference color feature extraction layer 1030, a verification color feature extraction layer 1040, and a color classification layer 1070. The reference color feature extraction layer 1030 and the verification color feature extraction layer 1040 may be used to implement step 910. A color classification layer 1070 may be used to implement step 920. Further, the verification module determines the authenticity of the verification image based on the color and illumination sequence corresponding to the verification image.
The color feature extraction models (e.g., the first color feature extraction layer, the reference color feature extraction layer 1030, the verification color feature extraction layer 1040, etc.) extract the color features of the target images. In some embodiments, the color feature extraction models may be convolutional neural network models such as ResNet, DenseNet, MobileNet, ShuffleNet, or EfficientNet, or recurrent neural network models such as long short-term memory (LSTM) networks. In some embodiments, the types of the reference color feature extraction layer 1030 and the verification color feature extraction layer 1040 may be the same or different.
The reference color feature extraction layer 1030 extracts the reference color features 1050 of at least one reference image 1010. In some embodiments, the at least one reference image 1010 may include a plurality of reference images. The reference color feature 1050 may be a fusion of color features of the plurality of reference images 1010. For example, the plurality of reference images 1010 may be merged and input to the reference color feature extraction layer 1030, and the reference color feature extraction layer 1030 may output the reference color features 1050. For example, the reference color feature 1050 is a feature vector formed by splicing color feature vectors of the reference images 1010-1, 1010-2, and 1010-3.
The verification color feature extraction layer 1040 extracts the verification color features 1060 of the at least one verification image 1020. In some embodiments, the verification module may perform the color determination separately for each of the at least one verification image 1020. For example, as shown in fig. 10, the verification module may input the at least one reference image 1010 into the reference color feature extraction layer 1030 and the verification image 1020-2 into the verification color feature extraction layer 1040. The verification color feature extraction layer 1040 may output the verification color features 1060 of the verification image 1020-2. The color classification layer 1070 may determine the color of the illumination when the verification image 1020-2 was captured based on the reference color features 1050 and the verification color features 1060 of the verification image 1020-2.
In some embodiments, the verification module may make color determinations for multiple verification images 1020 at the same time. For example, the verification module may input at least one reference image 1010 into the reference color feature extraction layer 1030 and a plurality of verification images 1020 (including verification images 1020-1, 1020-2 … 1020-n) into the verification color feature extraction layer 1040. The verification color feature extraction layer 1040 may simultaneously output verification color features 1060 of the plurality of verification images 1020. The color classification layer 1070 may simultaneously determine the color of illumination when each of the plurality of verification images is captured.
For each of the at least one verification image, the color classification layer 1070 may determine the color of illumination when the verification image was captured based on the reference color feature and the verification color feature of the verification image. For example, the color classification layer 1070 may determine a value or probability based on the reference color feature and the verification color feature of the verification image, and then determine the color of light illuminated when the verification image is captured based on the value or probability. The corresponding numerical value or probability of the verification image may reflect the likelihood that the color of the illumination belongs to each color when the verification image is captured. In some embodiments, the color classification layer 1070 may include, but is not limited to, a fully connected layer, a deep neural network, and the like.
The first verification model is a machine learning model with preset parameters. It is to be understood that the reference color feature extraction layer, the verification color feature extraction layer, and the color classification layer included in the first verification model are machine learning models with preset parameters. The preset parameters of the first verification model may be determined during the model training process. For example, the acquisition module may train an initial first verification model based on first training samples with first labels to obtain the preset parameters of the first verification model. A first training sample includes at least one sample reference image and at least one sample verification image of a sample target object, and the first label of the first training sample is the color of illumination when each sample verification image was captured. The colors of illumination when the at least one sample reference image was photographed are the same as the at least one reference color. For example, if the at least one reference color includes red, green, and blue, the at least one sample reference image includes three target images of a sample target object taken under red, green, and blue illumination, respectively.
In some embodiments, the obtaining module may input the first training samples into the initial first verification model and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer through training until the updated first verification model satisfies a first preset condition. The updated first verification model may then be designated as the first verification model with preset parameters; in other words, the updated first verification model may be designated as the trained first verification model. The first preset condition may be that the loss function of the updated first verification model is smaller than a threshold, that the loss function converges, or that the number of training iterations reaches a threshold.
In some embodiments, the obtaining module may train the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer in the initial first verification model in an end-to-end training manner. The end-to-end training mode is that a training sample is input into an initial model, a loss value is determined based on the output of the initial model, and the initial model is updated based on the loss value. The initial model may contain a plurality of sub-models or modules for performing different data processing operations, which are considered as a whole in the training, to be updated simultaneously. For example, in the training of the initial first verification model, at least one sample reference image may be input to the initial reference color feature extraction layer, at least one sample verification image may be input to the initial verification color feature extraction layer, a loss function may be established based on an output result of the initial color classification layer and the first label, and parameters of each initial model in the initial first verification model may be simultaneously updated based on the loss function.
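For illustration, the following is a minimal PyTorch sketch of such end-to-end training; the encoder architecture, feature width, number of reference colors, and color-class count are all assumptions, not the architecture claimed by this specification:

```python
import torch
import torch.nn as nn

class FirstVerificationModel(nn.Module):
    """Sketch: two small CNN encoders (reference / verification color feature
    extraction layers) feeding one fully connected color classification layer."""
    def __init__(self, n_refs=3, n_colors=8, dim=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.ref_enc = encoder()   # reference color feature extraction layer
        self.ver_enc = encoder()   # verification color feature extraction layer
        self.classifier = nn.Linear(dim * (n_refs + 1), n_colors)  # color classification layer

    def forward(self, refs, ver):
        # refs: (B, n_refs, 3, H, W); ver: (B, 3, H, W)
        ref_feat = torch.cat([self.ref_enc(refs[:, i]) for i in range(refs.shape[1])], dim=1)
        ver_feat = self.ver_enc(ver)
        return self.classifier(torch.cat([ref_feat, ver_feat], dim=1))

model = FirstVerificationModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(refs, ver, label):
    """One end-to-end step: a single loss on the color classification output
    updates all three sub-layers simultaneously."""
    opt.zero_grad()
    loss = loss_fn(model(refs, ver), label)   # label: illumination color index
    loss.backward()                           # gradients flow through all layers
    opt.step()
    return loss.item()
```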
In some embodiments, the first verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may invoke the first verification model directly from the storage device.
Some embodiments of the present description may improve the efficiency of target image authenticity verification by determining the authenticity of a verification image through a first verification model. In addition, the first verification model can be used for improving the reliability of the authenticity verification of the target object, reducing or removing the influence of performance difference of the terminal equipment and further determining the authenticity of the target image. It can be understood that there is a certain difference in hardware of different terminals, for example, the same color light emitted by the terminal screens of different manufacturers may have a difference in saturation, brightness, etc., resulting in a larger intra-class difference of the same color. The first training samples of the initial first verification model may be taken by terminals of different capabilities. The initial first verification model can consider the terminal performance difference when performing color judgment on the target object through learning in the training process, and can accurately determine the color of the target image. Moreover, when the terminal is not hijacked, the reference image and the verification image are both shot under the same external environment light condition. In some embodiments, when the reference color space is established based on the reference color feature extraction layer in the first verification model and the authenticity of the plurality of target images is determined based on the reference color space, the influence of the external environment light may be eliminated or reduced.
FIG. 11 is another exemplary flow diagram for determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description. In some embodiments, flowchart 1100 may be performed by a verification module. As shown in fig. 11, the process 1100 includes the following steps:
Step 1110, extracting verification color features of at least one verification image.
For a detailed description of extracting verification color features, see step 910 and its associated description.
Step 1120, extracting a reference color feature of at least one reference image.
For a detailed description of extracting the reference color feature, reference may be made to step 910 and its related description.
Step 1130, for each of the at least one verification image, a target color feature of a verification color corresponding to the verification image is generated based on the illumination sequence and the reference color feature.
The target color feature refers to the representation, in the reference color space, of the verification color corresponding to the verification image. In some embodiments, for each of the at least one verification image, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence and generate the target color feature of the verification image based on the verification color and the reference color features. For example, the verification module may fuse the color feature of the verification color with the reference color features to obtain the target color feature.
Step 1140, determining authenticity of the plurality of target images based on each corresponding target color feature and verification color feature in the at least one verification image.
In some embodiments, for each of the at least one verification image, the verification module may determine the authenticity of the verification image based on the similarity between its corresponding target color feature and verification color feature. The similarity between the target color feature and the verification color feature may be calculated as a vector similarity, for example, determined by Euclidean distance, Manhattan distance, etc. Illustratively, when the similarity between the target color feature and the verification color feature is greater than a third threshold, the verification image is authentic; otherwise, the verification image is not authentic.
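Continuing the sketches above, the target color feature can be formed by ratio-weighting the reference color features, and authenticity decided by a thresholded similarity; the fusion rule and the third-threshold value are assumptions:

```python
import numpy as np

def target_color_feature(verify_color_ratios, ref_feats):
    """Represent the expected verification color in the reference color space
    as a ratio-weighted fusion of the reference color features.
    verify_color_ratios: (3,) mixing ratios; ref_feats: (3, d) features."""
    return np.asarray(verify_color_ratios) @ np.asarray(ref_feats)

def verification_image_is_authentic(target_feat, verify_feat, third_threshold=0.85):
    """Authentic when the cosine similarity exceeds the third threshold
    (0.85 is an illustrative value)."""
    cos = float(np.dot(target_feat, verify_feat) /
                (np.linalg.norm(target_feat) * np.linalg.norm(verify_feat) + 1e-12))
    return cos > third_threshold
```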
FIG. 12 is an exemplary flow chart of a method of determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description. In some embodiments, flow 1200 may be performed by a verification module. As shown in fig. 12, the process 1200 includes the following steps:
As previously mentioned, the plurality of colors corresponding to the plurality of illuminations in the illumination sequence includes at least one reference color and at least one verification color. In some embodiments, one or more of the at least one reference color may be the same as one or more of the at least one verification color. The plurality of target images includes at least one reference image, each corresponding to one of the at least one reference color, and at least one verification image, each corresponding to one of the at least one verification color.
Step 1210, extracting a reference color feature of each of the at least one reference image and a verification color feature of each of the at least one verification image.
For details on extracting the reference color features and verification color features, reference may be made to step 910 and its related description, which are not repeated here.
In some embodiments, the verification module may extract the reference color feature and the verification color feature based on a second color feature extraction layer included in the second verification model. For details about extracting color features based on the second color feature extraction layer, reference may be made to fig. 13 and its related description, which are not repeated herein.
Step 1220, for each of the at least one reference image, determining a first color relationship between the reference image and each verification image based on the reference color feature of the reference image and the verification color feature of each verification image.
The first color relationship between a reference image and a verification image is the relationship between the color of the illumination when the reference image was captured and the color of the illumination when the verification image was captured. The first color relationship may be, for example, same, different, or similar. In some embodiments, the first color relationship may be represented by a numerical value, e.g., "1" for same and "0" for different.
In some embodiments, the at least one first color relationship determined based on the at least one reference image and the at least one verification image may be represented by a vector, each element of which represents the first color relationship between one of the at least one reference image and one of the at least one verification image. For example, if the first color relationships between 1 reference image and 5 verification images are same, different, same, same, and different, respectively, they can be represented by the vector (1, 0, 1, 1, 0).
In some embodiments, the at least one first color relationship determined based on the at least one reference image and the at least one verification image may also be represented by a verification code. The subcode at each position of the verification code represents the first color relationship between one of the at least one reference image and one of the at least one verification image. For example, the first color relationships between the 1 reference image and the 5 verification images above can be represented by the verification code 10110.
In some embodiments, the verification module may determine the first color relationship based on the reference color feature of the reference image and the verification color feature of the verification image. For example, the verification module may compute the similarity between the reference color feature and the verification color feature and determine the first color relationship by comparing the similarity with thresholds: if the similarity is greater than a fourth threshold, the two colors are judged to be the same; if it is less than a fifth threshold, they are judged to be different; if it is greater than a sixth threshold, they are judged to be similar; and so on. The fourth threshold may be greater than the fifth and sixth thresholds, and the sixth threshold may be greater than the fifth threshold. In some embodiments, the similarity may be characterized by the distance between the reference color feature and the verification color feature, including but not limited to the Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, Mahalanobis distance, cosine distance, etc.
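As a sketch of step 1220, the pairwise first color relationships can be computed and rendered both as a vector and as a verification code; mapping the Euclidean distance to a similarity via a negative exponential, and the fourth threshold value, are illustrative assumptions.

```python
import numpy as np

def first_color_relationships(reference_feats, verification_feats,
                              fourth_threshold: float = 0.8):
    """For every (reference, verification) pair, record 1 ("same") when the
    feature similarity exceeds the threshold, else 0 ("different")."""
    relations = []
    for ref_feat in reference_feats:
        for ver_feat in verification_feats:
            similarity = float(np.exp(-np.linalg.norm(ref_feat - ver_feat)))
            relations.append(1 if similarity > fourth_threshold else 0)
    vector = np.array(relations)               # e.g. (1, 0, 1, 1, 0)
    code = "".join(str(r) for r in relations)  # e.g. "10110"
    return vector, code
```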
In some embodiments, the verification module may further obtain the first color relationship based on a color relationship determination layer included in the second verification model. For a detailed description of the color relationship determination layer, reference may be made to fig. 13 and its related description, which are not repeated herein.
Step 1230, for each of the at least one reference color, determining a second color relationship between the reference color and each verification color.
The second color relationship between a reference color and a verification color indicates whether the two colors are the same, different, or similar. In some embodiments, the types and representations of the second color relationship may be similar to those of the first color relationship and are not repeated here.
In some embodiments, the verification module may determine the second color relationship based on the color classes or color parameters of the reference color and the verification color. For example, if the two colors belong to the same class, or the numerical difference between their color parameters is smaller than a certain threshold, they are judged to be the same; otherwise, they are judged to be different.
In some embodiments, the verification module may extract a first color feature from the color template image of the reference color and a second color feature from the color template image of the verification color. The verification module may further determine the second color relationship between the reference color and the verification color based on the first color feature and the second color feature; for example, it may calculate the similarity between the two features to determine the second color relationship.
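A hedged example of the color-class/parameter route from the preceding paragraphs; treating each color as an RGB triple and the tolerance value are assumptions made purely for illustration.

```python
import numpy as np

def second_color_relationship(reference_rgb, verification_rgb,
                              tolerance: float = 30.0) -> int:
    """Judge two illumination colors the same (1) or different (0) by the
    Euclidean distance between their RGB color parameters."""
    diff = np.linalg.norm(np.asarray(reference_rgb, dtype=float)
                          - np.asarray(verification_rgb, dtype=float))
    return 1 if diff < tolerance else 0

# second_color_relationship((255, 0, 0), (250, 5, 3))   -> 1 (same)
# second_color_relationship((255, 0, 0), (0, 0, 255))   -> 0 (different)
```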
In some embodiments, the at least one first color relationship and the at least one second color relationship are in one-to-one correspondence. Specifically, the first color relationship between a reference image and a verification image corresponds to the second color relationship between the reference color of that reference image and the verification color of that verification image.
Step 1240, determining the authenticity of the plurality of target images based on the at least one first color relationship and the at least one second color relationship.
In some embodiments, the verification module may determine the authenticity of the plurality of target images based on some or all of the at least one first color relationship and the corresponding second color relationships.
In some embodiments, the first color relationships and the second color relationships may be represented by vectors. In some embodiments, the verification module may select some or all of the at least one first color relationship to construct a first vector, and construct a second vector from the second color relationships corresponding to the selected first color relationships. Further, the verification module may determine the authenticity of the plurality of target images based on the similarity between the first vector and the second vector; for example, if the similarity is greater than a seventh threshold, the plurality of target images are determined to be authentic. It is to be understood that the order of the elements in the first vector and the second vector is determined by the correspondence between the first and second color relationships: if the element corresponding to a first color relationship in the first vector A is a_ij, then the element corresponding to the matching second color relationship in the second vector B is b_ij.
In some embodiments, the first color relationships and the second color relationships may also be represented by verification codes. In some embodiments, the verification module may select some or all of the at least one first color relationship to construct a first verification code, construct a second verification code from the corresponding second color relationships, and determine the authenticity of the plurality of target images by comparing the two codes. As with the first and second vectors, the positions of the subcodes in the first and second verification codes are determined by the correspondence between the first and second color relationships. For example, if the first verification code and the second verification code differ, the plurality of target images are determined not to be authentic: if the first verification code is 10110 and the second verification code is 10111, the plurality of target images are not authentic. As another example, the verification module may determine the authenticity of the plurality of target images based on the number of positions at which the two verification codes carry the same subcode: if that number is greater than an eighth threshold, the plurality of target images are determined to be authentic, and if it is less than a ninth threshold, they are determined not to be authentic. For instance, with an eighth threshold of 3 and a ninth threshold of 1, a first verification code of 10110 and a second verification code of 10111 agree in their first, second, third, and fourth subcodes, so the plurality of target images are determined to be authentic.
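The subcode-counting decision above can be sketched as follows; the eighth and ninth threshold values simply mirror the example.

```python
def codes_indicate_authentic(first_code: str, second_code: str,
                             eighth_threshold: int = 3,
                             ninth_threshold: int = 1):
    """Count positions at which the two verification codes carry the same
    subcode and decide authenticity; returns None when undetermined."""
    same = sum(a == b for a, b in zip(first_code, second_code))
    if same > eighth_threshold:
        return True    # determined authentic
    if same < ninth_threshold:
        return False   # determined not authentic
    return None

# codes_indicate_authentic("10110", "10111")  -> True (4 matching subcodes)
```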
In some embodiments, the verification module may determine the colors of the illumination when the verification images and the reference images were captured based on the reference color space, determine the first color relationships accordingly, and determine the authenticity of the plurality of target images in combination with the corresponding second color relationships. In some embodiments, the verification module may determine updated verification color features of the verification images and updated reference color features of the reference images based on the reference color space; it then determines the first color relationships from these updated features and determines the authenticity of the plurality of target images in combination with the corresponding second color relationships.
As described above, the reference images and the verification images are all captured under the same ambient light and illuminated by the same light-emitting element. Therefore, when the authenticity of the plurality of target images is determined based on the relationship between the reference images and the verification images, the influence of ambient light and of the light-emitting element can be eliminated or reduced, improving the recognition accuracy of the illumination color.
FIG. 13 is a schematic diagram of a second verification model, shown in accordance with some embodiments of the present description.
In some embodiments, the verification module may determine authenticity of the plurality of target images based on the second verification model and the illumination sequence. As shown in fig. 13, the second verification model may include a second color feature extraction layer 1330 and a color relationship determination layer 1360. A second color feature extraction layer 1330 may be used to implement step 1210 and a color relationship determination layer 1360 may be used to implement step 1220. Further, the verification module may determine authenticity of the plurality of target images based on the first color relationship and the illumination sequence.
In some embodiments, the at least one reference image and the at least one verification image may constitute one or more image pairs. Each image pair includes one of the at least one reference image and one of the at least one verification image. The verification module may analyze one or more image pairs to determine a first color relationship between a reference image and a verification image in the image pair, respectively. For example, as shown in FIG. 13, the at least one reference image comprises "1320-1 … 1320-y" and the at least one verification image comprises "1310-1 … 1310-x". For illustrative purposes, the following discussion will proceed with reference image 1320-y and verification image 1310-1 forming an image pair.
The second color feature extraction layer 1330 may extract the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1. In some embodiments, the second color feature extraction layer 1330 may be a convolutional neural network (CNN) model such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or a recurrent neural network model.
The input to the second color feature extraction layer 1330 may be an image pair (e.g., reference image 1320-y and verification image 1310-1); for example, the reference image 1320-y and the verification image 1310-1 may be stitched together and input to the layer. The output may be the color features of the image pair (e.g., the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1); for example, the output may be the verification color feature 1340-1 stitched with the reference color feature 1350-y.
The color relationship determination layer 1360 is configured to determine a first color relationship of the image pair based on the color features of the image pair. For example, the verification module may input the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1 into the color relationship determination layer 1360, which outputs the first color relationship of the reference image 1320-y and the verification image 1310-1.
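For concreteness, a compact PyTorch sketch of such a two-part model follows; the layer sizes, applying one shared extractor to each image of the pair, and concatenation as the stitching operation are illustrative assumptions rather than details taken from this description.

```python
import torch
import torch.nn as nn

class SecondVerificationModel(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # second color feature extraction layer (a small CNN stand-in)
        self.feature_layer = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # color relationship determination layer: two neurons, same/different
        self.relation_layer = nn.Linear(2 * feat_dim, 2)

    def forward(self, reference_img: torch.Tensor,
                verification_img: torch.Tensor) -> torch.Tensor:
        ref_feat = self.feature_layer(reference_img)
        ver_feat = self.feature_layer(verification_img)
        return self.relation_layer(torch.cat([ref_feat, ver_feat], dim=1))
```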
In some embodiments, the verification module may input pairs of images of the at least one reference image and the at least one verification image together into the second verification model. The second verification model may simultaneously output the first color relationship for each of the plurality of pairs of images. In some embodiments, the verification module may input a pair of the plurality of pairs of images into the second verification model. The second verification model may output a first color relationship for the pair of images.
In some embodiments, the color relationship determination layer 1360 may be a classification model including, but not limited to, a fully connected layer, a deep neural network, a decision tree, and the like.
The second verification model is a machine learning model with preset parameters; that is, the second color feature extraction layer and the color relationship determination layer it comprises are machine learning models with preset parameters. The preset parameters of the second verification model may be determined during training. For example, the acquisition module may train an initial second verification model on second training samples with second labels to obtain the second verification model. Each second training sample includes one or more sample image pairs, each pair consisting of two target images of a sample target object captured under illumination of the same or different colors. The second label indicates whether the two images of the sample pair were captured under illumination of the same color.
In some embodiments, the obtaining module may input second training samples into the initial second verification model and update the parameters of the initial second color feature extraction layer and the initial color relationship determination layer through training until the updated second verification model satisfies a second preset condition. The updated model may then be designated as the second verification model with preset parameters, in other words, the trained second verification model. The second preset condition may be that the loss function of the updated model is below a threshold or converges, or that the number of training iterations reaches a threshold.
In some embodiments, the obtaining module may train the initial second color feature extraction layer and the initial color relationship determination layer of the initial second verification model end to end. In end-to-end training, a training sample is input into the initial model, a loss value is determined based on the model's output, and the model is updated based on that loss value; the sub-models or modules that perform the different data processing operations are treated as a whole and updated simultaneously. For example, during training of the initial second verification model, at least one sample reference image and at least one sample verification image may be input into the initial second color feature extraction layer, a loss function may be established based on the output of the initial color relationship determination layer and the second label, and the parameters of every initial sub-model of the initial second verification model may be updated simultaneously based on that loss function.
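A minimal end-to-end training loop in the sense just described might look as follows; the optimizer, the loss, and the assumed batch format of `loader` (reference images, verification images, integer second labels) are illustrative choices.

```python
import torch
import torch.nn as nn

def train_end_to_end(model: nn.Module, loader, epochs: int = 10,
                     lr: float = 1e-3) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for ref_imgs, ver_imgs, second_labels in loader:
            optimizer.zero_grad()
            logits = model(ref_imgs, ver_imgs)
            loss = criterion(logits, second_labels)
            loss.backward()   # gradients flow through both sub-layers
            optimizer.step()  # all parameters updated simultaneously
    return model
```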
In some embodiments, the second verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may invoke the second verification model directly from the storage device.
Determining the authenticity of the plurality of target images based on the first color relationships and the second color relationships does not require identifying the illumination color of each target image; instead, it directly checks, by comparing color features, whether the illumination colors were consistent across captures, converting the color recognition task into a binary classification task of judging whether two colors are the same. In some embodiments, the first color relationship may be determined by the second verification model, whose color relationship determination layer may contain only a small number of neurons (e.g., two) to make this same/different decision. The second verification model disclosed in this specification is therefore simpler in structure than the color recognition networks of conventional methods, and analyzing the target object with it requires relatively few computational resources (e.g., memory), improving the recognition efficiency of the illumination color. Meanwhile, the model's input may be a target image of any color; compared with algorithms that must restrict the number of input color classes, this approach is more broadly applicable. Moreover, the second verification model can improve the reliability of target object authenticity verification by reducing or removing the influence of terminal performance differences. It can be understood that the hardware of different terminals differs to some extent; for example, light of nominally the same color emitted by terminal screens from different manufacturers may differ in saturation, brightness, etc., resulting in large intra-class variation for a single color. The second training samples of the initial second verification model may be captured by terminals of different performance, so the model can learn during training to account for these differences when judging colors, and can determine the color of a target image accurately. Further, when the terminal is not hijacked, the reference images and the verification images are all captured under the same ambient light, so processing them with the second verification model to determine the authenticity of the plurality of target images can eliminate or weaken the influence of external ambient light.
FIG. 14 is another exemplary flow chart of a method of determining authenticity of a plurality of target images, shown in accordance with some embodiments of the present description. In some embodiments, flow 1400 may be performed by a verification module. As shown in fig. 14, the process 1400 includes the following steps:
Step 1410, determining a first image sequence based on the plurality of target images.
The first image sequence is a set of a plurality of target images arranged in a specific order. In some embodiments, the verification module may order the plurality of target images by their respective capture times to generate the first sequence of images. For example, the plurality of target images may be ordered from first to last according to their respective photographing times.
Step 1420, determining a second image sequence based on a plurality of color template images.
The color template image is a template image generated based on the color of illumination in the illumination sequence. A color template image of a certain color refers to a pure color picture containing only that color. For example, a red color template image contains only red, does not contain colors other than red, and does not contain texture.
In some embodiments, the verification module may generate the plurality of color template images based on a sequence of illumination. For example, the verification module may generate a color template image corresponding to the color of each illumination in the illumination sequence according to the color type and/or the color parameter of the illumination. In some embodiments, the storage device may store a color template image of each color in the illumination sequence in advance, and the verification module may obtain the color template image corresponding to the color of the illumination in the illumination sequence from the storage device through the network.
The second image sequence is a set of a plurality of color template images arranged in order. In some embodiments, the verification module may order the plurality of color template images by the illumination times of their corresponding illuminations to generate the second image sequence; for example, the color template images may be ordered from first to last by illumination time. In some embodiments, the arrangement order of the color template images in the second image sequence is consistent with the arrangement order of the target images in the first image sequence: the illumination times of the illuminations corresponding to the color template images correspond to the capture times of the target images. For example, if the target images are arranged from first to last by capture time, the color template images are likewise arranged from first to last by the illumination time of their corresponding illumination.
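A small sketch of constructing color template images and ordering them into a second image sequence; representing each illumination as an (illumination_time, rgb) pair and the template size are assumptions made for illustration.

```python
import numpy as np

def color_template_image(rgb, size=(224, 224)) -> np.ndarray:
    """A pure-color template image: every pixel carries the illumination
    color, with no texture."""
    img = np.empty((*size, 3), dtype=np.uint8)
    img[...] = rgb
    return img

def second_image_sequence(illumination_sequence):
    """Order template images from first to last by illumination time."""
    ordered = sorted(illumination_sequence, key=lambda item: item[0])
    return [color_template_image(rgb) for _, rgb in ordered]

# seq = second_image_sequence([(2, (0, 255, 0)), (1, (255, 0, 0))])
# -> [red template, green template]
```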
Step 1430, extracting first feature information of the first image sequence.
The first feature information may include the color features of the plurality of target images in the first image sequence. See step 230 and its associated description for more details regarding extracting color features. In some embodiments, the verification module may extract the first feature information based on a first extraction layer in the third verification model. See fig. 15 and its associated description for details on extracting the first feature information based on the first extraction layer.
Step 1440, extracting second feature information of the second image sequence.
The second feature information may include the color features of the plurality of color template images in the second image sequence. See step 230 and its associated description for more details regarding extracting color features. In some embodiments, the verification module may extract the second feature information based on a second extraction layer in the third verification model. See fig. 15 and its associated description for more details regarding extracting the second feature information based on the second extraction layer.
Step 1450, determining the authenticity of the plurality of target images based on the first characteristic information and the second characteristic information.
In some embodiments, the verification module may determine, based on the matching degree between the first feature information and the second feature information, a second determination result indicating whether the color sequence of the illumination when the plurality of target images in the first image sequence were captured is consistent with the color sequence of the plurality of color template images in the second image sequence. For example, the verification module may take the similarity between the first feature information and the second feature information as the matching degree and determine the second determination result by comparing it with thresholds: if the similarity is greater than a tenth threshold, the second determination result is "consistent"; if it is less than an eleventh threshold, the second determination result is "inconsistent". Further, the verification module may determine the authenticity of the plurality of target images based on the second determination result; for example, if the result is "consistent", the plurality of target images are determined to be authentic.
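Sketching the matching-degree decision above, with cosine similarity standing in for the matching degree; the tenth and eleventh threshold values are placeholders.

```python
import numpy as np

def second_determination(first_info: np.ndarray, second_info: np.ndarray,
                         tenth_threshold: float = 0.9,
                         eleventh_threshold: float = 0.5):
    """Return True for 'consistent', False for 'inconsistent', and None
    when the matching degree falls between the two thresholds."""
    sim = float(np.dot(first_info, second_info)
                / (np.linalg.norm(first_info) * np.linalg.norm(second_info)))
    if sim > tenth_threshold:
        return True
    if sim < eleventh_threshold:
        return False
    return None
```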
In some embodiments, the verification module may determine the second determination result based on a discrimination layer in the third verification model. For more details on determining the second determination result based on the discrimination layer, refer to fig. 15 and its associated description.
Some embodiments of the present description generate a second image sequence from artificially constructed color template images and determine the authenticity of the plurality of target images by comparing the second image sequence with the first image sequence (the sequence of the plurality of target images). The method disclosed in this specification may make the target image identification task simpler than directly recognizing the colors of the first image sequence. In some embodiments, a third verification model may be used for the target image authenticity analysis; the second image sequence makes the model's recognition task simpler and easier to learn, yielding higher recognition accuracy. In addition, the plurality of target images in the first image sequence are all captured under the same external ambient light and illuminated by the same light-emitting element, so determining their authenticity by comparing the two image sequences can eliminate or weaken the influence of ambient light and of the light-emitting element, improving the recognition accuracy of the illumination color.
FIG. 15 is a diagram illustrating an example structure of a third verification model in accordance with some embodiments of the present description.
In some embodiments, the verification module may determine the authenticity of the plurality of target images based on the third verification model and the illumination sequence. As shown in fig. 15, the third verification model may include a first extraction layer 1530, a second extraction layer 1540, and a discrimination layer 1570. The verification module may use the third verification model to carry out steps 1430 through 1450 and obtain the second determination result: the first extraction layer 1530 implements step 1430, the second extraction layer 1540 implements step 1440, and the discrimination layer 1570 implements step 1450. Further, the verification module determines the authenticity of the plurality of target images based on the second determination result and the illumination sequence.
In some embodiments, the first extraction layer 1530 takes the first image sequence 1510 as input and outputs the first feature information 1550. For example, the verification module may stitch the plurality of target images in the first image sequence 1510 in order and input the stitched result into the first extraction layer 1530; the output first feature information 1550 may then be the concatenation of the color features of those target images. Likewise, the second extraction layer 1540 takes the second image sequence 1520 as input and outputs the second feature information 1560: the verification module may stitch the plurality of color template images in the second image sequence 1520 in order and input the stitched result into the second extraction layer 1540, and the output second feature information 1560 may be the concatenation of the color features of those color template images.
In some embodiments, the first and second extraction layers may be, but are not limited to, convolutional neural network (CNN) models such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or recurrent neural network models. The first extraction layer and the second extraction layer may be of the same or different types.
In some embodiments, the discrimination layer 1570 takes the first feature information 1550 and the second feature information 1560 as input and outputs the second determination result. In some embodiments, the discrimination layer may be a classification model, including but not limited to a fully connected layer, a deep neural network (DNN), or the like.
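For concreteness, a compact PyTorch sketch of such a three-part model follows; stitching each sequence channel-wise, sharing the extraction weights between the two layers (which, as noted below, the text permits), and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ThirdVerificationModel(nn.Module):
    def __init__(self, seq_len: int = 5, feat_dim: int = 64):
        super().__init__()
        # shared first/second extraction layer; each input sequence is a
        # batch of images stitched along the channel axis
        self.extraction = nn.Sequential(
            nn.Conv2d(3 * seq_len, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # discrimination layer: two neurons, consistent / inconsistent
        self.discrimination = nn.Linear(2 * feat_dim, 2)

    def forward(self, first_sequence: torch.Tensor,
                second_sequence: torch.Tensor) -> torch.Tensor:
        first_info = self.extraction(first_sequence)    # first feature information
        second_info = self.extraction(second_sequence)  # second feature information
        return self.discrimination(
            torch.cat([first_info, second_info], dim=1))
```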
The third verification model is a machine learning model with preset parameters; that is, the first extraction layer, the second extraction layer, and the discrimination layer it comprises are machine learning models with preset parameters, determined during model training. For example, the acquisition module may train an initial third verification model on third training samples with third labels to obtain the third verification model. Each third training sample includes a first sample image sequence composed of a plurality of sample target images of a sample target object and a second sample image sequence composed of a plurality of sample color template images of a plurality of sample colors. The third label indicates whether the color sequence of the illumination when the sample target images of the first sample image sequence were captured is consistent with the color sequence of the sample color template images in the second sample image sequence.
In some embodiments, the obtaining module may input third training samples into the initial third verification model and update the parameters of the initial first extraction layer, the initial second extraction layer, and the initial discrimination layer through training until the updated third verification model satisfies a third preset condition. The updated model may then be designated as the third verification model with preset parameters, in other words, the trained third verification model. The third preset condition may be that the loss function of the updated model is below a threshold or converges, or that the number of training iterations reaches a threshold.
In some embodiments, the obtaining module may train the initial first extraction layer, the initial second extraction layer, and the initial discrimination layer of the initial third verification model end to end: a training sample is input into the initial model, a loss value is determined based on the model's output, and the model is updated based on that loss value, with all sub-models treated as a whole and updated simultaneously. For example, during training of the initial third verification model, the first sample image sequence may be input into the initial first extraction layer and the second sample image sequence into the initial second extraction layer, a loss function may be established based on the output of the initial discrimination layer and the third label, and the parameters of every initial sub-model of the initial third verification model may be updated simultaneously based on that loss function.
In some embodiments, all or a portion of the parameters of the first extraction layer and the second extraction layer may be shared.
Some embodiments of the present disclosure determine the authenticity of the target images through the third verification model, performing target recognition directly by checking whether the first image sequence (containing the target images) and the second image sequence (containing the color template images) are consistent, without recognizing the illumination color of each target image. This converts the color recognition task into a binary task of judging whether the color sequences are the same. In some embodiments, the discrimination layer of the third verification model may contain only a small number of neurons (e.g., two) to make this same/different decision. The third verification model disclosed in this specification is therefore simpler in structure than the color recognition networks of conventional methods, and target object analysis based on it requires relatively few computational resources (e.g., memory), improving the recognition efficiency of the illumination color. Meanwhile, the model's input may be a target image of any color; compared with algorithms that must restrict the number of input color classes, this approach is more broadly applicable. Moreover, the third verification model can improve the reliability of target object authenticity verification by reducing or removing the influence of terminal performance differences. It can be understood that the hardware of different terminals differs to some extent; for example, light of nominally the same color emitted by terminal screens from different manufacturers may differ in saturation, brightness, etc., resulting in large intra-class variation for a single color. The third training samples of the initial third verification model may be captured by terminals of different performance, so the model can learn during training to account for these differences when judging colors, and can determine the color of a target image more accurately. Furthermore, the plurality of target images in the first image sequence are all captured under the same ambient light, so processing the first image sequence with the third verification model to determine their authenticity can eliminate or weaken the influence of external ambient light.
In some embodiments, the verification module may determine updated color features of the plurality of target images (including the at least one verification image and the at least one reference image) based on the reference color space, and generate updated first feature information of the first image sequence from those updated color features. Similarly, the verification module may generate updated second feature information of the second image sequence from updated color features of the plurality of color template images. The verification module may then determine the authenticity of the plurality of target images based on the updated first feature information (or the first feature information) and the updated second feature information (or the second feature information).
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Some embodiments use numerals describing quantities of components, attributes, etc.; it should be understood that such numerals used in the description of the embodiments are in some instances qualified by the modifier "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, a numerical parameter should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification is hereby incorporated by reference in its entirety, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document that would limit the broadest scope of the claims now or later associated with this specification. It is to be understood that if the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent with or contrary to those set forth in this specification, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of object recognition, the method comprising:
acquiring a plurality of initial images, wherein shooting time of the plurality of initial images has a corresponding relation with irradiation time of a plurality of lights in an illumination sequence irradiated to a target object, the plurality of lights have a plurality of colors, and the plurality of initial images comprise a first initial image and at least one second initial image;
for each of at least one second initial image, replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image; and
determining authenticity of the plurality of target images based on the illumination sequence and a plurality of target images, the plurality of target images including the first initial image and the at least one processed second initial image.
2. The method of claim 1, the replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image comprising:
and transferring the color of illumination when the second initial image is shot to the first initial image based on a color transfer algorithm to generate a processed second initial image.
3. The method of claim 2, the color migration algorithm comprising one of a Reinhard algorithm, a Welsh algorithm, a fuzzy clustering algorithm, and an adaptive migration algorithm.
4. The method of claim 1, the plurality of colors comprising white, the capture time of the first initial image corresponding to a light illumination time of white in the lighting sequence.
5. The method of claim 1, the determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprising:
determining the color of illumination when the plurality of target images are shot; and
determining authenticity of the plurality of target images based on the illumination sequence and colors of illumination when the plurality of target images are captured.
6. The method of claim 5, the determining a color of illumination when the plurality of target images are captured comprising:
for each of the plurality of target images, processing the target image based on a color verification model, and determining the color of illumination when the target image is shot, wherein the color verification model is a machine learning model with preset parameters.
7. The method of claim 1, the determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images comprising:
determining a first image sequence based on the plurality of target images;
determining a second sequence of images based on a plurality of color template images, the plurality of color template images generated based on the illumination sequence;
determining authenticity of the plurality of target images based on the first image sequence and the second image sequence.
8. An object recognition system, the system comprising:
an obtaining module, configured to obtain a plurality of initial images, where shooting times of the plurality of initial images have a corresponding relationship with irradiation times of a plurality of lights in an illumination sequence irradiated to a target object, the plurality of lights have a plurality of colors, and the plurality of initial images include a first initial image and at least one second initial image;
a preprocessing module, configured to replace, for each of at least one second initial image, a texture of the second initial image with a texture of the first initial image to generate a processed second initial image; and
a verification module to determine authenticity of the plurality of target images based on the illumination sequence and a plurality of target images, the plurality of target images including the first initial image and the at least one processed second initial image.
9. An object recognition apparatus, comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202110423528.XA 2021-04-20 2021-04-20 Method and system for object recognition Pending CN113111806A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110423528.XA CN113111806A (en) 2021-04-20 2021-04-20 Method and system for object recognition
PCT/CN2022/075531 WO2022222575A1 (en) 2021-04-20 2022-02-08 Method and system for target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110423528.XA CN113111806A (en) 2021-04-20 2021-04-20 Method and system for object recognition

Publications (1)

Publication Number Publication Date
CN113111806A true CN113111806A (en) 2021-07-13

Family

ID=76718623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110423528.XA Pending CN113111806A (en) 2021-04-20 2021-04-20 Method and system for object recognition

Country Status (2)

Country Link
CN (1) CN113111806A (en)
WO (1) WO2022222575A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743284A (en) * 2021-08-30 2021-12-03 杭州海康威视数字技术股份有限公司 Image recognition method, device, equipment, camera and access control equipment
CN114266977A (en) * 2021-12-27 2022-04-01 青岛澎湃海洋探索技术有限公司 Multi-AUV underwater target identification method based on super-resolution selectable network
WO2022222904A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Image verification method and system, and storage medium
WO2022222575A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Method and system for target recognition

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580210B (en) * 2023-07-05 2023-09-15 四川弘和数智集团有限公司 Linear target detection method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050008193A1 (en) * 2000-06-13 2005-01-13 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
CN109461168A (en) * 2018-10-15 2019-03-12 腾讯科技(深圳)有限公司 The recognition methods of target object and device, storage medium, electronic device
CN109493280A (en) * 2018-11-02 2019-03-19 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111881844A (en) * 2020-07-30 2020-11-03 北京嘀嘀无限科技发展有限公司 Method and system for judging image authenticity

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408403A (en) * 2018-09-10 2021-09-17 创新先进技术有限公司 Living body detection method, living body detection device, and computer-readable storage medium
CN113111810B (en) * 2021-04-20 2023-12-08 北京嘀嘀无限科技发展有限公司 Target identification method and system
CN113111811A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target discrimination method and system
CN113111807A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target identification method and system
CN113111806A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Method and system for object recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050008193A1 (en) * 2000-06-13 2005-01-13 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
CN109461168A (en) * 2018-10-15 2019-03-12 腾讯科技(深圳)有限公司 The recognition methods of target object and device, storage medium, electronic device
CN109493280A (en) * 2018-11-02 2019-03-19 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111881844A (en) * 2020-07-30 2020-11-03 北京嘀嘀无限科技发展有限公司 Method and system for judging image authenticity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fan Caixia; Zhu Hong: "Research on Object Recognition Methods for Non-overlapping Multi-camera Systems", Journal of Xi'an University of Technology, no. 02 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222904A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Image verification method and system, and storage medium
WO2022222575A1 (en) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Method and system for target recognition
CN113743284A (en) * 2021-08-30 2021-12-03 杭州海康威视数字技术股份有限公司 Image recognition method, device, equipment, camera and access control equipment
CN114266977A (en) * 2021-12-27 2022-04-01 青岛澎湃海洋探索技术有限公司 Multi-AUV underwater target identification method based on super-resolution selectable network
CN114266977B (en) * 2021-12-27 2023-04-07 青岛澎湃海洋探索技术有限公司 Multi-AUV underwater target identification method based on super-resolution selectable network

Also Published As

Publication number Publication date
WO2022222575A1 (en) 2022-10-27

Similar Documents

Publication Publication Date Title
WO2022222575A1 (en) Method and system for target recognition
Pomari et al. Image splicing detection through illumination inconsistencies and deep learning
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
CN109543640B (en) Living body detection method based on image conversion
US7715596B2 (en) Method for controlling photographs of people
WO2022222569A1 (en) Target discrimation method and system
CN110163078A (en) The service system of biopsy method, device and application biopsy method
WO2018049084A1 (en) Methods and systems for human imperceptible computerized color transfer
CN109086723B (en) Method, device and equipment for detecting human face based on transfer learning
CN108664843B (en) Living object recognition method, living object recognition apparatus, and computer-readable storage medium
CN112801057A (en) Image processing method, image processing device, computer equipment and storage medium
KR102145132B1 (en) Surrogate Interview Prevention Method Using Deep Learning
WO2022222585A1 (en) Target identification method and system
CN109871845A (en) Certificate image extracting method and terminal device
CN106991364A (en) face recognition processing method, device and mobile terminal
Hadiprakoso et al. Face anti-spoofing using CNN classifier & face liveness detection
CN113111810B (en) Target identification method and system
CN112232323A (en) Face verification method and device, computer equipment and storage medium
CN105684046A (en) Generating image compositions
CN107862654A (en) Image processing method, device, computer-readable recording medium and electronic equipment
JP2005259049A (en) Face collation device
Lin Face detection by color and multilayer feedforward neural network
JP3962517B2 (en) Face detection method and apparatus, and computer-readable medium
JPH11306348A (en) Method and device for object detection
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination