WO2022222569A1 - Method and system for target discrimination - Google Patents

Method and system for target discrimination

Info

Publication number
WO2022222569A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
color
target
image sequence
Prior art date
Application number
PCT/CN2022/074706
Other languages
English (en)
Chinese (zh)
Inventor
张明文
张天明
赵宁宁
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京嘀嘀无限科技发展有限公司 filed Critical 北京嘀嘀无限科技发展有限公司
Publication of WO2022222569A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks

Definitions

  • This specification relates to the technical field of image processing, and in particular, to a method and system for target discrimination.
  • Target discrimination is a technology for biometric identification based on the target acquired by an image acquisition device.
  • For example, face recognition technology targeting human faces is widely used in application scenarios such as permission verification and identity verification.
  • In such scenarios, it is necessary to determine the authenticity of the target image.
  • One of the embodiments of this specification provides a target discrimination method. The method includes: determining a first image sequence based on a plurality of target images, where the shooting times of the plurality of target images correspond to the irradiation times of a plurality of illuminations in an illumination sequence that illuminates the target object; determining a second image sequence based on a plurality of color template images, where the plurality of color template images are generated based on the illumination sequence; and determining the authenticity of the plurality of target images based on the first image sequence and the second image sequence.
  • One of the embodiments of this specification provides a target discrimination system. The system includes: a first image sequence determination module configured to determine a first image sequence based on a plurality of target images, where the shooting times of the plurality of target images correspond to the irradiation times of a plurality of illuminations in an illumination sequence that illuminates the target object; a second image sequence determination module configured to determine a second image sequence based on a plurality of color template images, where the plurality of color template images are generated based on the illumination sequence; and a verification module configured to determine the authenticity of the plurality of target images based on the first image sequence and the second image sequence.
  • One of the embodiments of the present specification provides a target discrimination apparatus, including a processor configured to execute the target discrimination method disclosed in the present specification.
  • One of the embodiments of this specification provides a computer-readable storage medium, the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer executes the target discrimination method disclosed in this specification.
  • FIG. 1 is a schematic diagram of an application scenario of a target discrimination system according to some embodiments of the present specification
  • FIG. 2 is an exemplary flowchart of a target discrimination method according to some embodiments of the present specification
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 4 is a schematic structural diagram of a color verification model according to some embodiments of the present specification.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • FIG. 6 is an exemplary flowchart of acquiring multiple target images based on texture replacement according to some embodiments of the present specification
  • FIG. 7 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • terms such as "system", "device", "unit" and/or "module" used in this specification are means for distinguishing different components, elements, parts, sections, or assemblies at different levels.
  • Target discrimination is a biometric technology based on the target object acquired by the image acquisition device.
  • the target object may be a human face, a fingerprint, a palm print, a pupil, and the like.
  • target discrimination may be applied to authorization verification.
  • For example, access control authority authentication and account payment authority authentication.
  • target discrimination can also be used for authentication.
  • For example, employee attendance certification and self-registration identity security certification.
  • target identification may be performed based on the target image collected in real time by the image collection device and the pre-acquired biometric feature, so as to verify the target identity.
  • image capture devices can be hacked or hijacked, and attackers can upload fake target images for authentication.
  • attacker A can directly upload the face image of user B after attacking or hijacking the image acquisition device.
  • the target discrimination system performs face recognition based on user B's face image and pre-acquired user B's face biometrics, thereby passing user B's identity verification.
  • FIG. 1 is a schematic diagram of an application scenario of a target discrimination system according to some embodiments of the present specification.
  • the target discrimination system 100 may include a processing device 110 , a network 120 , a terminal 130 and a storage device 140 .
  • the processing device 110 may be used to process data and/or information from at least one component of the target discrimination system 100 and/or an external data source (e.g., a cloud data center). For example, the processing device 110 may determine a first image sequence based on the plurality of target images, determine a second image sequence based on the plurality of color template images, determine the authenticity of the multiple target images, etc. For another example, the processing device 110 may perform preprocessing (e.g., texture replacement) on multiple initial images obtained from the terminal 130 to obtain multiple target images. During processing, the processing device 110 may obtain data (such as instructions) from other components of the target discrimination system 100 (such as the storage device 140 and/or the terminal 130) directly or through the network 120, and/or send the processed data to the above components for storage or display.
  • processing device 110 may be a single server or group of servers.
  • the server group may be centralized or distributed (eg, processing device 110 may be a distributed system).
  • processing device 110 may be local or remote.
  • the processing device 110 may be implemented on a cloud platform, or provided in a virtual fashion.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • the network 120 may connect components of the system and/or connect the system with external components.
  • the network 120 enables communication between various components of the object identification system 100 and between the object identification system 100 and external components, facilitating the exchange of data and/or information.
  • the network 120 may be any one or more of a wired network or a wireless network.
  • the network 120 may include a cable network, a fiber optic network, a telecommunications network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN) , Bluetooth network, ZigBee network (ZigBee), near field communication (NFC), intra-device bus, intra-device line, cable connection, etc. or any combination thereof.
  • the network connection between the various parts in the target identification system 100 may adopt one of the above-mentioned manners, or may adopt multiple manners.
  • the network 120 may be of various topologies such as point-to-point, shared, centralized, or a combination of topologies.
  • network 120 may include one or more network access points.
  • network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, ..., through which one or more components of the target discrimination system 100 may connect to the network 120 to exchange data and/or information.
  • the terminal 130 refers to one or more terminal devices or software used by the user.
  • the terminal 130 may include an image capturing device 131 (e.g., a camera), and the image capturing device 131 may photograph a target object and acquire multiple target images.
  • the terminal 130 (e.g., the screen and/or other light-emitting elements of the terminal 130) may sequentially emit light of multiple colors in the lighting sequence to illuminate the target object.
  • the terminal 130 may communicate with the processing device 110 through the network 120 and send the captured multiple target images to the processing device 110 .
  • the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices with input and/or output capabilities, the like, or any combination thereof.
  • the above examples are only used to illustrate the broadness of the types of terminals 130 and not to limit the scope thereof.
  • the storage device 140 may be used to store data (eg, lighting sequences, multiple initial or multiple target images, multiple color template images, etc.) and/or instructions.
  • the storage device 140 may include one or more storage components, and each storage component may be an independent device or a part of other devices.
  • storage device 140 may include random access memory (RAM), read only memory (ROM), mass storage, removable memory, volatile read-write memory, the like, or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • storage device 140 may be implemented on a cloud platform.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • storage device 140 may be integrated or included in one or more other components of object discrimination system 100 (eg, processing device 110, terminal 130, or other possible components).
  • the object discrimination system 100 may include a first image sequence determination module, a second image sequence determination module, a verification module, and a model acquisition module.
  • the first image sequence determination module may be configured to determine the first image sequence based on the plurality of target images.
  • the shooting time of the multiple target images has a corresponding relationship with the irradiation time of the multiple illuminations in the illumination sequence illuminating the target object.
  • the first image sequence determination module may also be used to acquire multiple target images.
  • the first image sequence determination module may acquire multiple target images by preprocessing multiple acquired initial images. Preprocessing includes, but is not limited to, texture unification processing, image screening, image enhancement, image denoising, etc.
  • the target object may be a human face.
  • the second image sequence determination module may determine the second image sequence based on the plurality of color template images.
  • the plurality of color template images are generated based on the lighting sequence.
  • the verification module may determine the authenticity of the plurality of target images based on the first image sequence and the second image sequence. In some embodiments, the verification module may process the first image sequence and the second image sequence based on a color verification model to determine the authenticity of the plurality of target images.
  • the color verification model is a machine learning model with preset parameters. In some embodiments, the color verification model may include multiple layers. The multiple layers may include a first extraction layer, a second extraction layer, and a discriminant layer. When the color verification model contains multiple layers, the preset parameters of the color verification model can be obtained through end-to-end training.
  • the model obtaining module is used to obtain the color verification model.
  • the model acquisition module acquires preset parameters of the color verification model through a training process.
  • the model acquisition module can obtain the preset parameters of the color verification model through end-to-end training.
  • the above description of the target discrimination system and its modules is only for the convenience of description, and does not limit the description to the scope of the illustrated embodiments. It can be understood that for those skilled in the art, after understanding the principle of the system, various modules may be combined arbitrarily, or a subsystem may be formed to connect with other modules without departing from the principle.
  • the first image sequence determination module, the second image sequence determination module, the verification module, and the model acquisition module disclosed in FIG. 1 may be different modules in one system, or may be one module that implements the functions of two or more of the above modules.
  • each module may share one storage module, and each module may also have its own storage module. Such deformations are all within the protection scope of this specification.
  • FIG. 2 is an exemplary flowchart of a method for object discrimination according to some embodiments of the present specification. As shown in Figure 2, the process 200 includes the following steps:
  • Step 210: Determine a first image sequence based on the multiple target images.
  • the shooting time of the multiple target images has a corresponding relationship with the irradiation time of the multiple lights in the lighting sequence irradiating the target object.
  • step 210 may be performed by a first image sequence determination module.
  • the target object refers to an object on which target discrimination needs to be performed.
  • the target object may be a specific body part of the user, such as face, fingerprint, palm print, or pupil.
  • the target object refers to the face of a user who needs to be authenticated and/or authenticated.
  • the platform needs to verify whether the driver who takes the order is a registered driver user reviewed by the platform, and the target object is the driver's face.
  • the payment system needs to verify the payment authority of the payer, and the target object is the payer's face.
  • the terminal is instructed to emit the illumination sequence.
  • the lighting sequence includes a plurality of lighting for illuminating the target object.
  • the colors of different lights in the lighting sequence may be the same or different.
  • the plurality of lights include at least two lights with different colors, that is, the plurality of lights have multiple colors.
  • the illumination sequence includes information about each illumination in the plurality of illuminations, for example, color information, illumination time, and the like.
  • the color information of multiple lights in the lighting sequence may be represented in the same or different manners.
  • the color information of the plurality of lights may be represented by color categories.
  • the colors of the multiple lights in the lighting sequence may be represented as red, yellow, green, purple, cyan, blue, and red.
  • the color information of the plurality of lights may be represented by color parameters.
  • the colors of multiple lights in the lighting sequence can be represented as RGB(255, 0, 0), RGB(255, 255, 0), RGB(0, 255, 0), RGB(255, 0, 255), RGB(0, 255, 255), and RGB(0, 0, 255).
  • the lighting sequence, which may also be referred to as a color sequence, contains the color information of the plurality of illuminations.
  • the illumination times of the plurality of illuminations in the illumination sequence may include the start time, end time, duration, etc., or any combination thereof, for each illumination plan to illuminate the target object.
  • the start time of illuminating the target object with red light is 14:00
  • the start time of illuminating the target object with green light is 14:02.
  • the durations for which the red light and the green light illuminate the target object are both 0.1 seconds.
  • the durations for different illuminations to illuminate the target object may be the same or different.
  • the irradiation time can be expressed in other ways, which will not be repeated here.
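  • As an illustration only, a lighting sequence combining the color information and illumination times described above could be represented as a simple data structure. The field names and the particular colors, start times, and durations below are assumptions for the example, not values from the specification.

```python
# Each entry holds one illumination: its color as an RGB triple, the planned start
# of its irradiation (seconds from the start of the sequence), and its duration.
lighting_sequence = [
    {"color_rgb": (255, 0, 0),     "start_s": 0.0, "duration_s": 0.1},  # red
    {"color_rgb": (0, 255, 0),     "start_s": 0.5, "duration_s": 0.1},  # green
    {"color_rgb": (0, 0, 255),     "start_s": 1.0, "duration_s": 0.1},  # blue
    {"color_rgb": (255, 255, 255), "start_s": 1.5, "duration_s": 0.1},  # white
]
```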
  • the terminal may sequentially emit multiple illuminations in a particular order.
  • the terminal may emit light through the light emitting element.
  • the light-emitting element may include a light-emitting element built in the terminal, for example, a screen, an LED light, and the like.
  • the light-emitting element may also include an externally-connected light-emitting element. For example, external LED lights, light-emitting diodes, etc.
  • the terminal when the terminal is hijacked or attacked, the terminal may receive an instruction to emit light, but does not actually emit light. For more details about the lighting sequence, please refer to FIG. 3 and its related description, which will not be repeated here.
  • the terminal or processing device may generate the lighting sequence randomly or based on a preset rule. For example, a terminal or processing device may randomly select a plurality of colors from a color library to generate a lighting sequence.
  • the lighting sequence may be set by the user, determined according to the default settings of the target discrimination system 100, or determined by the processing device through data analysis, or the like.
  • the terminal or storage device may store the lighting sequence.
  • the first image sequence determination module may acquire the illumination sequence from the terminal or the storage device through the network.
  • the multiple target images are images used for target discrimination.
  • the formats of the multiple target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak Flash PiX (FPX), Digital Imaging and Communications in Medicine (DICOM), etc.
  • the multiple target images may be two-dimensional (2D, two-dimensional) images or three-dimensional (3D, three-dimensional) images.
  • the first image sequence determination module may acquire the plurality of target images based on the terminal. For example, the first image sequence determination module may send an acquisition instruction to the terminal through the network, and then receive multiple target images sent by the terminal through the network. Alternatively, the terminal may send the multiple target images to a storage device for storage, and the first image sequence determination module may acquire the multiple target images from the storage device. The target image may or may not contain the target object.
  • the target image may be captured by an image acquisition device of the terminal, or may be determined based on data (eg, video or image) uploaded by the user.
  • the target identification system 100 will issue a lighting sequence to the terminal.
  • the terminal may sequentially emit the plurality of illuminations according to the illumination sequence.
  • its image acquisition device may be instructed to acquire one or more images within the illumination time of the illumination.
  • the image capture device of the terminal may be instructed to capture video during the entire illumination period of the plurality of illuminations.
  • the terminal or other computing device may intercept one or more images collected during the illumination time of each illumination from the video according to the illumination time of each illumination.
  • One or more images collected by the terminal during the irradiation time of each illumination may be used as the multiple target images.
  • the multiple target images are real images captured by the target object when it is illuminated by the multiple illuminations. It can be understood that there is a corresponding relationship between the irradiation time of the multiple lights and the shooting time of the multiple target images. If one image is collected within the irradiation time of a single light, the corresponding relationship is one-to-one; if multiple images are collected within the irradiation time of a single light, the corresponding relationship is one-to-many.
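  • The correspondence described above can be illustrated with a short sketch: each captured image is assigned to the illumination whose irradiation window contains its shooting time, giving a one-to-one or one-to-many mapping. This is a minimal Python illustration; the field names ('shot_at', 'start_s', 'duration_s') are assumptions for the example, not taken from the specification.

```python
def group_images_by_illumination(images, lighting_sequence):
    """Assign each image to the illumination whose time window covers its shooting time.

    images: list of dicts with a 'shot_at' timestamp in seconds.
    lighting_sequence: list of dicts with 'start_s' and 'duration_s' in seconds.
    """
    groups = []
    for light in lighting_sequence:
        start = light["start_s"]
        end = start + light["duration_s"]
        # One image in the window -> one-to-one; several images -> one-to-many.
        groups.append([img for img in images if start <= img["shot_at"] < end])
    return groups
```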
  • the hijacker can upload images or videos through the terminal device.
  • the uploaded image or video may contain target objects or specific body parts of other users, and/or other objects.
  • the uploaded image or video may be a historical image or video shot by the terminal or other terminals, or a synthesized image or video.
  • the terminal or other computing device (e.g., the processing device 110) may determine the plurality of target images based on the uploaded image or video.
  • the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded image or video according to the illumination sequence and/or illumination duration of each illumination in the illumination sequence.
  • the lighting sequence includes five lightings arranged in sequence, and the hijacker can upload five images through the terminal device.
  • the terminal or other computing device will determine an image corresponding to each of the five illuminations according to the sequence in which the five images are uploaded.
  • For another example, the irradiation time of each of the five lights in the lighting sequence is 0.5 seconds, and the hijacker can upload a video with a duration of 2.5 seconds through the terminal.
  • the terminal or other computing device can divide the uploaded video into five segments of 0-0.5 seconds, 0.5-1 seconds, 1-1.5 seconds, 1.5-2 seconds and 2-2.5 seconds, and intercept one image from each segment.
  • the five images captured from the video correspond to the five illuminations in sequence.
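  • A minimal sketch of this slicing step, assuming OpenCV: the video is divided into one segment per illumination according to the illumination durations, and one frame is taken from each segment. Grabbing the frame at the midpoint of each segment is an illustrative choice, not something specified above.

```python
import cv2

def one_frame_per_illumination(video_path, durations_s):
    """Split a video into per-illumination segments and grab one frame from each."""
    cap = cv2.VideoCapture(video_path)
    frames, t0 = [], 0.0
    for d in durations_s:
        # Seek to the middle of the segment [t0, t0 + d) and read a single frame.
        cap.set(cv2.CAP_PROP_POS_MSEC, (t0 + d / 2.0) * 1000.0)
        ok, frame = cap.read()
        frames.append(frame if ok else None)
        t0 += d
    cap.release()
    return frames  # frames[i] corresponds to the i-th illumination
```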
  • the multiple images are fake images uploaded by the hijacker, not real images taken by the target object when illuminated by the multiple lights.
  • the uploading time of the image or the shooting time in the video may be regarded as the shooting time. It can be understood that when the terminal is hijacked, there is also a corresponding relationship between the irradiation time of multiple lights and the shooting time of multiple images.
  • the terminal or the processing device may use the color of the illumination whose irradiation time corresponds to the shooting time of an image as the color corresponding to that image. Specifically, if the irradiation time of an illumination corresponds to the shooting time of one or more images, the color of the illumination is used as the color corresponding to the one or more images. It can be understood that when the terminal is not hijacked or attacked, the colors corresponding to the multiple images should be the same as the multiple colors of the multiple lights in the lighting sequence. For example, the multiple colors of the multiple lights in the lighting sequence are "red, yellow, blue, green, purple, and red".
  • the colors corresponding to the multiple images obtained by the terminal should also be "red, yellow, blue, green, purple, and red".
  • when the terminal is hijacked or attacked, the colors corresponding to the multiple images may differ from the multiple colors of the multiple lights in the lighting sequence.
  • the first image sequence determination module may acquire multiple initial images from the terminal, and preprocess the multiple initial images to acquire the multiple target images.
  • the multiple initial images may be photographed by the terminal or uploaded by the hijacker through the terminal. See Figure 5 for more details on acquiring multiple target images.
  • there is a corresponding relationship between the shooting times of the multiple initial images and the irradiation times of the multiple lights. If the multiple target images are obtained by preprocessing the multiple initial images, the corresponding relationship between the shooting times of the multiple target images and the irradiation times of the multiple lights actually reflects the corresponding relationship between the shooting times of the multiple initial images corresponding to the multiple target images and the irradiation times of the multiple lights; likewise, the color of the light when a target image was shot actually reflects the color of the light when the initial image corresponding to that target image was shot.
  • the first image sequence is a collection of multiple target images arranged in a specific order.
  • the first image sequence determination module may sort the plurality of target images according to their respective shooting times to generate the first image sequence. For example, the plurality of target images may be sorted from first to last according to their respective shooting times.
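  • A minimal sketch of building the first image sequence: the target images are simply sorted from first to last by shooting time. The 'shot_at' field name is an assumption for the example.

```python
def build_first_image_sequence(target_images):
    """Order the target images from first to last according to their shooting times."""
    return sorted(target_images, key=lambda img: img["shot_at"])
```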
  • Step 220: Determine a second image sequence based on multiple color template images.
  • the plurality of color template images are generated based on the lighting sequence.
  • step 220 may be performed by a second image sequence determination module.
  • a color template image is a template image generated based on the colors of the lights in the lighting sequence.
  • a color template image for a color is a solid-color image that contains only that color. For example, a red color template image contains only red, no colors other than red, and no texture.
  • the second image sequence determination module may generate the plurality of color template images based on the lighting sequence. For example, the second image sequence determination module may generate a color template image corresponding to the color of each light in the light sequence according to the color type and/or color parameter of the light. In some embodiments, a color template image of each color in the lighting sequence may be pre-stored in the storage device, and the second image sequence determination module may obtain a color template image corresponding to the color of the lighting in the lighting sequence from the storage device through a network.
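  • A minimal sketch of generating solid-color template images from the lighting sequence, assuming NumPy and the illustrative lighting-sequence fields used earlier ('color_rgb', 'start_s'); the 224 x 224 template size is an assumption.

```python
import numpy as np

def color_template_image(rgb, size=(224, 224)):
    """Return an H x W x 3 uint8 image that contains only the given color (no texture)."""
    return np.full((size[0], size[1], 3), rgb, dtype=np.uint8)

def build_second_image_sequence(lighting_sequence):
    # Templates are ordered by the irradiation times of their lights, so their order
    # matches the order of the target images in the first image sequence.
    ordered = sorted(lighting_sequence, key=lambda light: light["start_s"])
    return [color_template_image(light["color_rgb"]) for light in ordered]
```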
  • the second image sequence is a collection of multiple color template images arranged in sequence.
  • the second image sequence determination module may sort the plurality of color template images according to their corresponding illumination times to generate the second image sequence. For example, the plurality of color template images may be sorted from first to last according to their corresponding illumination times.
  • the arrangement order of the plurality of color template images in the second image sequence is consistent with the arrangement order of the plurality of target images in the first image sequence.
  • the irradiation time of the illumination corresponding to the plurality of color template images in the second image sequence corresponds to the shooting time of the plurality of target images in the first image sequence. For example, if the multiple target images are arranged from first to last according to their shooting time, the multiple color template images are also arranged from first to last based on the irradiation time of their corresponding lighting.
  • Step 230: Determine the authenticity of the multiple target images based on the first image sequence and the second image sequence.
  • step 230 may be performed by a verification module.
  • the authenticity of the multiple target images may reflect whether the multiple target images are images obtained by shooting the target object under illumination of multiple colors of light. For example, when the terminal is not hijacked or attacked, its light-emitting element can emit light of multiple colors, and its image acquisition device can record or photograph the target object to obtain the target image. At this time, the target image has authenticity. For another example, when the terminal is hijacked or attacked, the target image is obtained based on the image or video uploaded by the attacker. At this time, the target image has no authenticity.
  • the authenticity of multiple target images can also be referred to as the authenticity of multiple initial images, which can reflect whether the multiple initial images corresponding to the multiple target images are It is an image obtained by shooting the target object under the illumination of multiple colors of light.
  • the authenticity of the multiple target images and the authenticity of the multiple initial images are collectively referred to as the authenticity of the multiple target images below.
  • the authenticity of the target image can be used to determine whether the terminal's image capture device has been hijacked by an attacker. For example, if at least one target image in the multiple target images is not authentic, it means that the image acquisition device is hijacked. For another example, if more than a preset number of target images in the multiple target images are not authentic, it means that the image acquisition device is hijacked.
  • the verification module may extract first feature information of the first image sequence and second feature information of the second image sequence.
  • the verification module may further determine the authenticity of the multiple target images based on the first feature information and the second feature information.
  • the first feature information may include color features of the plurality of target images in the first image sequence.
  • the second feature information may include color features of the plurality of color template images in the second image sequence.
  • the color feature of an image refers to information related to the color of the image.
  • the color of the image includes the color of the light when the image is captured, the color of the subject in the image, the color of the background in the image, and the like.
  • the color features may include deep features and/or complex features extracted by a neural network.
  • Color features can be represented in a number of ways.
  • the color feature can be represented based on the color value of each pixel in the image in the color space.
  • a color space is a mathematical model that describes color using a set of numerical values, each of which can represent the color value of a color feature on each color channel of the color space.
  • a color space may be represented as a vector space, each dimension of the vector space representing a color channel of the color space. Color features can be represented by vectors in this vector space.
  • the color space may include, but is not limited to, RGB color space, Lαβ color space, LMS color space, HSV color space, YCrCb color space, HSL color space, and the like.
  • the RGB color space includes red channel R, green channel G, and blue channel B, and color features can be represented by the color values of each pixel in the image on the red channel R, green channel G, and blue channel B, respectively.
  • color features may be represented by other means (eg, color histograms, color moments, color sets, etc.).
  • the histogram statistics are performed on the color values of each pixel in the image in the color space to generate a histogram representing the color features.
  • a specific operation (e.g., mean, squared difference, etc.) is performed on the color values of each pixel in the image in the color space, and the result of the specific operation represents the color feature of the image.
  • the verification module may extract the color features of the image through a color feature extraction algorithm and/or a color verification model (or portion thereof).
  • Color feature extraction algorithms include: color histogram, color moment, color set, etc.
  • the verification module can compute a histogram of the color values of each pixel in the image in each color channel of the color space, so as to obtain the color histogram.
  • the verification module can divide the image into multiple regions, and use the set of binary indices of the multiple regions established by the color values of each pixel in the image in each color channel of the color space to determine the color of the image. set. For more details on extracting color features based on the color verification model, see FIG. 4 and its related description.
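  • A minimal sketch of two of the hand-crafted color feature representations mentioned above (a per-channel color histogram and color moments), assuming NumPy and 8-bit RGB images; the 16-bin histogram size is an illustrative assumption.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Concatenate per-channel histograms of an H x W x 3 uint8 image."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return hist / (hist.sum() + 1e-8)  # normalize so images of different sizes are comparable

def color_moments(image):
    """Per-channel mean and standard deviation as a 6-dimensional color feature."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```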
  • the verification module may extract the first feature information of the first image sequence based on the first extraction layer in the color verification model.
  • the first feature information based on the first extraction layer please refer to FIG. 4 and its related descriptions.
  • the verification module may extract the second feature information of the second image sequence based on the second extraction layer in the color verification model.
  • the second feature information based on the second extraction layer please refer to FIG. 4 and its related descriptions.
  • the verification module may determine, based on the degree of matching between the first feature information and the second feature information, a judgment result of whether the color sequence of the illumination when the multiple target images in the first image sequence were captured is consistent with the color sequence of the multiple color template images in the second image sequence. If the multiple target images are obtained by preprocessing the multiple initial images, the color sequence of the illumination when the multiple target images were captured actually reflects the color sequence of the multiple illuminations when the multiple initial images corresponding to the multiple target images were captured. For example, the verification module may take the similarity between the first feature information and the second feature information as the matching degree, and then determine the judgment result based on the relationship between that similarity and a preset threshold.
  • the verification module may determine the authenticity of the multiple target images based on the judgment result. For example, if the judgment result is consistent, the multiple target images are authentic.
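  • A minimal sketch of the matching step described above, using cosine similarity between the first and second feature information and a preset threshold; the threshold value of 0.8 is an illustrative assumption.

```python
import numpy as np

def judge_authenticity(first_features, second_features, threshold=0.8):
    """Judge the target images as authentic when the two feature vectors match closely enough."""
    a = np.asarray(first_features, dtype=np.float32)
    b = np.asarray(second_features, dtype=np.float32)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return similarity >= threshold  # consistent color sequences -> authentic
```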
  • the preset thresholds (eg, the first threshold, the second threshold) set for the image authenticity determination in some embodiments of this specification may be related to the degree of shooting stability.
  • the shooting stability degree is the stability degree when the image acquisition device of the terminal acquires the target image.
  • the preset threshold is positively related to the degree of shooting stability. It can be understood that the higher the shooting stability, the higher the quality of the obtained target image, and the more the color features extracted based on the multiple target images can truly reflect the color of the illumination when the multiple target images or their corresponding multiple initial images were captured, and therefore the larger the preset threshold can be.
  • the shooting stability may be measured based on a motion parameter of the terminal detected by a motion sensor of the terminal (eg, a vehicle-mounted terminal or a user terminal, etc.).
  • the motion sensor may be a sensor that detects the driving situation of the vehicle, and the vehicle may be the vehicle used by the target user.
  • the target user refers to the user to which the target object belongs. For example, if the target user is an online car-hailing driver, the motion sensor may be a motion sensor on the driver's end or the in-vehicle terminal.
  • the preset threshold may also be related to the shooting distance and the rotation angle.
  • the shooting distance is the distance between the target object and the terminal when the image acquisition device acquires the target image.
  • the rotation angle is the angle between the front of the target object and the terminal screen when the image acquisition device acquires the target image.
  • both the shooting distance and the rotation angle are negatively correlated with the preset threshold. It can be understood that the shorter the shooting distance, the higher the quality of the obtained target image, and the more the color features extracted based on the multiple target images can truly reflect the color of the illumination when the multiple target images or their corresponding multiple initial images were captured, and therefore the larger the preset threshold can be. The smaller the rotation angle, the higher the quality of the acquired target image, and similarly, the larger the preset threshold can be.
  • the shooting distance and rotation angle may be determined based on the target image through image recognition techniques.
  • the verification module may perform specific operations (e.g., averaging, standard deviation, etc.) on the shooting stability, shooting distance, and rotation angle of each target image, and determine the preset threshold based on the shooting stability, shooting distance, and rotation angle obtained by the specific operations.
  • obtaining the shooting stability degree of the terminal when the multiple target images are acquired by the verification module includes: acquiring the sub-stability degree of the terminal when each of the multiple target images is captured; and fusing the multiple sub-stability degrees to determine the stability degree.
  • acquiring the shooting distance between the target object and the terminal when the multiple target images are shot by the verification module includes: acquiring the sub-shooting distance between the target object and the terminal when each of the multiple target images is shot; and fusing the plurality of sub-shooting distances to determine the shooting distance.
  • obtaining the rotation angle of the target object relative to the terminal when the multiple target images are captured by the verification module includes: acquiring the sub-rotation angle of the target object relative to the terminal when each of the multiple target images is captured; and fusing the multiple sub-rotation angles to determine the rotation angle.
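  • A minimal sketch of how the sub-values could be fused and the preset threshold adapted from shooting stability, shooting distance, and rotation angle as described above; fusing by averaging and the linear weights are illustrative assumptions, not taken from the specification.

```python
import numpy as np

def fuse(sub_values):
    """Fuse the per-image sub-values (sub-stability, sub-distance, sub-angle) by averaging."""
    return float(np.mean(sub_values))

def preset_threshold(sub_stabilities, sub_distances, sub_angles,
                     base=0.8, w_stability=0.05, w_distance=0.05, w_angle=0.05):
    stability = fuse(sub_stabilities)  # positively related to the threshold
    distance = fuse(sub_distances)     # negatively related to the threshold
    angle = fuse(sub_angles)           # negatively related to the threshold
    value = base + w_stability * stability - w_distance * distance - w_angle * angle
    return float(np.clip(value, 0.0, 1.0))
```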
  • the verification module may determine the judgment result based on the discrimination layer in the color verification model. For more details about determining the judgment result based on the discriminant layer, please refer to FIG. 4 and its related description.
  • the target identification system 100 sends a lighting sequence to the terminal, and acquires from the terminal a target image corresponding to a plurality of lightings in the lighting sequence.
  • the processing device can determine whether the target image, or the initial image corresponding to the target image, is an image of the target object captured under the illumination sequence, and further determine whether the terminal is hijacked or attacked. It is understandable that when an attacker does not know the lighting sequence, it is difficult for the colors of the light at the time the uploaded images or the images in the uploaded video were captured to be the same as the colors of the multiple lights in the lighting sequence; even if the kinds of colors are the same, the order of the colors is difficult to reproduce.
  • the method disclosed in this specification can improve the difficulty of an attacker's attack and ensure the security of target identification.
  • Some embodiments of the present specification generate a second image sequence based on artificially constructed color template images, and determine the authenticity of the multiple target images by comparing the second image sequence with the first image sequence (the sequence of multiple target images).
  • the method disclosed in this specification can make the task of identifying the target image simpler.
  • the method disclosed in this specification can improve the processing efficiency of target images and reduce processing time.
  • a color verification model may be used for target image authenticity analysis. Using the second image sequence can make the recognition task of the color verification model simpler and the learning difficulty lower, thereby making the recognition accuracy higher.
  • the multiple target images in the first image sequence are all captured under the same ambient light conditions and illuminated by the same light-emitting element. Therefore, when determining the authenticity of the multiple target images based on the relationship between the first image sequence and the second image sequence, the influence of external ambient light and of the light-emitting element can be eliminated or weakened, thereby improving the accuracy of light color recognition.
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the colors of the multiple lights in the lighting sequence may be exactly the same, completely different, or partially the same.
  • the colors of the plurality of lights are all red.
  • at least two of the plurality of lights have different colors, that is, the plurality of lights have multiple colors.
  • the plurality of colors includes white.
  • the plurality of colors includes red, blue, and green.
  • illumination sequence a includes four illuminations arranged in sequence: red light, white light, blue light, and green light.
  • illumination sequence b includes four illuminations arranged in sequence: white light, blue light, red light, and green light.
  • illumination sequence c includes four illuminations arranged in sequence: red light, white light, blue light, and white light.
  • illumination sequence d includes four illuminations arranged in sequence: red light, white light, white light, and blue light.
  • Lighting sequence a and lighting sequence b have the same color of multiple lights, but they are arranged in different order.
  • multiple lights in lighting sequence c and lighting sequence d have the same color, but are arranged in different orders.
  • within illumination sequence a and within illumination sequence b, the colors of the four lights are completely different from one another, whereas in illumination sequence c and illumination sequence d, two of the lights have the same color.
  • FIG. 4 is a diagram showing an example structure of a color verification model according to some embodiments of the present specification.
  • the verification module may process the first image sequence and the second image sequence based on a color verification model to determine the authenticity of the plurality of target images.
  • the color verification model is a machine learning model with preset parameters. Preset parameters refer to the model parameters learned during the training of the machine learning model. Taking a neural network as an example, the model parameters include weight and bias.
  • the color verification model may include a first extraction layer 430, a second extraction layer 440 and a discrimination layer 470.
  • the verification module may implement steps 210-230 using a color verification model to determine the judgment result. Specifically, step 210 may be implemented based on the first extraction layer 430 , step 220 may be implemented based on the second extraction layer 440 , and step 230 may be implemented based on the discrimination layer 470 . Further, the verification module determines the authenticity of the plurality of target images based on the judgment results.
  • the input of the first extraction layer 430 is the first image sequence 410 and the output is the first feature information 450 .
  • the verification module may sequentially splice the multiple target images in the first image sequence 410 and input them into the first extraction layer 430.
  • the output first feature information 450 may be a feature obtained by splicing color features corresponding to multiple target images in the first image sequence 410 .
  • the input of the second extraction layer 440 is the second image sequence 420 and the output is the second feature information 460 .
  • the verification module may sequentially splice the multiple color template images in the second image sequence 420 and input them into the second extraction layer 440.
  • the output second feature information 460 may be a feature obtained by splicing color features corresponding to multiple color template images in the second image sequence 420 .
  • the types of the first extraction layer and the second extraction layer include, but are not limited to, convolutional neural network (CNN) models such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or recurrent neural network models.
  • the types of the first extraction layer and the second extraction layer may be the same or different.
  • the input of the discrimination layer 470 is the first feature information 450 and the second feature information 460, and the output is the judgment result.
  • the discriminative layer may be a model that implements classification, including but not limited to a fully connected layer, a deep neural network (DNN), and the like.
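  • A minimal sketch of the structure described above (a first extraction layer, a second extraction layer, and a discrimination layer), assuming PyTorch and torchvision; the ResNet-18 backbones, feature dimension, sequence length, and channel-wise splicing of the input sequences are illustrative assumptions, not taken from the specification.

```python
import torch
import torch.nn as nn
from torchvision import models

class ColorVerificationModel(nn.Module):
    """Two extraction layers (one per image sequence) followed by a discrimination layer."""

    def __init__(self, seq_len=4, feat_dim=128):
        super().__init__()
        # First extraction layer: encodes the spliced first image sequence (target images).
        self.first_extraction = self._make_backbone(seq_len, feat_dim)
        # Second extraction layer: encodes the spliced second image sequence (color templates).
        self.second_extraction = self._make_backbone(seq_len, feat_dim)
        # Discrimination layer: binary judgment (color sequences consistent / inconsistent).
        self.discrimination = nn.Sequential(
            nn.Linear(2 * feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    @staticmethod
    def _make_backbone(seq_len, feat_dim):
        backbone = models.resnet18(weights=None)
        # Accept a sequence spliced channel-wise: 3 channels per image.
        backbone.conv1 = nn.Conv2d(3 * seq_len, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        return backbone

    def forward(self, first_seq, second_seq):
        first_features = self.first_extraction(first_seq)     # first feature information
        second_features = self.second_extraction(second_seq)  # second feature information
        return self.discrimination(torch.cat([first_features, second_features], dim=1))
```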
  • the preset parameters of the color verification model are determined during the training process.
  • the model acquisition module may train an initial color verification model based on a plurality of training samples with sample labels to obtain preset parameters of the color verification model.
  • Each of the plurality of training samples includes a first sample image sequence composed of a plurality of sample target images, a second sample image sequence composed of a plurality of sample color template images, and a sample label.
  • the sample label indicates whether the color sequence of the illumination when the multiple sample target images of the first sample image sequence are captured is consistent with the color sequence of the multiple sample color templates in the second sample image sequence.
  • the model acquisition module may input multiple training samples into the initial color verification model, and update the parameters of the initial first extraction layer, the initial second extraction layer, and the initial discriminant layer through training until the updated color verification model satisfies the preset conditions.
  • the updated color verification model can be designated as the color verification model with preset parameters; in other words, the updated color verification model can be designated as the trained color verification model. The preset condition can be that the loss function of the updated color verification model is less than a threshold or converges, or that the number of training iterations reaches a threshold.
  • the preset parameters of the color verification model are obtained through end-to-end training.
  • the model obtaining module can obtain the preset parameters of the first extraction layer, the second extraction layer and the discrimination layer in the color verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • the first sample image sequence can be input into the initial first extraction layer
  • the second sample image sequence can be input into the initial second extraction layer
  • the loss function can be established based on the output results of the initial discrimination layer and the sample labels, so as to simultaneously update the parameters of each initial layer in the initial color verification model based on the loss function.
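  • A minimal sketch of the end-to-end training described above, assuming the ColorVerificationModel sketch given earlier; the Adam optimizer, learning rate, and cross-entropy loss over the binary "consistent / inconsistent" label are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_end_to_end(model, dataloader, epochs=10, lr=1e-4):
    criterion = nn.CrossEntropyLoss()
    # A single optimizer over all parameters updates the first extraction layer,
    # the second extraction layer, and the discrimination layer simultaneously.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for first_seq, second_seq, label in dataloader:
            logits = model(first_seq, second_seq)
            loss = criterion(logits, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```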
  • all or part of the parameters of the first extraction layer and the second extraction layer may be shared.
  • the authenticity of the target image is determined by determining whether the first image sequence and the second image sequence are consistent through a color verification model.
  • the color verification model can directly obtain the judgment result by comparing whether the first image sequence containing the target images and the second image sequence containing the color template images are consistent, without identifying the specific type of illumination when the target images were captured. In other words, the task the color verification model actually performs is a binary classification task.
  • the discriminative layer of the color verification model may include only a small number of neurons (eg, two neurons). Compared with the color recognition network in the traditional method, the structure of the color verification model disclosed in this specification is simpler.
  • the target object analysis based on the color verification model also requires relatively less computing resources (eg, computing space), thereby improving the efficiency of light color recognition.
  • the input of the model can be an image sequence corresponding to any color.
  • therefore, the embodiments of this specification are more widely applicable.
  • using the color verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of the performance difference of the terminal equipment, and further determine the authenticity of the target image.
  • the training samples for the initial color validation model can be taken by terminals with different performances.
  • these performance differences are learned by the initial color verification model in the training process, so that the color verification model can take terminal performance differences into account when judging the color of the target object, and determine the color corresponding to the target image more accurately.
  • the multiple target images in the first sequence of images are all captured under the same ambient light conditions. Therefore, when the first image sequence is processed based on the color verification model and the authenticity of the multiple target images is determined, the influence of external ambient light can be eliminated or reduced.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • process 500 may be performed by a first image sequence determination module. As shown in Figure 5, the process 500 includes the following steps:
  • Step 510 acquiring multiple initial images.
  • the plurality of initial images are unprocessed images acquired from the terminal.
  • the multiple initial images may be images captured by an image acquisition device of the terminal, or may be images determined by the hijacked terminal based on images or videos uploaded by the hijacker.
  • the terminal may acquire a corresponding initial image according to the irradiation time of each light in the illumination sequence.
  • the first image sequence determination module may acquire the plurality of initial images from the terminal through the network.
  • the hijacker can upload images or videos through the terminal device.
  • the first image sequence determination module may determine the plurality of initial images based on the uploaded images or videos.
  • Step 520 preprocessing the multiple initial images to acquire multiple target images.
  • the target identification system 100 may further include a preprocessing module for preprocessing the initial target image.
  • preprocessing may include texture unification processing, image screening, image denoising, image enhancement, and the like.
  • Image screening may include screening out images that do not include the target object or specific body parts of the user.
  • the object of image screening may be the initial images collected by the terminal, or images obtained from the initial images after other preprocessing (for example, texture unification processing).
  • the first image sequence determination module or the preprocessing module may perform matching based on the characteristics of the initial image and the characteristics of the image containing the target object, and filter out the images that do not contain the target object from the plurality of initial images.
  • Image denoising may include removing interfering information in an image.
  • the interference information in the image will not only reduce the quality of the image, but also affect the color features extracted based on the image.
  • the first image sequence determination module or the preprocessing module may implement image denoising through a median filter, a machine learning model, or the like (see the illustrative preprocessing sketch after this block).
  • Image enhancement can add missing information in an image. Missing information in the image can cause image blur and also affect the color features extracted from the image. For example, image enhancement can adjust the brightness, contrast, saturation, hue, etc. of an image, increase its sharpness, reduce noise, etc.
  • the first image sequence determination module or the preprocessing module may implement image enhancement through a smoothing filter, a median filter, or the like.
  • the target of image denoising or image enhancement can be the original image, or the original image after other preprocessing.
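As a concrete illustration of the screening, denoising, and enhancement operations mentioned above, the following sketch uses OpenCV; the Haar-cascade face detector, median filter, and linear brightness/contrast adjustment are assumed example choices rather than the specific preprocessing required by this description, and the threshold values are arbitrary.

```python
# Illustrative preprocessing sketch (assumed tools: OpenCV / NumPy).
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(image: np.ndarray) -> bool:
    # Image screening: drop frames in which no face (target object) is found.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return len(face_detector.detectMultiScale(gray, 1.1, 5)) > 0

def denoise(image: np.ndarray) -> np.ndarray:
    # Image denoising with a median filter (one of the options mentioned above).
    return cv2.medianBlur(image, 3)

def enhance(image: np.ndarray, alpha: float = 1.2, beta: float = 10) -> np.ndarray:
    # Simple enhancement: adjust contrast (alpha) and brightness (beta).
    return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

def preprocess(initial_images):
    targets = []
    for img in initial_images:
        if not contains_face(img):
            continue  # screened out
        targets.append(enhance(denoise(img)))
    return targets
```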
  • the texture of an image refers to the grayscale distribution of elements (such as pixels) in the image and their surrounding spatial neighborhoods. It can be understood that, if the multiple initial images are captured by the terminal, the textures of the multiple initial images may be different because the distance, angle, and background of the terminal and the target object may change.
  • the texture unification processing can make the textures of the multiple initial images the same or substantially the same, reduce the interference of texture features, and thus improve the efficiency and accuracy of target recognition.
  • the first image sequence determination module or the preprocessing module may implement texture unification processing through texture replacement.
  • Texture replacement refers to replacing all the textures of the original image with the textures of the specified image.
  • the designated image may be one of the plurality of initial images; that is, the first image sequence determination module or the preprocessing module may replace the textures of the other initial images with the texture of that designated initial image, so as to achieve texture consistency.
  • the designated image may also be an image of the target object other than the plurality of initial images, for example, an image of the target object taken in the past and stored in a storage device.
  • See FIG. 6 and its related descriptions for more details on texture replacement, which will not be repeated here.
  • the first image sequence determination module may implement the texture unification processing by removing the background, correcting the shooting angle, or the like. For example, taking the target object as a target face, the parts other than the face in the multiple initial images are cut out, and the angle of the face in the remaining part is then corrected to a preset angle (for example, the face directly facing the image acquisition device). A crude background-cutout sketch is given after this block.
  • the background cutout may identify the face contour of each of the plurality of initial images through image recognition technology, and then cut out the part other than the face contour.
  • the angle correction can be achieved by a correction algorithm (eg, a face correction algorithm) or a model.
  • the first image sequence determination module or the preprocessing module may also implement texture unification processing in other ways, which is not limited herein.
  • the images obtained after the preprocessing are used as the multiple target images.
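The background cutout and framing normalization mentioned above can be pictured with the sketch below. It is deliberately crude: it only crops the largest detected face box and resizes it to a canonical size, whereas a practical system would typically use the face contour or landmarks for contour-accurate cutout and true angle correction; the detector and the 224×224 canonical size are assumptions.

```python
# Crude background-cutout sketch (assumed tools: OpenCV). A practical system
# would use face contours or landmarks for accurate cutout and angle
# correction; this version only crops the largest detected face box.
from typing import Optional

import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def cut_out_background(image: np.ndarray, size=(224, 224)) -> Optional[np.ndarray]:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None                                      # no target object found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    face = image[y:y + h, x:x + w]
    return cv2.resize(face, size)                        # canonical framing/size
```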
  • FIG. 6 is an exemplary flowchart of acquiring multiple target images based on texture replacement according to some embodiments of the present specification.
  • process 600 may be performed by a first image sequence determination module or a preprocessing module.
  • the following describes the process 600 as performed by the first image sequence determination module.
  • the process 600 includes the following steps:
  • Step 610 acquiring multiple initial images.
  • See step 510 and its related description for more details on acquiring the initial images.
  • the plurality of initial images may include a first initial image and a second initial image.
  • the first image sequence determination module may perform texture replacement on the plurality of initial images to generate the plurality of target images described in step 220 .
  • the first image sequence determination module may replace the texture of the plurality of initial images with the texture in the specified image.
  • the first initial image refers to a designated image among the plurality of initial images, that is, the initial image that provides the texture for replacement.
  • the first initial image needs to contain the target object.
  • the first image sequence determination module may obtain the first initial image including the target object from the plurality of initial images through image screening.
  • the first initial image may be any one of multiple initial images.
  • the first initial image may be the one with the earliest shooting time among the plurality of initial images.
  • the first initial image may be the one with the simplest background among the plurality of initial images.
  • the simplicity of the background may be judged by the color variety of the background: the fewer the colors in the background, the simpler the background.
  • the simplicity of the background may also be judged by the complexity of the lines in the background: the fewer the lines, the simpler the background. An illustrative sketch of both measures is given after this block.
  • if white light is present in the illumination sequence, the first initial image may be the initial image whose acquisition time corresponds to the irradiation time of the white light.
  • the second initial image is the initial image of the replaced texture among the plurality of initial images.
  • the second initial image may be any initial image other than the first initial image.
  • the second initial image may be one or more images.
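The two background-simplicity measures mentioned above (fewer colors, fewer lines) could be scored, for example, as follows; the color quantization step, the Canny edge detector, and the weighting between the two scores are all assumptions of this sketch, and in practice the scores would normally be computed on the background region only (with the target object masked out).

```python
# Illustrative "background simplicity" scores; the heuristics are assumptions
# beyond the description's "fewer colors / fewer lines = simpler".
import cv2
import numpy as np

def color_variety(image: np.ndarray, bins_per_channel: int = 8) -> int:
    # Quantize each channel, then count how many distinct colors remain.
    q = (image // (256 // bins_per_channel)).reshape(-1, 3)
    return len(np.unique(q, axis=0))

def line_complexity(image: np.ndarray) -> float:
    # Fraction of edge pixels as a rough proxy for "how many lines".
    edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 100, 200)
    return float(np.count_nonzero(edges)) / edges.size

def simplest_background(images):
    # Lower combined score = simpler background; the weighting is arbitrary here.
    return min(images, key=lambda im: color_variety(im) + 1000 * line_complexity(im))
```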
  • Step 620 replace the texture of the second initial image with the texture of the first initial image to generate a processed second initial image.
  • the first image sequence determination module may implement texture replacement based on a color migration algorithm. Specifically, the first image sequence determination module may transfer the color of the illumination when the second initial image was captured to the first initial image based on a color transfer algorithm, so as to generate the processed second initial image.
  • a color transfer algorithm is a method of transferring the color of one image to another image to generate a new image.
  • Color migration algorithms include but are not limited to Reinhard algorithm, Welsh algorithm, fuzzy clustering algorithm, adaptive migration algorithm, etc.
  • the color transfer algorithm may extract the color features of the second initial image, and then transfer the color features of the second initial image to the first initial image to generate the processed second initial image. See FIG. 2 and its related descriptions for more details on color features. For a detailed description of the color migration algorithm, reference may be made to FIG. 7 and its related descriptions, which will not be repeated here.
  • the first image sequence determination module may transfer the color features of the illumination when the second initial image was captured to the first initial image based on the color transfer algorithm; the color features of the newly generated image are still the same as those of the second initial image, but its texture becomes the texture of the first initial image.
  • when there are N second initial images (N being an integer greater than or equal to 1), the color features of the illumination when each of the N second initial images was captured can be transferred to the first initial image, and N new images can be obtained.
  • the color features of the N newly generated images respectively represent the colors of the illumination when the N second initial images were captured, but the textures of the N newly generated images are all the texture of the first initial image.
  • the first image sequence determination module may further implement texture replacement using a texture feature transfer algorithm.
  • the texture feature migration algorithm can extract the texture features of the first initial image and the texture features of the second initial image, and replace the texture features of the second initial image with those of the first initial image to generate the processed second initial image.
  • methods for extracting texture features may include, but are not limited to, geometric methods, gray level co-occurrence matrix methods, model methods, signal processing methods, and machine learning models; a small gray level co-occurrence matrix sketch is given after this block.
  • the machine learning model may include, but is not limited to, a deep neural network model, a recurrent neural network model, a custom model structure, and the like, which is not limited here.
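As one example of the gray level co-occurrence matrix method listed above, the following sketch computes a small GLCM-based texture descriptor with scikit-image; the chosen distances, angles, and properties are assumptions, and the function names graycomatrix/graycoprops assume scikit-image 0.19 or later (earlier releases spell them greycomatrix/greycoprops).

```python
# Gray level co-occurrence matrix (GLCM) texture features — illustrative only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_image: np.ndarray) -> np.ndarray:
    # gray_image: uint8 2-D array (a grayscale version of the initial image).
    glcm = graycomatrix(gray_image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```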
  • Step 630 taking the processed second initial image as one of the multiple target images.
  • the color of the illumination represented in the processed second initial image is the same as that of the second initial image, but its texture features come from the first initial image. If the colors of the illumination when the first initial image and the second initial image were captured are different, the first initial image and the processed second initial image may be two images with the same content but different colors.
  • the plurality of initial images includes one or more second initial images. For each second initial image, the first image sequence determination module may replace the texture of the second initial image with the texture of the first initial image to generate a corresponding processed second initial image. Optionally, the first image sequence determination module may also use the first initial image as one of the multiple target images. At this time, the plurality of target images include the first initial image and one or more processed second initial images.
  • the textures in the multiple target images are made the same through texture unification processing, thereby reducing the influence of the textures in the target images on the light color recognition, and better determining the authenticity of the multiple target images.
  • FIG. 7 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • the first image sequence determination module may select the initial image 710-1 as the first initial image, and the initial images 710-2, 710-3, ..., 710-m as the second initial images.
  • each second initial image may differ from the first initial image not only in color but also in texture. For example, the location of the target object in the second initial image 710-m is different from that in the first initial image 710-1.
  • the shooting backgrounds of the target objects in the second initial images 710-2, 710-3..., 710-m are all different from those in the first initial image 710-1.
  • the texture differences of the initial images 710-1, 710-2, 710-3..., 710-m may lead to low accuracy of the image authenticity judgment result and increased data analysis amount.
  • the second initial image can be preprocessed by using a color migration algorithm.
  • the first image sequence determination module extracts the color features of the m-1 second initial images 710-2, 710-3, ..., 710-m respectively (i.e., the color features corresponding to red, orange, cyan, ..., blue).
  • the first image sequence determination module respectively transfers the color features of the m-1 second initial images to the first initial image 710-1, and generates m-1 processed second initial images 720-2, 720-3, ..., 720-m.
  • the processed second initial image incorporates the texture feature of the first initial image and the color feature of the second initial image, which is equivalent to an image obtained by replacing the texture of the second initial image with the texture of the first initial image.
  • the first initial image and the second initial image are RGB images.
  • the first image sequence determination module may first convert the first initial image and the second initial image from the RGB color space to the Lαβ color space.
  • the first image sequence determination module may convert the target image (e.g., the first initial image or the second initial image) from the RGB color space to the Lαβ color space through a neural network.
  • the first image sequence determination module may convert the target image from the RGB color space to the LMS color space first, and then from the LMS color space to the Lαβ color space, based on multiple transition matrices.
  • the first image sequence determination module may extract the color features of the transformed second initial image and the transformed first initial image in the Lαβ color space.
  • the first image sequence determination module may calculate the average value μ2j and the standard deviation σ2j of all the pixels of the transformed second initial image on each channel of Lαβ.
  • j represents the channel index in the Lαβ color space, 0 ≤ j ≤ 2. When j is equal to 0, 1, and 2, it represents the luminance channel L, the yellow-blue channel α, and the red-green channel β, respectively.
  • the first image sequence determination module may calculate the average value μ1j and the standard deviation σ1j of all the pixels of the transformed first initial image on each channel of Lαβ.
  • the first image sequence determination module can transfer the color feature of the transformed second initial image to the transformed first initial image.
  • specifically, for the transformed first initial image, the first image sequence determination module can subtract the average value μ1j of each Lαβ channel from the value of each pixel in that channel, to obtain the updated value of each pixel in each Lαβ channel.
  • the first image sequence determination module can then multiply the updated value of each pixel in each Lαβ channel by the scaling factor λj of the channel (for example, λj may be taken as the ratio σ2j/σ1j of the standard deviations), and add the average value μ2j of the transformed second initial image in the corresponding Lαβ channel, to generate the processed second initial image.
  • the first image sequence determination module may also convert the processed second initial image from the Lαβ color space back to the RGB color space. A hedged end-to-end sketch of this color transfer procedure is given after this block.
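The whole Lαβ color transfer procedure described above can be pictured with the following NumPy sketch. The RGB↔LMS↔Lαβ conversion matrices are the commonly published Reinhard-style constants, and the scaling factor λj is assumed here to be σ2j/σ1j, a common choice that this description leaves open; input images are assumed to be float RGB arrays in [0, 1].

```python
# Hedged sketch of Reinhard-style color transfer in the Lαβ color space.
import numpy as np

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2RGB = np.array([[ 4.4679, -3.5873,  0.1193],
                    [-1.2186,  2.3809, -0.1624],
                    [ 0.0497, -0.2439,  1.2045]])
A = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
    np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])
A_INV = np.array([[1, 1, 1], [1, 1, -1], [1, -2, 0]]) @ \
        np.diag([np.sqrt(3) / 3, np.sqrt(6) / 6, np.sqrt(2) / 2])

def rgb_to_lab(rgb: np.ndarray) -> np.ndarray:
    # rgb: float array in [0, 1], shape (H, W, 3); returns Lαβ of same shape.
    lms = np.clip(rgb @ RGB2LMS.T, 1e-6, None)   # avoid log(0)
    return np.log10(lms) @ A.T

def lab_to_rgb(lab: np.ndarray) -> np.ndarray:
    lms = 10.0 ** (lab @ A_INV.T)
    return np.clip(lms @ LMS2RGB.T, 0.0, 1.0)

def transfer_color(first_img: np.ndarray, second_img: np.ndarray) -> np.ndarray:
    """Recolor first_img (texture source) with the colors of second_img."""
    lab1 = rgb_to_lab(first_img)     # transformed first initial image
    lab2 = rgb_to_lab(second_img)    # transformed second initial image
    mu1, sigma1 = lab1.mean(axis=(0, 1)), lab1.std(axis=(0, 1))
    mu2, sigma2 = lab2.mean(axis=(0, 1)), lab2.std(axis=(0, 1))
    # Per channel j: subtract μ1j, scale by λj = σ2j / σ1j, add μ2j.
    lab_new = (lab1 - mu1) * (sigma2 / (sigma1 + 1e-6)) + mu2
    return lab_to_rgb(lab_new)

# Usage sketch: one first initial image plus N second initial images yields
# N processed second initial images that share the first image's texture:
# processed = [transfer_color(first_initial, s) for s in second_initials]
```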
  • Some embodiments of this specification transfer the color features of the second initial image to the first initial image based on the color transfer algorithm, which not only avoids extracting complex texture features, but also enables the processed second initial image to contain more detailed and accurate features. Therefore, the efficiency and accuracy of determining the authenticity of the target image can be improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present description disclose a target discrimination method and system. The target discrimination method comprises: determining a first image sequence on the basis of a plurality of target images, the image capture times of the plurality of target images corresponding to the irradiation times of a plurality of lights in an illumination sequence to which a target object is exposed; determining a second image sequence on the basis of a plurality of color template images, the plurality of color template images being generated on the basis of the illumination sequence; and determining the authenticity of the plurality of target images on the basis of the first image sequence and the second image sequence.
PCT/CN2022/074706 2021-04-20 2022-01-28 Procédé et système de discrimination de cible WO2022222569A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110423974.0A CN113111811A (zh) 2021-04-20 2021-04-20 一种目标判别方法和系统
CN202110423974.0 2021-04-20

Publications (1)

Publication Number Publication Date
WO2022222569A1 true WO2022222569A1 (fr) 2022-10-27

Family

ID=76718765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074706 WO2022222569A1 (fr) 2021-04-20 2022-01-28 Procédé et système de discrimination de cible

Country Status (2)

Country Link
CN (1) CN113111811A (fr)
WO (1) WO2022222569A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111811A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 一种目标判别方法和系统
CN113111806A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 用于目标识别的方法和系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408403A (zh) * 2018-09-10 2021-09-17 创新先进技术有限公司 活体检测方法、装置和计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991432A (zh) * 2020-03-03 2020-04-10 支付宝(杭州)信息技术有限公司 活体检测方法、装置、电子设备及系统
CN111523438A (zh) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 一种活体识别方法、终端设备和电子设备
CN111881844A (zh) * 2020-07-30 2020-11-03 北京嘀嘀无限科技发展有限公司 一种判断图像真实性的方法及系统
CN112561813A (zh) * 2020-12-10 2021-03-26 深圳云天励飞技术股份有限公司 人脸图像增强方法、装置、电子设备及存储介质
CN112507922A (zh) * 2020-12-16 2021-03-16 平安银行股份有限公司 人脸活体检测方法、装置、电子设备及存储介质
CN113111811A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 一种目标判别方法和系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935179A (zh) * 2023-09-14 2023-10-24 海信集团控股股份有限公司 一种目标检测方法、装置、电子设备及存储介质
CN116935179B (zh) * 2023-09-14 2023-12-08 海信集团控股股份有限公司 一种目标检测方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113111811A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
WO2022222575A1 (fr) Procédé et système de reconnaissance de cible
WO2022222569A1 (fr) Procédé et système de discrimination de cible
George et al. Cross modal focal loss for rgbd face anti-spoofing
CN111488756B (zh) 基于面部识别的活体检测的方法、电子设备和存储介质
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
CN110084135B (zh) 人脸识别方法、装置、计算机设备及存储介质
CN112801057B (zh) 图像处理方法、装置、计算机设备和存储介质
CN109086723B (zh) 一种基于迁移学习的人脸检测的方法、装置以及设备
WO2019152983A2 (fr) Système et appareil empêchant par supervision auxiliaire les intrusions utilisant un visage
CN110163078A (zh) 活体检测方法、装置及应用活体检测方法的服务系统
CN108664843B (zh) 活体对象识别方法、设备和计算机可读存储介质
JP7191061B2 (ja) ライブネス検査方法及び装置
CN112052830B (zh) 人脸检测的方法、装置和计算机存储介质
WO2022222585A1 (fr) Procédé et système d'identification de cible
KR102145132B1 (ko) 딥러닝을 이용한 대리 면접 예방 방법
CN110532746B (zh) 人脸校验方法、装置、服务器及可读存储介质
CN115115504A (zh) 证件照生成方法、装置、计算机设备及存储介质
CN113111810B (zh) 一种目标识别方法和系统
Sun et al. Understanding deep face anti-spoofing: from the perspective of data
CN113128428B (zh) 基于深度图预测的活体检测方法和相关设备
CN112200075B (zh) 一种基于异常检测的人脸防伪方法
JP3962517B2 (ja) 顔面検出方法及びその装置、コンピュータ可読媒体
Hadwiger et al. Towards learned color representations for image splicing detection
CN116229528A (zh) 一种活体掌静脉检测方法、装置、设备及存储介质
WO2022222957A1 (fr) Procédé et système d'identification de cible

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22790679

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22790679

Country of ref document: EP

Kind code of ref document: A1