WO2022222575A1 - Method and system for target recognition

Method and system for target recognition

Info

Publication number
WO2022222575A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
verification
images
target
Prior art date
Application number
PCT/CN2022/075531
Other languages
English (en)
Chinese (zh)
Inventor
张明文
张天明
赵宁宁
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京嘀嘀无限科技发展有限公司 filed Critical 北京嘀嘀无限科技发展有限公司
Publication of WO2022222575A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks

Definitions

  • This specification relates to the technical field of image processing, and in particular, to a method and system for object recognition.
  • Target recognition is a technology for biometric identification based on targets acquired by image acquisition devices.
  • Face recognition technology, which targets human faces, is widely used in application scenarios such as permission verification and identity verification.
  • In order to ensure the security of target recognition, it is necessary to determine the authenticity of the target image.
  • One of the embodiments of this specification provides a target recognition method. The method includes: determining an illumination sequence, where the illumination sequence is used to determine multiple colors of multiple illuminations illuminating a target object; acquiring multiple target images, where the shooting times of the multiple target images correspond to the irradiation times of the multiple illuminations; and determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images.
  • One of the embodiments of the present specification provides a target recognition system. The system includes: a determination module for determining a lighting sequence, where the lighting sequence is used to determine multiple colors of multiple lights illuminating a target object; an acquisition module for acquiring multiple target images, where the shooting times of the multiple target images correspond to the irradiation times of the multiple lights; and a verification module for determining the authenticity of the multiple target images based on the lighting sequence and the multiple target images.
  • One of the embodiments of the present specification provides a target recognition apparatus, including a processor, where the processor is configured to execute the target recognition method disclosed in the present specification.
  • One of the embodiments of this specification provides a computer-readable storage medium, the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer executes the target identification method disclosed in this specification.
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification
  • FIG. 2 is an exemplary flowchart of a target recognition method according to some embodiments of the present specification
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 4 is another schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • FIG. 6 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • FIG. 7 is an exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 8 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 9 is another exemplary flowchart of determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 10 is a schematic structural diagram of a first verification model according to some embodiments of the present specification.
  • FIG. 11 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 12 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 13 is a schematic structural diagram of a second verification model according to some embodiments of the present specification.
  • FIG. 14 is another exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 15 is a schematic structural diagram of a third verification model according to some embodiments of the present specification.
  • The terms "system" and "device" are used herein to distinguish different components, elements, parts, sections, or assemblies at different levels.
  • Target recognition is a technology for biometric recognition based on target objects acquired by image acquisition equipment.
  • the target object may be a human face, a fingerprint, a palm print, a pupil, and the like.
  • object recognition may be applied to authorization verification.
  • For example, access control authority authentication and account payment authority authentication.
  • target recognition may also be used for authentication.
  • For example, employee attendance verification and self-service registration identity verification may be based on matching the target image captured in real time by the image capture device against pre-acquired biometric features, thereby verifying the target's identity.
  • image capture devices can be hacked or hijacked, and attackers can upload fake target images for authentication.
  • attacker A can directly upload the face image of user B after attacking or hijacking the image acquisition device.
  • the target recognition system performs face recognition based on user B's face image and pre-acquired user B's face biometrics, thereby passing user B's identity verification.
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification.
  • the object recognition system 100 may include a processing device 110, a network 120, a terminal 130, and a storage device 140.
  • the processing device 110 may be used to process data and/or information from at least one component of the object recognition system 100 and/or an external data source (eg, a cloud data center). For example, the processing device 110 may determine a lighting sequence, acquire multiple target images, determine the authenticity of multiple target images, and the like. For another example, the processing device 110 may perform preprocessing (eg, replace textures, etc.) on multiple initial images obtained from the terminal 130 to obtain multiple target images. During processing, the processing device 110 may obtain data (eg, instructions) from other components of the object recognition system 100 (eg, the storage device 140 and/or the terminal 130 ) directly or through the network 120 and/or send the processed data to the other components described above for storage or display.
  • processing device 110 may be a single server or group of servers.
  • the server group may be centralized or distributed (eg, processing device 110 may be a distributed system).
  • processing device 110 may be local or remote.
  • the processing device 110 may be implemented on a cloud platform, or provided in a virtual fashion.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • the network 120 may connect components of the system and/or connect the system with external components.
  • the network 120 enables communication between the various components of the object recognition system 100 and between the object recognition system 100 and external components, facilitating the exchange of data and/or information.
  • the network 120 may be any one or more of a wired network or a wireless network.
  • the network 120 may include a cable network, a fiber optic network, a telecommunications network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near field communication (NFC), an intra-device bus, an intra-device line, a cable connection, etc., or any combination thereof.
  • the network connection between the various parts in the object recognition system 100 may adopt one of the above-mentioned manners, or may adopt multiple manners.
  • the network 120 may be of various topologies such as point-to-point, shared, centralized, or a combination of topologies.
  • network 120 may include one or more network access points.
  • network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, ..., through which one or more components of the object recognition system 100 may connect to the network 120 to exchange data and/or information.
  • the terminal 130 refers to one or more terminal devices or software used by the user.
  • the terminal 130 may include an image capturing device 131 (e.g., a camera), and the image capturing device 131 may photograph a target object and acquire multiple target images.
  • the terminal 130 (e.g., the screen and/or other light-emitting elements of the terminal 130) may sequentially emit light of the multiple colors in the lighting sequence to illuminate the target object.
  • the terminal 130 may communicate with the processing device 110 through the network 120 and send the captured multiple target images to the processing device 110 .
  • the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices with input and/or output capabilities, the like, or any combination thereof.
  • the above examples are only used to illustrate the broadness of the types of terminals 130 and not to limit the scope thereof.
  • the storage device 140 may be used to store data (eg, lighting sequences, multiple initial images or multiple target images, etc.) and/or instructions.
  • the storage device 140 may include one or more storage components, and each storage component may be an independent device or a part of other devices.
  • storage device 140 may include random access memory (RAM), read only memory (ROM), mass storage, removable memory, volatile read-write memory, the like, or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • storage device 140 may be implemented on a cloud platform.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • storage device 140 may be integrated or included in one or more other components of object recognition system 100 (eg, processing device 110, terminal 130, or other possible components).
  • the object recognition system 100 may include a determination module, an acquisition module, and a verification module.
  • the determination module may be used to determine a lighting sequence for determining a plurality of colors of a plurality of lights illuminating the target object.
  • the acquisition module can be used to acquire multiple target images, and the shooting time of the multiple target images has a corresponding relationship with the irradiation time of the multiple lights.
  • the acquisition module may be configured to acquire multiple initial images, and preprocess the multiple initial images to obtain multiple target images.
  • the acquisition module may be used to acquire a color verification model, which is a machine learning model with preset parameters.
  • the preset parameters of the color verification model are obtained through training, etc.
  • the verification module can be used to determine the authenticity of multiple target images based on the illumination sequence and multiple target images.
  • the plurality of colors includes at least one reference color and at least one verification color.
  • The relationship between the at least one reference color and the at least one verification color may vary. For example, each of the at least one verification color may be determined based on at least a portion of the at least one reference color. As another example, one or more of the at least one reference color may be the same as one or more of the at least one verification color.
  • the plurality of target images include at least one verification image and at least one reference image, where each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color.
  • the verification module may be configured to determine, based on the at least one reference image, the color of the lighting when the at least one verification image was taken, and to determine the authenticity of the multiple target images based on the lighting sequence and the color of the lighting when the at least one verification image was taken.
  • the verification module may be configured to determine a first image sequence based on the multiple target images and a second image sequence based on multiple color template images, and to determine the authenticity of the multiple target images based on the first image sequence and the second image sequence.
  • multiple color template images are generated based on lighting sequences.
  • the verification module may be configured to determine a first color relationship between the at least one reference image and the at least one verification image and a second color relationship between the at least one reference color and the at least one verification color, and to determine the authenticity of the plurality of target images based on the first color relationship and the second color relationship.
  • the verification module may be used to determine the authenticity of the plurality of target images based on the lighting sequence and the color verification model. For example, the verification module processes the multiple target images with the color verification model, obtains the processing results, and determines the authenticity of the multiple target images by combining the processing results with the lighting sequence.
  • the above description of the target recognition system and its modules is only for convenience of description and does not limit this specification to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, after understanding the principle of the system, various modules may be combined arbitrarily, or a subsystem may be formed to connect with other modules, without departing from this principle.
  • the determination module, the acquisition module, and the verification module disclosed in FIG. 1 may be different modules in a system, or may be one module to implement the functions of the above-mentioned two or more modules.
  • each module may share one storage module, and each module may also have its own storage module. Such deformations are all within the protection scope of this specification.
  • the method for target recognition performed by the processing device of the target recognition system 100 may include: determining an illumination sequence, the illumination sequence being used to determine a plurality of colors of a plurality of illuminations illuminating a target object; acquiring a plurality of target images, The shooting times of the plurality of target images have a corresponding relationship with the irradiation times of the plurality of illuminations; and based on the illumination sequence and the plurality of target images, the authenticity of the plurality of target images is determined.
  • acquiring a plurality of target images by the processing device may include: acquiring a plurality of initial images, the plurality of initial images including a first initial image and a second initial image; replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image; and using the processed second initial image as one of the plurality of target images.
  • the processing device replacing the texture of the second initial image with the texture of the first initial image to generate the processed second initial image may include: based on a color transfer algorithm, transferring the color of the illumination at the time the second initial image was captured to the first initial image to obtain the processed second initial image.
  • the plurality of colors include at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color. In some embodiments, one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the plurality of target images include at least one verification image and at least one reference image, where each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color.
  • the processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images may include: extracting the reference color feature of the at least one reference image and the verification color feature of the at least one verification image; for each of the at least one verification image, determining the color of the illumination when the verification image was taken based on the verification color feature of the verification image and the reference color feature of the at least one reference image; and determining the authenticity of the plurality of target images based on the illumination sequence and the color of the illumination when the at least one verification image was taken.
  • the processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images may include: determining a first image sequence based on the plurality of target images; determining a second image sequence based on multiple color template images, the multiple color template images being generated based on the illumination sequence; processing the first image sequence based on a first extraction layer to extract first feature information of the first image sequence; processing the second image sequence based on a second extraction layer to extract second feature information of the second image sequence; and processing the first feature information and the second feature information based on a discrimination layer to determine the authenticity of the plurality of target images, wherein the first extraction layer, the second extraction layer, and the discrimination layer are machine learning models with preset parameters, and the first extraction layer and the second extraction layer share parameters.
  • the preset parameters of the first extraction layer, the second extraction layer and the discrimination layer are obtained through an end-to-end training manner.
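  • As an informal illustration (not the architecture disclosed in FIGS. 10, 13, or 15), the two extraction layers and the discrimination layer described above can be sketched in PyTorch as follows; the backbone, feature sizes, and averaging over frames are assumptions made only for this sketch.

```python
# Hypothetical sketch of the first/second extraction layers and the
# discrimination layer. A single encoder instance is applied to both
# image sequences, so the two extraction layers share parameters.
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Extracts feature information from an image sequence."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, n_frames, 3, H, W); encode frames, then average
        b, n, c, h, w = seq.shape
        feats = self.proj(self.cnn(seq.view(b * n, c, h, w)))
        return feats.view(b, n, -1).mean(dim=1)

class AuthenticityVerifier(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = SequenceEncoder(feat_dim)  # shared parameters
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim * 2, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, first_seq, second_seq):
        f1 = self.encoder(first_seq)    # first feature information
        f2 = self.encoder(second_seq)   # second feature information
        # probability that the target images are authentic
        return torch.sigmoid(self.discriminator(torch.cat([f1, f2], dim=-1)))
```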
  • In some embodiments, the plurality of colors include at least one reference color and at least one verification color, the plurality of target images include at least one reference image and at least one verification image, each of the at least one reference image corresponds to one of the at least one reference color, and each of the at least one verification image corresponds to one of the at least one verification color.
  • The processing device determining the authenticity of the multiple target images based on the lighting sequence and the plurality of target images includes: extracting the reference color feature of each of the at least one reference image and the verification color feature of each of the at least one verification image; for each of the at least one reference image, determining a first color relationship between the reference image and each verification image based on the reference color feature of the reference image and the verification color feature of each verification image; for each of the at least one reference color, determining a second color relationship between the reference color and each verification color; and determining the authenticity of the multiple target images based on the at least one first color relationship and the at least one second color relationship.
  • the processing device determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images may include: acquiring a color verification model, where the color verification model is a machine learning model with preset parameters; and, based on the illumination sequence, using the color verification model to process the plurality of target images to determine the authenticity of the plurality of target images.
  • FIG. 2 is an exemplary flowchart of a method for object recognition according to some embodiments of the present specification. As shown in Figure 2, the process 200 includes the following steps:
  • Step 210: determine the lighting sequence.
  • the lighting sequence is used to determine multiple colors of multiple lights illuminating the target object.
  • step 210 may be performed by a determination module.
  • the target object refers to an object that needs to be identified.
  • the target object may be a specific body part of the user, such as face, fingerprint, palm print, or pupil.
  • the target object refers to the face of a user who needs to be authenticated and/or authenticated.
  • the platform needs to verify whether the driver who takes the order is a registered driver user reviewed by the platform, and the target object is the driver's face.
  • the payment system needs to verify the payment authority of the payer, and the target object is the payer's face.
  • the terminal is instructed to emit the illumination sequence.
  • the lighting sequence includes a plurality of lighting for illuminating the target object.
  • the colors of different lights in the lighting sequence may be the same or different.
  • the plurality of lights include at least two lights with different colors, that is, the plurality of lights have multiple colors.
  • determining a lighting sequence refers to determining information for each lighting in a plurality of lightings included in the lighting sequence, such as color information, lighting time, and the like.
  • the color information of multiple lights in the lighting sequence may be represented in the same or different manners.
  • the color information of the plurality of lights may be represented by color categories.
  • the colors of the multiple lights in the lighting sequence may be represented as red, yellow, green, purple, cyan, blue, and red.
  • the color information of the plurality of lights may be represented by color parameters.
  • the colors of multiple lights in the lighting sequence can be represented as RGB(255, 0, 0), RGB(255, 255, 0), RGB(0, 255, 0), RGB(255, 0, 255), RGB(0, 255, 255), RGB(0, 0, 255).
  • the lighting sequence which may also be referred to as a color sequence, contains color information for the plurality of lighting.
  • the illumination times of the plurality of illuminations in the illumination sequence may include the start time, end time, duration, etc., or any combination thereof, for each illumination plan to illuminate the target object.
  • the start time of illuminating the target object with red light is 14:00
  • the start time of illuminating the target object with green light is 14:02.
  • the durations for which the red light and the green light illuminate the target object are both 0.1 seconds.
  • the durations for different illuminations to illuminate the target object may be the same or different.
  • the irradiation time can be expressed in other ways, which will not be repeated here.
  • the terminal may sequentially emit multiple illuminations in a particular order.
  • the terminal may emit light through the light emitting element.
  • the light-emitting element may include a light-emitting element built in the terminal, for example, a screen, an LED light, and the like.
  • the light-emitting element may also include an externally-connected light-emitting element. For example, external LED lights, light-emitting diodes, etc.
  • when the terminal is hijacked or attacked, the terminal may receive an instruction to emit light but not actually emit light. For more details about the lighting sequence, please refer to FIG. 3 and FIG. 4 and their related descriptions, which will not be repeated here.
  • a terminal or processing device may generate a lighting sequence randomly or based on preset rules. For example, a terminal or processing device may randomly select a plurality of colors from a color library to generate a lighting sequence.
  • the lighting sequence may be set by the user at the terminal, determined according to the default settings of the object recognition system 100, or determined by the processing device through data analysis (eg, using a determination model), or the like.
  • the terminal or storage device may store the lighting sequence.
  • the obtaining module can obtain the lighting sequence from the terminal or the storage device through the network.
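  • As an illustration of the random-generation option above, the following Python sketch draws colors from a color library and assigns each light an irradiation window; the color library, the fixed duration, and the dictionary layout are assumptions made only for this sketch.

```python
# Minimal sketch: randomly generate a lighting sequence from a color library.
import random

COLOR_LIBRARY = {
    "red":    (255, 0, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 255, 0),
    "purple": (255, 0, 255),
    "cyan":   (0, 255, 255),
    "blue":   (0, 0, 255),
}

def generate_lighting_sequence(n_lights: int = 4, duration_s: float = 0.5):
    """Return one light per entry, each with a color and an irradiation window."""
    sequence, start = [], 0.0
    for _ in range(n_lights):
        name = random.choice(list(COLOR_LIBRARY))  # colors may repeat
        sequence.append({
            "color_name": name,
            "rgb": COLOR_LIBRARY[name],
            "start_s": start,
            "duration_s": duration_s,
        })
        start += duration_s
    return sequence
```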
  • Step 220: acquire multiple target images.
  • step 220 may be performed by an acquisition module.
  • the plurality of target images are images used for target recognition.
  • the formats of the multiple target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak FlashPix (FPX), Digital Imaging and Communications in Medicine (DICOM), etc.
  • the multiple target images may be two-dimensional (2D) images or three-dimensional (3D) images.
  • the acquiring module may acquire the multiple target images based on the terminal. For example, the acquisition module may send acquisition instructions to the terminal through the network, and then receive the multiple target images sent by the terminal through the network. Alternatively, the terminal may send the multiple target images to a storage device for storage, and the acquiring module may acquire the multiple target images from the storage device. A target image may or may not contain the target object.
  • the target image may be captured by an image acquisition device of the terminal, or may be determined based on data (eg, video or image) uploaded by the user.
  • the target recognition system 100 will issue a lighting sequence to the terminal.
  • the terminal may sequentially emit the plurality of illuminations according to the illumination sequence.
  • for each illumination, the terminal's image acquisition device may be instructed to acquire one or more images within the illumination time of that illumination.
  • the image capture device of the terminal may be instructed to capture video during the entire illumination period of the plurality of illuminations.
  • the terminal or other computing device may intercept one or more images collected during the illumination time of each illumination from the video according to the illumination time of each illumination.
  • One or more images collected by the terminal during the irradiation time of each illumination may be used as the multiple target images.
  • the multiple target images are real images captured by the target object when it is illuminated by the multiple illuminations. It can be understood that there is a corresponding relationship between the irradiation time of the multiple lights and the shooting time of the multiple target images. If one image is collected within the irradiation time of a single light, the corresponding relationship is one-to-one; if multiple images are collected within the irradiation time of a single light, the corresponding relationship is one-to-many.
  • the hijacker can upload images or videos through the terminal device.
  • the uploaded image or video may contain target objects or specific body parts of other users, and/or other objects.
  • the uploaded image or video may be a historical image or video shot by the terminal or other terminals, or a synthesized image or video.
  • the terminal or other computing device (e.g., the processing device 110) may determine the plurality of target images based on the uploaded image or video.
  • the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded image or video according to the illumination sequence and/or illumination duration of each illumination in the illumination sequence.
  • the illumination sequence includes five illuminations arranged in sequence, and the hijacker can upload five target images through the terminal device.
  • the terminal or other computing device will determine the target image corresponding to each of the five illuminations according to the sequence in which the five target images are uploaded.
  • the irradiation time of each of the five lights in the lighting sequence is 0.5 seconds, and the hijacker can upload a video with a duration of 2.5 seconds through the terminal.
  • the terminal or other computing device can divide the uploaded video into five segments of 0-0.5 seconds, 0.5-1 second, 1-1.5 seconds, 1.5-2 seconds, and 2-2.5 seconds, and intercept one target image from each segment.
  • the five target images captured from the video correspond to the five illuminations in sequence.
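  • The video-splitting example above can be sketched with OpenCV as follows; taking the middle frame of each 0.5-second window is an assumption, since the description does not fix which frame is intercepted.

```python
# Sketch: intercept one target image per illumination window from a video.
import cv2

def frames_per_illumination(video_path: str, n_lights: int, duration_s: float):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    images = []
    for i in range(n_lights):
        mid_t = (i + 0.5) * duration_s                 # middle of window i
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(mid_t * fps))
        ok, frame = cap.read()
        if ok:
            images.append(frame)                       # one image per light
    cap.release()
    return images
```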
  • the multiple target images are fake images uploaded by the hijacker, rather than the real images taken by the target object when illuminated by the multiple lights.
  • the uploading time of the target image or the shooting time in the video may be regarded as the shooting time. It can be understood that when the terminal is hijacked, there is also a corresponding relationship between the irradiation time of multiple lights and the shooting time of multiple target images.
  • the determination module may take the color of the light whose irradiation time corresponds to the shooting time of a target image as the color corresponding to that target image. Specifically, if the irradiation time of a light corresponds to the shooting time of one or more target images, the color of that light is used as the color corresponding to the one or more target images. It can be understood that when the terminal is not hijacked or attacked, the colors corresponding to the multiple target images should be the same as the multiple colors of the multiple lights in the lighting sequence. For example, suppose the multiple colors of the multiple lights in the lighting sequence are "red, yellow, blue, green, purple, red".
  • In that case, the colors corresponding to the multiple target images obtained by the terminal should also be "red, yellow, blue, green, purple, red".
  • When the terminal is hijacked, however, the colors corresponding to the multiple target images and the multiple colors of the multiple lights in the lighting sequence may be different.
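  • This time-based correspondence can be sketched as follows, reusing the sequence layout from the earlier sketch: each target image is assigned the color of the light whose irradiation window contains the image's shooting time.

```python
# Sketch: map a target image's shooting time to the corresponding light color.
def color_for_image(shot_time_s: float, sequence: list):
    for light in sequence:
        start = light["start_s"]
        end = start + light["duration_s"]
        if start <= shot_time_s < end:
            return light["color_name"]
    return None  # shooting time falls outside every irradiation window
```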
  • the acquiring module may acquire multiple initial images from the terminal, and preprocess the multiple initial images to acquire the multiple target images.
  • the multiple initial images may be photographed by the terminal or uploaded by the hijacker through the terminal.
  • There is a corresponding relationship between the shooting times of the multiple initial images and the irradiation times of the multiple lights. If the multiple target images are obtained by preprocessing the multiple initial images, the correspondence between the shooting times of the multiple target images and the irradiation times of the multiple lights actually reflects the correspondence between the shooting times of the initial images corresponding to those target images and the irradiation times of the multiple lights; likewise, the color of the light when a target image was shot actually reflects the color of the light when the initial image corresponding to that target image was shot.
  • the preprocessing may include texture uniform processing.
  • the texture of an image refers to the grayscale distribution of elements (such as pixels) in the image and their surrounding spatial neighborhoods. It can be understood that, if the multiple initial images are captured by the terminal, their textures may differ because the distance and angle between the terminal and the target object, as well as the background, may change.
  • the texture unification processing can make the textures of the multiple initial images the same or substantially the same, reduce the interference of texture features, and thus improve the efficiency and accuracy of target recognition.
  • the acquisition module may implement texture uniform processing through texture replacement.
  • Texture replacement refers to replacing all the textures of the original image with the textures of the specified image.
  • the specified image may be one of the multiple initial images; that is, the acquisition module may replace the textures of the other initial images with the texture of one of the multiple initial images to achieve texture consistency.
  • the designated image may be an image of the target object other than the plurality of initial images. For example, the images of the target object taken in the past and stored in the storage device.
  • texture replacement reference may be made to FIG. 5 and related descriptions, which will not be repeated here.
  • the acquisition module may implement texture uniform processing by means of background removal, shooting angle correction, and the like. For example, taking the target object to be a face: the parts other than the face in the multiple initial images are cut out, and the angle of the face in the remaining part is then corrected to a preset angle (for example, the face directly facing the image collection equipment).
  • the background cutout may identify the face contour of each of the plurality of initial images through image recognition technology, and then cut out the part other than the face contour.
  • the angle correction can be achieved by a correction algorithm (eg, a face correction algorithm) or a model.
  • the acquisition module may also implement texture uniform processing in other ways, which is not limited herein.
  • preprocessing may also include image screening, image denoising, image enhancement, and the like.
  • Image screening may include screening out images that do not include the target object or specific body parts of the user.
  • the objects to be screened may be the initial images collected by the terminal, or images obtained from the initial images after other preprocessing (for example, texture uniform processing).
  • the acquisition module may perform matching based on the characteristics of the initial image and the characteristics of the image containing the target object, and filter out the images that do not contain the target object in the plurality of initial images.
  • Image denoising may include removing interfering information in an image.
  • the interference information in the image will not only reduce the quality of the image, but also affect the color features extracted based on the image.
  • the acquisition module may implement image denoising through median filters, machine learning models, and the like.
  • Image enhancement can add missing information in an image. Missing information in the image can cause image blur and also affect the color features extracted based on the image. For example, image enhancement can adjust the brightness, contrast, saturation, hue, etc. of an image, increase its sharpness, reduce noise, etc.
  • the acquisition module may implement image enhancement through smoothing filters, median filters, or the like.
  • the target of image denoising or image enhancement can be the initial image, or the initial image after other preprocessing.
  • the preprocessing may also include other operations, which are not limited herein.
  • the object recognition system 100 may further include a preprocessing module for preprocessing the initial image.
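  • A minimal sketch of the denoising and enhancement steps mentioned above, assuming OpenCV; the median-filter kernel size and the brightness/contrast parameters are illustrative.

```python
# Sketch: median-filter denoising followed by simple contrast enhancement.
import cv2
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    denoised = cv2.medianBlur(image, 3)                 # remove noise
    # linear enhancement: out = alpha * img + beta, clipped to [0, 255]
    return cv2.convertScaleAbs(denoised, alpha=1.2, beta=10)
```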
  • Step 230: determine the authenticity of the multiple target images based on the illumination sequence and the multiple target images.
  • step 230 may be performed by a verification module.
  • the authenticity of the multiple target images may reflect whether the multiple target images are images obtained by shooting the target object under illumination of multiple colors of light. For example, when the terminal is not hijacked or attacked, its light-emitting element can emit light of multiple colors, and its image acquisition device can record or photograph the target object to obtain the target image. At this time, the target image has authenticity. For another example, when the terminal is hijacked or attacked, the target image is obtained based on the image or video uploaded by the attacker. At this time, the target image has no authenticity.
  • the authenticity of the multiple target images can also be referred to as the authenticity of the multiple initial images, which reflects whether the multiple initial images corresponding to the multiple target images are images obtained by shooting the target object under illumination of multiple colors of light.
  • the authenticity of the multiple target images and the authenticity of the multiple initial images are collectively referred to as the authenticity of the multiple target images below.
  • the authenticity of the target image can be used to determine whether the terminal's image capture device has been hijacked by an attacker. For example, if at least one target image in the multiple target images is not authentic, it means that the image acquisition device is hijacked. For another example, if more than a preset number of target images in the multiple target images are not authentic, it means that the image acquisition device is hijacked.
  • the verification module may determine the authenticity of the plurality of target images based on color characteristics and lighting sequences of the plurality of target images. For more details about determining the authenticity of the target image based on the color feature of the target image, reference may be made to FIG. 7 and related descriptions thereof, which will not be repeated here.
  • the color feature of an image refers to information related to the color of the image.
  • the color of the image includes the color of the light when the image is captured, the color of the subject in the image, the color of the background in the image, and the like.
  • the color features may include deep features and/or complex features extracted by a neural network.
  • Color features can be represented in a number of ways.
  • the color feature can be represented based on the color value of each pixel in the image in the color space.
  • a color space is a mathematical model that describes color using a set of numerical values, each of which can represent the color value of a color feature on each color channel of the color space.
  • a color space may be represented as a vector space, each dimension of the vector space representing a color channel of the color space. Color features can be represented by vectors in this vector space.
  • the color space may include, but is not limited to, the RGB color space, the Lαβ color space, the LMS color space, the HSV color space, the YCrCb color space, the HSL color space, and the like.
  • the RGB color space includes red channel R, green channel G, and blue channel B, and color features can be represented by the color values of each pixel in the image on the red channel R, green channel G, and blue channel B, respectively.
  • color features may be represented by other means (eg, color histograms, color moments, color sets, etc.).
  • the histogram statistics are performed on the color values of each pixel in the image in the color space to generate a histogram representing the color features.
  • a specific operation (e.g., mean, variance, etc.) is performed on the color values of the pixels of the image in the color space, and the result of the specific operation represents the color feature of the image.
  • the verification module may extract color features of the plurality of target images through a color feature extraction algorithm and/or a color verification model (or a portion thereof).
  • Color feature extraction algorithms include: color histogram, color moment, color set, etc.
  • the verification module can compute a histogram over the color values of each pixel in the image in each color channel of the color space, so as to obtain the color histogram.
  • the verification module can divide the image into multiple regions, and determine the color set of the image from the set of binary indices of the multiple regions established over the color values of each pixel in each color channel of the color space.
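  • Two of the color-feature representations named above, the per-channel color histogram and first- and second-order color moments (mean and standard deviation), can be sketched as follows; the bin count is an assumption.

```python
# Sketch: color histogram and color moments in the RGB color space.
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """image: (H, W, 3) uint8 array; concatenated per-channel histograms."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return hist / hist.sum()       # normalize away the image size

def color_moments(image: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation of the color values."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```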
  • For more details about the color verification model, see FIGS. 10, 13, and 15 and their related descriptions.
  • the verification module may process multiple target images based on the lighting sequence and use a color verification model, which is a machine learning model with preset parameters, to determine the authenticity of the multiple target images.
  • the target recognition system 100 sends a lighting sequence to the terminal, and acquires from the terminal a target image that has a corresponding relationship with multiple lightings in the lighting sequence.
  • the processing device can determine whether the target image or its corresponding initial image is an image of the target object captured under the lighting sequence by identifying the color of the illumination when the target image was captured, and further determine whether the terminal has been hijacked or attacked. It is understandable that when an attacker does not know the lighting sequence, it is difficult for the illumination colors at the time the uploaded images (or the frames of the uploaded video) were captured to match the colors of the multiple lights in the lighting sequence; even if the set of colors is the same, the order of the colors is unlikely to match.
  • the method disclosed in this specification can improve the difficulty of an attacker's attack and ensure the security of target identification.
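  • At a high level, the check described above can be sketched as follows; `recognize_illumination_color` stands in for whatever classifier or color verification model recognizes the illumination color of an image (it is hypothetical), and one image per light with strict color equality is a simplification.

```python
# Sketch: compare recognized illumination colors against the issued sequence.
def verify_authenticity(target_images, sequence, recognize_illumination_color):
    for image, light in zip(target_images, sequence):
        if recognize_illumination_color(image) != light["color_name"]:
            return False   # color or order differs from the lighting sequence
    return True
```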
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the colors of the multiple lights in the lighting sequence may be exactly the same, completely different, or partially the same.
  • the colors of the plurality of lights are all red.
  • at least two of the plurality of lights have different colors, that is, the plurality of lights have multiple colors.
  • the plurality of colors includes white.
  • the plurality of colors includes red, blue, and green.
  • illumination sequence a includes four illuminations arranged in sequence: red light, white light, blue light, and green light.
  • illumination sequence b includes four illuminations arranged in sequence: white light, blue light, red light, and green light.
  • illumination sequence c includes four illuminations arranged in sequence: red light, white light, blue light, and white light.
  • illumination sequence d includes four illuminations arranged in sequence: red light, white light, white light, and blue light.
  • Lighting sequence a and lighting sequence b contain lights of the same colors, but arranged in a different order.
  • Lighting sequence c and lighting sequence d likewise contain lights of the same colors, but arranged in a different order.
  • Within lighting sequences a and b, the colors of the four lights are all different; within lighting sequences c and d, two of the lights share the same color.
  • FIG. 4 is another schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the plurality of colors of lighting in the lighting sequence may include at least one reference color and at least one verification color.
  • the verification color is one of the colors that is directly used to verify the authenticity of the image.
  • the reference color is a color among the colors used to assist the verification color to determine the authenticity of the target image.
  • the target image corresponding to a reference color is also referred to as a reference image, and the target image corresponding to a verification color is also referred to as a verification image.
  • In some embodiments, the at least one reference image may be used to determine the color of the illumination when a verification image was captured, and the verification module may determine the authenticity of the plurality of target images based on the color of the illumination when the verification image was captured.
  • In some embodiments, the at least one reference image may be used to verify the at least one verification image to determine a first color relationship, and the verification module may determine the authenticity of the plurality of target images based on the first color relationship.
  • the illumination sequence e contains illuminations of multiple reference colors, “red light, green light, blue light”, and illuminations of multiple verification colors, “yellow light, purple light... cyan light”;
  • the illumination sequence f contains illuminations of multiple reference colors, “red light, white light... blue light”, and illuminations of multiple verification colors, “red light... green light”.
  • In some embodiments, there are multiple verification colors.
  • The multiple verification colors may be completely identical; for example, the verification colors may be red, red, red, red.
  • The multiple verification colors may be completely different; for example, the verification colors may be red, yellow, blue, green, purple.
  • The multiple verification colors may be partially identical; for example, the verification colors may be yellow, green, purple, yellow, red.
  • there are multiple reference colors and the multiple reference colors may be identical, completely different, or partially identical.
  • the verification color may contain only one color, such as green.
  • the at least one reference color and the at least one verification color may be determined according to a default setting of the object recognition system 100, manually set by a user, or determined by a determination module.
  • the determination module may randomly select a reference color and a verification color.
  • the determination module may randomly select a part of the colors from the plurality of colors as the at least one reference color, and the remaining colors as the at least one verification color.
  • the determination module may determine the at least one reference color and the at least one verification color based on a preset rule.
  • the preset rules may be rules regarding the relationship between verification colors, the relationship between reference colors, and/or the relationship between verification colors and reference colors, and the like.
  • For example, a preset rule may be that the verification colors can be generated by fusing the reference colors.
  • each of the at least one verification color may be determined based on at least a portion of the at least one reference color.
  • the verification color may be obtained by fusion based on at least a part of the at least one reference color.
  • the at least one reference color may comprise a primary or primary color of the color space.
  • the at least one reference color may include the three primary colors of the RGB space, i.e., "red, green, and blue".
  • multiple verification colors "yellow, purple...cyan” in the lighting sequence e can be determined based on three reference colors "red, green, blue”.
  • “yellow” can be obtained by fusing the reference colors “red, green, blue” based on a first ratio;
  • “purple” can be obtained by fusing the reference colors “red, green, blue” based on a second ratio.
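  • Under the assumption that "fusing" the RGB primaries means a weighted combination of their channels, the example above can be sketched as follows; the specific ratios shown are illustrative, since the description does not give the first and second ratios.

```python
# Sketch: derive a verification color by fusing the RGB reference primaries.
def fuse_primaries(ratio):
    """ratio = (r, g, b) weights in [0, 1] applied to the RGB primaries."""
    return tuple(int(255 * w) for w in ratio)

yellow = fuse_primaries((1.0, 1.0, 0.0))   # a possible "first ratio"
purple = fuse_primaries((1.0, 0.0, 1.0))   # a possible "second ratio"
```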
  • one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the at least one reference color and the at least one verification color may be completely identical or partially identical.
  • a certain one of the at least one verification color may be the same as a certain one of the at least one reference color.
  • the verification color can also be determined based on at least one reference color, that is, the specific reference color can be used as the verification color. As shown in Figure 4, in the illumination sequence f, multiple reference colors "red, white...blue” and multiple verification colors "red...green” all contain red.
  • the at least one reference color and the at least one verification color may also have other relationships, which are not limited herein.
  • the at least one reference color and the at least one verification color are the same or different in color family.
  • For example, the at least one reference color may belong to a warm color family (e.g., red, yellow, etc.), while the at least one verification color belongs to a cool color family (e.g., gray, etc.).
  • the lighting corresponding to the at least one reference color may be arranged in front of or behind the lighting corresponding to the at least one verification color.
  • In the illumination sequence e, illuminations of the multiple reference colors “red light, green light, blue light” are arranged before illuminations of the multiple verification colors “yellow light, purple light... cyan light”.
  • In the illumination sequence f, illuminations of the multiple reference colors “red light, white light... blue light” are arranged after illuminations of the multiple verification colors “red light... green light”.
  • the illumination corresponding to the at least one reference color may also be arranged at intervals with the illumination corresponding to the at least one verification color, which is not limited herein.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • process 500 may be performed by an acquisition module. As shown in Figure 5, the process 500 includes the following steps:
  • Step 510: acquire multiple initial images.
  • the plurality of initial images are unprocessed images acquired from the terminal.
  • the multiple initial images may be images captured by an image acquisition device of the terminal, or may be images determined by the hijacked terminal based on images or videos uploaded by the hijacker.
  • the plurality of initial images may include a first initial image and a second initial image.
  • the acquisition module may perform texture replacement on the plurality of initial images to generate the plurality of target images described in step 220.
  • the acquisition module may replace the texture of the plurality of initial images with the texture in the specified image.
  • the first initial image refers to a designated image among the plurality of initial images, that is, the initial image that provides the texture for replacement.
  • the first initial image needs to contain the target object.
  • the acquisition module may acquire a first initial image including the target object from the plurality of initial images through image filtering.
  • the first initial image may be any one of multiple initial images.
  • the first initial image may be the one with the earliest shooting time among the plurality of initial images.
  • the first initial image may be the one with the simplest background among the plurality of initial images.
  • the simplicity of the background may be judged by the color variety of the background: the fewer the colors in the background, the simpler the background. A hedged scoring sketch follows this list.
  • the simplicity of the background may also be judged by the complexity of the lines in the background: the fewer the lines, the simpler the background.
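  • One hedged way to score background simplicity by color variety is to count distinct colors after coarse quantization, as sketched below; the quantization step and the assumption that a background region has already been segmented out are illustrative.

```python
import numpy as np

def color_variety(background: np.ndarray, step: int = 32) -> int:
    """Count distinct colors after coarse quantization of an HxWx3 uint8 image.
    A lower count suggests a simpler background."""
    quantized = (background // step).reshape(-1, 3)
    return len(np.unique(quantized, axis=0))

def simplest_background(backgrounds):
    """Return the index of the (assumed pre-segmented) simplest background."""
    return min(range(len(backgrounds)), key=lambda i: color_variety(backgrounds[i]))
```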
  • when white light is present in the lighting sequence, the first initial image may be the initial image, among the multiple initial images, whose acquisition time corresponds to the white-light irradiation time.
  • the second initial image is an initial image, among the plurality of initial images, whose texture is to be replaced.
  • the second initial image may be any initial image other than the first initial image.
  • the second initial image may be one or more images.
  • the terminal may acquire a corresponding initial image according to the illumination time of the illumination in the illumination sequence.
  • the acquisition module may acquire the plurality of initial images from the terminal through the network.
  • the hijacker can upload images or videos through the terminal device.
  • the acquisition module may determine the plurality of initial images based on the uploaded images or videos.
  • Step 520: Replace the texture of the second initial image with the texture of the first initial image to generate a processed second initial image.
  • the acquisition module may implement texture replacement based on a color transfer algorithm. Specifically, the acquisition module can transfer the color of the illumination when the second initial image is captured to the first initial image based on the color transfer algorithm, so as to generate the processed second initial image.
  • a color transfer algorithm is a method of transferring the color of one image to another image to generate a new image.
  • color transfer algorithms include, but are not limited to, the Reinhard algorithm, the Welsh algorithm, fuzzy clustering algorithms, adaptive transfer algorithms, and the like.
  • the color transfer algorithm may extract color features of the second initial image, and then transfer the color features of the second initial image to the first initial image to generate a processed second initial image.
  • for details about color features, please refer to step 230 and its related description.
  • for details about texture replacement, refer to FIG. 6 and its related description, which will not be repeated here.
  • the acquisition module can transfer the color features of the illumination at the time the second initial image was captured to the first initial image based on the color transfer algorithm; the color features of the newly generated image are still those of the second initial image, but its texture becomes the texture of the first initial image.
  • when there are N second initial images, where N is an integer greater than or equal to 1, the color features of the illumination at the time each of the N second initial images was captured are transferred to the first initial image, and N new images are obtained.
  • the color features of the N newly generated images respectively represent the colors of the illumination when the N second initial images were taken, but the textures of the N newly generated images are all the texture of the first initial image.
  • the acquisition module may also implement texture replacement using a texture feature migration algorithm.
  • the texture feature migration algorithm can extract the texture features of the first initial image and of the second initial image, and replace the texture features of the second initial image with those of the first initial image to generate the processed second initial image.
  • methods for extracting texture features may include, but are not limited to, geometric methods, gray level co-occurrence matrix methods, model methods, signal processing methods, and machine learning models.
  • the machine learning model may include, but is not limited to, a deep neural network model, a recurrent neural network model, a custom model structure, and the like, which is not limited here.
  • Step 530: Take the processed second initial image as one of the multiple target images.
  • the processed second initial image carries the illumination color of the second initial image, but its texture features come from the first initial image. If the colors of the illumination when the first and second initial images were photographed differ, the first initial image and the processed second initial image are two images with the same content but different colors.
  • the plurality of initial images includes one or more second initial images. For each second initial image, the acquisition module may replace the texture of the second initial image with the texture of the first initial image to generate a corresponding processed second initial image. Optionally, the acquisition module may also use the first initial image as one of the multiple target images. In this case, the plurality of target images include the first initial image and one or more processed second initial images.
  • the textures in the multiple target images are made the same through texture unification processing, thereby reducing the influence of the textures in the target images on the light color recognition, and better determining the authenticity of the multiple target images.
  • FIG. 6 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • the acquisition module may select the initial image 610-1 as the first initial image, and the initial images 610-2, 610-3, ..., 610-m as the second initial images.
  • each second initial image differs from the first initial image not only in color but also in texture.
  • the location of the target object in the second initial image 610-m is different from that in the first initial image 610-1.
  • the shooting backgrounds of the target objects in the second initial images 610-2, 610-3, ..., 610-m are all different from that in the first initial image 610-1.
  • the texture differences among the initial images 610-1, 610-2, 610-3, ..., 610-m may reduce the accuracy of the image-authenticity judgment and increase the amount of data to analyze.
  • the second initial images can therefore be preprocessed using a color transfer algorithm.
  • the acquisition module extracts the color features of the m-1 second initial images 610-2, 610-3, ..., 610-m respectively (that is, the color features corresponding to red, orange, cyan, ...).
  • the acquisition module respectively transfers the color features of the m-1 second initial images to the first initial image 610-1, generating m-1 processed second initial images 620-2, 620-3, ..., 620-m.
  • the processed second initial image incorporates the texture feature of the first initial image and the color feature of the second initial image, which is equivalent to an image obtained by replacing the texture of the second initial image with the texture of the first initial image.
  • the first initial image and the second initial image are RGB images.
  • the acquisition module may first convert the first initial image and the second initial image from the RGB color space to the Lαβ color space.
  • the acquisition module may convert the target image (e.g., the first initial image or the second initial image) from the RGB color space to the Lαβ color space through a neural network.
  • the acquisition module may first convert the target image from the RGB color space to the LMS color space based on multiple transition matrices, and then convert from the LMS color space to the Lαβ color space.
  • the acquisition module can extract the color features of the transformed second initial image and the transformed first initial image in the Lαβ color space.
  • the acquisition module may calculate the mean value μ_2j and the standard deviation σ_2j of all the pixels of the transformed second initial image on each channel of Lαβ.
  • j represents the color channel number in the Lαβ color space, 0 ≤ j ≤ 2. When j equals 0, 1, and 2, it represents the luminance channel L, the yellow-blue channel α, and the red-green channel β, respectively.
  • the acquisition module can calculate the mean value μ_1j and the standard deviation σ_1j of all the pixels of the transformed first initial image on each channel of Lαβ.
  • the acquisition module can transfer the color features of the transformed second initial image to the transformed first initial image. For example, the acquisition module can subtract the mean value μ_1j from the value of each pixel of the transformed first initial image, multiply the updated value of each pixel in each Lαβ channel by the scaling factor λ_j of the channel (e.g., λ_j = σ_2j/σ_1j), and add the mean value μ_2j of the transformed second initial image in the corresponding Lαβ channel to generate the processed second initial image.
  • the acquisition module may also convert the processed second initial image from the Lαβ color space back to the RGB color space.
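  • The steps above follow the classic Reinhard color transfer. The sketch below assumes the standard Reinhard et al. RGB→LMS→Lαβ matrices and takes λ_j = σ_2j/σ_1j; both are assumptions consistent with that algorithm rather than values fixed by this specification.

```python
import numpy as np

# Standard Reinhard et al. conversion matrices (assumed here; the text above
# only requires *some* RGB -> LMS -> Lalphabeta transition matrices).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])

def rgb_to_lab(img):
    """Convert an HxWx3 float RGB image in [0, 1] to flattened Lalphabeta."""
    lms = np.clip(img.reshape(-1, 3) @ RGB2LMS.T, 1e-6, None)
    return np.log10(lms) @ LMS2LAB.T

def lab_to_rgb(lab, shape):
    """Invert the Lalphabeta conversion and reshape back to the image shape."""
    lms = 10 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return np.clip(lms @ np.linalg.inv(RGB2LMS).T, 0, 1).reshape(shape)

def color_transfer(first_img, second_img):
    """Transfer the illumination color of second_img (color provider) onto
    the texture of first_img (texture provider)."""
    lab1 = rgb_to_lab(first_img)
    lab2 = rgb_to_lab(second_img)
    mu1, sigma1 = lab1.mean(axis=0), lab1.std(axis=0) + 1e-6
    mu2, sigma2 = lab2.mean(axis=0), lab2.std(axis=0)
    # Subtract mu_1j, scale by lambda_j = sigma_2j / sigma_1j, add mu_2j.
    out = (lab1 - mu1) * (sigma2 / sigma1) + mu2
    return lab_to_rgb(out, first_img.shape)
```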
  • Some embodiments of this specification transfer the color features of the second initial image to the first initial image based on the color transfer algorithm, which not only avoids extracting complex texture features but also enables the processed second initial image to contain more detailed and accurate color feature information, thereby improving the efficiency and accuracy of determining the authenticity of the target image.
  • FIG. 7 is a flowchart of determining the authenticity of a target image based on color features according to some embodiments of the present specification.
  • process 700 may be performed by a verification module. As shown in FIG. 7, the process 700 may include the following steps:
  • Step 710: Extract color features of multiple target images.
  • see step 230 and its related description for more details on color features.
  • Step 720: Determine the authenticity of the multiple target images based on the color features of the multiple target images and the lighting sequence.
  • the verification module may determine the color of the illumination when the target image was captured based on the color feature of the target image, and then determine the color corresponding to the target image based on the illumination sequence. Further, the verification module can determine the authenticity of the target image.
  • the verification module can form a new color space (i.e., the reference color space in FIG. 9) based on the reference color features of the at least one reference image. Further, the verification module may determine, based on the new color space and the verification color features of the verification image, the color of the illumination when the verification image was captured. Further, the verification module may determine the authenticity of the verification image in combination with the color corresponding to the verification image. For determining the authenticity of the target image based on the reference color space, reference may be made to FIG. 9, FIG. 11 and their related descriptions, and details are not repeated here.
  • the verification module may determine the color relationship between the multiple target images based on their color features, and then determine the authenticity of the multiple target images based on that color relationship and the color relationship between the multiple colors of illumination in the illumination sequence. Regarding the determination of the authenticity of the target image based on the color relationship, reference may be made to FIG. 12 and its related description, which will not be repeated here.
  • the verification module may determine the matching degree between the color feature of the target image and the color feature of the corresponding illumination. Further, the verification module may determine the authenticity of the target image based on the matching degree. For example, if the matching degree between the color feature of the target image and the color feature of the corresponding illumination is greater than a preset threshold, the target image is authentic.
  • the matching degree may be determined based on the similarity between the color features of the target image and the color features of the lighting. Similarity can be measured by Euclidean distance, Manhattan distance, etc.
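  • A minimal sketch of such a matching degree, assuming color features are numeric vectors and mapping Euclidean distance into a similarity in (0, 1]:

```python
import numpy as np

def matching_degree(image_feature: np.ndarray, light_feature: np.ndarray) -> float:
    """Map Euclidean distance to a similarity in (0, 1]; 1 means identical."""
    return 1.0 / (1.0 + np.linalg.norm(image_feature - light_feature))

def is_authentic(image_feature, light_feature, preset_threshold=0.8):
    # The threshold value is illustrative; the text below discusses how it
    # may depend on shooting stability, distance, and angle.
    return matching_degree(image_feature, light_feature) > preset_threshold
```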
  • the verification module may determine the matching degree between the color features of a first image sequence constructed from the multiple target images (i.e., the first feature information in FIG. 14) and the color features of a second image sequence constructed from multiple color template images (i.e., the second feature information in FIG. 14). Further, the verification module may determine the authenticity of the multiple target images based on the matching degree. For more details on determining the authenticity of multiple target images based on sequences, see FIG. 14 and its related description.
  • the preset threshold set for the image authenticity determination in some embodiments of this specification may be related to the degree of shooting stability.
  • the shooting stability degree is the stability degree when the image acquisition device of the terminal acquires the target image.
  • the preset threshold is positively related to the degree of shooting stability. It can be understood that the higher the shooting stability, the higher the quality of the acquired target image, and the more the color features extracted based on multiple target images can truly reflect the color of the illumination when shooting, and the larger the preset threshold is.
  • the shooting stability may be measured based on motion parameters of the terminal detected by a motion sensor of the terminal (e.g., a vehicle-mounted terminal or a user terminal), for example, the motion speed, vibration frequency, and the like.
  • the motion sensor may be a sensor that detects the driving situation of the vehicle, and the vehicle may be the vehicle used by the target user.
  • the target user refers to the user to which the target object belongs.
  • the motion sensor may be a motion sensor on the driver's end or the in-vehicle terminal.
  • the preset threshold may also be related to the shooting distance and the shooting angle.
  • the shooting distance is the distance between the target object and the terminal when the image acquisition device acquires the target image.
  • the shooting angle is the angle between the front of the target object and the terminal screen when the image acquisition device acquires the target image.
  • both the shooting distance and the shooting angle are negatively correlated with the preset threshold. It can be understood that the shorter the shooting distance, the higher the quality of the acquired target image, and the more the color features extracted based on multiple target images can truly reflect the color of the illumination when shooting, and the larger the preset threshold is. The smaller the shooting angle, the higher the quality of the acquired target image, and similarly, the larger the preset threshold.
  • the shooting distance and shooting angle may be determined based on the target image through image recognition techniques.
  • the verification module may perform specific operations (e.g., averaging, computing a standard deviation, etc.) on the shooting stability, shooting distance, and shooting angle of each target image, and determine the preset threshold based on the resulting values. For example, the verification module determines a corresponding sub-threshold for each of the shooting stability, shooting distance, and shooting angle after the specific operation, and then determines the preset threshold based on the sub-threshold corresponding to the shooting stability, the sub-threshold corresponding to the shooting distance, and the sub-threshold corresponding to the shooting angle. For example, the three sub-thresholds can be averaged, weighted-averaged, and the like, as sketched below.
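  • A hedged sketch of combining the three sub-thresholds into one preset threshold; the averaging, the linear mappings, the units, and the equal weights are illustrative assumptions, not values from this specification.

```python
import numpy as np

def preset_threshold(stabilities, distances, angles, weights=(1/3, 1/3, 1/3)):
    """Combine per-image shooting conditions into one preset threshold.
    Assumes stability in [0, 1], distance in meters, angle in degrees."""
    s, d, a = np.mean(stabilities), np.mean(distances), np.mean(angles)
    t_s = 0.6 + 0.3 * min(s, 1.0)          # more stable  -> larger sub-threshold
    t_d = 0.9 - 0.3 * min(d / 2.0, 1.0)    # farther away -> smaller sub-threshold
    t_a = 0.9 - 0.3 * min(a / 90.0, 1.0)   # larger angle -> smaller sub-threshold
    return float(np.dot(weights, (t_s, t_d, t_a)))  # weighted average
```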
  • FIG. 8 is a flowchart of determining the authenticity of a target image based on a color verification model according to some embodiments of the present specification.
  • process 800 may be performed by a verification module. As shown in FIG. 8, the process 800 may include the following steps:
  • Step 810: Acquire a color verification model.
  • a color verification model is a model used to verify whether an image is authentic.
  • the color verification model is a machine learning model with preset parameters. Preset parameters refer to the model parameters learned during the training of the machine learning model. Taking a neural network as an example, the model parameters include weights and biases.
  • the preset parameters of the color verification model are determined during the training process. For example, the model acquisition module can train an initial color verification model based on a plurality of training samples with labels to obtain a color verification model.
  • the color verification model can be stored in a storage device, and the verification module can obtain the color verification model from the storage device through a network.
  • the color verification model may be acquired through a training process.
  • For the training process of the color verification model please refer to Figures 10, 13 and 15 and their related descriptions.
  • Step 820: Based on the lighting sequence, use the color verification model to process the multiple target images to determine the authenticity of the multiple target images.
  • the color verification model may include a first verification model.
  • the first verification model may include a first color feature extraction layer and a color classification layer.
  • the first color feature extraction layer extracts color features of the target image.
  • the color classification layer determines the color corresponding to the target image based on the color features of the target image.
  • the color verification model may include a second verification model.
  • the second verification model may include a second color feature extraction layer and a color relationship determination layer.
  • the second color feature extraction layer extracts the color features of the target image.
  • the color relationship determination layer determines the relationship (for example, whether they are the same) between colors corresponding to different target images based on the color features of the target images.
  • the color verification model may include a third verification model.
  • the third verification model may include a first extraction layer, a second extraction layer and a discriminant layer.
  • the first extraction layer extracts the color features of the sequence constructed by multiple target images.
  • the second extraction layer extracts the color features of a sequence constructed from multiple color template images.
  • the discriminative layer determines the relationship between the two sequences based on the color features of the two sequences. For determining the authenticity of multiple target images based on the third verification model, reference may be made to FIG. 14 and related descriptions.
  • FIG. 9 is an exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • process 900 may be performed by a verification module. As shown in FIG. 9, the process 900 may include the following steps:
  • the plurality of colors of lighting in the lighting sequence includes at least one reference color and at least one verification color.
  • Each of the at least one verification color may be determined based on at least a portion of the at least one reference color.
  • each of the at least one verification color may be fused based on one or more reference colors.
  • the plurality of images include at least one reference image and at least one verification image.
  • Each of the at least one verification image corresponds to one of the at least one verification color.
  • each of the at least one reference image corresponds to one of the at least one reference color.
  • that a target image corresponds to a specific color means that, if the terminal is not hijacked (that is, the target image is real), the target image should exhibit that specific color.
  • Step 910: Extract reference color features of at least one reference image and verification color features of at least one verification image.
  • the reference color feature refers to the color feature of the reference image, and the verification color feature refers to the color feature of the verification image. For the color feature and its extraction, please refer to the description of step 230.
  • the verification module may extract color features of the image based on the first color feature extraction layer included in the first verification model. For details of extracting color features based on the first color feature extraction layer, reference may be made to FIG. 10 and its related descriptions, which will not be repeated here.
  • Step 920: For each of the at least one verification image, determine the color of the illumination when the verification image was captured, based on the verification color feature of the verification image and the reference color features of the at least one reference image.
  • reference color features of at least one reference image may be used to construct a reference color space.
  • the reference color space has the at least one reference color as its color channel.
  • the reference color feature corresponding to each reference image can be used as the reference value of the corresponding color channel in the reference color space.
  • the color space (also referred to as the original color space) corresponding to the multiple target images may be the same as or different from the reference color space.
  • for example, if the multiple target images correspond to the RGB color space and the at least one reference color is red, blue, and green, then the original color space corresponding to the multiple target images and the reference color space constructed based on the reference colors are the same color space.
  • two color spaces can be considered the same color space if their primary or primary colors are the same.
  • the verification color can be obtained by fusing one or more reference colors. Therefore, the verification module may determine the color corresponding to the verification color feature based on the reference color features and/or the reference color space they construct. In some embodiments, the verification module may map the verification color feature of the verification image into the reference color space to determine the color of the illumination when the verification image was captured. For example, the verification module can determine the parameters of the verification color feature on each color channel based on the relationship between the verification color feature and the reference value of each color channel in the reference color space, and then determine the color corresponding to the verification color feature based on these parameters, that is, the color of the illumination when the verification image was captured.
  • the verification module may use the reference color features x⃗, y⃗, and z⃗ extracted from the reference images a, b, and c as the reference values of color channel I, color channel II, and color channel III, respectively.
  • Color Channel I, Color Channel II and Color Channel III are the three color channels of the reference color space.
  • the verification module may determine the color corresponding to the verification color feature based on the parameters α_1, α_2, and α_3, that is, the color of the illumination when the verification image was captured.
  • the corresponding relationship between parameters and color categories may be preset, or may be learned through a model.
  • the color channels of the reference color space may be the same as those of the original color space.
  • the original color space may be an RGB space
  • the at least one reference color may be red, green, and blue.
  • the verification module can construct a new RGB color space (that is, the reference color space) based on the reference color features of the three reference images corresponding to red, green, and blue, and determine the RGB values of the verification color feature of each verification image in the new RGB color space, so as to determine the color of the illumination when the verification image was captured. A hedged sketch of this mapping follows.
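  • One hedged way to realize this mapping is to treat the reference color features as basis vectors of the reference color space and solve for the verification feature's parameters on each channel by least squares; the nearest-pattern lookup and the pattern values below are illustrative assumptions.

```python
import numpy as np

def channel_parameters(reference_features, verification_feature):
    """Solve verification_feature ~= sum_i alpha_i * reference_feature_i for
    the parameters (alpha_1, alpha_2, alpha_3) on the reference channels."""
    basis = np.stack(reference_features, axis=1)  # shape (feature_dim, 3)
    alphas, *_ = np.linalg.lstsq(basis, verification_feature, rcond=None)
    return alphas

def classify_color(alphas, color_patterns):
    """Pick the color whose preset parameter pattern is closest to alphas.
    color_patterns maps color names to expected parameter triples."""
    return min(color_patterns,
               key=lambda c: np.linalg.norm(alphas - np.asarray(color_patterns[c])))

# Hypothetical parameter patterns for an RGB-like reference space.
patterns = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1),
            "yellow": (1, 1, 0), "purple": (1, 0, 1), "cyan": (0, 1, 1)}
```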
  • the verification module may process the reference color features and the verification color feature based on the color classification layer in the first verification model to determine the color of the illumination when the verification image was captured. For details, please refer to FIG. 10 and its related descriptions, which will not be repeated here.
  • Step 930: Determine the authenticity of the multiple target images based on the lighting sequence and the color of the lighting when the at least one verification image was taken.
  • the verification module may determine a verification color corresponding to the verification image based on the lighting sequence. Further, the verification module may determine the authenticity of the verification image based on the verification color corresponding to the verification image. For example, the verification module determines the authenticity of the verification image based on the first judgment result of whether the verification color corresponding to the verification image is consistent with the color of the illumination when the image was captured. If the verification color corresponding to the verification image is the same as the color of the light when it was photographed, it means that the verification image is authentic. For another example, the verification module determines the authenticity of the verification images based on whether the relationship between the verification colors corresponding to the multiple verification images (eg, whether they are the same) is consistent with the relationship between the colors of the illumination when the multiple verification images were captured.
  • the verification module may determine whether the terminal's image capture device has been hijacked based on the authenticity of the at least one verification image. For example, if the number of authentic verification images exceeds the first threshold, it means that the image acquisition device of the terminal is not hijacked. For another example, if the number of verification images that do not have authenticity exceeds the second threshold (for example, 1), it means that the image acquisition device of the terminal is hijacked.
  • the verification module may combine the reference color space with other verification methods to determine the authenticity of the multiple target images.
  • the verification module may determine the updated color features of each target image among the verification images and the reference images based on the reference color space.
  • the updated color feature refers to the feature after converting the original color feature to the reference color space.
  • the verification module can replace the original color features based on the updated color features of each target image, and determine the authenticity of the multiple target images in combination with other verification methods. For example, the verification module may determine the first color relationship between the multiple target images based on the updated color features of the multiple target images, and determine the authenticity of the multiple target images based on the first color relationship.
  • the verification module determines the color features of the first image sequence constructed by the multiple target images based on the updated color features of the multiple target images, and determines the authenticity of the multiple target images based on the color features of the first image sequence.
  • using the reference color space also makes the determination of the authenticity of the target image more accurate. For example, when the lighting in the lighting sequence is weaker than the ambient light, the lighting hitting the target object may be difficult to detect; alternatively, when the ambient light is colored, the lighting hitting the target object may be disturbed. When the terminal is not hijacked, the reference image and the verification image are taken under the same (or substantially the same) ambient light.
  • the reference color space constructed based on the reference image incorporates the influence of ambient light; therefore, compared with the original color space, the color of the illumination when the verification image was captured can be identified more accurately. Furthermore, the methods disclosed herein can avoid interference from the light-emitting elements of the terminal. When the terminal is not hijacked, both the reference image and the verification image are shot under the illumination of the same light-emitting element; using the reference color space can eliminate or weaken the influence of the light-emitting element and improve the accuracy of identifying the light color.
  • FIG. 10 is a schematic diagram of a first verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the first verification model and the lighting sequence.
  • the first verification model may include a first color feature extraction layer and a color classification layer.
  • the first color feature extraction layer may include a reference color feature extraction layer and a verification color feature extraction layer.
  • the first verification model may include a reference color feature extraction layer 1030, a verification color feature extraction layer 1040 and a color classification layer 1070.
  • the reference color feature extraction layer 1030 and the verification color feature extraction layer 1040 may be used to implement step 910.
  • the color classification layer 1070 may be used to implement step 920.
  • the verification module determines the authenticity of the verification image based on the color corresponding to the verification image and the illumination sequence.
  • the color feature extraction model (e.g., the first color feature extraction layer, the reference color feature extraction layer 1030, the verification color feature extraction layer 1040, etc.) can extract the color features of the target image.
  • the type of the color feature extraction model may include a convolutional neural network model such as ResNet, DenseNet, MobileNet, ShuffleNet or EfficientNet, or a recurrent neural network model such as a long short-term memory recurrent neural network.
  • the types of reference color feature extraction layer 1030 and verification color feature extraction layer 1040 may be the same or different.
  • the reference color feature extraction layer 1030 extracts reference color features 1050 of at least one reference image 1010.
  • the at least one reference image 1010 may include multiple reference images.
  • the reference color feature 1050 may be a fusion of the color features of the plurality of reference images 1010.
  • the plurality of reference images 1010 may be spliced, and the spliced result may be input into the reference color feature extraction layer 1030, which then outputs the reference color feature 1050.
  • the reference color feature 1050 is a feature vector formed by splicing color feature vectors of the reference images 1010-1, 1010-2, and 1010-3.
  • the verification color feature extraction layer 1040 extracts the verification color features 1060 of the at least one verification image 1020.
  • the verification module may perform a color judgment on each of the at least one verification image 1020, respectively.
  • the verification module may input at least one reference image 1010 into the reference color feature extraction layer 1030, and input the verification image 1020-2 into the verification color feature extraction layer 1040.
  • the verification color feature extraction layer 1040 may output the verification color feature 1060 of the verification image 1020-2.
  • the color classification layer 1070 may determine the color of the light when the verification image 1020-2 was photographed based on the reference color feature 1050 and the verification color feature 1060 of the verification image 1020-2.
  • the verification module may perform color judgment on multiple verification images 1020 at the same time.
  • the verification module may input at least one reference image 1010 into the reference color feature extraction layer 1030, and input multiple verification images 1020 (including the verification images 1020-1, 1020-2, ..., 1020-n) into the verification color feature extraction layer 1040.
  • the verification color feature extraction layer 1040 can output the verification color features 1060 of multiple verification images 1020 at the same time.
  • the color classification layer 1070 can simultaneously determine the color of the illumination when each of the multiple verification images is photographed.
  • the color classification layer 1070 may determine the color of the illumination when the verification image was photographed based on the reference color feature and the verification color feature of the verification image. For example, the color classification layer 1070 may determine a value or probability based on the reference color feature and the verification color feature of the verification image, and then determine the color of the light when the verification image is captured based on the value or probability. The numerical value or probability corresponding to the verification image may reflect the possibility that the color of the light when the verification image is photographed belongs to each color. In some embodiments, the color classification layer 1070 may include, but is not limited to, fully connected layers, deep neural networks, and the like.
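  • A minimal sketch of such a first verification model, written with PyTorch; the encoder depths, the feature dimensions, and the assumption that the spliced reference images form a single 3-channel input are illustrative choices, not the architecture mandated by this specification.

```python
import torch
import torch.nn as nn

class FirstVerificationModel(nn.Module):
    """Sketch: a reference color feature extraction layer, a verification
    color feature extraction layer, and a color classification layer."""

    def __init__(self, num_colors: int, feat_dim: int = 64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim))
        self.reference_extractor = encoder()     # plays the role of layer 1030
        self.verification_extractor = encoder()  # plays the role of layer 1040
        self.color_classifier = nn.Sequential(   # plays the role of layer 1070
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_colors))

    def forward(self, reference_images, verification_image):
        # reference_images: (B, 3, H, W), assumed spliced spatially into one image
        ref_feat = self.reference_extractor(reference_images)
        ver_feat = self.verification_extractor(verification_image)
        # Classify over the concatenated reference + verification features.
        return self.color_classifier(torch.cat([ref_feat, ver_feat], dim=-1))
```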
  • the first verification model is a machine learning model with preset parameters. It can be understood that the reference color feature extraction layer, the verification color feature extraction layer and the color classification layer included in the first verification model are machine learning models with preset parameters.
  • the preset parameters of the first verification model can be determined during the model training process. For example, the acquisition module may train the initial first verification model based on the first training sample with the first label to obtain preset parameters of the first verification model.
  • the first training sample includes at least one sample reference image and at least one sample verification image of the sample target object, and the first label of the first training sample is the color of the illumination when each sample verification image was photographed. The colors of the illumination when the at least one sample reference image was captured are the same as the at least one reference color. For example, if the at least one reference color includes red, green, and blue, the at least one sample reference image includes three images of the sample target object captured under the illumination of red light, green light, and blue light.
  • the acquisition module may input the first training sample into the initial first verification model, and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer and the initial color classification layer through training, until the updated first verification model satisfies the first preset condition.
  • the updated first verification model may be designated as the first verification model with preset parameters, in other words, the updated first verification model may be designated as the trained first verification model.
  • the first preset condition may be that the loss function of the updated first verification model is smaller than a threshold, converges, or the number of training iterations reaches a threshold.
  • the acquisition module can train the initial verification color feature extraction layer, the initial reference color feature extraction layer and the initial color classification layer in the initial first verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • at least one sample reference image can be input into the initial reference color feature extraction layer, and at least one sample verification image can be input into the initial verification color feature extraction layer.
  • a loss function is then established based on the output of the initial color classification layer and the first label, and the parameters of each initial model in the initial first verification model are updated simultaneously based on the loss function, as in the sketch below.
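  • A hedged sketch of one such end-to-end training step, reusing the FirstVerificationModel sketch above; the Adam optimizer, the cross-entropy loss, and the assumed seven lighting colors are illustrative choices.

```python
import torch
import torch.nn as nn

model = FirstVerificationModel(num_colors=7)  # 7 colors is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(sample_reference_images, sample_verification_image, first_label):
    """One end-to-end update: all sub-layers are updated simultaneously."""
    optimizer.zero_grad()
    logits = model(sample_reference_images, sample_verification_image)
    loss = criterion(logits, first_label)  # first label: illumination color id
    loss.backward()
    optimizer.step()
    return loss.item()
```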
  • the first verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may directly call the first verification model from the storage device.
  • the authenticity of the verification image is determined by the first verification model, which can improve the efficiency of the authenticity verification of the target image.
  • using the first verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of the performance difference of the terminal device, and further determine the authenticity of the target image.
  • the color light of the same color emitted by the terminal screens of different manufacturers may have differences in parameters such as saturation and brightness, resulting in a large intra-class gap of the same color.
  • the first training samples of the initial first verification model may be captured by terminals with different performances.
  • these differences are learned during the training process, so that the first verification model can take into account differences in terminal performance when judging the color, and determine the color corresponding to the target image more accurately. Moreover, when the terminal is not hijacked, both the reference image and the verification image are captured under the same ambient light conditions. In some embodiments, when a reference color space is established based on the reference color feature extraction layer in the first verification model, and the authenticity of multiple target images is determined based on that reference color space, the influence of external ambient light can be eliminated or reduced.
  • FIG. 11 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • process 1100 may be performed by a verification module. As shown in Figure 11, the process 1100 includes the following steps:
  • Step 1110: Extract the verification color feature of at least one verification image.
  • for the specific description of extracting the verification color feature, please refer to step 910 and its related description.
  • Step 1120: Extract reference color features of at least one reference image.
  • for the specific description of extracting the reference color feature, please refer to step 910 and its related description.
  • Step 1130: For each of the at least one verification image, generate, based on the illumination sequence and the reference color features, a target color feature of the verification color corresponding to the verification image.
  • the target color feature refers to the feature represented by the verification color corresponding to the verification image in the reference color space.
  • the verification module may determine a verification color corresponding to the verification image based on the illumination sequence, and generate a target color feature of the verification image based on the verification color and the reference color feature. For example, the verification module can fuse the color feature of the verification color with the reference color feature to obtain the target color feature.
  • Step 1140: Determine the authenticity of the multiple target images based on the target color feature and the verification color feature corresponding to each of the at least one verification image.
  • the verification module may determine the authenticity of the verification image based on the similarity between its corresponding target color feature and the verification color feature.
  • the similarity between the target color feature and the verification color feature can be calculated by vector similarity, for example, determined by Euclidean distance, Manhattan distance, and the like.
  • if the similarity between the target color feature and the verification color feature is greater than the third threshold, the verification image is authentic; otherwise it is not.
  • FIG. 12 is an exemplary flowchart of a method for determining authenticity of multiple target images according to some embodiments of the present specification.
  • process 1200 may be performed by a verification module. As shown in Figure 12, the process 1200 includes the following steps:
  • the multiple colors corresponding to the multiple lights in the lighting sequence include at least one reference color and at least one verification color.
  • one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the multiple target images include at least one reference image and at least one verification image, each of the at least one reference image corresponds to one of the at least one reference color, and the at least one verification image is Each image corresponds to one of the at least one verification color.
  • Step 1210: Extract the reference color feature of each of the at least one reference image and the verification color feature of each of the at least one verification image.
  • for the extraction of the reference color feature and the verification color feature, reference may be made to step 910 and its related description, which will not be repeated here.
  • the verification module may extract the reference color feature and the verification color feature based on the second color feature extraction layer included in the second verification model. For details of extracting color features based on the second color feature extraction layer, reference may be made to FIG. 13 and related descriptions, and details are not repeated here.
  • Step 1220: For each of the at least one reference image, determine the first color relationship between the reference image and each verification image based on the reference color feature of the reference image and the verification color feature of each verification image.
  • the first color relationship between the reference image and the verification image refers to the relationship between the color of the light when the reference image is captured and the color of the light when the verification image is captured.
  • the first color relationship includes the same, different, or similar, and the like.
  • the first color relationship may be represented numerically. For example, the same is represented by "1", and the difference is represented by "0".
  • the at least one first color relationship determined based on the at least one reference image and the at least one verification image may be represented by a vector, and each element in the vector may represent a first color relationship between one of the at least one reference image and one of the at least one verification image.
  • for example, if the first color relationships between 1 reference image and 5 verification images are, in order, same, different, same, same, and different, then the first color relationships between the 1 reference image and the 5 verification images can be represented by the vector (1,0,1,1,0).
  • the at least one first color relationship determined based on the at least one reference image and the at least one verification image may also be represented by a verification code.
  • the subcode for each position in the verification code may represent a first color relationship between one of the at least one reference image and one of the at least one verification image.
  • the first color relationship between the above-mentioned one reference image and the five verification images can be represented by the verification code 10110.
  • the verification module may determine the first color relationship between a reference image and a verification image based on the reference color feature of the reference image and the verification color feature of the verification image. For example, the verification module may determine the similarity between the reference color feature and the verification color feature, and determine the first color relationship based on the similarity and thresholds: if the similarity is greater than the fourth threshold, the colors are determined to be the same; if it is smaller than the fifth threshold, they are determined to be different; if it is larger than the sixth threshold and smaller than the fourth threshold, they are determined to be similar; and so on (see the sketch after this discussion).
  • the fourth threshold may be greater than the fifth threshold and the sixth threshold, and the sixth threshold may be greater than the fifth threshold.
  • the similarity may be characterized by the distance between the reference color feature and the verification color feature.
  • the distance may include, but is not limited to, Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, Mahalanobis distance, included angle cosine distance, and the like.
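  • Putting the thresholds and distance measures above together, a minimal sketch of deriving the first color relationship from feature similarity; the cosine similarity, the threshold values, and the handling of the region between the fifth and sixth thresholds are illustrative assumptions.

```python
import numpy as np

def first_color_relationship(ref_feat, ver_feat,
                             fourth=0.9, fifth=0.5, sixth=0.7):
    """Classify an image pair as same / different / similar from the cosine
    similarity of its color features (fourth > sixth > fifth assumed)."""
    sim = ref_feat @ ver_feat / (np.linalg.norm(ref_feat) * np.linalg.norm(ver_feat))
    if sim > fourth:
        return "same"
    if sim < fifth:
        return "different"
    if sim > sixth:
        return "similar"
    return "uncertain"  # region between fifth and sixth: an assumed fallback
```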
  • the verification module may further acquire the first color relationship based on the color relationship determination layer included in the second verification model.
  • for a detailed description of the color relationship determination layer, reference may be made to FIG. 13 and its related descriptions, which will not be repeated here.
  • Step 1230: For each of the at least one reference color, determine a second color relationship between the reference color and each of the verification colors.
  • the second color relationship of the reference color and the verification color may indicate whether the two colors are the same, different, or similar.
  • the type and representation of the second color relationship may be similar to the first color relationship, and details are not described herein again.
  • the verification module may determine the second color relationship based on the categories or color parameters of the reference color and the verification color. For example, if the categories of the reference color and the verification color are the same, or the numerical difference of their color parameters is less than a certain threshold, the two colors are judged to be the same; otherwise, the two colors are judged to be different.
  • the verification module may extract the first color feature of the color template image of the reference color and the second color feature of the color template image of the verification color.
  • the verification module may further determine a second color relationship between the reference color and the verification color based on the first color feature and the second color feature. For example, the verification module may calculate the similarity between the first color feature and the second color feature to determine the second color relationship.
  • the first color relationship between the reference image and the verification image corresponds to the second color relationship between the reference color corresponding to the reference image and the verification color corresponding to the verification image.
  • Step 1240: Determine the authenticity of the plurality of target images based on the at least one first color relationship and the at least one second color relationship.
  • the verification module may determine the authenticity of the plurality of target images based on some or all of the at least one first color relationship and the corresponding second color relationship.
  • the first color relationship and the second color relationship may be represented by vectors.
  • the verification module may select part or all of the at least one first color relationship to construct the first vector, and construct the second vector based on the second color relationship corresponding to the selected first color relationship. Further, the verification module may determine the authenticity of the plurality of target images based on the similarity between the first vector and the second vector. For example, if the similarity is greater than the seventh threshold, the multiple target images are authentic. It can be understood that the arrangement order of the elements in the first vector and the second vector is determined based on the corresponding relationship between the first color relationship and the second color relationship. For example, the element corresponding to a first color relationship in the first vector A is Aij, and the element corresponding to the second color relationship corresponding to the first color relationship in the second vector B is Bij.
  • the first color relationship and the second color relationship may also be represented by a verification code.
  • the verification module may select part or all of the at least one first color relationship to construct a corresponding first verification code, and construct a corresponding second verification code based on the second color relationships corresponding to the selected first color relationships, to determine the authenticity of the multiple target images. Similar to the first vector and the second vector, the positions of the subcodes in the first verification code and the second verification code are determined based on the correspondence between the first color relationships and the second color relationships. For example, if the first verification code and the second verification code are different, the multiple target images do not have authenticity; if the first verification code is 10110 and the second verification code is 10111, the multiple target images are not authentic.
  • the verification module may determine the authenticity of the multiple target images based on the same number of sub-codes in the first verification code and the second verification code. For example, if the number of identical subcodes is greater than the eighth threshold, the authenticity of the multiple target images is determined, and if the number of identical subcodes is less than the ninth threshold, it is determined that the multiple target images are not authentic.
  • for example, if the eighth threshold is 3, the ninth threshold is 1, the first verification code is 10110 and the second verification code is 10111, then the subcodes at the first, second, third, and fourth positions of the two verification codes are the same; since the four identical subcodes exceed the eighth threshold, it is determined that the multiple target images are authentic. A hedged sketch of this comparison follows.
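  • A minimal sketch of the count-based comparison of the two verification codes; the threshold values mirror the example above, and the handling of counts between the two thresholds is an assumption.

```python
def compare_verification_codes(first_code: str, second_code: str,
                               eighth: int = 3, ninth: int = 1):
    """Count positions where the two codes carry the same subcode and decide
    authenticity against the eighth and ninth thresholds."""
    same = sum(a == b for a, b in zip(first_code, second_code))
    if same > eighth:
        return True       # images deemed authentic
    if same < ninth:
        return False      # images deemed not authentic
    return None           # undecided under these illustrative rules

# Example from the text: 4 of 5 subcodes match, 4 > 3, so authentic.
assert compare_verification_codes("10110", "10111") is True
```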
  • the verification module may determine, based on the reference color space, the color of the illumination when the verification image and the reference image were captured, further determine the first color relationship, and then determine the authenticity of the multiple target images in combination with the corresponding second color relationship.
  • the verification module may determine the updated verification color feature of the verification image and the updated reference color feature of the reference image based on the reference color space. Further, the verification module determines the first color relationship based on the updated verification color feature and the updated reference color feature, and then determines the authenticity of the multiple target images in combination with the corresponding second color relationship.
  • both the reference image and the verification image are captured under the same ambient light conditions and illuminated by the same light-emitting element. Therefore, when the authenticity of the multiple target images is determined based on the relationship between the reference image and the verification image, the influence of external ambient light and of the light-emitting element can be eliminated or weakened, thereby improving the accuracy of light color recognition.
  • FIG. 13 is a schematic diagram of a second verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the second verification model and the lighting sequence.
  • the second verification model may include a second color feature extraction layer 1330 and a color relationship determination layer 1360 .
  • the second color feature extraction layer 1330 may be used to implement step 1210.
  • the color relationship determination layer 1360 may be used to implement step 1220.
  • the verification module may determine the authenticity of the multiple target images based on the first color relationship and the lighting sequence.
  • the at least one reference image and the at least one verification image may form one or more image pairs.
  • Each image pair includes one of at least one reference image and one of at least one verification image.
  • the verification module may analyze one or more image pairs, respectively, and determine the first color relationship between the reference image and the verification image in the image pair.
  • the at least one reference image includes "1320-1...1320-y" and the at least one verification image includes "1310-1...1310-x".
  • the following description takes the image pair formed by the reference image 1320-y and the verification image 1310-1 as an example.
  • the second color feature extraction layer 1330 may extract the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1.
  • the type of the second color feature extraction layer 1330 may include a convolutional neural network (Convolutional Neural Networks, CNN) model such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or a recurrent neural network model.
  • the input to the second color feature extraction layer 1330 may be an image pair (eg, a reference image 1320-y and a verification image 1310-1).
  • the reference image 1320-y and the verification image 1310-1 can be concatenated and input into the second color feature extraction layer 1330.
  • the output of the second color feature extraction layer 1330 may be the color features of the image pair (eg, the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1).
  • the output of the second color feature extraction layer 1330 may be the feature obtained by concatenating the verification color feature 1340-1 of the verification image 1310-1 and the reference color feature 1350-y of the reference image 1320-y.
  • the color relationship determination layer 1360 is configured to determine the first color relationship of the image pair based on the color features of the image pair. For example, the verification module may input the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1 into the color relationship determination layer 1360, which outputs the first color relationship between the reference image 1320-y and the verification image 1310-1.
  • the verification module may input multiple image pairs consisting of at least one reference image and at least one verification image together into the second verification model.
  • the second verification model may simultaneously output the first color relationship for each of the plurality of pairs of images.
  • the verification module may input one of the plurality of image pairs into the second verification model.
  • the second verification model may output the first color relationship for the pair of images.
  • the color relationship determination layer 1360 may be a classification model, including but not limited to fully connected layers, deep neural networks, decision trees, and the like.
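  • For concreteness, one plausible PyTorch realization of the two layers described above is sketched below. The toy CNN backbone (standing in for the ResNet-class models named earlier), the feature dimension, and the two-neuron relationship head are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class SecondVerificationModel(nn.Module):
    """Color feature extraction layer plus color relationship
    determination layer, sketched with a toy CNN backbone."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Second color feature extraction layer: any CNN backbone
        # (ResNet, MobileNet, ...) could stand in here.
        self.extract = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Color relationship determination layer: a small classifier
        # with two output neurons ("same color" / "different color").
        self.relation = nn.Linear(2 * feat_dim, 2)

    def forward(self, verification_img: torch.Tensor,
                reference_img: torch.Tensor) -> torch.Tensor:
        v = self.extract(verification_img)   # verification color feature
        r = self.extract(reference_img)      # reference color feature
        return self.relation(torch.cat([v, r], dim=1))  # relationship logits
```

  • A forward pass on one image pair, e.g. `SecondVerificationModel()(verification_img, reference_img)` with two `(B, 3, H, W)` tensors, returns two logits from which the first color relationship can be read off.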
  • the second verification model is a machine learning model with preset parameters. It can be understood that the second color feature extraction layer and the color relationship determination layer included in the second verification model are machine learning models with preset parameters.
  • the preset parameters of the second verification model can be determined during the training process. For example, the acquisition module may train an initial second verification model based on the second training samples with the second label to obtain the second verification model.
  • the second training samples include one or more sample image pairs with second labels. Each sample image pair includes two target images of the sample target object taken under the same or different lights.
  • the second label of the second training sample may indicate whether the color of the illumination when the sample image pair was captured is the same.
  • the acquisition module may input the second training sample into the initial second verification model, and update the parameters of the initial second color feature extraction layer and the initial color relationship determination layer through training, until the updated second verification model satisfies the second preset condition.
  • the updated second verification model may be designated as the second verification model with preset parameters, in other words, the updated second verification model may be designated as the trained second verification model.
  • the second preset condition may be that the loss function of the updated second verification model is smaller than the threshold, converges, or the number of training iterations reaches the threshold.
  • the acquisition module may train the initial second color feature extraction layer and the initial color relationship determination layer in the initial second verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • At least one sample reference image and at least one sample verification image can be input into the initial second color feature extraction layer, a loss function can be established based on the output of the initial color relationship determination layer and the second label, and the parameters of each sub-model in the initial second verification model can be updated simultaneously based on the loss function.
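  • A minimal end-to-end training loop consistent with this description might look as follows, assuming the SecondVerificationModel sketch above and a dataloader yielding (verification image, reference image, second label) triples; the cross-entropy loss and the Adam optimizer are illustrative choices not specified by this specification.

```python
import torch
import torch.nn as nn

def train_second_verification_model(model, dataloader,
                                    epochs: int = 10, lr: float = 1e-3):
    """End-to-end training: both layers are treated as a whole and
    updated simultaneously from a single loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for verification_img, reference_img, second_label in dataloader:
            logits = model(verification_img, reference_img)
            loss = criterion(logits, second_label)  # second label: same / different color
            optimizer.zero_grad()
            loss.backward()   # gradients flow through both layers
            optimizer.step()  # all sub-layers updated at once
    return model
```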
  • the second verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may directly call the second verification model from the storage device.
  • the second verification model may be used to determine the first color relationship.
  • the color relationship determination layer of the second verification model may include only a small number of neurons (eg, two neurons) to judge whether the colors are the same. Compared with the color recognition network in the traditional method, the structure of the second verification model disclosed in this specification is simpler.
  • the target object analysis based on the second verification model also requires relatively less computing resources (eg, computing space), thereby improving the efficiency of light color recognition.
  • the input of the model can be a target image corresponding to any color.
  • the embodiment of this specification has higher applicability.
  • using the second verification model can improve the reliability of the authenticity verification of the target object and reduce or remove the influence of performance differences among terminal devices, thereby better determining the authenticity of the target images. It can be understood that the hardware of different terminals differs to some extent. For example, colored light of the same nominal color emitted by the screens of terminals from different manufacturers may differ in parameters such as saturation and brightness, resulting in a large intra-class gap for the same color.
  • the second training samples of the initial second verification model may be captured by terminals with different performances.
  • the initial second verification model learns this variation during training, so that the second verification model can account for terminal performance differences when judging the color of the target object and determine the color of the target image more accurately.
  • both the reference image and the verification image are taken under the same ambient light conditions. Therefore, when the reference image and the verification image are processed based on the second verification model to determine the authenticity of the multiple target images, the influence of external ambient light can be eliminated or reduced.
  • FIG. 14 is another exemplary flowchart of a method for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • process 1400 may be performed by a verification module. As shown in Figure 14, the process 1400 includes the following steps:
  • Step 1410: determine a first image sequence based on the multiple target images.
  • the first image sequence is a collection of multiple target images arranged in a specific order.
  • the verification module may sort the plurality of target images by their respective capture times to generate the first image sequence. For example, the plurality of target images may be sorted from earliest to latest according to their respective shooting times.
  • Step 1420: determine a second image sequence based on the plurality of color template images.
  • a color template image is a template image generated based on the colors of the lights in the lighting sequence.
  • a color template image for a color is a solid-color image that contains only that color. For example, a red color template image contains only red, no colors other than red, and no texture.
  • the verification module may generate the plurality of color template images based on the lighting sequence. For example, the verification module may generate a color template image corresponding to the color of each light in the light sequence according to the color type and/or color parameter of the light. In some embodiments, a color template image of each color in the lighting sequence may be pre-stored in the storage device, and the verification module may obtain a color template image corresponding to the color of the lighting in the lighting sequence from the storage device through the network.
  • the second image sequence is a collection of multiple color template images arranged in sequence.
  • the verification module may sort the plurality of color template images according to their corresponding illumination times to generate a second sequence of images.
  • the plurality of color template images may be sorted from first to last according to their corresponding illumination times.
  • the arrangement order of the plurality of color template images in the second image sequence is consistent with the arrangement order of the plurality of target images in the first image sequence.
  • the irradiation time of the illumination corresponding to the plurality of color template images in the second image sequence corresponds to the shooting time of the plurality of target images in the first image sequence. For example, if the multiple target images are arranged from first to last according to their shooting time, the multiple color template images are also arranged from first to last based on the irradiation time of their corresponding lighting.
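  • Steps 1410 and 1420 can be illustrated with the short sketch below. The (timestamp, image) and (illumination time, RGB color) pairings are assumed representations, and the template size is arbitrary; the only properties taken from the text are the time-based ordering and the pure, texture-free color templates.

```python
import numpy as np

def build_first_sequence(target_images):
    """Step 1410: order target images by capture time.
    `target_images` is assumed to be a list of (timestamp, image) pairs."""
    return [img for _, img in sorted(target_images, key=lambda p: p[0])]

def build_second_sequence(lighting_sequence, height=64, width=64):
    """Step 1420: one solid-color template image per light, ordered by
    illumination time. `lighting_sequence` is assumed to be a list of
    (illumination_time, (r, g, b)) pairs."""
    templates = []
    for _, rgb in sorted(lighting_sequence, key=lambda p: p[0]):
        template = np.zeros((height, width, 3), dtype=np.uint8)
        template[...] = rgb  # pure color, no texture
        templates.append(template)
    return templates
```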
  • Step 1430: extract the first feature information of the first image sequence.
  • the first feature information may include color features of the plurality of target images in the first image sequence. For more details on extracting color features, see step 230 and its related description.
  • the verification module may extract the first feature information of the first image sequence based on the first extraction layer in the third verification model. For the extraction of the first feature information based on the first extraction layer, please refer to FIG. 15 and its related descriptions.
  • Step 1440: extract the second feature information of the second image sequence.
  • the second feature information may include color features of the plurality of color template images in the second image sequence. For more details on extracting color features, see step 230 and its related description.
  • the verification module may extract the second feature information based on the second extraction layer in the third verification model. For more details on extracting the second feature information based on the second extraction layer, see FIG. 15 and its related description.
  • Step 1450: determine the authenticity of the multiple target images based on the first feature information and the second feature information.
  • the verification module may determine, based on the degree of matching between the first feature information and the second feature information, a second judgment result indicating whether the color sequence of the illumination when the multiple target images in the first image sequence were captured is consistent with the color sequence of the multiple color template images in the second image sequence. For example, the verification module may take the similarity between the first feature information and the second feature information as the matching degree, and then determine the second judgment result based on the relationship between that similarity and a threshold. For example, if the similarity between the first feature information and the second feature information is greater than the tenth threshold, the second judgment result is consistent; if the similarity is less than the eleventh threshold, the second judgment result is inconsistent. Further, the verification module may determine the authenticity of the plurality of target images based on the second judgment result. For example, if the second judgment result is consistent, the multiple target images are authentic.
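  • As a hedged illustration of this matching step, the sketch below uses cosine similarity as the matching degree; the choice of similarity measure, the concrete threshold values, and the handling of the band between the two thresholds are all assumptions.

```python
from typing import Optional
import numpy as np

def second_judgment(first_feat: np.ndarray, second_feat: np.ndarray,
                    tenth_threshold: float = 0.9,
                    eleventh_threshold: float = 0.9) -> Optional[bool]:
    """Cosine similarity between feature vectors as the matching degree.
    Returns True (consistent) above the tenth threshold, False (inconsistent)
    below the eleventh threshold, and None in any indeterminate band between
    the two thresholds (empty with the defaults above)."""
    sim = float(np.dot(first_feat, second_feat) /
                (np.linalg.norm(first_feat) * np.linalg.norm(second_feat)))
    if sim > tenth_threshold:
        return True
    if sim < eleventh_threshold:
        return False
    return None
```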
  • the verification module may determine the second judgment result based on the discrimination layer in the third verification model. For more details about determining the second judgment result based on the discrimination layer, please refer to FIG. 15 and its related description.
  • Some embodiments of the present specification generate a second image sequence based on artificially constructed color template images, and determine the authenticity of the multiple target images by comparing the second image sequence with the first image sequence (the sequence of the multiple target images). Compared with directly identifying the colors of the first image sequence, the method disclosed in this specification makes the task of identifying the target images simpler.
  • a third verification model may be used for target image authenticity analysis. Using the second image sequence makes the recognition task of the third verification model simpler and the learning difficulty lower, thereby making the recognition accuracy higher.
  • the multiple target images in the first image sequence are all captured under the same ambient light conditions and illuminated by the same light-emitting element. Therefore, when the authenticity of the multiple target images is determined based on the relationship between the first image sequence and the second image sequence, the influence of external ambient light and the light-emitting element can be eliminated or weakened, thereby improving the recognition accuracy of the lighting color.
  • FIG. 15 is a diagram showing an exemplary structure of a third verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the third verification model and the lighting sequence.
  • the third verification model may include a first extraction layer 1530 , a second extraction layer 1540 and a discrimination layer 1570 .
  • the verification module may implement steps 1430-1450 using the third verification model to determine the second judgment result.
  • the first extraction layer 1530 implements step 1430
  • the second extraction layer 1540 implements step 1440
  • the discrimination layer 1570 implements step 1450 .
  • the verification module determines the authenticity of the multiple target images based on the second judgment result and the lighting sequence.
  • the input of the first extraction layer 1530 is the first image sequence 1510 and the output is the first feature information 1550 .
  • the verification module may concatenate the multiple target images in the first image sequence 1510 in order and input them into the first extraction layer 1530 .
  • the output first feature information 1550 may be a feature obtained by concatenating the color features corresponding to the multiple target images in the first image sequence 1510 .
  • the input of the second extraction layer 1540 is the second image sequence 1520 , and the output is the second feature information 1560 .
  • the verification module may concatenate the multiple color template images in the second image sequence 1520 in order and input them into the second extraction layer 1540 .
  • the output second feature information 1560 may be a feature obtained by concatenating the color features corresponding to the multiple color template images in the second image sequence 1520 .
  • the types of the first extraction layer and the second extraction layer include, but are not limited to, convolutional neural network (Convolutional Neural Networks, CNN) models such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or recurrent neural network models.
  • the types of the first extraction layer and the second extraction layer may be the same or different.
  • the input of the discrimination layer 1570 is the first feature information 1550 and the second feature information 1560, and the output is the second judgment result.
  • the discrimination layer may be a model that implements classification, including but not limited to a fully connected layer, a deep neural network (DNN), and the like.
  • the third verification model is a machine learning model with preset parameters. It can be understood that the first extraction layer, the second extraction layer, and the discrimination layer included in the third verification model are machine learning models with preset parameters.
  • the preset parameters of the third verification model can be determined during the model training process. For example, the acquisition module may train an initial third verification model based on the third training sample with the third label to obtain the third verification model.
  • the third training sample includes a first sample image sequence and a second sample image sequence.
  • the first sample image sequence consists of multiple sample target images of the sample target object, and the second sample image sequence consists of multiple sample color template images of multiple sample colors.
  • the third label of the third training sample indicates whether the color sequence of the illumination when the multiple sample target images of the first sample image sequence were captured is consistent with the color sequence of the multiple sample color template images in the second sample image sequence.
  • the acquisition module may input the third training sample into the initial third verification model, and update the parameters of the initial first extraction layer, the initial second extraction layer, and the initial discrimination layer through training until the updated third verification model satisfies the third preset condition.
  • the updated third verification model may be designated as the third verification model with preset parameters; in other words, the updated third verification model may be designated as the trained third verification model.
  • the third preset condition may be that the loss function of the updated third verification model is smaller than a threshold, converges, or the number of training iterations reaches a threshold.
  • the acquisition module can train the initial first extraction layer, the initial second extraction layer and the initial discrimination layer in the initial third verification model in an end-to-end training manner.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • the first sample image sequence can be input into the initial first extraction layer and the second sample image sequence into the initial second extraction layer; a loss function is established based on the output of the initial discrimination layer and the third label, and the parameters of each sub-model in the initial third verification model are updated simultaneously based on the loss function.
  • all or part of the parameters of the first extraction layer and the second extraction layer may be shared.
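  • A sketch of such a third verification model with the two extraction layers fully shared is given below. Stacking the sequence images along the channel axis is one plausible reading of the in-order concatenation described above, and the backbone, sequence length, and feature sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ThirdVerificationModel(nn.Module):
    """First/second extraction layers (here sharing all parameters)
    plus a discrimination layer with two output neurons
    ("sequences consistent" / "sequences inconsistent")."""

    def __init__(self, seq_len: int = 4, feat_dim: int = 64):
        super().__init__()
        # Shared extraction layer; each input is seq_len images stacked
        # along the channel axis, giving a (B, 3 * seq_len, H, W) tensor.
        self.extract = nn.Sequential(
            nn.Conv2d(3 * seq_len, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.discriminate = nn.Linear(2 * feat_dim, 2)

    def forward(self, first_seq: torch.Tensor,
                second_seq: torch.Tensor) -> torch.Tensor:
        f1 = self.extract(first_seq)    # first feature information
        f2 = self.extract(second_seq)   # second feature information
        return self.discriminate(torch.cat([f1, f2], dim=1))  # judgment logits
```

  • Sharing the extraction parameters keeps the two feature spaces aligned, so the discrimination layer only has to compare like with like; this is one way to realize the partial or full parameter sharing mentioned above.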
  • the authenticity of the target images is determined by the third verification model without identifying the type of illumination when the target images were captured; target recognition is performed directly by comparing whether the first image sequence containing the target images and the second image sequence containing the color template images are consistent. This is equivalent to transforming the color recognition task into a binary classification task of judging whether the colors are the same.
  • a third verification model may be used to determine whether the first sequence of images and the second sequence of images are identical.
  • the discrimination layer of the third verification model may include only a small number of neurons (eg, two neurons) to judge whether the sequences are the same. Compared with the color recognition network in the traditional method, the structure of the third verification model disclosed in this specification is simpler.
  • the target object analysis based on the third verification model also requires relatively less computing resources (eg, computing space), thereby improving the efficiency of light color recognition.
  • the input of the model can be a target image corresponding to any color.
  • the embodiment of this specification has higher applicability.
  • using the third verification model can improve the reliability of the authenticity verification of the target object and reduce or remove the influence of performance differences among terminal devices, thereby better determining the authenticity of the target images. It can be understood that the hardware of different terminals differs to some extent. For example, colored light of the same nominal color emitted by the screens of terminals from different manufacturers may differ in parameters such as saturation and brightness, resulting in a large intra-class gap for the same color.
  • the third training samples of the initial third verification model may be captured by terminals with different performances.
  • the initial third verification model learns this variation during training, so that the trained third verification model can account for terminal performance differences when judging the color of the target object and determine the color of the target image more accurately.
  • the multiple target images in the first sequence of images are all captured under the same ambient light conditions. Therefore, when the first image sequence is processed based on the third verification model and the authenticity of the multiple target images is determined, the influence of external ambient light can be eliminated or reduced.
  • the verification module may determine the updated color features of the multiple target images (including the at least one verification image and the at least one reference image) based on the reference color space, and generate the updated first feature information of the first image sequence based on the updated color features of the multiple target images.
  • the verification module may generate the updated second feature information of the second image sequence based on the updated color features of the plurality of color template images.
  • the verification module may further determine the authenticity of the plurality of target images based on the updated first characteristic information (or the first characteristic information) and the updated second characteristic information (or the second characteristic information).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

According to embodiments, the present invention relates to a target recognition method and system. The target recognition method comprises: determining a lighting sequence, the lighting sequence being used to determine multiple colors of multiple lights emitted by a terminal when irradiating a target object; acquiring multiple target images on the basis of the terminal, the image capture times of the multiple target images corresponding to the irradiation times of the multiple lights; and determining, on the basis of the lighting sequence and the multiple target images, the authenticity of the multiple target images.
PCT/CN2022/075531 2021-04-20 2022-02-08 Target recognition method and system WO2022222575A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110423528.XA CN113111806A (zh) 2021-04-20 2021-04-20 Method and system for target recognition
CN202110423528.X 2021-04-20

Publications (1)

Publication Number Publication Date
WO2022222575A1 true WO2022222575A1 (fr) 2022-10-27

Family

ID=76718623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075531 WO2022222575A1 (fr) 2022-02-08 Target recognition method and system

Country Status (2)

Country Link
CN (1) CN113111806A (fr)
WO (1) WO2022222575A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222904A1 (fr) * 2021-04-20 2022-10-27 北京嘀嘀无限科技发展有限公司 Image verification method and system, and storage medium
CN113111806A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Method and system for target recognition
CN113743284A (zh) * 2021-08-30 2021-12-03 杭州海康威视数字技术股份有限公司 Image recognition method, apparatus, device, camera, and access control device
CN114266977B (zh) * 2021-12-27 2023-04-07 青岛澎湃海洋探索技术有限公司 Underwater target recognition method for multiple AUVs based on a super-resolution selectable network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937744B1 (en) * 2000-06-13 2005-08-30 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
CN109461168B (zh) * 2018-10-15 2021-03-16 腾讯科技(深圳)有限公司 Target object recognition method and apparatus, storage medium, and electronic apparatus
CN109493280B (zh) * 2018-11-02 2023-03-14 腾讯科技(深圳)有限公司 Image processing method, apparatus, terminal, and storage medium
CN111523438B (zh) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Liveness recognition method, terminal device, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376592A (zh) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Liveness detection method, apparatus, and computer-readable storage medium
CN111881844A (zh) * 2020-07-30 2020-11-03 北京嘀嘀无限科技发展有限公司 Method and system for judging image authenticity
CN113111806A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Method and system for target recognition
CN113111810A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target recognition method and system
CN113111807A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Method and system for target recognition
CN113111811A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target discrimination method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580210A (zh) * 2023-07-05 2023-08-11 四川弘和数智集团有限公司 Linear target detection method, apparatus, device, and medium
CN116580210B (zh) * 2023-07-05 2023-09-15 四川弘和数智集团有限公司 Linear target detection method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN113111806A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
WO2022222575A1 (fr) Target recognition method and system
CN111488756B (zh) Liveness detection method based on facial recognition, electronic device, and storage medium
George et al. Cross modal focal loss for rgbd face anti-spoofing
WO2022222569A1 (fr) Target discrimination method and system
CN109543640B (zh) Liveness detection method based on image conversion
CN110163078A (zh) Liveness detection method and apparatus, and service system applying the liveness detection method
CN109086723B (zh) Face detection method, apparatus, and device based on transfer learning
CN112801057B (zh) Image processing method, apparatus, computer device, and storage medium
CN108664843B (zh) Living object recognition method, device, and computer-readable storage medium
CN109871845B (zh) Certificate image extraction method and terminal device
WO2022222585A1 (fr) Target identification method and system
CN110532746B (zh) Face verification method, apparatus, server, and readable storage medium
KR102145132B1 (ko) Method for preventing proxy interviews using deep learning
CN106991364A (zh) Face recognition processing method, apparatus, and mobile terminal
CN113111810B (zh) Target recognition method and system
Hadiprakoso et al. Face anti-spoofing using CNN classifier & face liveness detection
CN116229528A (zh) Living palm vein detection method, apparatus, device, and storage medium
CN115147936A (zh) Liveness detection method, electronic device, storage medium, and program product
CN107862654A (zh) Image processing method, apparatus, computer-readable storage medium, and electronic device
CN112200075B (zh) Face anti-spoofing method based on anomaly detection
JP3962517B2 (ja) Face detection method and apparatus, and computer-readable medium
CN113723310B (zh) Neural-network-based image recognition method and related apparatus
Hadwiger et al. Towards learned color representations for image splicing detection
JP2004128715A (ja) Video data storage control method and system, program, recording medium, and video camera
WO2022222904A1 (fr) Image verification method and system, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22790685

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22790685

Country of ref document: EP

Kind code of ref document: A1