WO2022222585A1 - A target recognition method and system - Google Patents

A target recognition method and system

Info

Publication number
WO2022222585A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
verification
image
target
target images
Prior art date
Application number
PCT/CN2022/076352
Other languages
English (en)
French (fr)
Inventor
张明文
张天明
赵宁宁
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from Chinese application CN202110423614.0A (CN113111807B)
Application filed by 北京嘀嘀无限科技发展有限公司 (Beijing Didi Infinity Technology Development Co., Ltd.)
Publication of WO2022222585A1

Classifications

    • G06V40/168 Feature extraction; Face representation
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/172 Classification, e.g. identification
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06Q20/40145 Biometric identity checks

Definitions

  • The present specification relates to the technical field of image processing, and in particular, to a target recognition method and system.
  • Target recognition is a technology for biometric identification based on targets acquired by image acquisition devices.
  • Face recognition technology, which targets human faces, is widely used in application scenarios such as permission verification and identity verification.
  • To ensure the security of target recognition, it is necessary to determine the authenticity of the target image.
  • One of the embodiments of the present specification provides a target recognition method. The method includes: acquiring a plurality of target images, where the shooting times of the plurality of target images have a corresponding relationship with the irradiation times of a plurality of lights in a lighting sequence irradiating the target object; the plurality of lights have a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color; and determining the authenticity of the plurality of target images based on the lighting sequence and the plurality of target images.
  • One of the embodiments of the present specification provides a target recognition system, comprising: an acquisition module configured to acquire a plurality of target images, where the shooting times of the plurality of target images have a corresponding relationship with the irradiation times of a plurality of lights in a lighting sequence irradiating the target object; the plurality of lights have a plurality of colors, the plurality of colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color;
  • a verification module is configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
  • One of the embodiments of the present specification provides a target recognition apparatus, including a processor, where the processor is configured to execute the target recognition method disclosed in the present specification.
  • One of the embodiments of this specification provides a computer-readable storage medium, the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer executes the target identification method disclosed in this specification.
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification
  • FIG. 2 is an exemplary flowchart of a target recognition method according to some embodiments of the present specification
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 4 is an exemplary flowchart for determining the authenticity of multiple target images based on the illumination sequence and multiple target images according to some embodiments of the present specification
  • FIG. 5 is a schematic structural diagram of a color verification model according to some embodiments of the present specification.
  • FIG. 6 is another exemplary flowchart for determining the authenticity of multiple target images based on a lighting sequence and multiple target images according to some embodiments of the present specification.
  • In this specification, the terms "system", "device", "unit" and/or "module" are used to distinguish different components, elements, parts, or assemblies at different levels.
  • Target recognition is a technology for biometric recognition based on target objects acquired by image acquisition equipment.
  • the target object may be a human face, a fingerprint, a palm print, a pupil, and the like.
  • object recognition may be applied to authorization verification.
  • For example, authorization verification includes access control authority authentication and account payment authority authentication.
  • target recognition may also be used for authentication.
  • For example, employee attendance verification and self-service identity authentication. Target recognition may be based on matching target images captured in real time by the image capture device against pre-acquired biometric features, thereby verifying the target identity.
  • image capture devices can be hacked or hijacked, and attackers can upload fake target images for authentication.
  • attacker A can directly upload the face image of user B after attacking or hijacking the image acquisition device.
  • the target recognition system performs face recognition based on user B's face image and pre-acquired user B's face biometrics, thereby passing user B's identity verification.
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification.
  • the object recognition system 100 may include a processing device 110 , a network 120 , a terminal 130 and a storage device 140 .
  • the processing device 110 may be used to process data and/or information from at least one component of the object recognition system 100 and/or an external data source (eg, a cloud data center). For example, the processing device 110 may acquire multiple target images, determine the authenticity of the multiple target images, and the like. During processing, the processing device 110 may obtain data (eg, instructions) from other components of the object recognition system 100 (eg, the storage device 140 and/or the terminal 130 ) directly or through the network 120 and/or send the processed data to the other components described above for storage or display.
  • processing device 110 may be a single server or group of servers.
  • the server group may be centralized or distributed (eg, processing device 110 may be a distributed system).
  • processing device 110 may be local or remote.
  • the processing device 110 may be implemented on a cloud platform or provided in a virtualized manner.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • the network 120 may connect components of the system and/or connect the system with external components.
  • the network 120 enables communication between the various components of the object recognition system 100 and between the object recognition system 100 and external components, facilitating the exchange of data and/or information.
  • the network 120 may be any one or more of a wired network or a wireless network.
  • the network 120 may include a cable network, a fiber optic network, a telecommunications network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near field communication (NFC), an intra-device bus, an intra-device line, a cable connection, etc., or any combination thereof.
  • the network connection between the various parts in the object recognition system 100 may adopt one of the above-mentioned manners, or may adopt multiple manners.
  • the network 120 may be of various topologies such as point-to-point, shared, centralized, or a combination of topologies.
  • network 120 may include one or more network access points.
  • network 120 may include wired or wireless network access points, such as base stations and/or network exchange points 120-1, 120-2, ..., through which one or more components of the object recognition system 100 may connect to the network 120 to exchange data and/or information.
  • the terminal 130 refers to one or more terminal devices or software used by the user.
  • the terminal 130 may include an image capturing device 131 (e.g., a camera), and the image capturing device 131 may photograph a target object and acquire multiple target images.
  • the terminal 130 (e.g., the screen and/or other light-emitting elements of the terminal 130) may sequentially emit light of multiple colors in the lighting sequence to illuminate the target object.
  • the terminal 130 may communicate with the processing device 110 through the network 120 and send the captured multiple target images to the processing device 110 .
  • the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices with input and/or output capabilities, the like, or any combination thereof.
  • the above examples are only used to illustrate the broadness of the types of terminals 130 and not to limit the scope thereof.
  • Storage device 140 may be used to store data (eg, lighting sequences, multiple target images, etc.) and/or instructions.
  • the storage device 140 may include one or more storage components, and each storage component may be an independent device or a part of other devices.
  • storage device 140 may include random access memory (RAM), read only memory (ROM), mass storage, removable memory, volatile read-write memory, the like, or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • storage device 140 may be implemented on a cloud platform.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • storage device 140 may be integrated or included in one or more other components of object recognition system 100 (eg, processing device 110, terminal 130, or other possible components).
  • the object recognition system 100 may include an acquisition module, a verification module and a training module.
  • the acquisition module can be used to acquire multiple target images; the shooting times of the multiple target images have a corresponding relationship with the irradiation times of the multiple lights in the lighting sequence irradiating the target object; the multiple lights have multiple colors, the multiple colors including at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color.
  • the verification module may be configured to determine the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images.
  • the plurality of target images include at least one verification image and at least one reference image; each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color. For each of the at least one verification image, the verification module may determine, based on the at least one reference image and the verification image, the color of the illumination when the verification image was captured; and determine the authenticity of the plurality of target images based on the illumination sequence and the color of the illumination when the at least one verification image was captured.
  • the verification module may further extract the verification color feature of the at least one verification image and the reference color feature of the at least one reference image; for each of the at least one verification image, generate the target color feature of the verification color corresponding to the verification image based on the illumination sequence and the reference color feature; and determine the authenticity of the multiple target images based on the target color feature and the verification color feature of each of the at least one verification image.
  • the verification module may process the at least one reference image and the verification image based on a color verification model, and determine the color of the lighting when the verification image was captured.
  • the color verification model is a machine learning model with preset parameters. Preset parameters refer to the model parameters learned during the training of the machine learning model. Taking a neural network as an example, the model parameters include weight and bias.
  • the color verification model includes a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer.
  • the reference color feature extraction layer processes the at least one reference image to determine the reference color feature of the at least one reference image.
  • the verification color feature extraction layer processes the verification image to determine the verification color feature of the verification image.
  • the color classification layer processes the reference color feature of the at least one reference image and the verification color feature of the verification image, and determines the color of the illumination when the verification image is photographed.
  • the preset parameters of the color verification model are obtained through end-to-end training.
  • the training module can be used to obtain a plurality of training samples; each of the plurality of training samples includes at least one sample reference image, at least one sample verification image, and a sample label, where the sample label represents the color of the illumination when each of the at least one sample verification image was captured, and the at least one reference color is the same as the color of the illumination when the at least one sample reference image was captured.
  • the training module may further train an initial color verification model based on the plurality of training samples, and determine the preset parameters of the color verification model. In some embodiments, the training module may be omitted.
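The data flow through the reference feature extraction layer, verification feature extraction layer, and color classification layer can be sketched with a deliberately simplified, non-learned stand-in: mean-RGB features plus nearest-reference classification. This illustrates only the structure of the computation; the specification's color verification model is a trained neural network, and the function names below are hypothetical.

```python
import numpy as np

def mean_rgb(image):
    """Crude color feature: per-channel mean over an (H, W, 3) image."""
    return image.reshape(-1, 3).mean(axis=0)

def classify_verification_color(reference_images, reference_colors, verification_image):
    """Stand-in for the color classification layer: label the verification
    image's illumination with the nearest reference color in feature space."""
    ref_feats = np.stack([mean_rgb(im) for im in reference_images])
    verif_feat = mean_rgb(verification_image)
    distances = np.linalg.norm(ref_feats - verif_feat, axis=1)
    return reference_colors[int(np.argmin(distances))]
```

In the actual model the two extraction layers and the classification layer are trained end to end; the nearest-neighbor rule here merely mimics the final classification step.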
  • the acquisition module, the verification module, and the training module disclosed in FIG. 1 may be different modules in a system, or may be a module that implements the functions of the above two or more modules.
  • each module may share one storage module, and each module may also have its own storage module.
  • FIG. 2 is an exemplary flowchart of a method for object recognition according to some embodiments of the present specification. As shown in Figure 2, the process 200 includes the following steps:
  • Step 210: acquiring multiple target images.
  • the shooting times of the multiple target images have a corresponding relationship with the irradiation times of the multiple lights in the lighting sequence with which the terminal irradiates the target object.
  • step 210 may be performed by an acquisition module.
  • the target object refers to an object that needs to be identified.
  • the target object can be a specific body part of the user, such as face, fingerprint, palm print, or pupil.
  • the target object refers to the face of a user who needs to be authenticated and/or authenticated.
  • the platform needs to verify whether the driver who takes the order is a registered driver user reviewed by the platform, and the target object is the driver's face.
  • the payment system needs to verify the payment authority of the payer, and the target object is the payer's face.
  • the terminal is instructed to emit the illumination sequence.
  • the lighting sequence includes a plurality of lighting for illuminating the target object.
  • the colors of different lights in the lighting sequence may be the same or different.
  • the plurality of lights include at least two lights with different colors, that is, the plurality of lights have multiple colors.
  • the plurality of colors includes at least one reference color and at least one verification color.
  • the verification color is one of the colors that is directly used to verify the authenticity of the image.
  • the reference color is one of the colors used to assist verification in determining the authenticity of the target image.
  • each of the at least one verification color is determined based on at least a portion of the at least one reference color. For more details about the reference color and the verification color, please refer to FIG. 3 and its related description, which will not be repeated here.
  • the illumination sequence includes information about each illumination in the plurality of illuminations, for example, color information, illumination time, and the like.
  • the color information of multiple lights in the lighting sequence may be represented in the same or different manners.
  • the color information of the plurality of lights may be represented by color categories.
  • the colors of the multiple lights in the lighting sequence may be represented as red, yellow, green, purple, cyan, blue, and red.
  • the color information of the plurality of lights may be represented by color parameters.
  • the colors of multiple lights in the lighting sequence can be represented as RGB(255, 0, 0), RGB(255, 255, 0), RGB(0, 255, 0), RGB(255, 0, 255), RGB(0, 255, 255), RGB(0, 0, 255).
  • the lighting sequence which may also be referred to as a color sequence, contains color information for the plurality of lighting.
  • the illumination times of the plurality of illuminations in the illumination sequence may include the start time, end time, duration, etc., or any combination thereof, for each illumination plan to illuminate the target object.
  • the start time of illuminating the target object with red light is 14:00
  • the start time of illuminating the target object with green light is 14:02.
  • the durations for which the red light and the green light illuminate the target object are both 0.1 seconds.
  • the durations for different illuminations to illuminate the target object may be the same or different.
  • the irradiation time can be expressed in other ways, which will not be repeated here.
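Putting the color information and illumination times together, a lighting sequence can be represented as an ordered list of illumination records. The sketch below uses the red/green example above; the field names are illustrative assumptions, not taken from the specification.

```python
# Each record carries the color information and the illumination window of one
# light in the sequence (field names are illustrative).
lighting_sequence = [
    {"color": "red",   "rgb": (255, 0, 0), "start": "14:00:00", "duration_s": 0.1},
    {"color": "green", "rgb": (0, 255, 0), "start": "14:02:00", "duration_s": 0.1},
]
```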
  • the terminal may sequentially emit multiple illuminations in a particular order.
  • the terminal may emit light through the light emitting element.
  • the light-emitting element may include a light-emitting element built in the terminal, for example, a screen, an LED light, and the like.
  • the light-emitting element may also include an externally-connected light-emitting element. For example, external LED lights, light-emitting diodes, etc.
  • the terminal when the terminal is hijacked or attacked, the terminal may receive an instruction to emit light, but does not actually emit light. For more details about the lighting sequence, please refer to FIG. 3 and its related description, which will not be repeated here.
  • the terminal or processing device may generate the lighting sequence randomly or based on preset rules. For example, a terminal or processing device may randomly select a plurality of colors from a color library to generate a lighting sequence.
  • the lighting sequence may be set by the user at the terminal, determined according to the default settings of the target recognition system 100, or determined by the processing device through data analysis, and the like.
  • the terminal or storage device may store the lighting sequence.
  • the obtaining module can obtain the lighting sequence from the terminal or the storage device through the network.
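As a sketch of random generation, the snippet below draws reference colors from a small color library and derives each verification color from them. The channel-wise averaging rule and the library contents are assumptions for illustration; the specification does not fix a particular derivation.

```python
import random

# Hypothetical color library; RGB triples as in the examples above.
COLOR_LIBRARY = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "magenta": (255, 0, 255), "cyan": (0, 255, 255),
}

def make_lighting_sequence(n_reference=2, n_verification=1, seed=None):
    rng = random.Random(seed)
    reference = rng.sample(sorted(COLOR_LIBRARY), n_reference)
    sequence = [{"role": "reference", "name": name, "rgb": COLOR_LIBRARY[name]}
                for name in reference]
    for _ in range(n_verification):
        # Assumed derivation rule: average two reference colors channel-wise.
        a, b = (COLOR_LIBRARY[name] for name in rng.sample(reference, 2))
        mixed = tuple((x + y) // 2 for x, y in zip(a, b))
        sequence.append({"role": "verification", "name": "mixed", "rgb": mixed})
    rng.shuffle(sequence)
    return sequence
```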
  • the multiple target images are images used for target recognition.
  • the formats of the multiple target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak Flash PiX (FPX), Digital Imaging and Communications in Medicine (DICOM), etc.
  • the multiple target images may be two-dimensional (2D) images or three-dimensional (3D) images.
  • the acquisition module may acquire the plurality of target images.
  • the acquisition module may send acquisition instructions to the terminal through the network, and then receive multiple target images sent by the terminal through the network.
  • the terminal may send the multiple target images to a storage device for storage, and the acquiring module may acquire the multiple target images from the storage device.
  • the target image may or may not contain the target object.
  • the target image may be captured by an image acquisition device of the terminal, or may be determined based on data (eg, video or image) uploaded by the user.
  • the target recognition system 100 will issue a lighting sequence to the terminal.
  • the terminal may sequentially emit the plurality of illuminations according to the illumination sequence.
  • its image acquisition device may be instructed to acquire one or more images within the illumination time of the illumination.
  • the image capture device of the terminal may be instructed to capture video during the entire illumination period of the plurality of illuminations.
  • the terminal or other computing device may intercept one or more images collected during the illumination time of each illumination from the video according to the illumination time of each illumination.
  • One or more images collected by the terminal during the irradiation time of each illumination may be used as the multiple target images.
  • the multiple target images are real images captured by the target object when it is illuminated by the multiple illuminations. It can be understood that there is a corresponding relationship between the irradiation time of the multiple lights and the shooting time of the multiple target images. If one image is collected within the irradiation time of a single light, the corresponding relationship is one-to-one; if multiple images are collected within the irradiation time of a single light, the corresponding relationship is one-to-many.
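That correspondence can be made concrete by grouping frame timestamps into illumination windows; all frames falling in the same window correspond to that light. A minimal sketch (times in seconds; the function name is assumed):

```python
def frames_per_illumination(illuminations, frame_times):
    """Group frame timestamps by the illumination window (start, duration)
    they fall into; one-to-one or one-to-many, as described above."""
    groups = {i: [] for i in range(len(illuminations))}
    for t in frame_times:
        for i, (start, duration) in enumerate(illuminations):
            if start <= t < start + duration:
                groups[i].append(t)
                break
    return groups
```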
  • the hijacker can upload images or videos through the terminal device.
  • the uploaded image or video may contain target objects or specific body parts of other users, and/or other objects.
  • the uploaded image or video may be a historical image or video shot by the terminal or other terminals, or a synthesized image or video.
  • the terminal or other computing device (e.g., the processing device 110) may determine the plurality of target images based on the uploaded image or video.
  • the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded image or video according to the illumination sequence and/or illumination duration of each illumination in the illumination sequence.
  • the lighting sequence includes five lightings arranged in sequence, and the hijacker can upload five images through the terminal device.
  • the terminal or other computing device will determine an image corresponding to each of the five illuminations according to the sequence in which the five images are uploaded.
  • the irradiation time of each of the five lights in the lighting sequence is 0.5 seconds, and the hijacker can upload a video with a duration of 2.5 seconds through the terminal.
  • the terminal or other computing device can divide the uploaded video into five segments of 0-0.5 seconds, 0.5-1 seconds, 1-1.5 seconds, 1.5-2 seconds, and 2-2.5 seconds, and intercept one image from each segment.
  • the five images captured from the video correspond to the five illuminations in sequence.
  • the multiple images are fake images uploaded by the hijacker, not real images taken by the target object when illuminated by the multiple lights.
  • the uploading time of the image or the shooting time in the video may be regarded as the shooting time. It can be understood that when the terminal is hijacked, there is also a corresponding relationship between the irradiation time of multiple lights and the shooting time of multiple images.
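The video-splitting step in the example above (a 2.5-second upload divided into five 0.5-second segments, one frame intercepted per segment) can be sketched as follows. Taking the middle frame of each segment is an assumption; the specification does not say which frame is intercepted.

```python
def segment_frame_indices(duration_s, n_segments, fps):
    """Return one frame index per equal-length segment (the middle frame)."""
    segment = duration_s / n_segments
    return [int((i * segment + segment / 2) * fps) for i in range(n_segments)]
```

For a 2.5-second video at 30 fps split into five segments, this picks frame indices 7, 22, 37, 52, and 67.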
  • the multiple colors corresponding to the multiple lights in the lighting sequence include at least one reference color and at least one verification color.
  • each of the at least one verification color is determined based on at least a portion of the at least one reference color.
  • the multiple target images include at least one reference image and at least one verification image, each of the at least one reference image corresponds to one of the at least one reference color, and the at least one verification image is Each image corresponds to one of the at least one verification color.
  • the acquisition module may take the color of the light whose irradiation time in the lighting sequence corresponds to the image's shooting time as the color corresponding to the image. Specifically, if the irradiation time of a light corresponds to the shooting time of one or more images, the color of that light is used as the color corresponding to the one or more images.
  • the colors corresponding to the multiple images should be the same as the multiple colors of the multiple lights in the lighting sequence.
  • for example, the multiple colors of the multiple lights in the lighting sequence are "red, yellow, blue, green, purple, red"; the colors corresponding to the multiple images obtained by the terminal should then also be "red, yellow, blue, green, purple, red".
  • when the terminal is hijacked, the colors corresponding to the multiple images and the multiple colors of the multiple lights in the lighting sequence may be different.
  • Step 220: determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images.
  • step 220 may be performed by a verification module.
  • the authenticity of the multiple target images may reflect whether the multiple target images are images obtained by shooting the target object under the illumination of multiple colors of light. For example, when the terminal is not hijacked or attacked, its light-emitting element can emit light of multiple colors, and its image acquisition device can record or photograph the target object to obtain the target image. At this time, the target image has authenticity. For another example, when the terminal is hijacked or attacked, the target image is obtained based on the image or video uploaded by the attacker. At this time, the target image has no authenticity.
  • the authenticity of the target image can be used to determine whether the terminal's image capture device has been hijacked by an attacker. For example, if at least one target image in the multiple target images is not authentic, it means that the image acquisition device is hijacked. For another example, if more than a preset number of target images in the multiple target images are not authentic, it means that the image acquisition device is hijacked.
  • the verification module may determine, based on the at least one reference image and the verification image, a color of illumination when the verification image was captured.
  • the verification module may further determine the authenticity of the plurality of target images based on the lighting sequence and the color of the lighting when the at least one verification image was captured. Refer to FIG. 4 and related descriptions for specific descriptions of determining the color of the illumination when the verification image is captured, and determining the authenticity of the multiple target images based on the illumination sequence and the color of the illumination when the verification image is captured.
  • the verification module may further extract the verification color feature of the at least one verification image and the reference color feature of the at least one reference image. For each of the at least one verification image, the verification module may generate a target color feature of the verification color corresponding to the verification image based on the illumination sequence and the reference color feature. Based on the target color feature and the verification color feature of each of the at least one verification image, the verification module may determine the authenticity of the plurality of target images. For specific descriptions about generating target color features, and determining the authenticity of multiple target images based on the target color features and verifying color features, see FIG. 6 and related descriptions.
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the plurality of colors of lighting in the lighting sequence may include at least one reference color and at least one verification color.
  • the verification color is one of the colors that is directly used to verify the authenticity of the image.
  • the reference color is a color among the colors used to assist the verification color to determine the authenticity of the target image.
  • the target image corresponding to the reference color may also be referred to as the reference image.
  • the verification module may determine the authenticity of the plurality of target images based on the color of the illumination when the verification image was captured.
  • the illumination sequence e contains illumination of multiple reference colors "red light, green light, blue light" and illumination of multiple verification colors "yellow light, purple light...cyan light"; the illumination sequence f contains illumination of multiple reference colors "red light, white light...blue light" and illumination of multiple verification colors "red light...green light".
  • there may be multiple verification colors.
  • the plurality of verification colors may be identical.
  • the verification color can be red, red, red, red.
  • multiple verification colors can be completely different.
  • the verification color can be red, yellow, blue, green, purple.
  • the plurality of verification colors may be partially the same.
  • the verification color can be yellow, green, purple, yellow, red.
  • there are multiple reference colors and the multiple reference colors may be identical, completely different, or partially identical.
  • the verification color may contain only one color, such as green.
  • the at least one reference color and the at least one verification color may be determined according to default settings of the object recognition system 100, manually set by a user, or determined by a verification module.
  • the verification module can randomly choose the reference color and the verification color.
  • the verification module may randomly select a part of the colors from a plurality of colors as the at least one reference color, and the remaining colors as the at least one verification color.
  • the verification module may determine the at least one reference color and the at least one verification color based on a preset rule.
  • the preset rules may be rules regarding the relationship between verification colors, the relationship between reference colors, and/or the relationship between verification colors and reference colors, and the like.
  • for example, the preset rule may be that the verification color can be generated by fusing the reference colors, and so on.
  • each of the at least one verification color may be determined based on at least a portion of the at least one reference color.
  • the verification color may be obtained by fusion based on at least a part of the at least one reference color.
  • the at least one reference color may comprise the primary or base colors of a color space.
  • the at least one reference color may include the three primary colors of the RGB space, ie, "red, green, and blue".
  • multiple verification colors "yellow, purple...cyan” in the lighting sequence e can be determined based on three reference colors "red, green, blue”.
  • “yellow” can be obtained by fusing the reference colors “red, green, blue” based on the first ratio
  • “purple” can be obtained by fusing the reference colors "red, green, blue” based on the second ratio.
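The fusion of reference colors into verification colors can be illustrated as a weighted combination in RGB space. The weights below are illustrative choices, not the patent's actual first and second ratios:

```python
# Illustrative sketch: synthesize a verification color by fusing the three
# reference colors red, green, and blue with per-color weights.
def fuse(w_red, w_green, w_blue):
    """Weighted componentwise combination of the RGB base colors."""
    base = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue
    weights = (w_red, w_green, w_blue)
    return tuple(
        round(sum(w * color[i] for w, color in zip(weights, base)))
        for i in range(3)
    )

print(fuse(1.0, 1.0, 0.0))  # (255, 255, 0) -- yellow
print(fuse(1.0, 0.0, 1.0))  # (255, 0, 255) -- purple (magenta)
```

Different weight triples ("ratios" in the text above) yield different verification colors from the same three reference colors.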
  • one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the at least one reference color and the at least one verification color may be completely identical or partially identical.
  • a certain one of the at least one verification color may be the same as a certain one of the at least one reference color.
  • the verification color can also be determined based on at least one reference color; that is, a specific reference color can be used directly as a verification color. As shown in Figure 3, in the illumination sequence f, the multiple reference colors "red, white...blue" and the multiple verification colors "red...green" both contain red.
  • the at least one reference color and the at least one verification color may also have other relationships, which are not limited herein.
  • the at least one reference color and the at least one verification color are the same or different in color family.
  • for example, the at least one reference color may belong to a warm color family (e.g., red, yellow, etc.), while the at least one verification color belongs to a cool color family (e.g., gray, etc.).
  • the lighting corresponding to the at least one reference color may be arranged in front of or behind the lighting corresponding to the at least one verification color.
  • illuminations of multiple reference colors “red light, green light, blue light” are arranged in front of illuminations of multiple verification colors “yellow light, purple light...cyan light”.
  • illuminations of multiple reference colors “red light, white light...blue light” are arranged behind multiple verification colors “red light...green light”.
  • the illumination corresponding to the at least one reference color may also be arranged at intervals with the illumination corresponding to the at least one verification color, which is not limited herein.
  • FIG. 4 is an exemplary flowchart for determining the authenticity of multiple target images based on the illumination sequence and multiple target images according to some embodiments of the present specification.
  • the process 400 may be performed by a verification module. As shown in FIG. 4, the process 400 may include the following steps:
  • Step 410 for each of the at least one verification image, based on the at least one reference image and the verification image, determine the color of the illumination when the verification image is captured.
  • the verification module may determine the color of the illumination when the verification image was captured based on the verification color feature of the verification image and the reference color feature of the at least one reference image.
  • the reference color feature refers to the color feature of the reference image.
  • the verification color feature refers to the color feature of the verification image.
  • the color feature of an image refers to information related to the color of the image.
  • the color of the image includes the color of the light when the image is captured, the color of the subject in the image, the color of the background in the image, and the like.
  • the color features may include deep features and/or complex features extracted by a neural network.
  • Color features can be represented in a number of ways.
  • the color feature can be represented based on the color value of each pixel in the image in the color space.
  • a color space is a mathematical model that describes color using a set of numerical values, each of which can represent the color value of a color feature on each color channel of the color space.
  • a color space may be represented as a vector space, each dimension of the vector space representing a color channel of the color space. Color features can be represented by vectors in this vector space.
  • the color space may include, but is not limited to, RGB color space, Lαβ color space, LMS color space, HSV color space, YCrCb color space, HSL color space, and the like.
  • the RGB color space includes red channel R, green channel G, and blue channel B, and color features can be represented by the color values of each pixel in the image on the red channel R, green channel G, and blue channel B, respectively.
  • color features may be represented by other means (eg, color histograms, color moments, color sets, etc.).
  • the histogram statistics are performed on the color values of each pixel in the image in the color space to generate a histogram representing the color features.
  • a specific operation (e.g., mean, variance, etc.) is performed on the color value of each pixel of the image in the color space, and the result of the specific operation represents the color feature of the image.
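The "specific operation" idea corresponds to classic color moments. A minimal sketch, assuming the image is a numeric H x W x C array (function name and feature layout are illustrative):

```python
import numpy as np

# Sketch of color-moment features: per-channel mean and standard deviation
# over all pixels, concatenated into one feature vector.
def color_moments(image):
    """image: H x W x C array; returns [mean_1..mean_C, std_1..std_C]."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```

Higher-order moments (e.g., skewness) can be appended the same way; the point is simply that a small fixed-size vector summarizes the image's color distribution.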
  • the verification module may extract color features of the plurality of target images through a color feature extraction algorithm and/or a color verification model (or a portion thereof).
  • Color feature extraction algorithms include: color histogram, color moment, color set, etc.
  • the verification module can compute a histogram based on the color values of each pixel of the image in each color channel of the color space, so as to obtain the color histogram.
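The per-channel color histogram described above can be sketched as follows (the bin count and normalization are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: per-channel histogram of 8-bit color values, concatenated into one
# normalized feature vector.
def color_histogram(image, bins=16):
    """image: H x W x C uint8 array; returns a normalized histogram feature."""
    feats = []
    for ch in range(image.shape[-1]):
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())  # normalize so images of any size compare
    return np.concatenate(feats)
```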
  • the verification module can divide the image into multiple regions and determine the color set of the image using the set of binary indices established, for the multiple regions, from the color values of each pixel in each color channel of the color space.
  • reference color features of at least one reference image may be used to construct a reference color space.
  • the reference color space has the at least one reference color as its color channel.
  • the reference color feature corresponding to each reference image can be used as the reference value of the corresponding color channel in the reference color space.
  • the color space (also referred to as the original color space) corresponding to the multiple target images may be the same as or different from the reference color space.
  • the multiple target images may correspond to the RGB color space, and the at least one reference color is red, blue and green, then the original color space corresponding to the multiple target images and the reference color space constructed based on the reference colors belong to the same color space.
  • two color spaces can be considered the same color space if their primary or base colors are the same.
  • the verification color can be obtained by fusing one or more reference colors. Therefore, the verification module may determine the color corresponding to the verification color feature based on the reference color feature and/or the reference color space constructed from the reference color feature. In some embodiments, the verification module may map the verification color feature of the verification image into the reference color space to determine the color of the illumination when the verification image was captured. For example, the verification module can determine the parameters of the verification color feature on each color channel based on the relationship between the verification color feature and the reference value of each color channel in the reference color space, and then determine the color corresponding to the verification color feature based on these parameters, that is, the color of the illumination when the verification image was captured.
  • for example, the verification module can use the reference color features extracted from the reference images a, b, and c as the reference values of color channel I, color channel II, and color channel III, respectively, where color channel I, color channel II, and color channel III are the three color channels of the reference color space.
  • the verification module can then extract the verification color feature from the verification image d and, based on the relationship between the verification color feature and the reference values of color channel I, color channel II, and color channel III, determine the parameters ω1, ω2, and ω3 of the verification color feature on color channel I, color channel II, and color channel III, respectively.
  • the verification module may determine the color corresponding to the verification color feature based on the parameters ω1, ω2, and ω3, that is, the color of the illumination when the verification image was captured.
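One concrete way to obtain parameters like ω1, ω2, and ω3 is to treat the reference color features as basis vectors and solve a least-squares problem. This is a hedged sketch of one possible realization, not the patent's stated method:

```python
import numpy as np

# Sketch: express the verification image's color feature as a combination of
# the three reference color features (the reference color space's channels),
# solving for the per-channel parameters by least squares.
def channel_parameters(ref_features, ver_feature):
    """ref_features: list of three 1-D feature vectors (channels I, II, III).
    ver_feature: 1-D feature vector of the verification image.
    Returns the parameters (omega_1, omega_2, omega_3)."""
    basis = np.stack(ref_features, axis=1)  # shape (D, 3): one column per channel
    params, *_ = np.linalg.lstsq(basis, ver_feature, rcond=None)
    return params
```

The resulting parameters can then be matched against preset parameter-to-color correspondences, or fed to a learned classifier, to name the illumination color.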
  • the corresponding relationship between parameters and color categories may be preset, or may be learned through a model.
  • the reference color space may have the same color channels as the original color space.
  • for example, the original color space may be an RGB color space
  • the at least one reference color may be red, green, and blue.
  • the verification module can construct a new RGB color space (that is, the reference color space) based on the reference color features of the three reference images corresponding to red, green, and blue, and determine the RGB values of the verification color feature of each verification image in the new RGB color space, so as to determine the color of the illumination when the verification image was captured.
  • the verification module may process the reference color feature and the verification color feature based on the color classification layer in the color verification model to determine the color of the illumination when the verification image was captured. For details, please refer to FIG. 5 and its related descriptions, which will not be repeated here.
  • Step 420 Determine the authenticity of the multiple target images based on the lighting sequence and the color of the lighting when the at least one verification image was captured.
  • the verification module may determine a verification color corresponding to the verification image based on the lighting sequence. Further, the verification module may determine the authenticity of the verification image based on the verification color corresponding to the verification image. For example, the verification module determines the authenticity of the verification image based on the first judgment result of whether the verification color corresponding to the verification image is consistent with the color of the illumination when the image was captured. If the verification color corresponding to the verification image is the same as the color of the light when it was photographed, it means that the verification image is authentic. For another example, the verification module determines the authenticity of the verification images based on whether the relationship between the verification colors corresponding to the multiple verification images (eg, whether they are the same) is consistent with the relationship between the colors of the illumination when the multiple verification images were captured.
  • the verification module may determine whether the terminal's image capture device has been hijacked based on the authenticity of the at least one verification image. For example, if the number of authentic verification images exceeds the first threshold, it means that the image acquisition device of the terminal is not hijacked. For another example, if the number of verification images that do not have authenticity exceeds the second threshold (for example, 1), it means that the image acquisition device of the terminal is hijacked.
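The threshold logic above can be sketched as follows. The threshold values and the exact comparison direction are illustrative assumptions (the text only says the counts "exceed" the thresholds):

```python
# Sketch: decide whether the terminal's image capture device is hijacked by
# counting authentic / inauthentic verification images against two thresholds.
def is_hijacked(authentic_flags, first_threshold=3, second_threshold=1):
    """authentic_flags: list of bools, one per verification image."""
    n_fake = sum(1 for ok in authentic_flags if not ok)
    if n_fake >= second_threshold:       # too many failed verifications: hijacked
        return True
    n_real = sum(authentic_flags)
    return n_real < first_threshold      # too few authentic images: hijacked

print(is_hijacked([True, True, True, True]))   # False
print(is_hijacked([True, False, True, True]))  # True
```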
  • the preset thresholds (eg, the first threshold, the second threshold) set for the image authenticity determination in some embodiments of this specification may be related to the degree of shooting stability.
  • the shooting stability degree is the stability degree when the image acquisition device of the terminal acquires the target image.
  • the preset threshold is positively related to the degree of shooting stability. It can be understood that the higher the shooting stability, the higher the quality of the acquired target image, and the more the color features extracted based on multiple target images can truly reflect the color of the illumination when shooting, and the larger the preset threshold is.
  • the shooting stability may be measured based on a motion parameter of the terminal detected by a motion sensor of the terminal (eg, a vehicle-mounted terminal or a user terminal, etc.).
  • the motion sensor may be a sensor that detects the driving situation of the vehicle, and the vehicle may be the vehicle used by the target user.
  • the target user refers to the user to which the target object belongs.
  • the motion sensor may be a motion sensor on the driver's end or the in-vehicle terminal.
  • the preset threshold may also be related to the shooting distance and the rotation angle.
  • the shooting distance is the distance between the target object and the terminal when the image acquisition device acquires the target image.
  • the rotation angle is the angle between the front of the target object and the terminal screen when the image acquisition device acquires the target image.
  • both the shooting distance and the rotation angle are negatively correlated with the preset threshold. It can be understood that the shorter the shooting distance, the higher the quality of the acquired target image, and the more the color features extracted based on multiple target images can truly reflect the color of the illumination when shooting, and the larger the preset threshold is. The smaller the rotation angle, the higher the quality of the acquired target image, and similarly, the larger the preset threshold.
  • the shooting distance and rotation angle may be determined based on the target image through image recognition techniques.
  • the verification module may perform specific operations (e.g., average, standard deviation, etc.) on the shooting stability, shooting distance, and rotation angle of each target image, and determine the preset threshold based on the results of the specific operations.
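A minimal sketch of this fusion, assuming stability, distance, and angle have each been normalized to [0, 1]. The base value and weights are hypothetical; the patent only states the direction of each correlation (positive for stability, negative for distance and angle):

```python
# Sketch: fuse per-image shooting stability, shooting distance, and rotation
# angle (here by averaging), then derive a preset threshold that rises with
# stability and falls with distance and angle.
def preset_threshold(stabilities, distances, angles,
                     base=0.5, w_s=0.4, w_d=0.2, w_a=0.2):
    mean = lambda xs: sum(xs) / len(xs)
    s, d, a = mean(stabilities), mean(distances), mean(angles)
    # clamp to [0, 1] so the threshold remains a usable proportion
    return max(0.0, min(1.0, base + w_s * s - w_d * d - w_a * a))
```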
  • obtaining the stability degree of the terminal when the multiple target images are acquired by the verification module includes: acquiring the sub-stability degree of the terminal when each of the multiple target images is captured, and fusing the multiple sub-stability degrees to determine the stability degree.
  • acquiring the shooting distance between the target object and the terminal when the multiple target images are shot by the verification module includes: acquiring a sub-shooting distance between the target object and the terminal when each of the multiple target images is shot, and fusing the plurality of sub-shooting distances to determine the shooting distance.
  • obtaining the rotation angle of the target object relative to the terminal when the multiple target images are captured by the verification module includes: acquiring the sub-rotation angle of the target object relative to the terminal when each of the multiple target images is captured, and fusing the multiple sub-rotation angles to determine the rotation angle.
  • using the reference image as a baseline also makes the determination of the authenticity of the target image more accurate. For example, when the lighting in the lighting sequence is weaker than the ambient light, the light cast on the target object may be difficult to detect. Alternatively, when the ambient light is colored, the light cast on the target object may be disturbed. When the terminal is not hijacked, the reference image and the verification image are captured under the same (or substantially the same) ambient light.
  • the reference color space constructed based on the reference image incorporates the influence of ambient light, therefore, compared with the original color space, the color of the illumination when the verification image was captured can be more accurately identified. Furthermore, the methods disclosed herein can avoid interference of the light emitting elements of the terminal. When the terminal is not hijacked, both the reference image and the verification image are shot under the illumination of the same light-emitting element. Using the reference color space can eliminate or weaken the influence of the light-emitting element, and improve the accuracy of identifying the light color.
  • FIG. 5 is a schematic diagram of a color verification model according to some embodiments of the present specification.
  • the verification module may process the at least one reference image and the verification image based on a color verification model, and determine the color of the lighting when the verification image was captured.
  • the color verification model may include a reference color feature extraction layer, a verification color feature extraction layer, and a color classification layer. As shown in FIG. 5, the color verification model may include a reference color feature extraction layer 530, a verification color feature extraction layer 540, and a color classification layer 570. The color verification model may be used to implement step 410. Further, the verification module determines the authenticity of the verification image based on the color of the lighting when the verification image was captured and the lighting sequence.
  • the color feature extraction layers can extract the color features of the target image.
  • the type of the color feature extraction layer may include a convolutional neural network model such as ResNet, DenseNet, MobileNet, ShuffleNet or EfficientNet, or a recurrent neural network model such as a long short-term memory recurrent neural network.
  • the types of reference color feature extraction layer 530 and verification color feature extraction layer 540 may be the same or different.
  • the reference color feature extraction layer 530 extracts reference color features 550 of at least one reference image 510 .
  • the at least one reference image 510 may include multiple reference images.
  • the reference color feature 550 may be a fusion of color features of the plurality of reference images 510 .
  • the plurality of reference images 510 may be spliced and then input into the reference color feature extraction layer 530, and the reference color feature extraction layer 530 may output the reference color feature 550.
  • the reference color feature 550 is a feature vector formed by splicing color feature vectors of the reference images 510-1, 510-2, and 510-3.
  • the verification color feature extraction layer 540 extracts the verification color features 560 of the at least one verification image 520 .
  • the verification module may perform a color judgment on each of the at least one verification image 520, respectively. For example, as shown in FIG. 5 , the verification module may input at least one reference image 510 into the reference color feature extraction layer 530 , and input the verification image 520 - 2 into the verification color feature extraction layer 540 .
  • the verification color feature extraction layer 540 may output the verification color feature 560 of the verification image 520-2.
  • the color classification layer 570 may determine the color of the lighting when the verification image 520-2 was captured based on the reference color feature 550 and the verification color feature 560 of the verification image 520-2.
  • the verification module may perform color judgment on multiple verification images 520 at the same time.
  • the verification module may input at least one reference image 510 into the reference color feature extraction layer 530, and input multiple verification images 520 (including the verification images 520-1, 520-2...520-n) into the verification color feature extraction layer 540.
  • the verification color feature extraction layer 540 can output the verification color features 560 of multiple verification images 520 at the same time.
  • the color classification layer 570 can simultaneously determine the color of the illumination when each of the multiple verification images is photographed.
  • the color classification layer 570 may determine the color of the illumination when the verification image was captured based on the reference color feature and the verification color feature of the verification image. For example, the color classification layer 570 may determine a value or probability based on the reference color feature and the verification color feature of the verification image, and then determine the color of the light when the verification image is captured based on the value or probability. The numerical value or probability corresponding to the verification image may reflect the possibility that the color of the light when the verification image is photographed belongs to each color.
  • color classification layers may include, but are not limited to, fully connected layers, deep neural networks, and the like.
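As a minimal stand-in for the color classification layer, a single fully connected layer over the concatenated reference and verification features, followed by softmax, yields one probability per candidate color. The dimensions and random weights below are illustrative only; in the described system the weights would be learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a color classification layer: linear map over the concatenated
# [reference | verification] features, then softmax over color classes.
def classify_color(ref_feature, ver_feature, weights, bias):
    x = np.concatenate([ref_feature, ver_feature])
    logits = weights @ x + bias
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    return probs                          # one probability per color class

n_colors, dim = 5, 8
W = rng.normal(size=(n_colors, 2 * dim))  # hypothetical learned weights
b = np.zeros(n_colors)
probs = classify_color(rng.normal(size=dim), rng.normal(size=dim), W, b)
print(probs.argmax())  # index of the most likely illumination color
```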
  • the color verification model is a machine learning model with preset parameters.
  • the preset parameters of the color verification model can be determined during the training process.
  • the training module may train an initial color verification model based on a plurality of training samples to determine the preset parameters of the color verification model.
  • Each of the plurality of training samples includes at least one sample reference image, at least one sample verification image, and a sample label, where the sample label represents a color of illumination when each of the at least one sample verification image is captured.
  • the at least one reference color is the same as the color of the illumination when the at least one sample reference image is photographed. For example, if the at least one reference color includes red, green, and blue, the at least one sample reference image includes three target images captured of the sample target subject under illumination of red light, green light, and blue light.
  • the verification module may input a plurality of training samples into the initial color verification model, and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer through training until the updated color verification model satisfies the preset conditions.
  • the updated color verification model may be designated as the color verification model with preset parameters; in other words, the updated model may be designated as the trained color verification model.
  • the preset condition may be that the loss function of the updated color verification model is smaller than a threshold, converges, or that the number of training iterations reaches a threshold.
  • the verification module can train the initial verification color feature extraction layer, the initial reference color feature extraction layer and the initial color classification layer in the initial color verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • for example, at least one sample reference image can be input into the initial reference color feature extraction layer, and at least one sample verification image can be input into the initial verification color feature extraction layer; a loss function is established based on the output results of the initial color classification layer and the sample label, and the parameters of each initial layer in the initial color verification model are updated simultaneously based on the loss function.
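The loss for such end-to-end training is typically cross-entropy between the color classification layer's predicted probabilities and the sample labels. A sketch, assuming labels are integer color indices (the patent does not specify the loss form):

```python
import numpy as np

# Sketch: cross-entropy between predicted color probabilities and the sample
# labels (the illumination color of each sample verification image).
def cross_entropy(pred_probs, label_indices):
    """pred_probs: (N, C) probabilities; label_indices: (N,) true color ids."""
    eps = 1e-12  # avoid log(0)
    picked = pred_probs[np.arange(len(label_indices)), label_indices]
    return float(-np.mean(np.log(picked + eps)))
```

Because all layers are updated from this single loss, the feature extraction layers and the classification layer are trained as one whole, matching the end-to-end scheme described above.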
  • the color verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may directly call the color verification model from the storage device.
  • the authenticity of the verification image is determined by the color verification model, which can improve the efficiency of the authenticity verification of the target image.
  • using the color verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of the performance difference of the terminal equipment, and further determine the authenticity of the target image.
  • the performance differences of terminals are learned by the initial color verification model during training, so that the trained color verification model can take terminal performance differences into account when judging the color, and determine the color of the target image more accurately. Moreover, when the terminal is not hijacked, both the reference image and the verification image are captured under the same ambient light. In some embodiments, when a reference color space is established based on the reference color features extracted by the color verification model from the reference images, and the authenticity of the multiple target images is determined based on that reference color space, the influence of external ambient light can be eliminated or reduced.
  • FIG. 6 is another exemplary flowchart for determining the authenticity of multiple target images based on a lighting sequence and multiple target images according to some embodiments of the present specification.
  • flowchart 600 may be performed by a verification module. As shown in Figure 6, the process 600 includes the following steps:
  • Step 610 Extract the verification color feature of the at least one verification image and the reference color feature of the at least one reference image.
  • for the specific description of extracting the verification color feature and the reference color feature, please refer to step 410 and its related description.
  • Step 620 For each of the at least one verification image, based on the illumination sequence and the reference color feature, generate a target color feature of the verification color corresponding to the verification image.
  • the target color feature refers to the feature represented by the verification color corresponding to the verification image in the reference color space.
  • in some embodiments, the verification module may determine the verification color corresponding to the verification image based on the illumination sequence, and generate the target color feature of the verification image based on that verification color and the reference color feature. For example, the verification module may fuse the color feature of the verification color with the reference color feature to obtain the target color feature.
  • Step 630: Determine the authenticity of the multiple target images based on the target color feature and the verification color feature of each of the at least one verification image.
  • in some embodiments, the verification module may determine the authenticity of each verification image based on the similarity between its corresponding target color feature and its verification color feature.
  • the similarity between the target color feature and the verification color feature can be computed as a vector similarity, for example using the Euclidean distance, the Manhattan distance, and the like.
  • if the similarity between the target color feature and the verification color feature is greater than a third threshold, the verification image is authentic; otherwise it is not.
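The similarity test described above can be sketched as follows. This is a minimal illustration: the feature vectors, the distance-to-similarity mapping, and the threshold value are all assumptions, since the specification fixes neither the feature dimension nor the value of the third threshold.

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_authentic(target_feat, verif_feat, threshold=0.9):
    # Map the Euclidean distance to a similarity in (0, 1]:
    # identical vectors give similarity 1.0.
    similarity = 1.0 / (1.0 + euclidean(target_feat, verif_feat))
    # The image is treated as authentic when the similarity exceeds
    # the (hypothetical) third threshold.
    return similarity > threshold

# A genuine pair: the features nearly coincide.
print(is_authentic([0.2, 0.5, 0.3], [0.2, 0.5, 0.3]))   # True
# A spoofed pair: the features diverge and the similarity drops.
print(is_authentic([0.2, 0.5, 0.3], [0.9, 0.1, 0.0]))   # False
```

A Manhattan distance (sum of absolute differences) could be substituted for `euclidean` without changing the structure of the check.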


Abstract

本说明书实施例公开了一种目标识别方法和系统。该目标识别方法包括:获取多幅目标图像,多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,多个光照有多个颜色,多个颜色包括至少一个基准颜色和至少一个验证颜色,至少一个验证颜色中的每一个基于至少一个基准颜色中的至少一部分颜色确定;以及基于光照序列和多幅目标图像,确定多幅目标图像的真实性。

Description

一种目标识别的方法和系统
优先权声明
本申请要求2021年04月20日提交的中国专利申请号202110423614.0的优先权,其内容全部通过引用并入本文。
技术领域
本说明书涉及图像处理技术领域,特别涉及一种目标识别方法和系统。
背景技术
目标识别是基于图像采集设备获取的目标进行生物识别的技术,例如,以人脸为目标的人脸识别技术,被广泛应用于权限验证、身份验证等应用场景。为了保证目标识别的安全性,需要确定目标图像的真实性。
因此,希望提供一种目标识别的方法和系统,可以确定目标图像的真实性。
发明内容
本说明书实施例之一提供一种目标识别方法,所述方法包括:获取多幅目标图像,所述多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,所述多个光照有多个颜色,所述多个颜色包括至少一个基准颜色和至少一个验证颜色,所述至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定;以及基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性。
本说明书实施例之一提供一种目标识别系统,其特征在于,包括:获取模块,用于获取多幅目标图像,所述多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,所述多个光照有多个颜色,所述多个颜色包括至少一个基准颜色和至少一个验证颜色,所述至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定;以及验证模块,用于基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性。
本说明书实施例之一提供一种目标识别装置,包括处理器,所述处理器用于执行本说明书披露的目标识别方法。
本说明书实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行本说明书披露的目标识别方法。
附图说明
本说明书将以示例性实施例的方式进一步说明，这些示例性实施例将通过附图进行详细描述。这些实施例并非限制性的，在这些实施例中，相同的编号表示相同的结构，其中：
图1是根据本说明书一些实施例所示的目标识别系统的应用场景示意图;
图2是根据本说明书一些实施例所示的目标识别方法的示例性流程图;
图3是根据本说明书一些实施例所示的光照序列的示意图;
图4是根据本说明书一些实施例所示的基于光照序列和多幅目标图像,确定多幅目标图像的真实性的示例性流程图;
图5是根据本说明书一些实施例所示的颜色验证模型的结构示意图;
图6是根据本说明书一些实施例所示的基于光照序列和多幅目标图像,确定多幅目标图像的真实性的另一示例性流程图。
具体实施方式
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本文使用的“系统”、“装置”、“单元”和/或“模块”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
目标识别是基于图像采集设备获取的目标对象进行生物识别的技术。在一些实施例中，目标对象可以是人脸、指纹、掌纹和瞳孔等。在一些实施例中，目标识别可以应用于权限验证。例如，门禁权限认证和账户支付权限认证等。在一些实施例中，目标识别还可以用于身份验证。例如，员工考勤认证和本人注册身份安全认证。仅作为示例，目标识别可以基于图像采集设备实时采集到的目标图像和预先获取的生物特征进行匹配，从而验证目标身份。
然而,图像采集设备可能被攻击或劫持,攻击者可以上传虚假的目标图像通过身份验证。例如,攻击者A可以在攻击或劫持图像采集设备后,直接上传用户B的人脸图像。目标识别系统基于用户B的人脸图像和预先获取的用户B的人脸生物特征进行人脸识别,从而通过用户B的身份验证。
因此,为了保证目标识别的安全性,需要确定目标图像的真实性,即确定目标图像是图像采集设备在目标识别过程中实时采集到的。
图1是根据本说明书一些实施例所示的目标识别系统的应用场景示意图。
如图1所示,目标识别系统100可以包括处理设备110、网络120、终端130和存储设备140。
处理设备110可以用于处理来自目标识别系统100的至少一个组件和/或外部数据源(例如,云数据中心)的数据和/或信息。例如,处理设备110可以获取多幅目标图像,以及确定多幅目标图像的真实性等。在处理过程中,处理设备110可以直接或通过网络120从目标识别系统100的其他组件(如存储设备140和/或终端130)获取数据(如指令)和/或将处理后的数据发送给所述其他组件进行存储或显示。
在一些实施例中,处理设备110可以是单一服务器或服务器组。该服务器组可以是集中式或分布式的(例如,处理设备110可以是分布式系统)。在一些实施例中,处理设备110可以是本地的或者远程的。在一些实施例中,处理设备110可以在云平台上实施,或者以虚拟方式提供。仅作为示例,云平台可以包括私有云、公共云、混合云、社区云、分布云、内部云、多层云等或其任意组合。
网络120可以连接系统的各组成部分和/或连接系统与外部部分。网络120使得目标识别系统100中各组成部分之间、目标识别系统100与外部部分之间可以进行通讯,促进数据和/或信息的交换。在一些实施例中,网络120可以是有线网络或无线网络中的任意一种或多种。例如,网络120可以包括电缆网络、光纤网络、电信网络、互联网、局域网络(LAN)、广域网络(WAN)、无线局域网络(WLAN)、城域网(MAN)、公共交换电话网络(PSTN)、蓝牙网络、紫蜂网络(ZigBee)、近场通信(NFC)、设备内总线、设备内线路、线缆连接等或其任意组合。在一些实施例中,目标识别系统100中各部分之间的网络连接可以采用上述一种方式,也可以采取多种方式。在一些实施例中,网络120可以是点对点的、共享的、中心式的等各种拓扑结构或者多种拓扑结构的组合。在一些实施例中,网络120可以包括一个或以上网络接入点。例如,网络120可以包括有线或无线网络接入点,例如基站和/或网络交 换点120-1、120-2、…,通过这些网络接入点,目标识别系统100的一个或多个组件可连接到网络120以交换数据和/或信息。
终端130指用户所使用的一个或多个终端设备或软件。在一些实施例中，终端130可以包括图像采集设备131（例如，摄像头、照相机），图像采集设备131可以拍摄目标对象，获取多幅目标图像。在一些实施例中，图像采集设备131在拍摄目标对象时，终端130（例如，终端130的屏幕和/或其他灯光发射元件）可以依次发射光照序列中的多个颜色的光照射目标对象。在一些实施例中，终端130可以通过网络120与处理设备110通信，并将拍摄的多幅目标图像发送到处理设备110。在一些实施例中，终端130可以是移动设备130-1、平板计算机130-2、膝上型计算机130-3、其他具有输入和/或输出功能的设备等或其任意组合。上述示例仅用于说明所述终端130的类型的广泛性而非对其范围的限制。
存储设备140可以用于存储数据(如光照序列、多幅目标图像等)和/或指令。存储设备140可以包括一个或多个存储组件,每个存储组件可以是一个独立的设备,也可以是其他设备的一部分。在一些实施例中,存储设备140可包括随机存取存储器(RAM)、只读存储器(ROM)、大容量存储器、可移动存储器、易失性读写存储器等或其任意组合。示例性地,大容量储存器可以包括磁盘、光盘、固态磁盘等。在一些实施例中,存储设备140可在云平台上实现。仅作为示例,云平台可以包括私有云、公共云、混合云、社区云、分布云、内部云、多层云等或其任意组合。在一些实施例中,存储设备140可以集成或包括在目标识别系统100的一个或多个其他组件(例如,处理设备110、终端130或其他可能的组件)中。
在一些实施例中,所述目标识别系统100可以包括获取模块、验证模块和训练模块。
获取模块可以用于获取多幅目标图像,所述多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,所述多个光照有多个颜色,所述多个颜色包括至少一个基准颜色和至少一个验证颜色,所述至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定。
验证模块可以用于基于所述光照序列和所述多幅目标图像，确定所述多幅目标图像的真实性。在一些实施例中，所述多幅目标图像包括至少一幅验证图像和至少一幅基准图像，所述至少一幅验证图像中的每一幅与所述至少一个验证颜色中的一个对应，所述至少一幅基准图像中的每一幅与所述至少一个基准颜色中的一个对应，对于所述至少一幅验证图像中的每一幅，验证模块可以基于所述至少一幅基准图像和所述验证图像，确定所述验证图像被拍摄时光照的颜色；以及基于所述光照序列和所述至少一幅验证图像被拍摄时光照的颜色，确定所述多幅目标图像的真实性。
在一些实施例中,验证模块还可以提取所述至少一幅验证图像的验证颜色特征和所述至少一幅基准图像的基准颜色特征;对所述至少一幅验证图像中的每一幅,基于所述光照序列和所述基准颜色特征,生成所述验证图像对应的验证颜色的目标颜色特征;以及基于所述至少一幅验证图像中每一幅的所述目标颜色特征和所述验证颜色特征,确定所述多幅目标图像的真实性。
在一些实施例中,验证模块可以基于颜色验证模型处理所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色。在一些实施例中,颜色验证模型为预置参数的机器学习模型。预置参数是指机器学习模型训练过程中,学习到的模型参数。以神经网络为例,模型参数包括权重(Weight)和偏置(bias)等。在一些实施例中,所述颜色验证模型包括基准颜色特征提取层、验证颜色特征提取层和颜色分类层。所述基准颜色特征提取层对所述至少一幅基准图像进行处理,确定所述至少一幅基准图像的基准颜色特征。所述验证颜色特征提取层对所述验证图像进行处理,确定所述验证图像的验证颜色特征。所述颜色分类层对所述至少一幅基准图像的基准颜色特征和所述验证图像的验证颜色特征进行处理,确定所述验证图像被拍摄时光照的颜色。
在一些实施例中,颜色验证模型的所述预置参数通过端到端训练方式获得。训练模块可以用于获取多个训练样本,所述多个训练样本中的每一个包括至少一幅样本基准图像、至少一幅样本验证图像和样本标签,所述样本标签表示所述至少一幅样本验证图像中每一幅被拍摄时光照的颜色,所述至少一个基准颜色与所述至少一幅样本基准图像被拍摄时光照的颜色相同。训练模块可以进一步基于所述多个训练样本训练初始颜色验证模型,确定所述颜色验证模型的所述预置参数。在一些实施例中,训练模块可以省略。
关于获取模块、验证模块和训练模块的更多详细描述可以参见图2-图6,在此不再赘述。
需要注意的是,以上对于目标识别系统及其模块的描述,仅为描述方便,并不能把本说明书限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。在一些实施例中,图1中披露的获取模块、验证模块和训练模块可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。诸如此类的变形,均在本说明书的保护范围之内。
图2是根据本说明书一些实施例所示的目标识别方法的示例性流程图。如图2所示，流程200包括下述步骤：
步骤210,获取多幅目标图像。所述多幅目标图像的拍摄时间与所述终端照射到目标对象的光照序列中多个光照的照射时间具有对应关系。
在一些实施例中,步骤210可以由获取模块执行。
所述目标对象指需要进行目标识别的对象。例如,目标对象可以是用户的特定身体部位,如面部、指纹、掌纹或瞳孔等。在一些实施例中,所述目标对象指需要进行身份验证和/或权限认证的用户的面部。例如,在网约车应用场景中,平台需要验证接单司机是否为平台审核过的注册司机用户,则所述目标对象是司机的面部。又例如,在人脸支付应用场景中,支付系统需要验证支付人员的支付权限,则所述目标对象是支付人员的面部。
为对所述目标对象进行目标识别,所述终端会被指示发射所述光照序列。所述光照序列包括多个光照,用于照射所述目标对象。所述光照序列中不同光照的颜色可以相同,也可以不同。在一些实施例中,所述多个光照包含至少两个颜色不同的光照,即所述多个光照有多个颜色。
在一些实施例中,所述多个颜色包括至少一个基准颜色和至少一个验证颜色。验证颜色是多个颜色中直接用于验证图像真实性的颜色。基准颜色是多个颜色中用于辅助验证确定目标图像真实性的颜色。在一些实施例中,至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定。关于基准颜色和验证颜色的更多细节可以参见图3及其相关描述,此处不再赘述。
所述光照序列中包含多个光照中每个光照的信息,例如,颜色信息、照射时间等。所述光照序列中多个光照的颜色信息可以采用相同或不同的方式表示。例如,所述多个光照的颜色信息可以用颜色类别来表示。示例的,所述光照序列中多个光照的颜色可以表示为红、黄、绿、紫、青、蓝、红。又例如,所述多个光照的颜色信息可以用颜色参数来表示。例如,所述光照序列中多个光照的颜色可以表示为RGB(255,0,0)、RGB(255,255,0)、RGB(0,255,0)、RGB(255,0,255)、RGB(0,255,255)、RGB(0,0,255)。在一些实施例中,光照序列也可以被称为颜色序列,其包含所述多个光照的颜色信息。
光照序列中多个光照的照射时间可以包括每个光照计划照射目标对象上的开始时间、结束时间、持续时长等或其任意组合。例如,红光照射目标对象的开始时间为14:00、绿光照射目标对象的开始时间为14:02。又例如,红光和绿光照射目标对象的持续时长均为0.1秒。在一些实施例中,不同光照照射目标对象的持续时长可以相同,也可以不同。照射时间可以通过其他方式表示,在此不再赘述。
在一些实施例中,终端可以按照特定顺序依次发射多个光照。在一些实施例中,终端可以通过发光元件发射光照。发光元件可以包括终端内置的发光元件,例如,屏幕、LED灯等。发光元件也可以包括外接的发光元件。例如,外接LED灯、发光二极管等。在一些实施例中,当所述终端被劫持或攻击时,所述终端可能会接受发射光照的指示,但实际并不会发出光照。关于光照序列的更多细节可以参见图3及其相关描述,此处不再赘述。
在一些实施例中,终端或处理设备(例如,获取模块)可以随机生成或者基于预设规则生成光照序列。例如,终端或处理设备可以从颜色库中随机抽取多个颜色生成光照序列。在一些实施例中,光照序列可以由用户在终端设定、根据目标识别系统100的默认设置确定、或由处理设备通过数据分析确定等。在一些实施例中,终端或者存储设备可以存储所述光照序列。相应的,获取模块可以通过网络从终端或者存储设备中获取光照序列。
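The random generation of an illumination sequence described above can be sketched as follows. This is a hedged illustration only: the reference color set (the RGB primaries, as suggested later in the specification), the blend ratios, the per-flash duration, and the sequence layout are all assumptions rather than values fixed by the specification.

```python
import random

# Assumed reference colors: the RGB primaries.
REFERENCE_COLORS = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
}

def blend(ratios):
    # A verification color as a per-channel weighted blend of the
    # reference colors; `ratios` maps color name -> weight in [0, 1].
    r = min(255, round(sum(REFERENCE_COLORS[n][0] * w for n, w in ratios.items())))
    g = min(255, round(sum(REFERENCE_COLORS[n][1] * w for n, w in ratios.items())))
    b = min(255, round(sum(REFERENCE_COLORS[n][2] * w for n, w in ratios.items())))
    return (r, g, b)

def make_sequence(n_verification=3, duration=0.1, seed=None):
    # Reference flashes first, then verification flashes derived from them.
    rng = random.Random(seed)
    seq = [{"color": c, "duration": duration} for c in REFERENCE_COLORS.values()]
    for _ in range(n_verification):
        ratios = {name: rng.random() for name in REFERENCE_COLORS}
        seq.append({"color": blend(ratios), "duration": duration})
    return seq
```

With ratios of 1.0 for red and green, `blend` yields yellow, `(255, 255, 0)`, matching the example in which yellow is fused from the reference colors.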
多幅目标图像是用于目标识别的图像。所述多幅目标图像的格式可以包括Joint Photographic Experts Group(JPEG)、Tagged Image File Format(TIFF)、Graphics Interchange Format(GIF)、Kodak Flash PiX(FPX)、Digital Imaging and Communications in Medicine(DICOM)等。所述多幅目标图像可以是二维(2D,two-dimensional)图像或三维(3D,three-dimensional)图像。
在一些实施例中,获取模块可以获取所述多幅目标图像。例如,获取模块可以通过网络发送获取指令至终端,然后通过网络接收终端发送的多幅目标图像。或者,终端可以将所述多幅目标图像发送至存储设备中进行存储,所述获取模块可以从所述存储设备中获取所述多幅目标图像。所述目标图像中可能不包含或包含目标。
所述目标图像可以是由终端的图像采集设备拍摄,也可以是基于用户上传的数据(例如,视频或图像)确定。例如,在目标对象验证的过程中,目标识别系统100会给终端下发光照序列。当终端未被劫持或攻击时,终端可以根据所述光照序列依次发射所述多个光照。当终端发出多个光照中某一个时,其图像采集设备可以被指示在该光照的照射时间内采集一幅或多幅图像。或者,终端的图像采集设备可以被指示在所述多个光照的整个照射期间拍摄视频。终端或其他计算设备(例如处理设备110)可以根据各光照的照射时间从视频中截取各光照的照射时间内采集的一幅或多幅图像。所述终端在各个光照的照射时间内采集的一幅或多幅图像可以作为所述多幅目标图像。此时,所述多幅目标图像为所述目标对象在被所述多个光照照射时拍摄的真实图像。可以理解,所述多个光照的照射时间与所述多幅目标图像的拍摄时间之间存在对应关系。若在单个光照的照射时间内采集一幅图像,则该对应关系是一对一;若在单个光照的照射时间内采集多幅图像,则该对应关系是一对多。
当所述终端被劫持时,劫持者可以通过终端设备上传图像或视频。所述上传的图像或视频可以包含目标对象或者其他用户的特定身体部位,和/或其他物体。所述上传的图像或视频可以是由所述终端或者其他终端拍摄的历史图像或视频,或者是合成的图像或视频。所述终端或其他计算设备(例如处理设备110)可以基于所述上传的图像或视频确定所述多幅目标图像。例如,被劫持的终端可以根据所述光照序列中每个光照的照射顺序和/或照射时长,从所述上传的图像或视频中抽取所述每个光照对应的一幅或多幅图像。仅作为示例,所述光照序列中包含依次排列的五个光照,劫持者可以通过终端设备上传五幅图像。终端或其他计算设备会根据所述五幅图像被上传的先后顺序确定五个光照中每个光照对应的图像。又例如,所述光照序列中五个光照的照射时间分别为0.5秒,劫持者可以通过终端上传时长2.5秒的视频。终端或其他计算设备可以将所述被上传的视频分为0-0.5秒、0.5-1秒、1-1.5秒、1.5-2秒和2-2.5秒五段视频,并在每段视频中截取一幅图像。从视频中截取的五幅图像与所述五个光照依次对应。此时,所述多幅图像是被劫持者上传的虚假图像,而非所述目标对象在被所述多个光照照射时拍摄的真实图像。在一些实施例中,若图像是由劫持者通过终端上传,可以将该图像的上传时间或其在视频中拍摄时间视为其拍摄时间。可以理解,当所述终端被劫持时,多个光照的照射时间与多幅图像的拍摄时间之间同样存在对应关系。
如前所述,光照序列中多个光照对应的多个颜色包括至少一个基准颜色和至少一个验证颜色。在一些实施例中,至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定。所述多幅目标图像包括至少一幅基准图像和至少一幅验证图像,所述至少一幅基准图像的每幅与所述至少一幅基准颜色中的一个对应,所述至少一幅验证图像的每幅与所述至少一个验证颜色中的一个对应。
对于所述多幅图像中的每一幅,获取模块可以将光照序列中照射时间与所述图像拍摄时间对应的光照的颜色,作为所述图像对应的颜色。具体的,若光照的照射时间与一幅或多幅图像的拍摄时间相对应,则将所述光照的颜色作为所述一幅或多幅图像对应的颜色。可以理解,当终端未被劫持或攻击时,多幅图像对应的颜色应当和光照序列中多个光照的多个颜色相同。例如,光照序列多个光照的多个颜色是“红、黄、蓝、绿、紫、红”,当终端未被劫持或攻击时,终端获取的多幅图像对应的颜色应该也是“红、黄、蓝、绿、紫、红”。当终端被劫持或攻击时,多幅图像对应的颜色和光照序列中多个光照的多个颜色可能不同。
步骤220,基于光照序列和多幅目标图像,确定所述多幅目标图像的真实性。在一些实施例中,步骤220可以由验证模块执行。
多幅目标图像的真实性可以反映所述多幅目标图像是否是所述目标对象在多个颜色的光照的照射下拍摄获取的图像。例如，当终端未被劫持或攻击时，其发光元件可以发射多个颜色的光照，同时其图像采集设备可以对目标对象进行录像或拍照以获取所述目标图像。此时，所述目标图像具有真实性。又例如，当终端被劫持或攻击时，所述目标图像是基于攻击者上传的图像或视频获取。此时，所述目标图像不具有真实性。
目标图像的真实性可以用于确定终端的图像采集设备是否被攻击者劫持。例如,所述多幅目标图像中若存在至少一幅目标图像不具有真实性,则说明图像采集设备被劫持。又例如,所述多幅目标图像中若超过预设数量的目标图像不具有真实性,则说明图像采集设备被劫持。
在一些实施例中,对于所述至少一幅验证图像中的每一幅,验证模块可以基于所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色。验证模块可以进一步基于所述光照序列和所述至少一幅验证图像被拍摄时光照的颜色,确定所述多幅目标图像的真实性。关于确定验证图像被拍摄时光照的颜色,以及基于光照序列和验证图像被拍摄时光照的颜色,确定多幅目标图像的真实性的具体描述参见图4及其相关描述。
在一些实施例中,验证模块还可以提取所述至少一幅验证图像的验证颜色特征和所述至少一幅基准图像的基准颜色特征。对所述至少一幅验证图像中的每一幅,验证模块可以基于所述光照序列和所述基准颜色特征,生成所述验证图像对应的验证颜色的目标颜色特征。基于所述至少一幅验证图像中每一幅的所述目标颜色特征和所述验证颜色特征,验证模块可以确定所述多幅目标图像的真实性。关于生成目标颜色特征,以及基于目标颜色特征和验证颜色特征,确定多幅目标图像的真实性的具体描述可以参见图6及其相关描述。
图3是根据本说明书一些实施例所示的光照序列的示意图。
在一些实施例中,光照序列中光照的多个颜色可以包含至少一个基准颜色和至少一个验证颜色。验证颜色是多个颜色中直接用于验证图像真实性的颜色。基准颜色是多个颜色中用于辅助验证颜色确定目标图像真实性的颜色。例如,基准颜色对应的目标图像(又称为基准图像)可以用于确定验证颜色对应的目标图像(又称为验证图像)被拍摄时光照的颜色。进一步的,验证模块可以基于验证图像被拍摄时光照的颜色确定多幅目标图像的真实性。如图3所示,光照序列e中包含多个基准颜色的光照“红光、绿光、蓝光”,多个验证颜色的光照“黄光、紫光…青光”;光照序列f中包含多个基准颜色的光照“红光、白光…蓝光”,多个验证颜色的光照“红光..绿光”。
在一些实施例中，验证颜色存在多个。所述多个验证颜色可以完全相同。例如，验证颜色可以是红、红、红、红。或者，多个验证颜色也可以完全不同。例如，验证颜色可以是红、黄、蓝、绿、紫。又或者，多个验证颜色还可以部分相同。例如，验证颜色可以是黄、绿、紫、黄、红。与验证颜色类似地，在一些实施例中，基准颜色存在多个，所述多个基准颜色可以完全相同、完全不同或部分相同。在一些实施例中，验证颜色可以仅包含一个颜色，例如绿色。
在一些实施例中,所述至少一个基准颜色和至少一个验证颜色可以根据目标识别系统100的默认设定确定、由用户手动设定,或者由验证模块确定。例如,验证模块可以随机选取基准颜色和验证颜色。仅作为示例,验证模块可以从多个颜色中随机选取部分颜色作为所述至少一个基准颜色,剩余的颜色作为所述至少一个验证颜色。在一些实施例中,验证模块可以基于预设规则确定所述至少一个基准颜色和所述至少一个验证颜色。所述预设规则可以是关于验证颜色之间关系、基准颜色之间关系,和/或验证颜色和基准颜色之间关系等的规则。例如,所述预设规则为验证颜色可以基于基准颜色融合生成等。
在一些实施例中,至少一个验证颜色中的每一个可以基于至少一个基准颜色中的至少一部分颜色确定。例如,验证颜色可以基于至少一个基准颜色中的至少一部分颜色进行融合得到。在一些实施例中,至少一个基准颜色可以包含颜色空间的基色或原色。例如,所述至少一个基准颜色可以包括RGB空间的三原色,即“红、绿、蓝”。如图3所示,光照序列e中多个验证颜色“黄、紫…青”可以基于3个基准颜色“红、绿、蓝”确定。例如,“黄”可以基于第一比例对基准颜色“红、绿、蓝”进行融合得到,“紫”可以基于第二比例对基准颜色“红、绿、蓝”进行融合得到。
在一些实施例中,至少一个基准颜色中的一个或多个与至少一个验证颜色中的一个或多个相同。至少一个基准颜色和至少一个验证颜色之间可以全部相同或部分相同。例如,至少一个验证颜色中的某一个可以与至少一个基准颜色中特定一个颜色相同。可以理解的,该验证颜色也可以基于至少一个基准颜色确定,即,将该特定基准颜色作为该验证颜色即可。如图3所示,光照序列f中,多个基准颜色“红、白…蓝”和多个验证颜色“红..绿”均包含红色。
在一些实施例中，至少一个基准颜色和至少一个验证颜色还可以存在其他关系，在此不做限制。例如，至少一个基准颜色和所述至少一个验证颜色的色系相同或不同。示例的，至少一个基准颜色属于暖色系的颜色（如，红色、黄色等），至少一个验证颜色属于冷色系的颜色（如，灰色等）。
在一些实施例中，在所述光照序列中，所述至少一个基准颜色对应的光照可以排列在所述至少一个验证颜色对应的光照的前面或后面。如图3所示，光照序列e中，多个基准颜色的光照“红光、绿光、蓝光”排列在多个验证颜色的光照“黄光、紫光…青光”前面。光照序列f中，多个基准颜色的光照“红光、白光…蓝光”排列在多个验证颜色“红光…绿光”的后面。在一些实施例中，所述至少一个基准颜色对应的光照还可以和所述至少一个验证颜色对应的光照间隔排列，在此不做限制。
图4是根据本说明书一些实施例所示的基于光照序列和多幅目标图像,确定多幅目标图像的真实性的示例性流程图。在一些实施例中,流程图400可以由验证模块执行。如图4所示,该流程400可以包括以下步骤:
步骤410,对于至少一幅验证图像中的每一幅,基于至少一幅基准图像和验证图像,确定所述验证图像被拍摄时光照的颜色。
在一些实施例中,验证模块可以基于验证图像的验证颜色特征和至少一幅基准图像的基准颜色特征,确定验证图像被拍摄时光照的颜色。
基准颜色特征是指基准图像的颜色特征。验证颜色特征是指验证图像的颜色特征。
图像的颜色特征是指与图像的颜色相关的信息。图像的颜色包括拍摄图像时光照的颜色、图像中拍摄对象的颜色、图像中背景的颜色等。在一些实施例中,颜色特征可以包括由神经网络提取的深度特征和/或复杂特征。
颜色特征可以通过多种方式表示。在一些实施例中，颜色特征可以基于图像中各像素点在颜色空间中的颜色值表示。颜色空间是使用一组数值描述颜色的数学模型，该组数值中每个数值可以表示颜色特征在颜色空间的每个颜色通道上的颜色值。在一些实施例中，颜色空间可以表示为向量空间，该向量空间的每个维度表示颜色空间的一个颜色通道。颜色特征可以用该向量空间中的向量来表示。在一些实施例中，颜色空间可以包括但不限于RGB颜色空间、Lαβ颜色空间、LMS颜色空间、HSV颜色空间、YCrCb颜色空间和HSL颜色空间等。可以理解，不同的颜色空间包含不同的颜色通道。例如，RGB颜色空间包含红色通道R、绿色通道G和蓝色通道B，颜色特征可以用图像中各像素点分别在红色通道R、绿色通道G和蓝色通道B上的颜色值表示。
在一些实施例中,颜色特征可以通过其他方式表示(如,颜色直方图、颜色矩、颜色集等)。例如,对图像中各像素点在颜色空间中的颜色值进行直方图统计,生成表示颜色特征的直方图。又例如,对图像中各像素点在颜色空间中的颜色值进行特定运算(如,均值、平方差等),将该特定运算的结果表示该图像的颜色特征。
在一些实施例中，验证模块可以通过颜色特征提取算法和/或颜色验证模型（或其部分）来提取多幅目标图像的颜色特征。颜色特征提取算法包括：颜色直方图、颜色矩、颜色集等。例如，验证模块可以基于图像中各像素点分别在颜色空间的每个颜色通道的颜色值，统计梯度直方图，从而获取颜色直方图。又例如，验证模块可以将图像分割为多个区域，用图像中各像素点分别在颜色空间的每个颜色通道的颜色值建立的多个区域的二进制索引的集合，以确定所述图像的颜色集。
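The color-histogram feature mentioned above can be sketched as follows, assuming 8-bit RGB pixels and a coarse bin count chosen purely for illustration:

```python
def channel_histogram(pixels, bins=8):
    # `pixels` is a list of (r, g, b) tuples with values in 0..255.
    # Returns one coarse histogram per channel, a common color feature.
    width = 256 // bins
    hists = [[0] * bins for _ in range(3)]
    for px in pixels:
        for ch, value in enumerate(px):
            hists[ch][min(value // width, bins - 1)] += 1
    return hists
```

In practice such per-channel histograms would be computed over every pixel of a reference or verification image and concatenated into a single feature vector.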
在一些实施例中,至少一幅基准图像的基准颜色特征可以用于构建一个基准颜色空间。所述基准颜色空间以所述至少一个基准颜色作为其颜色通道。具体地,每个基准图像对应的基准颜色特征可以作为该基准颜色空间中对应颜色通道的基准值。
在一些实施例中,所述多幅目标图像对应的颜色空间(又称为原始颜色空间)可以与所述基准颜色空间相同或不同。例如,所述多幅目标图像可以对应RGB颜色空间,所述至少一个基准颜色为红、蓝和绿,则多幅目标图像对应的原始颜色空间和基于所述基准颜色构建的基准颜色空间属于相同的颜色空间。在本文中,如果两个颜色空间的基色或原色相同,则这两个颜色空间可以被视为相同的颜色空间。
如上所述,验证颜色可以基于一个或多个基准颜色进行融合得到。因此,验证模块可以基于基准颜色特征和/或其构建的基准颜色空间确定验证颜色特征对应的颜色。在一些实施例中,验证模块可以基于基准颜色空间,对验证图像的验证颜色特征进行映射,确定验证图像被拍摄时光照的颜色。例如,验证模块可以基于验证颜色特征和基准颜色空间中每个颜色通道的基准值之间的关系,确定验证颜色特征在每个颜色通道上的参数,再基于参数确定验证颜色特征对应的颜色,即验证图像被拍摄时光照的颜色。
示例的，验证模块可以将基于基准图像a、b、c提取的基准颜色特征F_a、F_b、F_c分别作为颜色通道Ⅰ、颜色通道Ⅱ和颜色通道Ⅲ的基准值。颜色通道Ⅰ、颜色通道Ⅱ和颜色通道Ⅲ是所述基准颜色空间的三个颜色通道。验证模块可以基于验证图像d提取验证颜色特征F_d，并基于验证颜色特征F_d和颜色通道Ⅰ、颜色通道Ⅱ和颜色通道Ⅲ的基准值F_a、F_b、F_c之间的关系（例如线性组合关系F_d = δ_1·F_a + δ_2·F_b + δ_3·F_c），确定验证颜色特征F_d分别在颜色通道Ⅰ、颜色通道Ⅱ和颜色通道Ⅲ上的参数δ_1、δ_2和δ_3。验证模块可以基于参数δ_1、δ_2和δ_3确定验证颜色特征对应的颜色，即验证图像被拍摄时光照的颜色。在一些实施例中，参数和颜色类别的对应关系可以是预先设置的，也可以通过模型学习。
在一些实施例中，基准颜色空间可以与原始颜色空间的颜色通道的颜色相同。例如，所述原始颜色空间可以是RGB颜色空间，所述至少一个基准颜色可以是红、绿、蓝。验证模块可以基于红、绿、蓝对应的三幅基准图像的基准颜色特征构建新的RGB颜色空间（即基准颜色空间），并确定每幅验证图像的验证颜色特征在新的RGB颜色空间中的RGB值，从而确定验证图像被拍摄时光照的颜色。
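The mapping into the reference color space can be sketched as a small linear solve. Purely for illustration, assume each color feature is reduced to a 3-dimensional vector (features extracted by a neural network are far higher-dimensional and would call for a least-squares fit instead); the channel parameters δ1, δ2, δ3 then solve the system whose columns are the reference features:

```python
def det3(m):
    # Determinant of a 3x3 matrix, expanded along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_deltas(f_a, f_b, f_c, f_d):
    # Solve A @ delta = f_d by Cramer's rule, where the columns of A
    # are the reference features (assumed linearly independent).
    a = [[f_a[i], f_b[i], f_c[i]] for i in range(3)]
    d = det3(a)
    deltas = []
    for col in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][col] = f_d[i]
        deltas.append(det3(m) / d)
    return deltas
```

When the reference features happen to be the RGB axes, the parameters are simply the verification feature's coordinates in the new reference color space; e.g. with diagonal references `(2,0,0)`, `(0,4,0)`, `(0,0,8)` and target `(1,1,1)` the deltas come out as `0.5, 0.25, 0.125`.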
在一些实施例中,验证模块可以基于颜色验证模型中颜色分类层对基准颜色特征和验证颜色特征处理,确定验证图像被拍摄时光照的颜色,具体可以参见图5及其相关描述,在此不再赘述。
步骤420,基于光照序列和至少一幅验证图像被拍摄时光照的颜色,确定多幅目标图像的真实性。
在一些实施例中,对于至少一幅验证图像中的每一幅,验证模块可以基于光照序列确定所述验证图像对应的验证颜色。进一步地,验证模块可以基于验证图像对应的验证颜色,确定验证图像的真实性。例如,验证模块基于验证图像对应的验证颜色与被拍摄时光照的颜色是否一致的第一判断结果,确定验证图像的真实性。验证图像对应的验证颜色和被拍摄时光照的颜色相同表示验证图像具有真实性,验证图像对应的验证颜色和被拍摄时光照的颜色不相同表示该验证图像不具有真实性。又例如,验证模块基于多个验证图像对应的验证颜色之间的关系(例如,是否相同)与多个验证图像被拍摄时光照的颜色之间的关系是否一致,确定验证图像的真实性。
在一些实施例中,验证模块可以基于至少一个验证图像的真实性确定终端的图像采集设备是否被劫持。例如,具有真实性的验证图像的个数超过第一阈值说明终端的图像采集设备未被劫持。又例如,不具有真实性的验证图像的个数超过第二阈值(例如,1)说明终端的图像采集设备被劫持。
在一些实施例中，本说明书针对图像真实性判断设定的预设阈值（例如，第一阈值、第二阈值）可以和拍摄稳定程度相关。拍摄稳定程度是终端的图像采集设备获取目标图像时的稳定程度。在一些实施例中，预设阈值与拍摄稳定程度正相关。可以理解，拍摄稳定程度越高，则获取的目标图像质量越高，基于多幅目标图像提取的颜色特征越能真实反映被拍摄时光照的颜色，则预设阈值越大。在一些实施例中，拍摄稳定程度可以基于终端（例如，车载终端或用户终端等）的运动传感器检测到的终端的运动参数衡量。例如，运动传感器检测到的运动速度、震动频率等。示例的，运动参数越大，或者运动参数变化率越大，说明拍摄稳定程度越低。运动传感器可以是检测车辆行驶情况的传感器，车辆可以是目标用户使用的车辆。目标用户是指目标对象所属的用户。例如，目标用户为网约车司机，则运动传感器可以是司机端或者车载终端的运动传感器。
在一些实施例中，预设阈值还可以与拍摄距离和转动角度相关。拍摄距离是图像采集设备获取目标图像时和目标对象之间的距离。转动角度是图像采集设备获取目标图像时目标对象正面与终端屏幕的角度。在一些实施例中，拍摄距离和转动角度都与预设阈值负相关。可以理解，拍摄距离越短，则获取的目标图像质量越高，基于多幅目标图像提取的颜色特征越能真实反映被拍摄时光照的颜色，则预设阈值越大。转动角度越小，则获取的目标图像质量越高，同理，则预设阈值越大。在一些实施例中，拍摄距离和转动角度可以通过图像识别技术基于目标图像确定。
在一些实施例中,验证模块可以对每幅目标图像的拍摄稳定程度、拍摄距离和转动角度进行特定运算(如,求平均、标准差等),基于特定运算后的拍摄稳定程度、拍摄距离和拍摄角度确定预设阈值。
例如,验证模块获取所述多幅目标图像被获取时所述终端的稳定程度包括获取多幅目标图像中每一幅被拍摄时终端的子稳定程度;对所述多个子稳定程度进行融合,确定所述稳定程度。
又例如,验证模块获取所述多幅目标图像被拍摄时目标对象与所述终端的拍摄距离包括:获取所述多幅目标图像中每一幅被拍摄时目标对象与所述终端的子拍摄距离;对所述多个子拍摄距离进行融合,确定所述拍摄距离。
又例如,验证模块获取所述多幅目标图像被拍摄时所述目标对象相对于所述终端的转动角度包括获取所述多幅目标图像中每一幅被拍摄时所述目标对象相对于所述终端的子转动角度;对所述多个子转动角度进行融合,确定所述转动角度。
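The fusion of per-image sub-measurements and the threshold adjustment described above can be sketched as follows. Everything quantitative here is hypothetical — the averaging rule, the base threshold, and the weights are invented for illustration; the specification only states the direction of each correlation (threshold rises with stability, falls with shooting distance and rotation angle):

```python
def fuse(sub_values):
    # Fuse per-image sub-measurements by averaging; other "specific
    # operations" such as the standard deviation would also fit the text.
    return sum(sub_values) / len(sub_values)

def preset_threshold(stability, distance, angle,
                     base=0.8, w_s=0.1, w_d=0.05, w_a=0.05):
    # Hypothetical weighting: positively correlated with stability,
    # negatively correlated with distance and rotation angle.
    t = base + w_s * stability - w_d * distance - w_a * angle
    return min(max(t, 0.0), 1.0)   # clamp to a valid threshold range
```

A steady, close-range, frontal capture thus yields a stricter (higher) threshold than a shaky, distant, rotated one.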
由于基准图像和验证图像都是在相同的外界环境光的条件下拍摄的,基于基准图像建立基准颜色空间,并基于基准颜色空间确定验证图像被拍摄时光照的颜色可以使确定结果更加准确。进一步的,目标图像真实性的确定也更加准确。例如,光照序列中的光照比环境光微弱时,照射到目标对象的光照可能难以被检测。或者,当环境光为彩色光时,照射到目标对象的光照可能会受到干扰。当终端未被劫持时,基准图像和验证图像是在相同(或基本相同)的环境光下拍摄的。基于基准图像构建的基准颜色空间融合了环境光的影响,因此,相比于原始颜色空间,可以较准确地识别出验证图像被拍摄时光照的颜色。此外,本文中披露的方法可以避免终端的发光元件的干扰。当终端未被劫持时,基准图像和验证图像都是在相同的发光元件照射下被拍摄的,利用基准颜色空间可以消除或减弱发光元件的影响,提高识别光照颜色的准确率。
图5是根据本说明书一些实施例所示的颜色验证模型的示意图。
在一些实施例中,验证模块可以基于颜色验证模型处理所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色。
颜色验证模型可以包括基准颜色特征提取层、验证颜色特征提取层和颜色分类层。如图5所示,颜色验证模型可以包括基准颜色特征提取层530、验证颜色特征提取层540以及颜色分类层570。颜色验证模型可以用于实现步骤410。进一步的,验证模块基于验证图像被拍摄时光照的颜色和光照序列,确定验证图像的真实性。
颜色特征提取层(如,基准颜色特征提取层530和验证颜色特征提取层540等)可以提取目标图像的颜色特征。在一些实施例中,颜色特征提取层的类型可以包括ResNet、DenseNet、MobileNet、ShuffleNet或EfficientNet等卷积神经网络模型,或长短记忆循环神经网络等循环神经网络模型。在一些实施例中,基准颜色特征提取层530和验证颜色特征提取层540的类型可以相同或不同。
基准颜色特征提取层530提取至少一幅基准图像510的基准颜色特征550。在一些实施例中,至少一幅基准图像510可以包括多幅基准图像。基准颜色特征550可以是所述多幅基准图像510的颜色特征的融合。例如,可以将所述多幅基准图像510进行拼接,拼接后输入基准颜色特征提取层530,所述基准颜色特征提取层530可以输出基准颜色特征550。示例的,基准颜色特征550是基准图像510-1、510-2、510-3的颜色特征向量拼接而成的特征向量。
验证颜色特征提取层540提取至少一幅验证图像520的验证颜色特征560。在一些实施例中,验证模块可以分别对至少一幅验证图像520中的每一幅进行颜色判断。例如,如图5所示,验证模块可以将至少一幅基准图像510输入基准颜色特征提取层530,将验证图像520-2输入验证颜色特征提取层540。验证颜色特征提取层540可以输出验证图像520-2的验证颜色特征560。颜色分类层570可以基于基准颜色特征550和验证图像520-2的验证颜色特征560,确定验证图像520-2被拍摄时光照的颜色。
在一些实施例中,验证模块可以同时对多幅验证图像520进行颜色判断。例如,验证模块可以将至少一幅基准图像510输入基准颜色特征提取层530,将多幅验证图像520(包括验证图像520-1、520-2…520-n)输入验证颜色特征提取层540。验证颜色特征提取层540可以同时输出多幅验证图像520的验证颜色特征560。颜色分类层570可以同时确定多幅验证图像中每幅验证图像被拍摄时光照的颜色。
对至少一幅验证图像中每一幅，颜色分类层570可以基于基准颜色特征和该验证图像的验证颜色特征，确定该验证图像被拍摄时光照的颜色。例如，颜色分类层570可以基于基准颜色特征和该验证图像的验证颜色特征确定数值或概率，再基于数值或概率确定验证图像被拍摄时光照的颜色。验证图像对应的数值或概率可以反映所述验证图像被拍摄时光照的颜色属于各颜色的可能性。在一些实施例中，颜色分类层可以包括但不限于全连接层、深度神经网络等。
所述颜色验证模型为预置参数的机器学习模型。所述颜色验证模型的预置参数可以在训练过程确定。例如，训练模块可以基于多个训练样本训练初始颜色验证模型，以确定所述颜色验证模型的所述预置参数。多个训练样本中的每一个包括至少一幅样本基准图像、至少一幅样本验证图像和样本标签，所述样本标签表示所述至少一幅样本验证图像中每一幅被拍摄时光照的颜色。其中，至少一个基准颜色与所述至少一幅样本基准图像被拍摄时光照的颜色相同。例如，如果所述至少一个基准颜色包括红、绿、蓝，则所述至少一幅样本基准图像包括样本目标对象在红光、绿光和蓝光照射下拍摄的三幅目标图像。
在一些实施例中，验证模块可以将多个训练样本输入初始颜色验证模型，通过训练更新初始验证颜色特征提取层、初始基准颜色特征提取层和初始颜色分类层的参数，直到更新后的颜色验证模型满足预设条件。更新后的模型可以被指定为具有预置参数的颜色验证模型，换言之，更新后的模型可以被指定为训练后的颜色验证模型。预设条件可以是更新后的颜色验证模型的损失函数小于阈值、收敛，或训练迭代次数达到阈值。
在一些实施例中,验证模块可以通过端到端的训练方式,训练初始颜色验证模型中的初始验证颜色特征提取层、初始基准颜色特征提取层和初始颜色分类层。端到端的训练方式是指将训练样本输入初始模型,并基于初始模型的输出确定损失值,基于所述损失值更新所述初始模型。所述初始模型可能会包含用于执行不同数据处理操作的多个子模型或模块,其会在训练中被视为整体,进行同时更新。例如,在初始颜色验证模型的训练中,可以将至少一幅样本基准图像输入初始基准颜色特征提取层,将至少一幅样本验证图像输入初始验证颜色特征提取层,基于初始颜色分类层的输出结果和样本标签建立损失函数,基于损失函数对初始颜色验证模型中各初始层的参数进行同时更新。
在一些实施例中,颜色验证模型可以由处理设备或第三方预先训练后保存在存储设备中,处理设备可以从存储设备中直接调用颜色验证模型。
本说明书一些实施例通过颜色验证模型确定验证图像的真实性，可以提高目标图像真实性验证的效率。此外，使用颜色验证模型可以提高目标对象真实性验证的可靠性，减少或者去除终端设备的性能差异的影响，进一步确定目标图像的真实性。可以理解，不同终端的硬件存在一定差异，例如，不同厂商的终端屏幕发射的相同颜色彩色光在饱和度、亮度等参数上可能会有差异，导致同一种颜色的类内差距比较大。初始颜色验证模型的多个训练样本可以是由不同性能的终端拍摄的。初始颜色验证模型在训练过程中通过学习，可以使得训练后的颜色验证模型在进行目标对象颜色判断时可以考虑终端性能差异，较为准确地确定目标图像的颜色。而且，当终端未被劫持时，基准图像和验证图像都是在相同的外界环境光的条件下拍摄的。在一些实施例中，基于颜色验证模型中的基准颜色特征提取层提取基准图像的基准颜色特征，建立基准颜色空间，并基于基准颜色空间确定多幅目标图像的真实性时，可以消除或减弱外界环境光的影响。
图6是根据本说明书一些实施例所示的基于光照序列和多幅目标图像,确定多幅目标图像的真实性的另一示例性流程图。在一些实施例中,流程图600可以由验证模块执行。如图6所示,该流程600包括如下步骤:
步骤610,提取所述至少一幅验证图像的验证颜色特征和所述至少一幅基准图像的基准颜色特征。
关于提取验证颜色特征和基准验证特征的具体描述可以参见步骤410及其相关描述。
步骤620,对所述至少一幅验证图像中的每一幅,基于所述光照序列和所述基准颜色特征,生成所述验证图像对应的验证颜色的目标颜色特征。
目标颜色特征是指验证图像对应的验证颜色在基准颜色空间中表示的特征。在一些实施例中,对于至少一幅验证图像中的每一幅,验证模块可以基于光照序列,确定验证图像对应的验证颜色,并基于该验证颜色和基准颜色特征生成验证图像的目标颜色特征。例如,验证模块可以将验证颜色的颜色特征与基准颜色特征进行融合,得到目标颜色特征。
步骤630,基于所述至少一幅验证图像中每一幅的所述目标颜色特征和所述验证颜色特征,确定所述多幅目标图像的真实性。
在一些实施例中,对于至少一幅验证图像中的每一幅,验证模块可以基于其对应的目标颜色特征和验证颜色特征之间的相似度,确定验证图像的真实性。其中,目标颜色特征和验证颜色特征之间的相似度可以通过向量相似度计算得到,例如,通过欧式距离、曼哈顿距离等确定。示例性地,当目标颜色特征和验证颜色特征的相似度大于第三阈值,则验证图像具有真实性,反之则不具有真实性。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或 “一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (10)

  1. 一种目标识别方法,所述方法包括:
    获取多幅目标图像,所述多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,所述多个光照有多个颜色,所述多个颜色包括至少一个基准颜色和至少一个验证颜色,所述至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定;以及
    基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性。
  2. 根据权利要求1所述的方法，所述多幅目标图像包括至少一幅验证图像和至少一幅基准图像，所述至少一幅验证图像中的每一幅与所述至少一个验证颜色中的一个对应，所述至少一幅基准图像中的每一幅与所述至少一个基准颜色中的一个对应，
    所述基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性包括:
    对于所述至少一幅验证图像中的每一幅,基于所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色;以及
    基于所述光照序列和所述至少一幅验证图像被拍摄时光照的颜色,确定所述多幅目标图像的真实性。
  3. 根据权利要求2所述的方法,所述基于所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色包括:
    基于颜色验证模型处理所述至少一幅基准图像和所述验证图像,确定所述验证图像被拍摄时光照的颜色,所述颜色验证模型为预置参数的机器学习模型。
  4. 根据权利要求3所述的方法,所述颜色验证模型包括基准颜色特征提取层、验证颜色特征提取层和颜色分类层,
    所述基准颜色特征提取层对所述至少一幅基准图像进行处理,确定所述至少一幅基准图像的基准颜色特征;
    所述验证颜色特征提取层对所述验证图像进行处理,确定所述验证图像的验证颜色特征;
    所述颜色分类层对所述至少一幅基准图像的基准颜色特征和所述验证图像的验证颜色特征进行处理,确定所述验证图像被拍摄时光照的颜色。
  5. 根据权利要求4所述的方法,所述颜色验证模型的所述预置参数通过端到端训练方式获得。
  6. 根据权利要求3所述的方法,所述颜色验证模型的预置参数通过训练过程生成,所述训练过程包括:
    获取多个训练样本,所述多个训练样本中的每一个包括至少一幅样本基准图像、至少一幅样本验证图像和样本标签,所述样本标签表示所述至少一幅样本验证图像中每一幅被拍摄时光照的颜色,所述至少一个基准颜色与所述至少一幅样本基准图像被拍摄时光照的颜色相同;以及
    基于所述多个训练样本训练初始颜色验证模型,确定所述颜色验证模型的所述预置参数。
  7. 根据权利要求1所述的方法，所述多幅目标图像包括至少一幅验证图像和至少一幅基准图像，所述至少一幅验证图像中的每一幅与所述至少一个验证颜色中的一个对应，所述至少一幅基准图像中的每一幅与所述至少一个基准颜色中的一个对应，
    所述基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性包括:
    提取所述至少一幅验证图像的验证颜色特征和所述至少一幅基准图像的基准颜色特征;
    对所述至少一幅验证图像中的每一幅,基于所述光照序列和所述基准颜色特征,生成所述验证图像对应的验证颜色的目标颜色特征;以及
    基于所述至少一幅验证图像中每一幅的所述目标颜色特征和所述验证颜色特征,确定所述多幅目标图像的真实性。
  8. 一种目标识别系统,所述系统包括:
    获取模块,用于获取多幅目标图像,所述多幅目标图像的拍摄时间与照射到目标对象的光照序列中多个光照的照射时间具有对应关系,所述多个光照有多个颜色,所述多个颜色包括至少一个基准颜色和至少一个验证颜色,所述至少一个验证颜色中的每一个基于所述至少一个基准颜色中的至少一部分颜色确定;以及
    验证模块,用于基于所述光照序列和所述多幅目标图像,确定所述多幅目标图像的真实性。
  9. 一种目标判别装置,其特征在于,所述装置包括至少一个处理器以及至少一个存储器;
    所述至少一个存储器用于存储计算机指令;
    所述至少一个处理器用于执行所述计算机指令中的至少部分指令以实现如权利要求1至7中任意一项所述的方法。
  10. 一种计算机可读存储介质,其特征在于,所述存储介质存储计算机指令,当所述计算机指令被处理器执行时实现如权利要求1至7中任意一项所述的方法。
PCT/CN2022/076352 2021-04-20 2022-02-15 一种目标识别的方法和系统 WO2022222585A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110423614.0 2021-04-20
CN202110423614.0A CN113111807B (zh) 2021-04-20 一种目标识别的方法和系统

Publications (1)

Publication Number Publication Date
WO2022222585A1 true WO2022222585A1 (zh) 2022-10-27

Family

ID=76718856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076352 WO2022222585A1 (zh) 2021-04-20 2022-02-15 一种目标识别的方法和系统

Country Status (1)

Country Link
WO (1) WO2022222585A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859117A (zh) * 2018-12-30 2019-06-07 南京航空航天大学 一种采用神经网络直接校正rgb值的图像颜色校正方法
CN111160374A (zh) * 2019-12-28 2020-05-15 深圳市越疆科技有限公司 一种基于机器学习的颜色识别方法及系统、装置
CN111460964A (zh) * 2020-03-27 2020-07-28 浙江广播电视集团 一种广电传输机房低照度条件下运动目标检测方法
CN111523438A (zh) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 一种活体识别方法、终端设备和电子设备
CN113111807A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 一种目标识别的方法和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859117A (zh) * 2018-12-30 2019-06-07 南京航空航天大学 一种采用神经网络直接校正rgb值的图像颜色校正方法
CN111160374A (zh) * 2019-12-28 2020-05-15 深圳市越疆科技有限公司 一种基于机器学习的颜色识别方法及系统、装置
CN111460964A (zh) * 2020-03-27 2020-07-28 浙江广播电视集团 一种广电传输机房低照度条件下运动目标检测方法
CN111523438A (zh) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 一种活体识别方法、终端设备和电子设备
CN113111807A (zh) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 一种目标识别的方法和系统

Also Published As

Publication number Publication date
CN113111807A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
US11972638B2 (en) Face living body detection method and apparatus, device, and storage medium
CN109446981B (zh) 一种脸部活体检测、身份认证方法及装置
WO2022222575A1 (zh) 用于目标识别的方法和系统
WO2019152983A2 (en) System and apparatus for face anti-spoofing via auxiliary supervision
WO2022222569A1 (zh) 一种目标判别方法和系统
US9652663B2 (en) Using facial data for device authentication or subject identification
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
CN110163078A (zh) 活体检测方法、装置及应用活体检测方法的服务系统
CN108664843B (zh) 活体对象识别方法、设备和计算机可读存储介质
CN104598882A (zh) 用于生物特征验证的电子欺骗检测的方法和系统
CN113111810B (zh) 一种目标识别方法和系统
KR102145132B1 (ko) 딥러닝을 이용한 대리 면접 예방 방법
CN112232323B (zh) 人脸验证方法、装置、计算机设备和存储介质
US20210256244A1 (en) Method for authentication or identification of an individual
CN115147936A (zh) 一种活体检测方法、电子设备、存储介质及程序产品
CN112507986B (zh) 基于神经网络的多通道人脸活体检测方法及装置
WO2022222585A1 (zh) 一种目标识别的方法和系统
WO2022222957A1 (zh) 一种目标识别的方法和系统
WO2019056492A1 (zh) 一种契约调查的处理方法、存储介质和服务器
CN113111807B (zh) 一种目标识别的方法和系统
WO2022222904A1 (zh) 图像验证方法、系统及存储介质
JP6878826B2 (ja) 撮影装置
JP2004128715A (ja) ビデオデータの記憶制御方法およびシステム、プログラム、記録媒体、ビデオカメラ
CN116152932A (zh) 活体检测方法以及相关设备
CN113989870A (zh) 一种活体检测方法、门锁系统及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22790695

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22790695

Country of ref document: EP

Kind code of ref document: A1