CN118154831A - Method and device for identifying illumination sensitive target - Google Patents


Info

Publication number
CN118154831A
CN118154831A (application CN202410405863.0A)
Authority
CN
China
Prior art keywords
image
angle
illumination
predetermined
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410405863.0A
Other languages
Chinese (zh)
Inventor
钱烽
罗涛
许诗起
张灵峰
于镳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd filed Critical Ant Blockchain Technology Shanghai Co Ltd
Publication of CN118154831A

Abstract

Embodiments of this specification provide a method and device for identifying an illumination-sensitive target. In determining whether a target to be identified contains an illumination-sensitive material, images may be captured from multiple angles, at least one of which is randomly generated. At each capture angle, images are acquired under two conditions: natural light (no light supplementation) and supplemented light. The illumination-sensitive material can then be identified from the pattern differences it exhibits under changes in lighting and changes in angle. Random generation and verification of angles thus improve the effectiveness of identifying illumination-sensitive targets. When applied to fields such as assisted anti-counterfeit verification, targets that are not made of genuine illumination-sensitive material (for example, ordinary prints or photos displayed on a phone) can be effectively detected, improving the anti-counterfeiting effect.

Description

Method and device for identifying illumination sensitive target
Technical Field
One or more embodiments of the present disclosure relate to the field of secure computing technology, and in particular, to a method and apparatus for identifying illumination-sensitive objects.
Background
Illumination sensitivity is the property of changing appearance, or exhibiting special physical and chemical characteristics, under changing illumination. It can be realized with light-sensitive materials such as electrochemical aluminum (alumite) and photosensitive resin, or with photolithographic holography, for example holographic images produced by lithography. Illumination-sensitive characteristics are widely applied; common fields include anti-counterfeit labels and process inspection. Taking anti-counterfeit labels as an example, they can be used to distinguish genuine from fake products, prevent imitation and tampering, and protect brand image. An anti-counterfeit label is typically a printed product that can be printed directly onto the product or its packaging, or attached to the product as a separate tag or sticker. Anti-counterfeit labels usually carry unique information features, such as graphic codes, images, text, or other forms of information, used to verify the authenticity of a product. In practical use, however, anti-counterfeit labels are sometimes forged for various gains. For example, a merchant may produce a counterfeit label whose pattern and texture match the genuine label by re-photographing it, and attach it to a commodity to deceive consumers; in goods-diversion scenarios, a manufacturer's subsidy may be fraudulently claimed by copying an anti-counterfeit label and scanning the copy; and so on.
In addition, in some scenarios a user photographs a genuine anti-counterfeit label with a smart terminal and later presents the pre-stored label image on the terminal to the verification device, thereby passing anti-counterfeit verification; this practice is also known as "remote code scanning".
Therefore, how to effectively verify the authenticity of genuine illumination-sensitive materials is an important technical problem worth studying.
Disclosure of Invention
One or more embodiments of the present specification describe a method and apparatus for identifying an illumination-sensitive target to address one or more of the problems mentioned in the background.
According to a first aspect, there is provided a method of identifying an illumination-sensitive target, comprising: acquiring, for a target to be identified, a first image and a second image captured at a first angle, and a third image and a fourth image captured at a second angle, wherein the first image and the third image are images captured with light-supplementing processing, the second image and the fourth image are images captured without light-supplementing processing, and at least one of the first angle and the second angle is an angle randomly designated in a predetermined manner; obtaining a fifth image, a sixth image, a seventh image, and an eighth image by respectively correcting the first, second, third, and fourth images, in the case that all detections pass, based on mark correction and validity detection performed with reference to a predetermined mark, the validity including mark validity and angle validity; and determining, based on difference detection among the fifth, sixth, seventh, and eighth images, an identification result of whether the target to be identified contains a predetermined illumination-sensitive material.
In one embodiment, the first and second angles are acquisition angles between a forward direction of the object to be identified and a forward direction of the acquisition device.
In one embodiment, the predetermined mark comprises a predetermined pattern in the reference image and its physical dimensions; the marker correction includes: and carrying out stretching correction on the acquired image according to the relative relation between the acquisition angle and the reference angle and the relative relation between the size of the mark in the image and the physical size in the preset mark.
In one embodiment, the relative relationship is described by a relative angle between the acquisition angle and a reference angle; the stretch correction includes: rotating the image according to the relative angle; the predetermined mark in the image is stretched to conform to the predetermined mark size in the reference image.
In one embodiment, the angle validity is determined by whether the viewing angle of the predetermined mark, obtained after rotating the image by the relative angle, falls within a predetermined error range of the mark's viewing angle in the reference image; the angle is determined to be valid if the error is within the predetermined range; otherwise, if the error exceeds the predetermined range, the angle is determined to be invalid and a prompt to re-capture the image is fed back.
In one embodiment, the determining whether the object to be identified includes the identification result of the predetermined light sensitive material based on the difference detection between the fifth image, the sixth image, the seventh image, and the eighth image includes: detecting a difference between the fifth image and the sixth image, a difference between the seventh image and the eighth image, a difference between the fifth image and the seventh image, a difference between the sixth image and the eighth image; and judging whether the target to be identified contains a preset illumination sensitive material or not based on the differences.
In one embodiment, the detection of the above differences is achieved by one of peak signal-to-noise ratio, structural similarity, normalized cross-correlation, gray level co-occurrence matrix, histogram similarity.
In one embodiment, the determining whether the object to be identified includes the identification result of the predetermined light sensitive material based on the difference detection between the fifth image, the sixth image, the seventh image, and the eighth image includes: extracting image features of the fifth image, the sixth image, the seventh image and the eighth image through a feature extraction network to obtain corresponding feature images;
Extracting difference features based on fusion feature graphs obtained by splicing the feature graphs; and determining whether the target to be identified contains an identification result of the predetermined illumination sensitive material according to the difference characteristics.
In one embodiment, the extracting the difference feature based on the fused feature map obtained by splicing the feature maps includes: fine-tuning the feature maps by a predetermined alignment network such that predetermined marks between the following sets of images are aligned at the pixel level: fifth and sixth images, seventh and eighth images, fifth and seventh images, sixth and eighth images.
In one embodiment, the predetermined alignment network is one of the following: cross-attention network, optical flow estimation network, spatial transformation network.
According to a second aspect, there is provided an apparatus for identifying a light sensitive object, comprising:
An acquisition unit configured to acquire a first image and a second image acquired through a first angle, and a third image and a fourth image acquired through a second angle, for an object to be identified, wherein the first image and the third image are images acquired through light supplementing processing, the second image and the fourth image are images acquired without light supplementing processing, and at least one of the first angle and the second angle is an angle randomly specified in a predetermined manner;
A correction unit configured to obtain a fifth image, a sixth image, a seventh image, and an eighth image after correction of the first image, the second image, the third image, and the fourth image, respectively, in the case where all the detection passes, based on a mark correction and a validity detection performed with reference to a predetermined mark, the validity including a mark validity and an angle validity;
And the identification unit is configured to obtain an identification result of whether the target to be identified contains a preset illumination sensitive material or not based on difference detection among the fifth image, the sixth image, the seventh image and the eighth image.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory stores executable code, and the processor, when executing the executable code, implements the method of the first aspect.
According to the method and device provided by the embodiments of this specification, in identifying whether a target to be identified contains illumination-sensitive material, images may be captured from multiple angles, at least one of which is randomly generated. At each capture angle, images are acquired under two conditions: natural light (no light supplementation) and supplemented light. The illumination-sensitive material can then be identified from the pattern differences it exhibits under changes in lighting and changes in angle. Random generation and verification of angles thus improve the effectiveness of identifying illumination-sensitive targets. When applied to fields such as assisted anti-counterfeit verification, targets that are not made of genuine illumination-sensitive material (for example, ordinary prints or photos displayed on a phone) can be effectively detected, improving the anti-counterfeiting effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 shows a schematic diagram of an electrochemical aluminum (hot-stamping foil) label;
FIG. 2 shows a schematic diagram of a lithographic holographic label;
FIG. 3 illustrates a schematic diagram of an identification flow of a light sensitive object according to one embodiment of the present disclosure;
Fig. 4 shows an image acquisition schematic diagram of a specific example of a two-dimensional code anti-counterfeit label under different angles;
Fig. 5 shows a schematic diagram of a label correction process of a two-dimensional code anti-counterfeit label in a specific example;
FIG. 6 illustrates a flow diagram for identifying the presence of a light sensitive material based on an acquired image, according to one specific example;
Fig. 7 and fig. 8 are schematic flow diagrams respectively executed at the acquisition terminal and the server under a specific application scenario according to the present disclosure;
FIG. 9 shows a schematic block diagram of an identification device of a light sensitive object according to one embodiment of the present description.
Detailed Description
The technical scheme provided in the specification is described below with reference to the accompanying drawings.
Fig. 1 and 2 show specific examples of the use of a light sensitive material for a security label.
Fig. 1 shows a schematic diagram of an electrochemical aluminum (hot-stamping foil) label. Electrochemical aluminum is an aluminum product with an aluminum oxide film formed on its surface by an electrochemical treatment process, also known as anodized aluminum or alumite. This metal material can be further processed into labels with various anti-counterfeiting characteristics and is widely used for product identification and protection. An anodized aluminum surface typically has a natural metallic luster that is more pronounced under light; if the surface is dyed, it may exhibit different color effects at different angles and under different lighting.
Fig. 2 shows an example of a photolithographic holographic label, i.e., a label containing a holographic image made using photolithographic techniques. Holographic images are patterns that provide a three-dimensional visual effect. Because of their complex manufacturing process and resistance to copying, they are often used in anti-counterfeit products; under light they exhibit complex optical effects, including the presentation of three-dimensional imagery, color changes, and image transformation as the viewing angle changes.
In practical application scenarios there may be many more cases where light-sensitive materials (usually with specular reflection, refraction, or diffraction properties) are used for target identification and protection, which are not enumerated here. Electrochemical aluminum, photolithographic holography, and the like are anti-counterfeiting technologies based on illumination-angle-sensitive materials; such an anti-counterfeit label presents specific information under operations such as angle changes. An attacker can make a fake label on a screen or on photo paper pass identification by means such as screen re-shooting or color photo printing (printing a label photo shot under illumination). Using artificial intelligence (AI) instead of human eyes to automatically identify illumination-angle-sensitive materials is an important direction of technical transformation in the field of article marking and identification.
For this reason, the present specification provides a technical idea of identifying an illumination-sensitive object. Here, the light sensitive object may be various entities implemented by light sensitive materials, such as electro-chemical aluminum labels, photo-etched holographic labels, etc. Such materials exhibit a change in gloss under light illumination, and as the viewing angle or illumination angle changes, the gloss areas and pattern appearance also change. The technical concept of identifying the illumination sensitive target is used for identifying the illumination sensitivity of the entity so as to determine that the current entity is the entity realized by the real illumination sensitive material.
Under the technical concept of this specification, images of the target to be identified can be acquired at multiple different angles, at least one of which is randomly determined, and the images at each single angle include one with light supplementation and one without. The acquisition angle of each image captured at a randomly determined angle is then checked; if valid, the images acquired at the multiple angles are fused by a pre-trained recognition network, and the image features extracted after fusion are mapped to a judgment of whether the current entity is one realized with genuine illumination-sensitive material. By prompting the user to switch to random shooting angles, the display effect and its changes under light supplementation, no supplementation, and angle changes are identified, verifying the authenticity of the target material and resisting attacks such as screen re-shooting and photo printing. At the same time, the influence of the light emission angle on the display effect can be reduced.
The technical idea of the present specification is described in detail below with reference to an embodiment shown in fig. 3 and the like.
FIG. 3 illustrates a process for identifying a light sensitive object according to one embodiment. The execution subject of the identification process of the illumination-sensitive object shown in fig. 3 may be any computer, device or server having a certain computing power. Referring to fig. 3, the identification process of the illumination sensitive target under the technical concept of the present specification may include the following steps: step 301, acquiring a first image and a second image acquired through a first angle and a third image and a fourth image acquired through a second angle aiming at an object to be identified, wherein the first image and the third image are images acquired through light supplementing processing, the second image and the fourth image are images acquired without light supplementing processing, and at least one of the first angle and the second angle is an angle randomly designated according to a preset mode; step 302, obtaining a fifth image, a sixth image, a seventh image and an eighth image respectively corresponding to the first image, the second image, the third image and the fourth image after correction under the condition that all detection passes through based on mark correction and validity detection which are carried out by taking a preset mark as a reference; step 303, determining whether the object to be identified contains the identification result of the predetermined illumination-sensitive material or not based on the difference detection between the fifth image, the sixth image, the seventh image and the eighth image by the pre-trained identification network.
First, in step 301, a first image and a second image acquired through a first angle with respect to an object to be identified, and a third image and a fourth image acquired through a second angle are acquired. The object to be identified can be anti-counterfeit labels, product marks and other entities possibly containing illumination angle sensitive materials.
It will be appreciated that, for materials sensitive to illumination angle, different patterns may be observed from different viewing angles. Therefore, under the technical idea of this specification, images can be acquired at multiple angles. The angle here may be the relative angle between a predetermined direction of the target to be identified and the forward direction of the acquisition device. Taking a two-dimensional-code anti-counterfeit label as an example, the front-view direction determined by its three square '回'-shaped finder patterns can be taken as the predetermined direction of the label, and the forward direction of the acquisition device can generally be regarded as the front-view direction of the human eye, such as the orientation of a smartphone in normal use. Fig. 4 shows cases where the predetermined direction of the target to be identified (a two-dimensional-code label) and the acquisition device form different angles: in Fig. 4 (1), the predetermined direction of the code coincides with the forward direction of the acquisition device, giving an angle of 0; in Fig. 4 (2), the code is inclined at about 45 degrees, giving an angle of 45 degrees; and in Fig. 4 (3), the code points horizontally to the left, giving an angle of 270 degrees. In this specification, images are acquired at multiple (at least 2) angles to observe how they change with angle.
The first angle and the second angle may be two different angles at which the image acquisition is performed.
Generally, to increase the randomness of acquisition angles and prevent an attacker from defeating identification by pre-storing multi-angle images, at least one of the multiple acquisition angles may be randomly designated in a predetermined manner during the current acquisition, for example 45 degrees, 90 degrees, or 135 degrees. The random designation may generate a random number within a predetermined interval as the acquisition angle, or randomly pick one angle from a predetermined angle set as the current acquisition angle. Because local regions of the target to be identified change under different relative angles, the appearance position or pattern in the acquired image changes accordingly. In an alternative embodiment, the angular difference between two consecutive shots may be required to exceed a predetermined threshold (e.g., 30 degrees), which helps ensure that, in most cases, images with different appearance effects are obtained quickly (e.g., with only 1-2 angle switches).
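As an illustrative sketch of the random angle designation described above (the angle set, the 30-degree threshold, and the function name are assumptions for illustration, not the patent's implementation):

```python
import random

def next_capture_angle(previous_angle=None,
                       angle_set=(0, 45, 90, 135, 180, 225, 270, 315),
                       min_difference=30):
    """Randomly pick the next acquisition angle from a predetermined set,
    keeping the angular distance from the previous shot above a threshold."""
    if previous_angle is None:
        return random.choice(angle_set)
    candidates = []
    for a in angle_set:
        diff = abs(a - previous_angle) % 360
        diff = min(diff, 360 - diff)  # shortest angular distance, with wraparound
        if diff > min_difference:
            candidates.append(a)
    return random.choice(candidates)
```

A verifier would then prompt the user to hold the label at the returned angle before triggering the two captures (with and without light supplementation).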
On the other hand, a target to be identified that contains illumination-sensitive material may also present different colors or patterns under different lighting. Materials such as laser (holographic) foils, for example, show different colors and luster under illumination and non-illumination. Thus, for one acquisition angle, two images can be taken: one under natural light and one under non-natural light. Here, natural light means no light supplementation, while non-natural light means added illumination (such as fill light). That is, at each acquisition angle, one image with light-supplementing processing and one without can be acquired.
In summary, in step 301, for a target to be identified (such as the two-dimensional code in Fig. 4), images may be acquired at at least two angles, denoted the first angle and the second angle. The images acquired at the first angle are recorded as the first image and the second image, and those at the second angle as the third image and the fourth image. The images acquired with light-supplementing processing are assumed to be the first image and the third image, and the images acquired without light-supplementing processing the second image and the fourth image.
Then, in step 302, based on mark correction and validity detection performed with reference to the predetermined mark, the fifth, sixth, seventh, and eighth images, corresponding respectively to the corrected first, second, third, and fourth images when all detections pass, are obtained.
It will be appreciated that, to determine whether the target to be identified contains light-sensitive material, mark correction and validity detection may be performed on the acquired images. If the detection is valid, the mark-corrected images are used for further judgment; otherwise, images are re-acquired until valid ones are obtained.
It will be appreciated that mark correction may be based on a predetermined mark, i.e., a reference mark known or determined in advance, such as the two-dimensional code, logo, or other pattern (text can also be regarded as a pattern) and the physical dimensions fixed when the anti-counterfeit label in Fig. 4 was manufactured. Mark correction may include both mark detection and mark adjustment. Correspondingly, validity may include mark validity and angle validity. The correspondence is specifically described below.
On one hand, mark detection checks whether a mark meeting the processing conditions can be detected in the image. The processing conditions include, for example, the presence of a predetermined identifier (such as a two-dimensional code) and acceptable sharpness. Detection of the mark may be accomplished by a target-detection algorithm such as DBNet, which is not described in detail here. If the corresponding mark is detected, the mark may be determined to be valid.
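As a minimal illustrative stand-in for this mark-detection step (the embodiment mentions detectors such as DBNet; the sliding-window normalized cross-correlation below is a simplified assumption, not the patent's detector):

```python
import numpy as np

def find_mark(image, template, threshold=0.9):
    """Slide a template over the image and report the best normalized
    cross-correlation match; the mark counts as detected above a threshold."""
    img = np.asarray(image, dtype=float)
    tpl = np.asarray(template, dtype=float)
    th, tw = tpl.shape
    tpl_n = (tpl - tpl.mean()) / (tpl.std() + 1e-9)  # zero-mean, unit-variance
    best_score, best_pos = -1.0, None
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            win = img[y:y + th, x:x + tw]
            win_n = (win - win.mean()) / (win.std() + 1e-9)
            score = float(np.mean(win_n * tpl_n))  # NCC in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_score >= threshold, best_pos, best_score
```

The brute-force scan is quadratic in image size; a production detector would instead use a learned model or FFT-based correlation, but the pass/fail decision against a threshold is the same idea.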
On the other hand, mark adjustment may stretch and correct the acquired image according to the relative relationship between the acquisition angle and the reference angle, and between the mark's size in the image and its physical size in the predetermined mark, yielding an image aligned toward the reference image containing the predetermined mark. If the direction of the predetermined mark in the adjusted image is consistent with the predetermined mark, the acquisition angle is valid. The stretch correction may be implemented by conventional means such as image rotation and perspective transformation, or by rotation networks, stretching networks, and Spatial Transformer Networks (STN), which are not detailed here. The predetermined mark pattern in each corrected image may be aligned pixel by pixel with the reference image of the predetermined mark; in other words, the pixel ranges corresponding to the same mark on each image are identical, such as the region spanning rows 10-70 and columns 10-70 of the two-dimensional code.
Fig. 5 shows a specific flow of mark correction and validity detection. As shown in Fig. 5, during mark-validity detection, patterns of known appearance and size, such as Logo text or the two-dimensional-code image, may be detected in the captured image. If the corresponding mark is detected, the acquired image can be considered to have mark validity. Next, according to the relative relationship between the standard angle and the acquisition angle, the acquired image is adjusted toward the standard graph and the corresponding size. The standard angle may be a front-view angle used for image recognition, such as the angle of the two-dimensional code and Logo in the standard graph of Fig. 5. Since the standard angle is predetermined and the randomly designated angle is explicit, the relative relationship between them (describable by a relative angle) is easily determined, and each image is adjusted by this relative angle. Alternatively, the standard angle may be aligned with the frontal angle of the acquisition device and recorded as 0 degrees, in which case the relative angle and the acquisition angle sum to 360 degrees, i.e., they are aligned in opposite directions. The standard angle may be described by reference to a standard image, for example via the positioning blocks of the two-dimensional code or the text direction of the Logo in the image of Fig. 5. In this way, the acquired image is corrected by the relative angle using the stretch correction, and the corrected image's angle is compared with that of the standard graph. If they are consistent, the angle may be considered valid; otherwise, angle inconsistency is determined and a prompt to re-acquire the image is fed back.
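The rotate-and-scale portion of the stretch correction might be sketched on the detected corner points of the mark as follows (a simplified point-based stand-in for full image warping; the function name and arguments are assumptions for illustration):

```python
import numpy as np

def correct_mark_points(points, relative_angle_deg, detected_size, reference_size):
    """Rotate detected mark corner points by the relative angle between the
    acquisition angle and the reference angle, then scale them so the mark
    matches its size in the reference image."""
    theta = np.deg2rad(relative_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # 2-D rotation matrix
    scale = reference_size / detected_size
    pts = np.asarray(points, dtype=float)
    return (pts @ rot.T) * scale
```

Applying the same transform to every pixel (or using a full perspective warp when the label is tilted out of plane) yields the corrected image whose mark aligns with the reference.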
Here, considering the convenience of a person performing the shooting operation, angular consistency generally allows a certain error; angular differences within a predetermined range, for example within 10 degrees, may be considered consistent.
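The tolerance check on angular consistency could look like this minimal sketch (the 10-degree default follows the example above; the function name is an assumption):

```python
def angle_is_valid(corrected_angle_deg, standard_angle_deg, tolerance_deg=10):
    """Treat the corrected mark orientation as valid if it lies within a
    tolerance of the standard (reference) orientation, with wraparound."""
    diff = abs(corrected_angle_deg - standard_angle_deg) % 360
    diff = min(diff, 360 - diff)  # shortest angular distance
    return diff <= tolerance_deg
```

On failure, the flow above would feed back a prompt to re-capture the image rather than proceed to difference detection.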
For the first, second, third, and fourth images acquired in step 301, if detection is valid, the corrected images are denoted the fifth, sixth, seventh, and eighth images, respectively. Note that when the acquisition angle coincides with the standard angle (such as an image acquired in the direction of Fig. 4 (1)), the acquired image may be left as-is without stretch correction; for convenience of description, all are referred to as corrected images.
Next, in step 303, based on difference detection among the fifth image, the sixth image, the seventh image, and the eighth image, an identification result of whether the target to be identified contains the predetermined illumination-sensitive material is determined.
It will be appreciated that, for the corrected images, the properties of the illumination-sensitive material imply a difference between the light-supplemented image and the non-supplemented image at the same angle, as well as a difference between different angles for either the light-supplemented images or the non-supplemented images. Therefore, these differences can be detected, for example using a pre-trained recognition network, so as to determine whether the target to be identified contains the predetermined illumination-sensitive material and obtain a corresponding identification result. Specifically, for the images acquired at the first angle and the second angle, at least the difference between the fifth image and the sixth image, the difference between the seventh image and the eighth image, the difference between the fifth image and the seventh image, and the difference between the sixth image and the eighth image can be compared.
The detection of these differences can be designed and implemented via neural networks, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized cross-correlation (NCC), the gray-level co-occurrence matrix (GLCM), histogram similarity, and the like. Fig. 6 shows a specific example of detecting image differences and determining the material through a neural network. As shown in fig. 6, assume that the first angle corresponds to i-1, with the two adjusted images denoted Img_{i-1,1} (e.g., corresponding to the fifth image) and Img_{i-1,2} (e.g., corresponding to the sixth image), and that the second angle corresponds to i, with the two adjusted images denoted Img_{i,1} (e.g., corresponding to the seventh image) and Img_{i,2} (e.g., corresponding to the eighth image). When detecting differences between images and determining whether the target contains the illumination-sensitive material, image features may first be extracted from each adjusted image. The feature-extraction neural network may be U-Net, MoCo (Momentum Contrast), DINO (DETR with Improved deNoising anchOr boxes), and so forth. Through the feature-extraction network, image feature maps of consistent size can be obtained, such as Feat_{i-1,1}, Feat_{i-1,2}, Feat_{i,1}, Feat_{i,2}. These feature maps are concatenated along the channel dimension to form a fused feature map. Further, the fused feature map may be processed by an image processing network (e.g., a network such as ViT) to extract higher-order difference features, and the extracted difference features may be mapped to an identification result of whether the predetermined illumination-sensitive material is contained.
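Among the classical metrics just listed, peak signal-to-noise ratio is the simplest to illustrate. The sketch below scores the four comparisons named above (fifth vs. sixth, seventh vs. eighth, fifth vs. seventh, sixth vs. eighth) on toy grayscale images; the pixel data and variable names are illustrative only:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized grayscale images
    (given as flat lists of pixel values); higher means more similar."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# Corrected images keyed by (angle, lighting); toy 2x2 images as flat lists.
imgs = {
    ("i-1", "fill"):    [100, 120, 130, 140],   # fifth image
    ("i-1", "no_fill"): [ 90, 110, 125, 135],   # sixth image
    ("i",   "fill"):    [140, 120, 100,  90],   # seventh image
    ("i",   "no_fill"): [135, 118, 102,  95],   # eighth image
}

# The four comparisons from the text: same angle across lighting conditions,
# and same lighting condition across angles.
pairs = [
    (("i-1", "fill"),    ("i-1", "no_fill")),   # fifth   vs sixth
    (("i",   "fill"),    ("i",   "no_fill")),   # seventh vs eighth
    (("i-1", "fill"),    ("i",   "fill")),      # fifth   vs seventh
    (("i-1", "no_fill"), ("i",   "no_fill")),   # sixth   vs eighth
]

scores = {p: psnr(imgs[p[0]], imgs[p[1]]) for p in pairs}
# An illumination-sensitive material shows larger differences (lower PSNR)
# under lighting/angle changes than an ordinary print or screen photo would.
for (a, b), s in scores.items():
    print(a, "vs", b, "->", round(s, 2), "dB")
```

In practice the decision threshold on such scores would be calibrated on samples of the genuine material, which is where the learned network of fig. 6 offers more flexibility than a fixed metric.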
Optionally, considering that the alignment of the marks during the mark correction in step 302 may not be sufficiently accurate, the architecture shown in fig. 6 may further perform fine-tuning calibration on the respective image feature maps so that the mark features are further aligned at the feature-map level. This fine-tuning step may be implemented by the cross-attention network (Cross Attention) shown in fig. 6, by an STN (Spatial Transformer Network) module, or by an optical flow estimation network such as FlowNet or PWC-Net, which is not limited herein.
The neural network shown in fig. 6 may be trained via supervised training, with several illumination-sensitive targets as positive samples and other targets as negative samples, images sampled at various angles for each, and sample labels indicating whether the target is illumination-sensitive.
It should be noted that, in the identification process of the illumination-sensitive target shown in fig. 3, image acquisition and identification may be performed by the same device, for example by an intelligent terminal with an image-acquisition function such as a smartphone or a tablet computer, or by different devices, for example with image acquisition performed by a terminal device such as a smartphone, scanner, or camera, and the identification performed by a computer, a server, or the like.
To further clarify the technical concept of the present specification, a specific example in a concrete application scenario is shown via figs. 7 and 8. Fig. 7 shows the flow executed by the acquisition terminal, and fig. 8 shows the flow executed by the server; the embodiment shown in fig. 3 may correspond to the flow of fig. 8. In the specific example of figs. 7 and 8, image data are acquired at two angles: the frontal angle and a random angle N.
As shown in fig. 7, when identifying an illumination-sensitive target, the acquisition terminal may prompt the user to take a frontal (first-angle) photo or video frame, and automatically acquire two images, one with light supplementing (implemented by a flash) and one without. Here, the frontal photograph may also be called a 0-angle photograph, i.e., the forward direction of the acquisition device coincides with the forward direction of the target to be identified. The acquisition terminal then uploads the two acquired images to the server, which performs mark correction and validity detection on them. Next, according to the randomly generated current N value obtained from the server, the user is prompted to shoot an image at the angle N (second angle); after the user rotates the acquisition device or the target to be identified, two images, with and without light supplementing, are again automatically acquired. Likewise, the acquisition device uploads the two images at angle N to the server for mark correction and validity detection. If the detection passes, the server performs subsequent processing and feeds back the identification result of whether the predetermined illumination-sensitive material is contained. Otherwise, the server may return information about the failed images (e.g., incorrect angle or insufficient definition). Upon receiving such image-disqualification information, the acquisition terminal may prompt the user to re-shoot the corresponding image.
As shown in fig. 8, the server may generate a current random N value during the shooting process, send it to the acquisition terminal, and have the acquisition terminal acquire an image at angle N. Upon receiving the frontal image and the current-angle image fed back by the acquisition terminal, the server may perform mark correction and stretching correction on each image, and check whether the current angle is consistent with the angle N. If consistent, difference detection is performed on the corrected images and an identification result is fed back; otherwise, information of unsuccessful identification is fed back to the acquisition terminal so that it uploads a new N-angle image, or a new random N' value is generated so that the terminal uploads an N'-angle image, until images meeting the processing conditions at at least two angles are obtained. The server may then perform difference detection on the images meeting the processing conditions, thereby outputting the identification result of whether the illumination-sensitive material is contained.
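The server loop of fig. 8 (issue a random N, check the uploaded capture angle against it, and either proceed or request a re-shoot) can be sketched as a small state machine. All names, the tolerance, and the sampling range below are assumptions for illustration, not the patent's implementation:

```python
import random

class AngleVerifier:
    """Minimal sketch of the fig. 8 server loop: generate a random target
    angle N, verify the angle recovered from the uploaded image against it,
    and either accept, request a re-shoot, or issue a fresh N'."""

    def __init__(self, tolerance: float = 10.0, seed=None):
        self.tolerance = tolerance
        self.rng = random.Random(seed)
        self.current_n = None

    def issue_angle(self) -> float:
        # Random angle away from 0 (the frontal view is captured separately
        # in the first round of fig. 7).
        self.current_n = self.rng.uniform(1.0, 359.0)
        return self.current_n

    def verify(self, measured_angle: float) -> bool:
        """True if the measured capture angle matches the issued N within the
        predetermined tolerance; False means re-shoot (or issue a new N')."""
        diff = abs(measured_angle - self.current_n) % 360.0
        return min(diff, 360.0 - diff) <= self.tolerance

v = AngleVerifier(seed=42)
n = v.issue_angle()
print("issued N =", round(n, 1))
print("upload at N+3  ->", v.verify(n + 3.0))    # within tolerance: proceed
print("upload at N+40 ->", v.verify(n + 40.0))   # reject: re-shoot / new N'
```

Issuing a fresh random N' on failure, rather than reusing N, is what makes replaying a previously captured image ineffective as a spoof.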
It will be appreciated that, in the case where the acquisition terminal is an intelligent terminal with a certain computing power, the execution flow shown in fig. 8 above may also be executed by the acquisition terminal.
Reviewing the above, in the process of identifying whether a target to be identified contains an illumination-sensitive material, images may be acquired from multiple angles, at least one of which is a randomly generated angle; at each acquisition angle, images are acquired both under natural light (without light supplementing) and with light supplementing. The illumination-sensitive material can thus be identified based on the pattern differences it displays under changes of lighting and changes of angle, and the random generation and verification of angles improves the validity of the identification. When applied to fields such as auxiliary anti-counterfeiting identification, a target to be identified that is not made of the real illumination-sensitive material (such as an ordinary print or a phone photo of one) can be effectively recognized, assisting the anti-counterfeiting effect.
According to an embodiment of another aspect, an apparatus for identifying an illumination-sensitive target is also provided. The apparatus can be deployed on a computer, device, or server having certain computing capability. Fig. 9 illustrates an identification apparatus 900 for an illumination-sensitive target according to one embodiment. As shown in fig. 9, the apparatus 900 includes:
An acquiring unit 901 configured to acquire a first image and a second image acquired through a first angle with respect to a target to be identified, and a third image and a fourth image acquired through a second angle, wherein the first image and the third image are images acquired through light supplementing processing, the second image and the fourth image are images acquired without light supplementing processing, and at least one of the first angle and the second angle is an angle randomly designated in a predetermined manner;
A correction unit 902 configured to obtain a fifth image, a sixth image, a seventh image, and an eighth image by respectively correcting the first image, the second image, the third image, and the fourth image, in the case where all detections pass, based on mark correction and validity detection performed with a predetermined mark as reference, the validity including mark validity and angle validity;
The identifying unit 903 is configured to obtain an identifying result of whether the object to be identified contains a predetermined illumination sensitive material based on the difference detection between the fifth image, the sixth image, the seventh image, and the eighth image.
It should be noted that the apparatus 900 shown in fig. 9 corresponds to the method described with reference to fig. 3, and the corresponding descriptions in the method embodiment of fig. 3 also apply to the apparatus 900 and are not repeated here.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 3 and the like.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory having executable code stored therein and a processor that, when executing the executable code, implements the method described in connection with fig. 3 and the like.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The specific embodiments above further describe the technical concept of the present disclosure in detail. It should be understood that the foregoing is only a set of specific embodiments of that technical concept and is not intended to limit its scope; any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the embodiments of the present disclosure shall be included within the scope of the technical concept of the present disclosure.

Claims (13)

1. A method of identifying an illumination-sensitive target, comprising:
acquiring a first image and a second image acquired at a first angle for a target to be identified, and a third image and a fourth image acquired at a second angle, wherein the first image and the third image are images acquired with light-supplementing processing, the second image and the fourth image are images acquired without light-supplementing processing, and at least one of the first angle and the second angle is an angle randomly designated in a predetermined manner;
obtaining a fifth image, a sixth image, a seventh image, and an eighth image by respectively correcting the first image, the second image, the third image, and the fourth image, in the case where all detections pass, based on mark correction and validity detection performed with a predetermined mark as reference, the validity including mark validity and angle validity; and
determining, based on difference detection among the fifth image, the sixth image, the seventh image, and the eighth image, an identification result of whether the target to be identified contains a predetermined illumination-sensitive material.
2. The method of claim 1, wherein the first and second angles are acquisition angles between a forward direction of the object to be identified and a forward direction of the acquisition device.
3. The method of claim 1, wherein the predetermined mark comprises a predetermined pattern in a reference image and its physical dimensions; the marker correction includes:
performing stretching correction on the acquired image according to the relative relation between the acquisition angle and a reference angle, and the relative relation between the size of the mark in the image and the physical size in the predetermined mark.
4. The method of claim 3, wherein the relative relation is described by a relative angle between the acquisition angle and the reference angle; the stretching correction comprises:
rotating the image according to the relative angle; and
stretching the predetermined mark in the image to conform to the size of the predetermined mark in the reference image.
5. The method of claim 4, wherein the angle validity is determined by whether an error between the view angle of the predetermined mark obtained by rotating the image by the relative angle and the view angle of the predetermined mark in the reference image is within a predetermined range;
wherein the angle is determined to be valid in the case where the error is within the predetermined range;
otherwise, in the case where the error exceeds the predetermined range, the angle is determined to be invalid, and prompt information for re-shooting is fed back.
6. The method of claim 1, wherein the determining whether the object to be identified contains the identification result of the predetermined illumination-sensitive material based on the difference detection between the fifth image, the sixth image, the seventh image, and the eighth image comprises:
detecting a difference between the fifth image and the sixth image, a difference between the seventh image and the eighth image, a difference between the fifth image and the seventh image, a difference between the sixth image and the eighth image;
and judging whether the target to be identified contains a preset illumination sensitive material or not based on the differences.
7. The method of claim 6, wherein the detection of each of the differences is implemented by one of: peak signal-to-noise ratio, structural similarity, normalized cross-correlation, gray-level co-occurrence matrix, and histogram similarity.
8. The method of claim 1, wherein the determining whether the object to be identified contains the identification result of the predetermined illumination-sensitive material based on the difference detection between the fifth image, the sixth image, the seventh image, and the eighth image comprises:
Extracting image features of the fifth image, the sixth image, the seventh image and the eighth image through a feature extraction network to obtain corresponding feature images;
Extracting difference features based on fusion feature graphs obtained by splicing the feature graphs;
And determining whether the target to be identified contains an identification result of the predetermined illumination sensitive material according to the difference characteristics.
9. The method of claim 8, wherein the extracting difference features based on the fused feature map obtained by concatenating the feature maps comprises:
fine-tuning the feature maps by a predetermined alignment network such that the predetermined marks between the following groups of images are aligned at the pixel level: the fifth and sixth images, the seventh and eighth images, the fifth and seventh images, and the sixth and eighth images.
10. The method of claim 9, wherein the predetermined alignment network is one of: a cross-attention network, an optical flow estimation network, and a spatial transformation network.
11. An apparatus for identifying an illumination-sensitive object, comprising:
An acquisition unit configured to acquire a first image and a second image acquired through a first angle, and a third image and a fourth image acquired through a second angle, for an object to be identified, wherein the first image and the third image are images acquired through light supplementing processing, the second image and the fourth image are images acquired without light supplementing processing, and at least one of the first angle and the second angle is an angle randomly specified in a predetermined manner;
A correction unit configured to obtain a fifth image, a sixth image, a seventh image, and an eighth image by respectively correcting the first image, the second image, the third image, and the fourth image, in the case where all detections pass, based on mark correction and validity detection performed with a predetermined mark as reference, the validity including mark validity and angle validity; and
And the identification unit is configured to determine whether the target to be identified contains an identification result of a preset illumination sensitive material or not based on difference detection among the fifth image, the sixth image, the seventh image and the eighth image.
12. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-10.
13. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-10.
CN202410405863.0A 2024-04-03 Method and device for identifying illumination sensitive target Pending CN118154831A (en)

Publications (1)

Publication Number Publication Date
CN118154831A true CN118154831A (en) 2024-06-07

