CN116911872A - Article verification method, detection method for screen capture, and label printed with a graphic code

Info

Publication number: CN116911872A
Application number: CN202310725250.0A
Authority: CN (China)
Legal status: Pending
Prior art keywords: video, terminal, light, coding sequence, code
Other languages: Chinese (zh)
Inventors: 钱烽, 罗涛, 许诗起, 陈琦, 冯志远
Current assignee: Ant Blockchain Technology Shanghai Co Ltd
Original assignee: Ant Blockchain Technology Shanghai Co Ltd
Application filed by Ant Blockchain Technology Shanghai Co Ltd; priority to CN202310725250.0A

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F - DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F 3/00 - Labels, tag tickets, or similar identification or indication means; Seals; Postage or like stamps
    • G09F 3/02 - Forms or constructions
    • G09F 3/0297 - Forms or constructions including a machine-readable marking, e.g. a bar code
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 19/00 - Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K 19/06009 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code, with optically detectable marking
    • G06K 19/06037 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code, with optically detectable marking, multi-dimensional coding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/018 - Certifying business or products
    • G06Q 30/0185 - Product, service or business identity fraud
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content

Abstract

One or more embodiments of the present disclosure provide an article verification method, a detection method for screen capture, and a label printed with a graphic code. The article verification method includes: in response to a verification request for an article initiated by a terminal, randomly generating a first coding sequence and sending it to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing; the article carries an optically variable region, and the optically variable region produces an optically variable reaction when irradiated by the flashing light of the light-emitting component; receiving the video containing the article captured by the terminal; and verifying whether the video was captured from a screen and verifying the authenticity of the article contained in the video. By checking in real time whether the object filmed by the terminal produces an optically variable reaction under the flashing light, capture from a screen can be prevented during article verification.

Description

Article verification method, detection method for screen capture, and label printed with a graphic code
Technical Field
One or more embodiments of the present disclosure relate to the field of anti-counterfeiting technologies, and in particular to an article verification method, an article verification apparatus, an electronic device, a machine-readable storage medium, a detection method for screen capture, and a label printed with a graphic code.
Background
An anti-counterfeiting label, also called an anti-counterfeiting mark or anti-counterfeiting sticker, is a mark with an anti-counterfeiting effect that can be adhered, printed, or transferred onto the surface of a target object, its packaging, or an accessory of the target object. The two-dimensional code label is a commonly used anti-counterfeiting label. Because it is inexpensive, easy to attach, and easy to inspect, it is now widely applied in commodity anti-counterfeiting and commodity tracing. In addition, two-dimensional code labels play a key role in application scenarios such as preventing cross-region channeling of goods and scanning a code to receive a red-envelope reward.
In the anti-channeling scenario, a brand encourages distributors to scan the two-dimensional code labels attached to goods after receiving them, because the two-dimensional code printed on the label implies the originally intended distribution region of the goods, which helps deter channeling. However, a dishonest distributor who moves goods across regions may scan a photograph of the two-dimensional code label displayed on a screen instead of the physical label.
In the scan-for-red-envelope scenario, a consumer scans the two-dimensional code label revealed after opening a bottle cap or scratching off a coating to obtain a "red envelope" reward such as points, a prize, or cash. An attacker, however, may obtain the two-dimensional code printed on the label through illegal channels, regenerate it, and display it on the screen of an electronic device for scanning, so as to harvest rewards in bulk.
A technical solution that can effectively prevent screen-based scanning of two-dimensional code labels is therefore needed.
Disclosure of Invention
The present application provides an article verification method, including:
in response to a verification request for an article initiated by a terminal, randomly generating a first coding sequence and sending the first coding sequence to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing; the article carries an optically variable region, and the optically variable region produces an optically variable reaction when irradiated by the flashing light of the light-emitting component;
receiving the video containing the article captured by the terminal;
verifying whether the video was captured from a screen, and verifying the authenticity of the article contained in the video;
wherein verifying whether the video was captured from a screen includes:
identifying the optically variable result of the video, and coding the identified optically variable result to obtain a second coding sequence;
verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen.
The present application also provides an article verification apparatus, including:
a random generation unit, configured to randomly generate a first coding sequence in response to a verification request for an article initiated by a terminal;
a sending unit, configured to send the first coding sequence to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing; the article carries an optically variable region, and the optically variable region produces an optically variable reaction when irradiated by the flashing light of the light-emitting component;
a receiving unit, configured to receive the video containing the article captured by the terminal;
a verification unit, configured to verify whether the video was captured from a screen and to verify the authenticity of the article contained in the video;
wherein verifying whether the video was captured from a screen includes:
identifying the optically variable result of the video, and coding the identified optically variable result to obtain a second coding sequence;
verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen.
The present application also provides a detection method for screen capture, including:
in response to a verification request for an article initiated by a terminal, randomly generating a first coding sequence and sending the first coding sequence to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing; the article carries an optically variable region, and the optically variable region produces an optically variable reaction when irradiated by the flashing light of the light-emitting component;
receiving the video containing the article captured by the terminal;
identifying the optically variable result of the video, and coding the identified optically variable result to obtain a second coding sequence;
verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen.
The present application also provides an electronic device, including a communication interface, a processor, a memory, and a bus, where the communication interface, the processor, and the memory are connected to one another through the bus;
the memory stores machine-readable instructions, and the processor executes the above article verification method or the above detection method for screen capture by invoking the machine-readable instructions.
The present application also provides a machine-readable storage medium storing machine-readable instructions that, when invoked and executed by a processor, implement the above article verification method or the above detection method for screen capture.
The present application also provides a label printed with a graphic code. An optically variable region that produces an optically variable reaction under irradiation of an external light source is also printed on the label; the optically variable region is used to verify whether the terminal is capturing the graphic code printed on the label from a screen.
The terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in a first coding sequence issued by the server, and captures video of the graphic code during the flashing, so that the server verifies, according to the similarity between the first coding sequence and a second coding sequence, whether the video containing the object captured by the terminal is a video not captured from a screen.
Through the above embodiments, on the one hand, by adding to the article an optically variable region that produces an optically variable reaction under irradiation of an external light source, it can be verified, in response to a verification request for the article initiated by a terminal, whether the object filmed by the terminal produces an optically variable reaction under irradiation of the external light source. If the reaction occurs, the video captured by the terminal was captured from a real object rather than from a screen, so capture from a screen can be prevented during article verification.
On the other hand, the server can randomly generate a first coding sequence and send it to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the randomly generated first coding sequence and captures video of the article during the flashing. Compared with an implementation that merely checks whether a single uploaded image was captured under illumination, verifying whether the object filmed by the terminal produces an optically variable reaction in real time under the flashing light prevents an attacker from photographing the object once under illumination and once without illumination and bypassing the anti-screen-capture check by switching between the two photographs, thereby further improving the effect of preventing screen capture of video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some of the embodiments described in the present disclosure, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a label generation process shown in an exemplary embodiment;
FIG. 2 is a flow chart of an article verification method shown in an exemplary embodiment;
FIG. 3 is a schematic diagram of another label generation process shown in an exemplary embodiment;
FIG. 4 is a schematic diagram of another label generation process shown in an exemplary embodiment;
FIG. 5 is a schematic diagram of another label generation process shown in an exemplary embodiment;
FIG. 6 is a hardware structure diagram of an electronic device in which an article verification apparatus is located, shown in an exemplary embodiment;
FIG. 7 is a block diagram of an article verification apparatus shown in an exemplary embodiment;
FIG. 8 is a flow chart of a detection method for screen capture.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification are described clearly and completely below with reference to the drawings in those embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present specification. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort shall fall within the scope of protection of the present disclosure.
It should be noted that in other embodiments the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, a single step described in this specification may, in other embodiments, be split into multiple steps, and multiple steps described in this specification may, in other embodiments, be combined into a single step.
An anti-counterfeiting label, also called an anti-counterfeiting mark or anti-counterfeiting sticker, is a mark with an anti-counterfeiting effect that can be adhered, printed, or transferred onto the surface of a target object, its packaging, or an accessory of the target object.
The two-dimensional code label is a commonly used anti-counterfeiting label. Additional information, such as special characters or patterns visible to the naked eye, or hidden content invisible to the naked eye, can be embedded into the two-dimensional code label in advance. The label can then be bound to a commodity with an easy-tear adhesive technique such as a self-adhesive sticker. A user can subsequently scan the two-dimensional code label attached to the commodity with a terminal such as a mobile phone to check whether valid additional information is embedded in it, thereby identifying the authenticity of the label and, in turn, judging the authenticity of the commodity to which the label is attached.
Because the two-dimensional code label is inexpensive, easy to attach, and easy to inspect, it is now widely applied in commodity anti-counterfeiting and commodity tracing. In addition, two-dimensional code labels play a key role in application scenarios such as preventing cross-region channeling of goods and scanning a code to receive a red-envelope reward.
In the anti-channeling scenario, a brand encourages distributors to scan the two-dimensional code labels attached to goods after receiving them, because the two-dimensional code printed on the label implies the originally intended distribution region of the goods, which helps deter channeling. However, a dishonest distributor who moves goods across regions may scan a photograph of the two-dimensional code label displayed on a screen instead of the physical label.
For example, suppose a brand supplies goods to distributor A and distributor B, where distributor A is responsible for selling the goods in area A and distributor B is responsible for selling them in area B. Normally, the brand attaches different two-dimensional code labels to the goods provided to distributor A for sale in area A and to the goods provided to distributor B for sale in area B. After distributor A and distributor B scan the labels attached to the goods, it can be determined whether the label actually scanned by distributor A corresponds to goods originally intended for sale in area A, and whether the label actually scanned by distributor B corresponds to goods originally intended for sale in area B. If the label actually scanned by distributor A corresponds to goods originally intended for sale in area B, it can be determined that distributor B privately provided to distributor A goods that the brand originally supplied for sale in area B, that is, channeling has occurred between distributor A and distributor B. To avoid the channeling being detected, distributor B may photograph the two-dimensional code labels after privately providing distributor A with the goods originally intended for sale in area B, and may then scan those photographs displayed on the screen of an electronic device such as a mobile phone or a computer.
In the scan-for-red-envelope scenario, a consumer scans the two-dimensional code label revealed after opening a bottle cap or scratching off a coating to obtain a "red envelope" reward such as points, a prize, or cash. An attacker, however, may obtain the two-dimensional code printed on the label through illegal channels, regenerate it, and display it on the screen of an electronic device for scanning, so as to harvest rewards in bulk.
To prevent an attacker from scanning a two-dimensional code label from a screen, one embodiment in the related art determines whether the label is being scanned from a screen by detecting whether the image to be verified, captured when the terminal scans the code, contains moire. Moire consists of high-frequency interference fringes produced by the photosensitive element of devices such as digital cameras, scanners, and camera-equipped smartphones, and appears in the picture as irregular colored stripes.
However, in the above related-art embodiment, as display materials and manufacturing processes keep improving, the moire features in images captured during code scanning become less and less obvious, so the effect of preventing screen-based scanning becomes weaker and weaker. A technical solution that can effectively prevent screen-based scanning of two-dimensional code labels is therefore needed.
In the present specification, the object to be scanned and identified by the terminal may be a two-dimensional code printed on a two-dimensional code label or may be another article. Moreover, similar technical problems exist when the terminal scans and identifies other objects.
For example, in a commodity promotion scenario, a user may collect commodities of a specified style, take a photo or video of the collected commodities, and upload it as a credential to the server of the merchant system for verification, so that the server distributes the corresponding reward to the user after the photo or video passes verification. An attacker may re-shoot photos or videos provided by other users, or directly copy them, and use the re-shot or copied material to fraudulently obtain the rewards the merchant system prepared for users, without actually collecting the commodities.
In view of this, the present disclosure proposes a technical solution that adds an optically variable region to an article and verifies in real time whether the object filmed by the terminal produces an optically variable reaction, thereby verifying whether the video was captured from a screen.
In the present specification, the optically variable region refers to a region that produces an optically variable reaction under irradiation of an external light source; the optically variable reaction may specifically include, but is not limited to, a change in color, brightness, or texture.
In some possible embodiments, the optically variable region is printed with optically variable ink. Optically variable ink is an ink material that produces an optically variable reaction under irradiation of an external light source; a print made with optically variable ink shows a clear difference in color, brightness, and the like between the illuminated and non-illuminated states.
For example, an optically variable ink with a white-to-red effect may be used to print the optically variable region on the label carrier. Without irradiation by an external light source, the region appears white to the naked eye; when the region is irradiated at close range by an external light source such as a flash lamp or a flashlight, it appears red, and its brightness in the captured picture increases significantly.
It should be noted that the white-to-red optically variable ink is only an exemplary embodiment and does not limit the present specification; in practical applications, optically variable inks with other color-change effects, or several inks with different color-change effects at the same time, may be used.
In other possible embodiments, the optically variable region is produced with a holographic laser printing technique. Holographic laser printing, also called holographic printing, records the amplitude and phase of the light waves scattered by an object in the form of interference fringes by means of laser interference. The holographic laser printing technique may specifically include various types of holographic lenses.
For example, a holographic lens with a bright-stripe effect may be printed on the label carrier as the optically variable region; when the region is irradiated at close range by an external light source such as a flash lamp or a flashlight, the brightness of one wider stripe in the region is significantly higher than that of the rest, compared with the case without external illumination. The bright-stripe holographic lens is only an exemplary embodiment and does not limit the present specification; in practical applications, holographic lenses with other effects, or other holographic laser printing techniques, may be used, which are not listed exhaustively here.
In this specification, in order to prevent capture from a screen, the optically variable region may be added to the article when it is produced. In some possible embodiments, the article may specifically include, but is not limited to, a graphic code, a preset graphic, an article with a preset shape, and the like. For ease of description, the technical solution in this specification is described below by taking as an example an article that is a graphic code printed on a label, which does not constitute a specific limitation of the present specification.
When a label is generated, the code value of the graphic code is obtained, and an initial image containing the graphic code is generated based on that code value; the initial image and the optically variable region may then be printed onto a label carrier to produce a label printed with the graphic code. Accordingly, when the label is verified, whether the object scanned by the terminal is a label carrier printed with the graphic code can be determined by detecting whether the optically variable region appears in the image to be verified captured by the terminal, that is, by verifying whether the object scanned by the terminal produces an optically variable reaction under irradiation of an external light source.
For example, referring to FIG. 1, FIG. 1 is a schematic diagram of a label generation process shown in an exemplary embodiment. As shown in FIG. 1, the graphic code may specifically be a two-dimensional code; the optically variable region 12 and the initial image 13 containing the two-dimensional code may be printed in sequence on the label carrier 11 to produce the two-dimensional code label 10. In FIG. 1, the gray area only illustrates the shape and printing position of the optically variable region 12 and does not represent its visual appearance or imaging result in the illuminated or non-illuminated state.
The present specification does not specifically limit the shape, size, printing position, or specific type of the optically variable region. Nor is the relative position between the optically variable region and the graphic code specifically limited, provided that when a user scans the graphic code printed on the label carrier with a terminal, the terminal can simultaneously capture images of the optically variable region printed on the same label carrier, and the optically variable region does not interfere with normal recognition of the graphic code.
Further, in order to improve the anti-screen-capture effect, it can be verified whether the object filmed by the terminal produces an optically variable reaction in real time under irradiation of an external light source, which prevents an attacker from taking one photo of the object under illumination and another without illumination and bypassing the anti-screen-capture check by switching between the two photos.
In implementation, a first coding sequence may be randomly generated in response to a verification request for an article initiated by a terminal and sent to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing; the article carries an optically variable region, and the optically variable region produces an optically variable reaction when irradiated by the flashing light of the light-emitting component. The server can then receive the video containing the article captured by the terminal, verify whether the video was captured from a screen, and verify the authenticity of the article contained in the video.
Verifying whether the video was captured from a screen may specifically include: identifying the optically variable result of the video and coding the identified optically variable result to obtain a second coding sequence; and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen.
Therefore, in the technical solution of this specification, on the one hand, by adding to the article an optically variable region that produces an optically variable reaction under irradiation of an external light source, it can be verified, in response to a verification request for the article initiated by the terminal, whether the object filmed by the terminal produces an optically variable reaction under irradiation of the external light source; if the reaction occurs, the video captured by the terminal was captured from a real object rather than from a screen, so capture from a screen can be prevented during article verification.
On the other hand, the server can randomly generate a first coding sequence and send it to the terminal, so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the randomly generated first coding sequence and captures video of the article during the flashing. Compared with an implementation that merely checks whether a single uploaded image was captured under illumination, verifying whether the object filmed by the terminal produces an optically variable reaction in real time under the flashing light of the terminal's light-emitting component prevents an attacker from photographing the object once under illumination and once without illumination and bypassing the anti-screen-capture check by switching between the two photos, thereby further improving the effect of preventing screen capture of video.
The present application is described below by way of specific embodiments and in connection with specific application scenarios.
Referring to FIG. 2, FIG. 2 is a flow chart of an article verification method shown in an exemplary embodiment.
In this specification, the article verification method may be applied on the server side. Specifically, the method may be executed by an article verification program running on a server; the program may provide users with a service that verifies whether a video was captured from a screen and a service that verifies the authenticity of the article contained in the video.
The article verification method shown in FIG. 2 may include the following steps:
step 202: a first coding sequence is randomly generated in response to a verification request for an article initiated by a terminal, and the first coding sequence is sent to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to a flashing rule indicated by codes contained in the first coding sequence, and video acquisition is carried out on the article in the flashing process; the article has a light-altering region that produces a light-altering reaction upon illumination of the light assembly with a blinking light.
For example, a user may scan the two-dimensional code tag 10 shown in fig. 1 through a terminal to identify a code value of a two-dimensional code included in an initial image 13 printed on the two-dimensional code tag 10, and may initiate a verification request for the two-dimensional code included in the initial image 13 to a server through the terminal, where the verification request may carry the code value of the two-dimensional code identified by the terminal; after receiving the verification request sent by the terminal, the server can determine that the terminal initiates the verification request aiming at the two-dimensional code contained in the initial image 13 according to the code value of the two-dimensional code carried in the verification request; in response to the verification request, the terminal can randomly generate a first coding sequence { on, off, on, on, … }, wherein the coding on in the first coding sequence can be used for indicating the terminal to control the light-emitting component carried by the terminal to emit light, and the coding off in the first coding sequence can be used for indicating the terminal to control the light-emitting component carried by the terminal to emit no light; further, after receiving the first code sequence { on, off, on, … }, the terminal may perform OOK modulation on the first code sequence { on, off, on, … }, and may control the light emitting component mounted thereon to flash according to the first code sequence after OOK modulation, and may also capture video of the two-dimensional code included in the initial image 13 during the flashing process.
The OOK modulation, which may also be referred to as binary amplitude keying (2 ASK) modulation, is to control the on/off of the sinusoidal carrier by using a unipolar non-return-to-zero code sequence, and for specific modulation and demodulation, please refer to the related art, which is not described in the present specification.
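As a non-limiting illustration of how the terminal side of step 202 could be realized, the sketch below (Python) drives a torch according to the received first coding sequence while recording video. The hooks set_torch, start_video_recording, and stop_video_recording, and the per-symbol duration, are assumptions introduced here for illustration; a real app would call the platform's camera and torch APIs.

    import time

    # Hypothetical platform hooks; a real app would call the OS camera/torch APIs.
    def set_torch(on: bool): ...
    def start_video_recording(): ...
    def stop_video_recording(): ...

    def flash_and_record(first_sequence, symbol_ms=200):
        """Drive the flash per the first coding sequence while recording video."""
        start_video_recording()
        try:
            for symbol in first_sequence:
                # OOK-style on/off keying of the torch, one symbol at a time.
                set_torch(symbol in (1, "on"))
                time.sleep(symbol_ms / 1000.0)
        finally:
            set_torch(False)
            stop_video_recording()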
In step 202, the terminal may specifically include, but is not limited to, a smartphone, a tablet computer, or another electronic device that supports image and video capture and carries a light-emitting component. The light-emitting component may specifically include, but is not limited to, the flash lamp in the camera module, the light source corresponding to the flashlight function provided by the terminal, and the like.
In one illustrated embodiment, the graphic code may be a QR Code (Quick Response Code). It should be noted that in step 202 the graphic code is not limited to a QR Code and may be another two-dimensional code such as a Data Matrix or Han Xin Code; moreover, the graphic code need not be a two-dimensional code at all and may be another kind of graphic code, such as a one-dimensional bar code, which are not listed exhaustively here.
In one illustrated embodiment, in order to facilitate data transmission, OOK modulation by the terminal, and the subsequent comparison of coding-sequence similarity, the first coding sequence may be a binary coding sequence. In this case, the codes contained in the first coding sequence may include a first value that controls the terminal's light-emitting component to emit light and a second value that controls it not to emit light.
For example, the first value may be 1 and the second value may be 0; in response to the verification request initiated by the terminal for the two-dimensional code contained in the initial image 13, the server may randomly generate a first coding sequence {1, 0, 1, 1, ...}. This is only one exemplary implementation; in other possible embodiments, the first value may be set to 0 and the second value to 1.
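The server-side random generation described above can be sketched as follows; the sequence length and the use of Python's secrets module are illustrative assumptions, not requirements of this specification.

    import secrets

    def generate_first_sequence(length=16):
        """Randomly generate a binary first coding sequence.

        1 = the terminal should switch its light-emitting component on for
        that symbol, 0 = off, following the value mapping described above.
        """
        return [secrets.randbelow(2) for _ in range(length)]

    first_sequence = generate_first_sequence()   # e.g. [1, 0, 1, 1, ...]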
In an illustrated embodiment, in order to further improve the anti-screen-capture effect and to prevent the first coding sequence sent by the server from being intercepted and cracked by an attacker, the randomly generated first coding sequence may be encrypted before it is sent to the terminal.
In this case, sending the first coding sequence to the terminal so that the terminal controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the first coding sequence and captures video of the article during the flashing may specifically include: encrypting the randomly generated first coding sequence and sending the encrypted first coding sequence to the terminal, so that the terminal decrypts it, controls its on-board light-emitting component to flash according to the flashing rule indicated by the codes contained in the decrypted first coding sequence, and captures video of the article during the flashing.
The specific manner of encrypting the first coding sequence is not particularly limited in this specification.
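Since the specification leaves the encryption scheme open, the following sketch uses a symmetric scheme (Fernet from the Python cryptography package) purely as an example; the pre-shared key and its provisioning are assumptions made for illustration.

    import json
    from cryptography.fernet import Fernet

    # Assumption: the server and the verification app share this key through
    # some secure provisioning channel; key management is outside this sketch.
    shared_key = Fernet.generate_key()

    def encrypt_sequence(first_sequence, key=shared_key):
        """Server side: serialize and encrypt the first coding sequence."""
        return Fernet(key).encrypt(json.dumps(first_sequence).encode("utf-8"))

    def decrypt_sequence(token, key=shared_key):
        """Terminal side: recover the first coding sequence before flashing."""
        return json.loads(Fernet(key).decrypt(token).decode("utf-8"))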
Step 204: receive the video containing the article captured by the terminal.
Step 206: verify whether the video was captured from a screen, and verify the authenticity of the article contained in the video.
In step 206, verifying whether the video was captured from a screen may specifically include: identifying the optically variable result of the video and coding the identified optically variable result to obtain a second coding sequence; and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen. Identifying the optically variable result of the video means identifying whether each video frame contained in the video was captured in the illuminated state, and the codes in the second coding sequence indicate whether the corresponding video frame was captured in the illuminated state.
For example, after receiving the video that the terminal captured of the two-dimensional code contained in the initial image 13 during the flashing, the server may extract each video frame contained in the video and identify whether each frame was captured in the illuminated state. The server may then code the identified optically variable result to obtain a second coding sequence, in which the code "light" indicates that the corresponding frame was captured in the illuminated state and the code "dark" indicates that it was captured in the non-illuminated state. Assuming the randomly generated first coding sequence is {on, off, on, on, off} and the second coding sequence is {light, dark, light, light, dark}, the two sequences are considered consistent, and it can be determined that the video captured by the terminal was not captured from a screen. In addition, the authenticity verification of the two-dimensional code contained in the initial image 13 may be performed at the same time as, before, or after the verification of whether the video was captured from a screen.
For another example, assuming the randomly generated first coding sequence is {on, off, on, on, off} and the second coding sequence is {light, dark, light, dark}, it can be determined that the video captured by the terminal was captured from a screen. In this case, the two-dimensional code that the terminal requested to verify may be a two-dimensional code contained in an image shown on a display, which may be a photograph or video frame obtained by shooting the label carrier 14, or an electronic initial image 13' regenerated by an attacker based on the code value of the two-dimensional code, which cannot be listed exhaustively here.
In the embodiment shown above, the illuminated state is the state opposite to the non-illuminated state. The non-illuminated state does not mean that no light source at all is present (for example, ambient light may exist, but its intensity is insufficient to cause a clear optically variable reaction in the optically variable region); it means that the terminal's light-emitting component is not irradiating the region as an external light source.
In one illustrated embodiment, in order to facilitate the subsequent comparison of the similarity between the first coding sequence and the second coding sequence, the second coding sequence may also be a binary coding sequence. In this case, the codes contained in the second coding sequence may include a first value indicating that the corresponding video frame was captured in the illuminated state and a second value indicating that it was captured in the non-illuminated state.
For example, the first value may be 1, indicating that the corresponding video frame was captured in the illuminated state, and the second value may be 0, indicating that it was captured in the non-illuminated state.
In step 206, the specific way of identifying whether each video frame contained in the video captured by the terminal was captured in the illuminated state is not particularly limited in this specification. To help those skilled in the art better understand the technical solutions in this specification, two recognition approaches are shown here by way of example.
In the first recognition approach, whether each video frame was captured in the illuminated state can be judged directly by a pre-trained neural network. The training samples of the neural network may include image samples obtained by capturing images of the label in the illuminated and non-illuminated states respectively, each labeled with an illuminated or non-illuminated sample tag.
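A minimal sketch of the first recognition approach is given below, assuming a small PyTorch classifier; the network structure, input size, and training procedure are illustrative choices and are not prescribed by this specification.

    import torch
    import torch.nn as nn

    class FrameIlluminationClassifier(nn.Module):
        """Tiny CNN that labels a video frame as illuminated (1) or not (0)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)   # two classes: non-illuminated / illuminated

        def forward(self, x):              # x: (N, 3, H, W) float tensor
            return self.head(self.features(x).flatten(1))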
The second recognition approach computes the average brightness of each video frame and the brightness difference between adjacent frames, dynamically determines a classification threshold for the second coding sequence, and classifies each frame, based on that threshold, as either captured in the illuminated state or captured in the non-illuminated state.
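A minimal sketch of the second recognition approach is given below; reading frames with OpenCV and taking the midpoint of the observed brightness range as the dynamically determined classification threshold are illustrative assumptions.

    import cv2
    import numpy as np

    def second_sequence_from_video(video_path):
        """Classify each frame as illuminated (1) or not (0) by mean brightness."""
        cap = cv2.VideoCapture(video_path)
        brightness = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            brightness.append(float(np.mean(gray)))
        cap.release()
        if not brightness:
            return []
        # Midpoint of the observed range as a dynamically chosen split point;
        # the specification does not fix the thresholding rule.
        threshold = (max(brightness) + min(brightness)) / 2.0
        return [1 if b >= threshold else 0 for b in brightness]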
In one embodiment, verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a video not captured from a screen may specifically include: calculating the sequence similarity between the first coding sequence and the second coding sequence; if the sequence similarity reaches a first threshold, determining that the video was not captured from a screen; and if it does not reach the first threshold, determining that the video was captured from a screen.
The specific value of the first threshold is not particularly limited in this specification and can be set flexibly by those skilled in the art according to the required security level, risk level, and so on; the larger the first threshold, the stronger the protection against screen-based scanning.
The specific way of calculating the sequence similarity between the first coding sequence and the second coding sequence is likewise not particularly limited. For example, the similarity may be determined by a neural network such as a Transformer or LSTM, which outputs a probability that the two sequences are consistent. For another example, the similarity may be measured by a conventional algorithm such as DTW (Dynamic Time Warping) or Euclidean distance, where the DTW algorithm can align two sequences of different lengths but similar content.
For example, assuming the randomly generated first coding sequence is {on, off, on, on, off} and the second coding sequence is {light, light, dark, dark, light, light, light, light, light, light, dark}, the similarity between the two sequences is considered high, and it can be determined that the two-dimensional code the terminal requested to verify is the two-dimensional code printed on the label carrier 14.
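As one possible realization of the similarity comparison, the sketch below uses a plain DTW distance and converts it into a similarity score compared against the first threshold; the distance-to-similarity mapping and the threshold value are assumptions made for illustration.

    def dtw_distance(a, b):
        """Plain dynamic-time-warping distance between two binary sequences."""
        n, m = len(a), len(b)
        inf = float("inf")
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    def is_non_screen_capture(first_seq, second_seq, first_threshold=0.8):
        """Decide 'not captured from a screen' from the sequence similarity."""
        distance = dtw_distance(first_seq, second_seq)
        similarity = 1.0 - distance / max(len(first_seq), len(second_seq), 1)
        return similarity >= first_threshold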
In step 206, besides preventing screen-based scanning, the authenticity of the graphic code itself may also be verified, providing a more complete label verification service. In this case, verifying the authenticity of the article contained in the video may specifically include: querying the comparison graphic code retained when the graphic code was generated; extracting, from the video, the graphic code in a video frame captured in the non-illuminated state; and comparing the similarity of the extracted graphic code with the comparison graphic code to verify whether the graphic code is genuine.
The comparison graphic code may be the code value used to generate the graphic code, or an image captured of the graphic code after it was generated. The code value of the graphic code may include, but is not limited to, English characters, Chinese characters, digits, symbols, and other data content that can be parsed from the graphic code.
In some possible embodiments, the comparison graphic code may specifically be the printed image corresponding to the graphic code, that is, an image captured of the label carrier printed with the graphic code during label generation. In this case, a video frame containing the graphic code captured in the non-illuminated state may be extracted from the video as the image to be verified, and the printed image and the image to be verified may be compared for similarity to verify whether the graphic code is genuine. In other words, whether the graphic code is genuine is verified according to the image similarity between the printed image and the image to be verified.
For example, in the label generation stage, after the two-dimensional code label 10 shown in FIG. 1 is printed, an image of the label may be captured to obtain the printed image 15 corresponding to the two-dimensional code contained in the initial image 13, and the correspondence between the code value of the two-dimensional code and the printed image 15 may be stored. In the label verification stage, in response to a verification request for the two-dimensional code contained in the initial image 13 initiated by a terminal, the corresponding printed image 15 is queried according to the code value carried in the request; a clear video frame captured in the non-illuminated state can then be extracted from the video captured by the terminal as the image to be verified 16, and the authenticity of the graphic code can be verified according to the image similarity between the printed image 15 and the image to be verified 16.
In step 206, the order in which the screen-capture verification and the authenticity verification are performed is not particularly limited in this specification. For example, both verifications may be performed simultaneously; or the screen-capture verification may be performed first, and the authenticity verification performed only after it passes.
In one embodiment, comparing the similarity of the extracted graphic code with the comparison graphic code may specifically include: calculating the similarity between the extracted image of the graphic code and the comparison graphic code; if the similarity reaches a second threshold, determining that the graphic code is genuine; and if it does not reach the second threshold, determining that the graphic code is not genuine.
The specific value of the second threshold is not particularly limited in this specification and can be set flexibly by those skilled in the art according to the required security level, risk level, and so on; the larger the second threshold, the stronger the protection against screen-based scanning and the verification of the authenticity of the graphic code printed on the label.
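As one possible realization of the image comparison, the sketch below measures the similarity between the retained printed image and a no-illumination video frame with SSIM (scikit-image); the choice of SSIM, the resizing step, and the threshold value are illustrative assumptions.

    import cv2
    from skimage.metrics import structural_similarity as ssim

    def verify_graphic_code(printed_image_path, frame_bgr, second_threshold=0.85):
        """Compare the retained printed image with a no-illumination video frame."""
        reference = cv2.imread(printed_image_path, cv2.IMREAD_GRAYSCALE)
        probe = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Bring both images to the same size before comparing.
        probe = cv2.resize(probe, (reference.shape[1], reference.shape[0]))
        score = ssim(reference, probe, data_range=255)
        return score >= second_threshold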
In one illustrated embodiment, in order to further strengthen protection against screen-based scanning and further improve the security of the label printed with the graphic code, a random graphic used for anti-counterfeiting may also be printed on the label carrier.
In this case, the label carrier may be printed with the initial image containing the graphic code and with a random graphic used for anti-counterfeiting, where the random graphic is printed inside the optically variable region on the label carrier.
For example, referring to FIG. 3, FIG. 3 is a schematic diagram of another label generation process shown in an exemplary embodiment. As shown in FIG. 3, the graphic code may specifically be a two-dimensional code; the optically variable region 32, the initial image 33 containing the two-dimensional code, and the random graphic 34 may be printed in sequence on the label carrier 31 to produce the two-dimensional code label 30, where the printing position of the random graphic 34 on the label 30 is inside the optically variable region 32.
For another example, referring to FIG. 4, FIG. 4 is a schematic diagram of another label generation process shown in an exemplary embodiment. As shown in FIG. 4, the graphic code may specifically be a two-dimensional code; the initial image 42 containing the two-dimensional code, the optically variable region 43, and the random graphic 44 may be printed in sequence on the label carrier 41 to produce the two-dimensional code label 40, where the printing position of the random graphic 44 on the label 40 is inside the optically variable region 43.
It should be noted that printing one random graphic on the two-dimensional code label, as shown in FIG. 3 and FIG. 4, is merely an exemplary description and does not limit the present disclosure. In practical applications, the label carrier may be printed with a single random graphic or with two or more random graphics. In the label generation stage, the random graphic may be merged with the initial image containing the graphic code, stored as one image file, and printed onto the label carrier in a single pass; alternatively, the random graphic and the initial image may be stored as separate image files corresponding to different layers and printed onto the label carrier in several passes.
In addition, the random graphic may be one observable by the naked eye, or one invisible to the naked eye but recognizable by the terminal (such as a watermark), which is not particularly limited in this specification.
In another illustrated embodiment, in order to further strengthen protection against screen-based scanning and further improve the anti-counterfeiting strength of the label printed with the graphic code, the initial image containing the graphic code may be processed based on a random graphic in the label generation stage, and the processed image may then be printed onto the label carrier.
In this case, the label carrier may be printed with an anti-counterfeiting image containing the graphic code; the anti-counterfeiting image is the remaining part obtained by splitting a random graphic out of the initial image containing the graphic code, it still supports recognition of the corresponding graphic code, and it is printed inside the optically variable region on the label carrier.
Specifically, the initial image may include dark areas and light areas, and the anti-counterfeiting image may be the remaining part obtained by splitting the random graphic out of the dark areas of the initial image.
For example, referring to FIG. 5, FIG. 5 is a schematic diagram of another label generation process shown in an exemplary embodiment. As shown in FIG. 5, the graphic code may specifically be a two-dimensional code; the initial image 51 containing the two-dimensional code and the random graphic 52 can be obtained; the random graphic 52 can then be split out of the initial image 51, and the remaining image can be used as the anti-counterfeiting image 53; the optically variable region 55 and the anti-counterfeiting image 53 can then be printed in sequence on the label carrier 54 to produce the two-dimensional code label 50, where the printing position of the anti-counterfeiting image 53 on the label 50 is inside the optically variable region 55.
In one illustrated embodiment, the random graphic may be one or more graphics randomly selected from a preset graphic set.
For example, the preset graphic set may include five-pointed stars, triangles, rectangles, plum blossoms, hearts, clouds, letters, digits, and irregular curves. As shown in FIG. 3 or FIG. 4, the random graphic may be a randomly selected irregular curve; as shown in FIG. 5, it may be a randomly selected five-pointed star.
In another illustrated embodiment, a random number may be generated first, and the random graphic may then be generated based on the random number.
For example, a random number may be generated, and the random number and the code value of the two-dimensional code may be fed together as input data into a predefined one-way function to obtain a numeric vector output by the one-way function; the numeric vector may then be converted into a random graphic based on a preconfigured conversion rule. Correspondingly, if several random graphics are needed, several random numbers may be generated first and the corresponding random graphics obtained from them by conversion.
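As one possible realization, the sketch below uses SHA-256 as the predefined one-way function and a simple conversion rule that selects a graphic from the preset graphic set and derives drawing parameters; all of these concrete choices are assumptions made for illustration.

    import hashlib
    import secrets

    PRESET_GRAPHIC_SET = ["five-pointed star", "triangle", "rectangle",
                          "plum blossom", "heart", "cloud", "irregular curve"]

    def derive_random_graphic(code_value: str):
        """Map (random number, code value) through a one-way function to a graphic."""
        nonce = secrets.token_hex(8)                      # the random number
        digest = hashlib.sha256((nonce + code_value).encode("utf-8")).digest()
        graphic = PRESET_GRAPHIC_SET[digest[0] % len(PRESET_GRAPHIC_SET)]
        params = list(digest[1:5])                        # e.g. size/rotation/position seeds
        return nonce, graphic, params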
In one or more of the embodiments shown above, this specification does not particularly limit the relative position between the initial image containing the graphic code and the random pattern, provided that, when the user scans the graphic code printed on the label carrier through the terminal, the terminal can simultaneously capture the random pattern printed on the same label carrier, and the random pattern printed on the label carrier does not affect normal identification of the graphic code.
In one embodiment shown, the optically variable region may be printed with optically variable ink; alternatively, the optically variable region may be obtained using holographic laser printing techniques.
For example, as shown in fig. 1 and fig. 5, the optically variable region 12 and the optically variable region 55 may be printed using optically variable ink.
As another example, as shown in fig. 3 and 4, optically variable region 32 and optically variable region 42 can be holographic lenses obtained using holographic laser printing techniques.
According to the above technical solution, on the one hand, by printing an optically variable region on the label containing the graphic code, it can be verified, in response to a verification request for the graphic code initiated by the terminal, whether the object scanned by the terminal produces an optically variable reaction under irradiation by an external light source; if such a reaction occurs, the scanned object is a physical label, and it can be determined that the graphic code is printed on a label carrier, so that scanning the label off a screen can be prevented.
On the other hand, a first coding sequence can be randomly generated and sent to the terminal, so that the terminal controls its light-emitting component to flash according to the flashing rule indicated by the codes contained in the randomly generated first coding sequence and captures video of the graphic code during the flashing. Whether each video frame contained in the video acquired by the terminal was acquired in an illuminated state can then be identified, and the identification results can be encoded to obtain a second coding sequence; whether the graphic code is a graphic code printed on a label carrier can then be verified through the similarity between the first coding sequence and the second coding sequence. Compared with an implementation that only detects whether a single image uploaded by the terminal was captured in an illuminated state, this verifies in real time whether the object scanned by the terminal produces an optically variable reaction under the external light source, and prevents an attacker from photographing the two-dimensional code label separately in the illuminated and non-illuminated states and bypassing the anti-screen-scanning verification by switching between the two pictures, thereby further improving the effect of preventing screen-based code scanning.
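For illustration, the sketch below classifies each frame of the uploaded video by its mean brightness and encodes the per-frame decisions into a second coding sequence; OpenCV and the fixed threshold of 120 are assumptions, and a practical implementation would more likely look at the response of the optically variable region and align the frames with the issued flashing schedule.

import cv2

def encode_second_coding_sequence(video_path: str, brightness_threshold: float = 120.0) -> list:
    # 1 = frame judged to have been captured while the terminal's light was on,
    # 0 = frame judged to have been captured with the light off.
    cap = cv2.VideoCapture(video_path)
    codes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        codes.append(1 if float(gray.mean()) > brightness_threshold else 0)
    cap.release()
    return codes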
Corresponding to the embodiments of the above article verification method, this specification further provides embodiments of an article verification device, of a detection method for screen acquisition, and of a label printed with a graphic code.
Referring to fig. 6, fig. 6 is a hardware configuration diagram of an electronic device in which an article verification device is located, according to an exemplary embodiment. At the hardware level, the device includes a processor 602, an internal bus 604, a network interface 606, a memory 608, and a non-volatile storage 610, and may of course also include other hardware required by the service. One or more embodiments of this specification may be implemented in software, for example by the processor 602 reading a corresponding computer program from the non-volatile storage 610 into the memory 608 and then running it. Of course, in addition to software implementations, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to individual logic units, but may also be hardware or logic devices.
Referring to fig. 7, fig. 7 is a block diagram of an article verification device according to an exemplary embodiment. The article verification device can be applied to the electronic device shown in fig. 6 to implement the technical solution of this specification. The article verification device may include:
A random generation unit 702, configured to randomly generate a first coding sequence in response to a verification request for an article initiated by a terminal;
a sending unit 704, configured to send the first coding sequence to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the first coding sequence, and performs video acquisition of the article during the flashing; wherein the article is provided with an optically variable region, and the optically variable region produces an optically variable reaction under irradiation by the flashing light of the light-emitting component;
a receiving unit 706, configured to receive a video, acquired by the terminal, containing the article;
a verification unit 708, configured to verify whether the video is a screen-captured video, and to verify the authenticity of the article contained in the video;
wherein verifying whether the video is a screen-captured video comprises:
identifying a light-variation result of the video, and encoding the identified light-variation result to obtain a second coding sequence;
and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video.
In this embodiment, the article includes a graphic code.
In this embodiment, verifying the authenticity of the article contained in the video includes:
querying the comparison graphic code reserved during generation of the graphic code;
and extracting, from the video, the graphic code in a video frame acquired in the non-illuminated state, and performing a similarity comparison between the extracted graphic code and the comparison graphic code, so as to verify whether the graphic code is a true graphic code.
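Assuming the graphic code is a QR code, that the non-illuminated frames are available as image arrays, and that the comparison graphic code is stored as its reserved payload, this check could be sketched as follows; OpenCV's QRCodeDetector and the payload-equality test stand in for the similarity comparison described above.

import cv2

def verify_graphic_code(frames, second_coding_sequence, reserved_payload: str) -> bool:
    # Only frames judged to have been acquired in the non-illuminated state are used;
    # payload equality stands in for the similarity comparison.
    detector = cv2.QRCodeDetector()
    for frame, lit in zip(frames, second_coding_sequence):
        if lit:
            continue
        payload, points, _ = detector.detectAndDecode(frame)
        if points is not None and payload:
            return payload == reserved_payload
    return False          # no decodable graphic code found in the non-illuminated frames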
In this embodiment, the first coding sequence and the second coding sequence are binary coding sequences;
wherein the codes contained in the first coding sequence include: a first value for controlling the light-emitting component carried by the terminal to emit light, and a second value for controlling the light-emitting component carried by the terminal not to emit light;
and the codes contained in the second coding sequence include: a first value indicating that the corresponding video frame is a video frame acquired in the illuminated state, and a second value indicating that the corresponding video frame is a video frame acquired in the non-illuminated state.
In this embodiment, verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video includes:
calculating the sequence similarity between the first coding sequence and the second coding sequence;
if the sequence similarity reaches a first threshold, determining that the video is a non-screen-captured video;
and if the sequence similarity does not reach the first threshold, determining that the video is a screen-captured video.
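A sketch of this threshold check is given below; positional agreement between the two binary sequences is used as the sequence similarity, the sequence length of 16 and the first threshold of 0.8 are placeholders, and a real system would also have to align the second coding sequence with the first to account for the camera frame rate.

import secrets

def generate_first_coding_sequence(length: int = 16) -> list:
    # 1 (first value) tells the terminal to switch its light-emitting component on
    # for the next interval; 0 (second value) tells it to switch the component off.
    return [secrets.randbelow(2) for _ in range(length)]

def is_non_screen_captured(first_sequence, second_sequence, first_threshold: float = 0.8) -> bool:
    # Sequence similarity as the fraction of positions on which the two binary
    # sequences agree; reaching the first threshold is treated as non-screen capture.
    n = min(len(first_sequence), len(second_sequence))
    if n == 0:
        return False
    matches = sum(1 for a, b in zip(first_sequence, second_sequence) if a == b)
    return matches / n >= first_threshold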
In this embodiment, performing the similarity comparison between the extracted graphic code and the comparison graphic code to verify whether the graphic code is a true graphic code includes:
calculating the similarity between the extracted graphic code and the comparison graphic code;
if the similarity reaches a second threshold, determining that the graphic code is a true graphic code;
and if the similarity does not reach the second threshold, determining that the graphic code is not a true graphic code.
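For example, if the extracted graphic code and the comparison graphic code are both rendered as grayscale images of the same size, the second-threshold check could be sketched as below; the pixel-agreement metric and the threshold of 0.9 are assumptions, and any other image- or payload-level similarity measure could be substituted.

import numpy as np

def is_true_graphic_code(extracted: np.ndarray, comparison: np.ndarray,
                         second_threshold: float = 0.9) -> bool:
    # Pixel-agreement ratio between the two binarized images stands in for the
    # similarity calculation described above.
    a = extracted > 127
    b = comparison > 127
    if a.shape != b.shape:
        return False              # assume both codes are rendered at the same size
    similarity = float((a == b).mean())
    return similarity >= second_threshold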
In this embodiment, the sending unit 704 is specifically configured to:
encrypt the randomly generated first coding sequence and send the encrypted first coding sequence to the terminal, so that the terminal decrypts the encrypted first coding sequence, controls the light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the decrypted first coding sequence, and performs video acquisition of the article during the flashing.
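This specification does not name a cipher, so the sketch below uses AES-GCM from the Python cryptography package as one possible way to protect the first coding sequence in transit; the 16-byte key is assumed to be shared between the server and the scanning application, and the one-byte-per-code packing is an illustrative choice.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_first_coding_sequence(first_sequence, key: bytes) -> bytes:
    # Prepend the nonce so the terminal can decrypt; each code is packed as one byte.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, bytes(first_sequence), None)

def decrypt_first_coding_sequence(blob: bytes, key: bytes) -> list:
    nonce, ciphertext = blob[:12], blob[12:]
    return list(AESGCM(key).decrypt(nonce, ciphertext, None))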
In this embodiment, the optically variable region is printed with optically variable ink; alternatively, the optically variable region is obtained using holographic laser printing techniques.
In this embodiment, the graphic code is a QR code.
For the implementation of the functions and roles of each unit in the above device, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Referring to fig. 8, fig. 8 is a flowchart of a detection method for screen acquisition according to an exemplary embodiment. The method may be applied to the server side. As shown in fig. 8, the method may include the following steps:
Step 802: randomly generating a first coding sequence in response to a verification request for an article initiated by a terminal, and sending the first coding sequence to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the first coding sequence, and performs video acquisition of the article during the flashing; the article is provided with an optically variable region, and the optically variable region produces an optically variable reaction under irradiation by the flashing light of the light-emitting component.
Step 804: receiving the video, acquired by the terminal, containing the article.
Step 806: identifying the light-variation result of the video, and encoding the identified light-variation result to obtain a second coding sequence.
Step 808: verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video.
In this specification, the specific implementation of step 802 to step 808 is similar to that of step 202 to step 206, and will not be described here again.
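Tying steps 802 to 808 together, a server-side handler could be organized roughly as sketched below; the helpers are passed in as callables (for example the sequence-generation, frame-classification and similarity sketches shown earlier), and the dictionary session store, the two-call protocol and all names are illustrative assumptions.

from typing import Callable, Dict, List, Optional

def handle_verification(session_id: str,
                        video_path: Optional[str],
                        sessions: Dict[str, List[int]],
                        make_sequence: Callable[[], List[int]],
                        send_to_terminal: Callable[[str, List[int]], None],
                        recognize_light_variation: Callable[[str], List[int]],
                        sequences_match: Callable[[List[int], List[int]], bool]) -> Optional[bool]:
    if session_id not in sessions:
        # Step 802: randomly generate the first coding sequence and issue it to the terminal.
        sessions[session_id] = make_sequence()
        send_to_terminal(session_id, sessions[session_id])
        return None                   # the terminal now flashes, records and uploads the video
    # Steps 804-808: recognize the light-variation result of the uploaded video and
    # verify it against the stored first coding sequence.
    second_sequence = recognize_light_variation(video_path)
    return sequences_match(sessions[session_id], second_sequence)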
The present specification also provides an embodiment of a label printed with a graphic code. The label is further printed with an optically variable region that produces an optically variable reaction under irradiation by an external light source; the optically variable region is used for verifying whether a terminal acquires the graphic code printed on the label through a screen. The terminal controls its light-emitting component to flash according to the flashing rule indicated by the codes contained in a first coding sequence issued by the server, and performs video acquisition of the graphic code during the flashing, so that the server verifies, according to the similarity between the first coding sequence and a second coding sequence obtained by identifying and encoding the light-variation result of the video, whether the video containing the graphic code acquired by the terminal is a non-screen-captured video.
For example, referring to the two-dimensional code label 10 shown in fig. 1, an optically variable region 12 and an initial image 13 containing a two-dimensional code are printed on the two-dimensional code label 10.
For the generation and verification of the label, refer to one or more of the embodiments shown above in this specification; details are not repeated here.
Since the device embodiments substantially correspond to the method embodiments, for relevant points reference may be made to the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of this specification. Those of ordinary skill in the art can understand and implement the solution without undue effort.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, a random access memory (RAM) and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
The user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties; the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of one or more embodiments of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when …" or "upon …" or "in response to determining".
The foregoing description of the preferred embodiments is merely intended to illustrate one or more embodiments of this specification, and is not intended to limit one or more embodiments of this specification to the particular embodiments described.

Claims (14)

1. An article verification method, the method comprising:
randomly generating a first coding sequence in response to a verification request for an article initiated by a terminal, and sending the first coding sequence to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to a flashing rule indicated by codes contained in the first coding sequence, and performs video acquisition of the article during the flashing; wherein the article is provided with an optically variable region, and the optically variable region produces an optically variable reaction under irradiation by the flashing light of the light-emitting component;
receiving a video, acquired by the terminal, containing the article;
verifying whether the video is a screen-captured video, and verifying authenticity of the article contained in the video;
wherein verifying whether the video is a screen-captured video comprises:
identifying a light-variation result of the video, and encoding the identified light-variation result to obtain a second coding sequence;
and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video.
2. The method of claim 1, wherein the article comprises a graphic code.
3. The method of claim 2, wherein verifying the authenticity of the article contained in the video comprises:
querying the comparison graphic code reserved during generation of the graphic code;
and extracting, from the video, the graphic code in a video frame acquired in the non-illuminated state, and performing a similarity comparison between the extracted graphic code and the comparison graphic code, so as to verify whether the graphic code is a true graphic code.
4. The method of claim 1, wherein the first coding sequence and the second coding sequence are binary coding sequences;
wherein the codes contained in the first coding sequence include: a first value for controlling the light-emitting component carried by the terminal to emit light, and a second value for controlling the light-emitting component carried by the terminal not to emit light;
and the codes contained in the second coding sequence include: a first value indicating that the corresponding video frame is a video frame acquired in the illuminated state, and a second value indicating that the corresponding video frame is a video frame acquired in the non-illuminated state.
5. The method of claim 1, wherein verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video comprises:
calculating the sequence similarity between the first coding sequence and the second coding sequence;
if the sequence similarity reaches a first threshold, determining that the video is a non-screen-captured video;
and if the sequence similarity does not reach the first threshold, determining that the video is a screen-captured video.
6. The method of claim 3, wherein performing the similarity comparison between the extracted graphic code and the comparison graphic code to verify whether the graphic code is a true graphic code comprises:
calculating the similarity between the extracted graphic code and the comparison graphic code;
if the similarity reaches a second threshold, determining that the graphic code is a true graphic code;
and if the similarity does not reach the second threshold, determining that the graphic code is not a true graphic code.
7. The method of claim 1, wherein sending the first coding sequence to the terminal, so that the terminal controls the light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the first coding sequence and performs video acquisition of the article during the flashing, comprises:
encrypting the randomly generated first coding sequence and sending the encrypted first coding sequence to the terminal, so that the terminal decrypts the encrypted first coding sequence, controls the light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the decrypted first coding sequence, and performs video acquisition of the article during the flashing.
8. The method of claim 1, wherein the optically variable region is printed with optically variable ink; alternatively, the optically variable region is obtained using holographic laser printing techniques.
9. The method of claim 2, wherein the graphic code is a QR code.
10. An article verification apparatus, the apparatus comprising:
a random generation unit, configured to randomly generate a first coding sequence in response to a verification request for an article initiated by a terminal;
a sending unit, configured to send the first coding sequence to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to the flashing rule indicated by the codes contained in the first coding sequence, and performs video acquisition of the article during the flashing; wherein the article is provided with an optically variable region, and the optically variable region produces an optically variable reaction under irradiation by the flashing light of the light-emitting component;
a receiving unit, configured to receive a video, acquired by the terminal, containing the article;
a verification unit, configured to verify whether the video is a screen-captured video, and to verify the authenticity of the article contained in the video;
wherein verifying whether the video is a screen-captured video comprises:
identifying a light-variation result of the video, and encoding the identified light-variation result to obtain a second coding sequence;
and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video.
11. A detection method for screen acquisition, the method comprising:
randomly generating a first coding sequence in response to a verification request for an article initiated by a terminal, and sending the first coding sequence to the terminal, so that the terminal controls a light-emitting component carried by the terminal to flash according to a flashing rule indicated by codes contained in the first coding sequence, and performs video acquisition of the article during the flashing; wherein the article is provided with an optically variable region, and the optically variable region produces an optically variable reaction under irradiation by the flashing light of the light-emitting component;
receiving a video, acquired by the terminal, containing the article;
identifying a light-variation result of the video, and encoding the identified light-variation result to obtain a second coding sequence;
and verifying, according to the similarity between the first coding sequence and the second coding sequence, whether the video is a non-screen-captured video.
12. An electronic device, comprising a communication interface, a processor, a memory and a bus, wherein the communication interface, the processor and the memory are connected to one another through the bus;
the memory stores machine-readable instructions, and the processor performs the method of any one of claims 1-10 or 11 by invoking the machine-readable instructions.
13. A machine-readable storage medium storing machine-readable instructions which, when invoked and executed by a processor, implement the method of any one of claims 1-10 or 11.
14. A label printed with a graphic code, wherein the label is further printed with an optically variable region that produces an optically variable reaction under irradiation by an external light source; the optically variable region is used for verifying whether a terminal acquires the graphic code printed on the label through a screen;
wherein the terminal controls a light-emitting component carried by the terminal to flash according to a flashing rule indicated by codes contained in a first coding sequence issued by a server, and performs video acquisition of the graphic code during the flashing, so that the server verifies, according to the similarity between the first coding sequence and a second coding sequence, whether the video containing the graphic code acquired by the terminal is a non-screen-captured video.
CN202310725250.0A 2023-06-16 2023-06-16 Article verification method, detection method for screen acquisition and label printed with graphic code Pending CN116911872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310725250.0A CN116911872A (en) 2023-06-16 2023-06-16 Article verification method, detection method for screen acquisition and label printed with graphic code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310725250.0A CN116911872A (en) 2023-06-16 2023-06-16 Article verification method, detection method for screen acquisition and label printed with graphic code

Publications (1)

Publication Number Publication Date
CN116911872A true CN116911872A (en) 2023-10-20

Family

ID=88352060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310725250.0A Pending CN116911872A (en) 2023-06-16 2023-06-16 Article verification method, detection method for screen acquisition and label printed with graphic code

Country Status (1)

Country Link
CN (1) CN116911872A (en)

Similar Documents

Publication Publication Date Title
US11256914B2 (en) System and method for detecting the authenticity of products
CN108229120B (en) Face unlocking method, face unlocking information registration device, face unlocking information registration equipment, face unlocking program and face unlocking information registration medium
US9153005B2 (en) Method and system for authenticating a secure document
KR102235215B1 (en) Augmenting barcodes with secondary encoding for anti-counterfeiting
CN110766594B (en) Information hiding method and device, detection method and device and anti-counterfeiting tracing method
CN110428028B (en) Identification and verification method, device, equipment and medium based on quasi-dynamic laser label
CN105046504A (en) Multiple key checking, inspection and forgery prevention source forming method and forgery prevention label
CN106056183B (en) The printed medium of printing press readable image and the system and method for scanning the image
Yan et al. An IoT-based anti-counterfeiting system using visual features on QR code
CN110533704B (en) Method, device, equipment and medium for identifying and verifying ink label
US9691208B2 (en) Mechanisms for authenticating the validity of an item
US10929625B2 (en) Authentication method for product packaging
CN106934756B (en) Method and system for embedding information in single-color or special-color image
KR102163119B1 (en) Apparatus for Detecting Authenticity by Using Scattered Reflection of Dots and Driving Method Thereof
CN116911872A (en) Article verification method, detection method for screen acquisition and label printed with graphic code
CN113837026B (en) Method and device for detecting authenticity of certificate
EP3992356A1 (en) Method for verifying product authenticity and establishing authorized product data with fabric features
CN109313701A (en) For generating method, imaging device and the system of the measurement of the authenticity of object
EP3982289A1 (en) Method for validation of authenticity of an image present in an object, object with increased security level and method for preparation thereof, computer equipment, computer program and appropriate reading means
CN116227524B (en) Anti-fake code generation and verification method and label-based anti-fake system
CN112597810A (en) Identity document authentication method and system
CN109784454A (en) A kind of information concealing method based on two dimensional code, device and electronic equipment
LU501374B1 (en) Method for reading a tag based on cholesteric spherical reflectors
US20100008561A1 (en) System and method for authenticating products and/or packages
CN117935669A (en) Anti-counterfeit label and manufacturing and verifying methods and devices thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination